A baller move from OpenAI, but who will end up paying for all this ‘free’ technology?


Welcome to an end-of-week update from Unmade. Today: AI’s most anguished champion asks when the bills will start arriving for ChatGPT; and a solid day on the Unmade Index.

If you’ve been thinking about upgrading to an Unmade membership, this is the perfect time. Your membership includes:

  • Member-only pricing for our HumAIn (May 28) and REmade (October 1) conferences;

  • A complimentary invitation to Unmade’s Compass event (November);

  • Member-only content and our paywalled archives;

  • Your own copy of Media Unmade

Upgrade today.



ChatGPTuh-oh Spaghettios

OpenAI’s DALL-E doesn’t know what a Molotov cocktail is

This week, OpenAI unveiled a major update to its AI chatbot, ChatGPT.

After spending time with GPT-4o, Cat McGinn, curator of Unmade’s AI event HumAIn, argues that the most significant thing about the update is not the flashy new features but a bold strategy of widespread user capture.

The true impact of this week’s OpenAI update lies not in its much-hyped video capabilities or image recognition, but in the audacity of its deployment. These new extensions – “natively” multimodal inputs and outputs, meaning the ability to upload voice, text, images and video and receive responses in kind – are not yet widely available, but will roll out over the coming weeks.

However, the real significance of OpenAI’s latest model, the awkwardly named GPT-4o (oh!), is the rapaciousness of deploying a model of this sophistication, for free, to everyone all at once.

This aggressive move mirrors Uber’s early loss-leader pricing strategy, which allowed the startup to swiftly dominate the ride-sharing market before eventually putting up prices.

For most people, the experience of using the free OpenAI model GPT-3.5 was underwhelming. OpenAI signed up users at a meteoric pace – an estimated 180 million of them – but returning monthly users were thought to be around half of that total. Once users of the free version had become accustomed to the novelty of interacting with a chatbot offering an oracle-like ability to answer any question, the cracks began to show. Mistakes, bugs and outages were rampant.

Qualitatively and quantitatively, it is significantly worse than GPT-4. Few businesses would be likely to commit to AI technology on the strength of the 3.5 model.

OpenAI is tight-lipped about the number of customers paying for ChatGPT’s premium tiers, but estimates suggest they account for only around 5% of all ChatGPT users.

The 4o update means all users can now access the GPT Store to use custom GPT bots – chatbots configured with their own instructions and reference material for specific purposes – giving organisations and publishers a way to engage with their customers via the platform.

Some experts think this will become one of the revenue-generating avenues for OpenAI, as robot-adjacent CEO Sam Altman has continually made clear his distaste for integrating ads into the chat client.   
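For anyone wondering what a custom GPT amounts to under the hood: GPT Store bots are assembled inside ChatGPT rather than written in code, but the underlying pattern – a general-purpose model constrained by a fixed set of instructions – can be sketched against OpenAI’s API. A minimal illustration (the publisher persona and wording here are invented for the example):

```python
# Sketch of the idea behind a custom GPT: a general model steered by fixed
# instructions. Assumes the openai Python package (v1.x) and an
# OPENAI_API_KEY in the environment; the persona is hypothetical.
from openai import OpenAI

client = OpenAI()

PUBLISHER_INSTRUCTIONS = (
    "You are the reader-support assistant for a media-industry newsletter. "
    "Answer only questions about subscriptions, events and published articles, "
    "and politely decline anything else."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": PUBLISHER_INSTRUCTIONS},
        {"role": "user", "content": "Can I bring a colleague to the next event?"},
    ],
)

print(response.choices[0].message.content)
```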

The other big changes OpenAI now offers include the “memory” feature, which lets users store their conversations with the bot rather than endure the frustratingly amnesiac experience of starting from scratch with every interaction.

Of course, this provides the tech company with more of that sweet, sweet data to train future iterations of the LLM; we’re entering a new relationship with big tech which is either synergistic or co-dependent, depending on your point of view.   
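ChatGPT now handles that persistence on OpenAI’s side. Developers building on the API have long achieved the same effect by storing the conversation themselves and replaying it with each request. A rough sketch of the idea (the file name and messages are invented for illustration):

```python
# Minimal sketch of chatbot "memory" via the API: persist the message
# history to disk and replay it on every call. Assumes the openai package
# (v1.x) and an OPENAI_API_KEY; history.json is a hypothetical file.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()
HISTORY = Path("history.json")

# Load whatever the bot has "remembered" so far.
messages = json.loads(HISTORY.read_text()) if HISTORY.exists() else []

# Add the new user turn and ask the model to respond with full context.
messages.append({"role": "user", "content": "Remind me what we discussed yesterday."})
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Save the updated history so the next session starts with its memory intact.
HISTORY.write_text(json.dumps(messages, indent=2))
```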

Users can now upload photos and documents and ask ChatGPT to analyse them, as well as access the Bing integration, which allows the chatbot to browse the web for current information (GPT-3.5’s training data only runs to January 2022).

Despite OpenAI’s deep pockets, with funding of $13bn from Microsoft alone, the company is at pains to present itself as a scrappy underdog. The latest announcement, led by the widely acclaimed CTO Mira Murati, was deliberately set in a homey tech lounge, accompanied by a series of painfully earnest yet “fun” demo videos by OpenAI workers in their signature hoodies with dangling white cords.

If OpenAI’s brand were a person, they would be a fanatical yet amateur revolutionary, a Molotov cocktail in each hand, but their shoelaces untied.    

The announcement was provocatively timed to come out just before Google I/O, the tech giant’s flagship conference. Google, amidst much more formality, and to much less fanfare, launched its own new AI offerings. One of the updates most relevant to our world is the increase in the size of Gemini 1.5 Pro’s context window, from 1 million to 2 million tokens.

A context window works like an AI’s short-term memory; it determines how much of your previous conversation it can remember – a bigger context window creates better, less amnesiac responses. Larger context windows enable more complex interactions, longer responses, and the ability to analyse larger inputs. 

A context window of that size is equivalent to being able to upload two hours of video footage, or the entirety of the Australian government’s budget data, tables and documents.
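To put a rough number on it, here is a quick back-of-envelope calculation. It relies on the common approximation of about 0.75 English words per token and roughly 500 words to a printed page, so treat the output as indicative only:

```python
# Rough sense of scale for a 2-million-token context window.
# Assumes ~0.75 English words per token and ~500 words per printed page;
# both figures are approximations, not measurements.

context_tokens = 2_000_000
words_per_token = 0.75
words_per_page = 500

words = context_tokens * words_per_token
pages = words / words_per_page

print(f"~{words / 1e6:.1f} million words, or roughly {pages:,.0f} pages, in a single prompt")
```

On those assumptions, that is about 1.5 million words – roughly 3,000 pages – in one go.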

It’s extraordinarily powerful. In terms of enterprise applications, being able to input this quantity of data is unprecedented in the development of these large language models, meaning the capabilities and use cases are vast.

It’s worth repeating that, as I’ve previously written, we’re no closer to any sort of resolution on the issue of copyright – and there is still no transparency about how and where the data used to train these upscaled models has been sourced.

And secondly, when we talk about the cost of these models: while they may be free to the user, or a mere $20 a month, in reality running these large language models comes with a hefty price tag, one that is not currently being passed on to the user. As a brief flash forward, I invite you to remember Uber when it was cheap…

As the old saying goes, if you’re not paying for something, you’re not the customer, you’re the product.

The cost of training future large models has been estimated to be in the region of US$10bn. At some point, those investments will need to deliver returns. Venture capitalists typically look for returns of more than 10X on their winning investments; the GPUs used to train GPT-3 alone cost approximately $5 million, and estimates for training the next generation of LLMs exceed US$1 billion. It’s not a complex dot-joining exercise to predict that someone is going to need to pay for this free product.
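As a crude, purely illustrative piece of dot-joining using figures already mentioned in this piece (Microsoft’s reported $13bn, a 10X return target and the $20-a-month subscription), and ignoring API usage, enterprise deals and ongoing compute costs, the sums look something like this:

```python
# Back-of-envelope: how many $20/month subscriptions would it take to
# return 10X on a $13bn investment? Illustrative figures from the article;
# real revenue streams and costs are far messier.

invested = 13e9          # Microsoft's reported funding, USD
target_multiple = 10     # the kind of return VCs look for on winners
subscription = 20 * 12   # ChatGPT Plus, USD per user per year

required_revenue = invested * target_multiple
subscriber_years = required_revenue / subscription

print(f"Required return: ${required_revenue / 1e9:.0f}bn")
print(f"That is roughly {subscriber_years / 1e6:.0f} million subscriber-years at $20 a month")
```

On those (deliberately simplistic) numbers, the target works out at around 542 million subscriber-years of $20-a-month payments.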

The other, perhaps more insidious, impact of OpenAI’s new model is the way it offers a deeply anthropomorphised interaction with AI.

It is true that role-playing with a chatbot – assigning it an objective it is motivated to achieve, or allocating it a persona – improves the results markedly. But asking it to respond, like a peculiar parlour game, in the manner of Ruth Bader Ginsburg, David Attenborough or your CEO is still some distance from accepting that one is interacting with – and I use this term advisedly – a conscious entity, or even a friend.

The emphasis on GPT-4o’s ability to modulate its voice and shift its register of language and emotion gave me pause. By designing a chatbot to respond with some semblance of a personality, even with rather stiff banter, I wonder if we are opening the door to a future in which an already pathologically online generation ceases to interact with other humans, instead developing intense relationships with a large language model that has been designed to please and appease the user.

Anyone with a passing interest in civil liberties will probably find that the facial recognition capabilities bring them out in a cold sweat. ChatGPT being able to translate seamlessly between 50 languages is extraordinary from an audience engagement standpoint, but are we also setting ourselves up for a future in which no one learns other languages, and we lose the different ways of thinking that language acquisition delivers?

AI technologies hold tremendous possibility, but the focus of these applications should be on empowering humanity, elevating and democratising creativity and driving efficiencies (particularly in media and marketing), rather than trying to replace what makes us human: relationships, empathy, connection with others.

As we navigate this uncharted territory, it’s important to remember that the AI landscape is far from settled. OpenAI’s baller move has undeniably accelerated the widespread adoption of AI, but also intensified the need for critical discussions about ethics, data ownership, and the potential consequences of our growing reliance on these powerful tools.

Disclosure: I asked Gemini 1.5 Pro to analyse this piece for ease of reading, then ignored all of its suggestions.



Seven West Media leads on an up day for Unmade Index

Tim Burrowes writes:

The Unmade Index followed the wider ASX All Ords into positive territory on Thursday, rising by 0.62% to 536.8 points.

The best performer of the day was Seven West Media, which gained 2.56%, while outdoor company Ooh Media was up 1.85%.

Nine was up 0.97%, despite its majority-owned real estate platform Domain losing 1.98%.



Time to leave you to your Friday.

I’ll be back tomorrow morning with Best of the Week, written from London, where I’m in town for the Advertising Week Europe conference.

Have a great day

Toodlepip…

Tim Burrowes

Publisher – Unmade


Unmade: media and marketing analysis is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
