A week in Generative AI: Instant Checkout, Sonnet 4.5 & Sora 2
News for the week ending 5th October 2025
It's felt a bit like we've been gearing up for a mad week of AI news for a while, and this week certainly delivered! However, with OpenAI's annual DevDay tomorrow, I'm sure next week will be a jam-packed one too.
Deep breath…
For me the most significant news was also the smallest - OpenAI announcing Instant Checkout in ChatGPT. It's US only for now, but I think it'll have the biggest long-term impact of all. Anthropic launched Claude Sonnet 4.5, but again, the smaller announcement is more significant: "Imagine with Claude" creates software on the fly and points to how we'll create more useful software in the future. Lastly, the big headline grabber this week was Sora 2, which OpenAI launched inside its own social app and which allows people to generate their likeness, and the likeness of other people, in videos.
In Web 4.0 news, there's a report on the impact AI platforms are having on independent websites, Opera released their AI-powered browser Neon, and Google rolled out Gemini across their smart home devices.
In Ethics news, California's governor signed the state's landmark AI safety bill, OpenAI rolled out safety routing and parental controls, and there were reports that Google is blocking AI searches for Trump and dementia.
There's also a great Long Read from Ethan Mollick about Real AI Agents and Real Work - worth a look if you're interested in how more capable AI agents are likely to impact knowledge work.
And I couldn't send this week's newsletter without mentioning Tilly Norwood. If you haven't heard about "her" this week, the story is worth checking out.
OpenAI launches ChatGPT: Instant Checkout
In any normal week, the launch of Anthropic's "Imagine with Claude" or OpenAI's Sora 2 would be getting the headlines. But for me, the smaller feature that OpenAI released at the beginning of the week, Instant Checkout, is far more interesting and significant.
Instant Checkout gives US users (it will expand to other markets soon) the ability to purchase products directly inside ChatGPT. This is the third pillar of Web 4.0 - "zero-click" purchasing - that I've been writing about a lot this year, and the one I think will have the biggest impact over the long term.
With Instant Checkout, users can now search, research, decide, and purchase a wide range of products, all with the help of ChatGPT. This completely collapses the traditional purchase funnel and drastically reduces the opportunities that marketers have to influence consumer decision making. It also makes purchases even more frictionless than they currently are, which sets OpenAI up well to hit the $1tr of annual sales they aim to process by 2029.
This is also how OpenAI monetises the 90% of users who are on the free tier. They'll take a cut of every sale processed through Instant Checkout, probably in the 2-3% range. Given the scale of their free user base and the size of the revenue opportunity, I expect them to aggressively expand and enhance Instant Checkout over the coming months. What we're seeing now is just the start.
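As a rough back-of-the-envelope sketch (the 2-3% take rate is my assumption based on the range above, not a figure OpenAI has confirmed), that commission adds up quickly against the $1tr sales target:

```python
# Rough commission revenue if OpenAI hits its $1tr annual sales target.
# The 2-3% take rate is an assumption, not a confirmed figure.
annual_sales = 1_000_000_000_000  # $1tr of sales processed per year by 2029

for take_rate in (0.02, 0.03):
    commission = annual_sales * take_rate
    print(f"{take_rate:.0%} take rate -> ${commission / 1e9:.0f}bn per year")
```

In other words, somewhere in the region of $20-30bn a year of commission revenue from shopping alone - a big prize, and a big incentive to expand the feature quickly.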
What's easy to miss is that Instant Checkout is a strategic move for OpenAI to take ownership of the entire customer journey. This means that OpenAI, not the brand, holds the data, the relationship, and the insight into how and why the purchase happened. For marketers, this marks the beginning of the Web 4.0 era, where influence shifts from search and social platforms to AI platforms.
Anthropic releases Claude Sonnet 4.5
With OpenAI's release of GPT-5 in the summer, Anthropic's Sonnet temporarily lost its crown as the most capable coding model. But now they're back with Sonnet 4.5, and that crown is theirs again - at least until Google DeepMind releases Gemini 3, which is likely to be announced in the coming weeks.
To illustrate how far Anthropic's models have come, they released the video above, which shows Sonnet 4.5 coding and running a working clone of Claude, complete with attachments and artefacts. In another example, it spent 30 hours autonomously coding a chat app, writing 11,000 lines of code. Very impressive!
But it's not all about coding. If a model can do 30 hours of work autonomously in any domain, that's a huge step forward. Back in March, Sonnet 3.7 was completing tasks that were only around 15 minutes long on average, according to research by METR, with a trend line that pointed towards the task length doubling every 7 months. On that trend line, AI models wouldn't hit the 30-hour mark until around the end of the decade, so seeing it in 2025 is incredibly surprising!
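As a quick sanity check on that claim, here's the projection under the figures quoted above (a 15-minute task length in March 2025 and a 7-month doubling time) - a rough sketch, not METR's own methodology:

```python
import math

# Project when a 15-minute task horizon reaches 30 hours if it doubles every 7 months.
# Figures are the ones quoted above; METR's actual analysis is more nuanced.
start_minutes = 15
target_minutes = 30 * 60          # 30 hours
doubling_period_months = 7

doublings_needed = math.log2(target_minutes / start_minutes)   # ~6.9 doublings
months_needed = doublings_needed * doubling_period_months      # ~48 months from March 2025
print(f"~{doublings_needed:.1f} doublings, ~{months_needed:.0f} months -> roughly 2029")
```

On those assumptions the trend line lands on 30-hour tasks towards the end of the decade, which is why a 30-hour autonomous run in 2025 is so striking.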
Anthropic also released another new feature alongside Sonnet 4.5 - "Imagine with Claude". For me this is even more interesting and exciting - it builds software on the fly to solve any problem you throw at it. It's essentially software building itself based on what the user needs.
This is a big shift - for years we've been designing software and digital products based on what we think users want, with all the features pre-determined. This approach flips that on its head and generates features in the moment, based on the context of what the user is trying to do. I think this is how we get to more sophisticated, flexible, and visual chat interfaces for the models themselves, as well as more useful software for users in the future. It's worth paying attention to how this whole area progresses over the coming years.
OpenAI releases Sora 2 with new social app
The launch of Sora 2, in the US only to start with, was always going to be a big deal, and was always going to raise questions. Coming less than 12 months after the first model, Sora 2 is higher quality, more realistic, and better at following instructions. With that comes greater potential for deepfakes and generated videos that are indistinguishable from reality at first glance.
These issues would have surfaced simply because Sora 2 is a more capable video model. But OpenAI went further. They wrapped the experience in a social-like mobile app that lets people generate their own likeness and lets others generate it too. It also raises copyright concerns, with the model able to reproduce popular IP, echoing their earlier approach with image generation.
And I think this is the point. OpenAI are deliberately releasing technology that challenges our assumptions about whatās acceptable for it to do. Their strategy seems to be to push past the boundaries of public comfort, observe the reaction, and only then decide whether to dial things back.
On one hand, I admire this. Technology should challenge our assumptions. But if recent history tells us anything, it's that we readily adopt technology with unintended consequences that are almost impossible to reverse. With AI moving this fast, there's no time to pause, reflect, or weigh the societal trade-offs. The problem is that AI is now asking questions we no longer have time to answer.
If you want to dig into this more deeply, here's some more coverage and commentary around Sora 2's launch:
OpenAI staff grapples with the company's social media push
The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More
Sam Altman says Sora will add "granular," opt-in copyright controls
OpenAIās new social video app will let you deepfake your friends
Reddit stock falls for second day as references to its content in ChatGPT responses plummet
Reddit had a bad week, with its stock falling by 15% following reports that it was no longer featuring as prominently in ChatGPT's responses. The data the report was based on is here.
It looks like OpenAI made a big change to their citations around the 9th of September, and some of the most prominent data sources are now more in line with the others. For example, Reddit used to make up c.12% of social data sources, but since the change has been tracking at around 4%. Similarly, Wikipedia used to make up c.7% of knowledge base data sources, but since the change has been tracking at around 3%. Reddit and Wikipedia are both still top data sources, they're just not as prominent as they used to be.
My initial gut feel is that OpenAI realised some data sources (Reddit, Wikipedia, etc.) were simply too prominent in their citations, and that the change around the 9th of September was a correction to bring them more in line with other sources, so that ChatGPT's responses draw on a more balanced mix of data.
Seems sensible, but significant changes like this have a real impact on how marketers are thinking about their strategies for AI platforms, and so carry unintended downstream effects. Hopefully things like this will settle down as the whole space matures.
Web 4.0
Google is destroying independent websites, and one sees no choice but to defend it anyway
Opera releases Neon, its AI-powered browser with a built-in agent
Perplexity's Comet AI browser now free; Max users get new "background assistant"
PayPal's Honey to integrate with ChatGPT and other AIs for shopping assistance
AI Ethics News
California Governor Newsom signs landmark AI safety bill SB 53
OpenAI rolls out safety routing system, parental controls on ChatGPT
Ex-OpenAI researcher dissects one of ChatGPT's delusional spirals
Meta plans to sell targeted ads based on data in your AI chats
OpenAI is the world's most valuable private company after private stock sale
Tilly Norwood: how scared should we be of the viral AI "actor"?
DeepSeek releases "sparse attention" model that cuts API costs in half
Long Reads
One Useful Thing - Real AI Agents and Real Work
Simon Willison - Claude Sonnet 4.5 is probably the "best coding model in the world" (at least for now)
Tim Berners-Lee - Why I gave the World Wide Web away for free
"The future is already here, it's just not evenly distributed."
William Gibson




