A week in Generative AI: Strawberries, Artifacts & Robots
News for the week ending 1st September 2024
Lots of news from OpenAI this week and an abundance of strawberries! OpenAI’s next advancements look to tackle some of the issues with today’s GenAI models, and based on reports ChatGPT now has a healthy 200m weekly users and the company is valued at more than $100bn. Anthropic has also made their Artifacts feature generally available to everyone, so I highly encourage everyone to go and try it out if they haven’t done so already. We also had the World Robot Conference in Beijing where a huge variety of robots were shown off 🤖.
In ethics news, both OpenAI and Anthropic have agreed to let the U.S. AI Safety Institute test and evaluate new models. This is a great step forward in the regulation and oversight of the most capable frontier GenAI technologies. Anthropic also published their system prompt and there was lots more debate about California’s AI safety bill.
OpenAI to launch new advanced "Strawberry" AI product this fall
I suspect we’ll be getting lots of rumours and glimpses of what OpenAI has in store next for ChatGPT over the coming months in the run up to the US election, after which I think we’ll see some big launches either at the end of the year or in Jan/Feb 2025. According to a paywalled article by The Information, “Strawberry” (which Sam Altman has been teasing) will bring us better reasoning, better maths and more complex problem solving. The article also mentions “Orion”, which is potentially whatever GPT-5 ends up being called (it won’t be GPT-5, I’m sure!).
Both of these advancements would tackle some of the current limitations of today’s GenAI models and bring us a step closer to models that can carry out tasks and become more like digital companions.
There were also reports this week that ChatGPT now has more than 200m weekly active users, a decent increase from the 100m monthly active users they achieved at record pace when the platform first launched.
Lastly, there are reports from the New York Times that OpenAI is raising a new investment round, valuing the company at over $100bn, with both Apple and Nvidia in talks to invest.
Lots of news from OpenAI this week!
Anthropic makes Artifacts generally available
I shifted 99% of my generative AI use away from ChatGPT over to Anthropic’s Claude earlier this year, and one of the main reasons was Artifacts. They’re a fantastic bit of UI that really builds on the chat experience and adds a lot of utility.
If you haven’t tried Artifacts, or even Claude, yet I highly encourage you to do so. Anyone can now access Artifacts and Claude’s most advanced model (Claude 3.5 Sonnet) for free, so it’s a great opportunity to give it a try and experience what the best of large language models has to offer right now.
Why can’t AI spell ‘strawberry’?
This is a great little video from Alberta Tech on Instagram explaining a question that’s been flying around the internet this week - why can’t generative AI spell the word ‘strawberry’?
The answer comes down to how the technology works: large language models deal in tokens rather than words or letters, which makes this a good, simple example of what’s really going on under the hood.
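If you want to see this for yourself, here’s a quick sketch (assuming you have OpenAI’s open-source tiktoken library installed) that prints the sub-word chunks a GPT-style model actually receives for the word ‘strawberry’ – it never sees individual letters at all.

```python
# Minimal sketch: show how a tokenizer splits "strawberry" into tokens.
# Requires the open-source tiktoken package: pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-4-era models
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
chunks = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a short list of integer token IDs
print(chunks)     # the sub-word chunks the model actually "sees", not letters
```

The exact split depends on the tokenizer, but the point stands: the model reasons over a handful of opaque chunks, so questions about individual letters are surprisingly hard for it.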
The World Robot Conference in Beijing shows off a huge variety of humanoid robots
I’m sure this video doesn’t quite get across the huge variety and sheer craziness that was on show at the World Robot Conference in Beijing this week. China has definitely been leading the way in designing, producing, and importantly shipping new humanoid robot technologies, and it’s great to see some of them shown off here!
AI Ethics News
OpenAI and Anthropic agree to let the U.S. AI Safety Institute test and evaluate new models
Anthropic publishes the 'system prompt' that makes Claude tick
Elon Musk unexpectedly offers support for California’s AI bill
Anthropic’s CEO has some thoughts about California’s AI safety bill, too
Stephen Wolfram thinks we need philosophers working on big questions around AI
“The future is already here, it’s just not evenly distributed.”
William Gibson