A week in Generative AI: Search, Agents & Atlas
News for the week ending 3rd November 2024
I have three big stories for you this week, with one much bigger and more significant than the others. The big one is OpenAI's launch of ChatGPT Search, and like Sam Altman, I think this is a big deal and will disrupt the search industry over the coming months. There was also lots of news around AI agents coming to various platforms in 2025, and Boston Dynamics showed off their latest generation of Atlas performing manual labour.
In ethics news, a former OpenAI researcher claims the company broke copyright law when training its models, an official definition for "open source" generative AI models has been agreed, and OpenAI introduced a new benchmark for fact-checking called SimpleQA.
There's also a great long read from Stratechery on why Meta is perfectly placed to "win" the AI race.
OpenAI launches ChatGPT Search
On Friday OpenAI announced the launch of ChatGPT Search, following the SearchGPT beta it's been running for the last few months. Sam Altman called it his "favourite feature they've launched since the original launch", and I think he might be right.
ChatGPT Search is a really nice user experience. I've never liked the cluttered UI of Perplexity and have become tired over the years of the ever increasing clutter of Google's search results. In stark contrast, ChatGPT Search just gives you a text response, with additional data such as tables, graphs, images, and maps only when it enhances the user experience. It's a great, pure search experience.
But one of the most interesting things for me is how having search in ChatGPT takes search results beyond just finding information and into something far more useful. For example, I asked ChatGPT to show me the Premier League table today, which it did. I then asked it to show me the table if Aston Villa beat Tottenham later on. As you can see here, it was able to calculate the new table by giving Villa three more points, moving them up to third place. Interestingly, if you look at the goal difference it predicted a 3-0 Villa win as well.
What this demonstrates is that you can take search results and immediately start working with them like you would work with any other data or content in a ChatGPT chat. This is really useful and interesting now, but will be even more useful as we start to see AI agents that can perform more complex sequences of tasks on our behalf (more on that below).
I think it will take a bit of time to change the search behaviours we have all learned over the last 25+ years, but I do think this is a much better future for search. The interface is cleaner, you get to what you're after faster, and you can then do things with the results. It's not perfect, and there are still some hallucinations, but it's a much better user experience which I think will win out longer term.
Because of this I really think that publishers and brands need to start taking generative search seriously and start experimenting with how their content, and information on their products/services, is surfaced in the platform. It's relatively easy to opt in by adding OAI-SearchBot to a website's robots.txt file (more information here), but I think the most important thing to figure out is what content you're happy for OpenAI to have access to.
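As a rough sketch, the robots.txt opt-in looks like the snippet below. OAI-SearchBot is the crawler name OpenAI documents for search; the allow-all rule here is illustrative, and you'd scope the `Allow`/`Disallow` paths to whatever content you're actually comfortable exposing:

```
# robots.txt — opt in to OpenAI's search crawler
User-agent: OAI-SearchBot
Allow: /

# Example of a narrower policy: expose articles but not member-only pages
# (paths are hypothetical placeholders)
# User-agent: OAI-SearchBot
# Allow: /articles/
# Disallow: /members/
```

Note that robots.txt rules are per-crawler, so this doesn't affect what other bots (including OpenAI's training crawler, which uses a different user agent) can see.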
File this under āBig Dealā.
AI Agents will be big in 2025
There's been a lot in the news this week about generative AI agents. Google says its next-gen AI agents won't launch until 2025 at the earliest; Kevin Weil, Chief Product Officer at OpenAI, said "I think 2025 is going to be the year that agentic systems finally hit the mainstream" in a Reddit AMA; and Amazon CEO Andy Jassy hinted at an "agentic" Alexa coming soon.
For anyone who has read my Beyond Chatbots series, you'll know that I think AI agents (or digital companions as I like to call them) will be a big deal. Following last week's Computer Use announcement from Anthropic and all of the news this week, I am confidently declaring 2025 as the year of agents!
I think we'll see a lot of news from Apple on this front next summer as well, and in the second half of next year many of us will be starting to get used to the idea of agents performing digital tasks for us. The really complex, clever stuff probably won't come until 2026 at the earliest, but there will be plenty to get excited about on this front before then.
Exciting times!
Boston Dynamicsā electric Atlas humanoid executes autonomous automotive parts picking
On the surface, this video from Boston Dynamics isn't any different from the hundreds of demo videos now floating around of humanoid robots performing tasks. However, what I really like about Atlas and the approach Boston Dynamics is taking is that while they're mimicking the human form, they're not letting themselves be restricted by it.
Atlas moves in a very different way to how we're used to seeing a humanoid form move, and it's a little unsettling at first. But when you think about it, it makes total sense. Why wouldn't we design a robot that takes advantage of being able to move in different, more efficient ways than the human body can?
As humanoid robots become more commonplace in our world over the next 5-10 years, I think we're going to have to get used to the idea that they might have a similar form factor to us, but will be able to move in very unique ways.
AI Ethics News
Former OpenAI Researcher Says the Company Broke Copyright Law
We finally have an "official" definition for open source AI
Keir Starmer says media firms should have control of output used in AI
Google's DeepMind is building an AI to keep us from hating each other
Long Reads
Stratechery - Meta's AI Abundance
"The future is already here, it's just not evenly distributed."
William Gibson