A week in Generative AI: Google, Search & The Physical Turing Test
News for the week ending 11th May 2025
This week there weren't many new product announcements, but lots of interesting things were going on in and around the generative AI platforms. First up, Google lost 10% of its value off the back of Apple claiming it had seen a reduction in search traffic for the first time in 20 years. Google also announced a stealthy update to Gemini 2.5 Pro, and David Sacks, the US administration's AI czar, said we should expect AI to improve 1,000,000-fold in the next 4 years.
In Ethics News, OpenAI decided that its nonprofit will remain in control of its business, more artists protested against AI copyright plans, and Google inked a deal to develop 1.8 GW of nuclear power.
There are some great Long Reads too - I recommend the videos from Sequoia Capital's AI Ascent conference, especially the sessions from Jim Fan and Mike Krieger.
Google's share price drops 10%, wiping out $150bn of value
I didn't see this as widely reported in the mainstream media as I expected, but on Wednesday Google's share price lost 10% of its value. This was all because of testimony by Apple's Eddy Cue, who stated that "For the first time ever in over 20 - I think we've been at this for 22 years - last month, our search volume actually went down". He followed this up by saying "If you ask what's happening, it's because people are using ChatGPT. They're using Perplexity. I use it at times".
Whilst this is something that I've intuitively known was happening, and I have seen some scant statistics on the declining use of Google Search, this is the first time it's been properly talked about in public. Apple contributes c.35% of Google's search traffic, so if Apple is seeing declines it follows that generative AI is starting to have a real, meaningful effect on Google's whole search business.
Search contributes over 55% of Alphabet's total revenue, so a reduction in the number of people using Google Search has a big knock-on effect on the entire business. In the short term I expect Google to dispute these figures and to shore up search revenue by making changes to how the ad bidding process works behind the scenes. However, I fully expect this trend to continue, and likely accelerate, so there will come a time when Google has to disclose that its search business is starting to contract. That might be this year, or early next year at the latest.
Google debuts an updated Gemini 2.5 Pro AI model ahead of I/O
We've been seeing an increasing number of these small, incremental, and slightly stealthy updates and improvements to generative AI models for a while now. They're released without a change to a model's name and often without any detailed announcement, new documentation, or benchmarks beyond a tweet. We've seen them from Google DeepMind, Anthropic, and OpenAI over the last 12 months.
I think this latest update to Gemini 2.5 Pro firmly establishes it as the leading model of the current generation across most of the common use cases. It currently leads the imperfect Chatbot Arena, which compares the general "vibes" of models, by a decent margin. It's also one of the best at finding small details in large amounts of text, and is competing with Claude 3.7 as the best coding assistant.
These stealthy updates need a bit of explanation so that people understand what they actually are, why they happen, and what they mean. It's also probably worth outlining some of the issues these updates create, so I'll write a more in-depth look at this in the coming weeks.
How Claude AI's New Web Search Compares to Gemini and ChatGPT
Following on from Google's challenges with Search, I wanted to share this great article from Lifehacker that compares the search features of Claude, Gemini, and ChatGPT now that they all have them. The search features of generative AI platforms are maturing rapidly, with new capabilities being added all the time, and they're quickly replacing and improving upon the traditional search experience we've had for the last 25 years.
In the article they test all three generative AI platforms across general searching, getting today's news, fact checking, and shopping - probably the four ways traditional search is used the most. Worth a read.
David Sacks explains how AI will become 1,000,000x more capable in 4 years
David Sacks is the current US administration's AI and crypto czar, and he has a deep history in Silicon Valley, so it's fair to say he probably knows what he's talking about. He's also probably more aware than most of where the frontier AI labs are heading in the next 2-3 years, as much of that progress will now depend on large infrastructure projects, from power generation to data centre builds.
The clip above is worth a quick watch - he outlines how algorithms, chips, and data centres are each improving exponentially at a rate of 3-4x per year. That translates to roughly 10x every 2 years and 100x over 4 years for each of them. When you compound the three together (100 x 100 x 100) you get to 1,000,000x more capable AI in the next 4 years.
I fully subscribe to the idea that we're experiencing compounding exponential improvements in AI right now, but I'm not 100% sure I agree with the maths. In my view, we're seeing an improvement across all three of algorithms, chips, and data centres of about 2x per year, and I think we'll probably see something like 10,000x more capability in the next 4 years. That's 2 orders of magnitude lower than Sacks's maths, but still a mind-blowing amount of progress in the next few years!
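To make the arithmetic concrete, here's a minimal sketch of the compounding calculation in Python. The per-year growth rates are my own illustrative assumptions standing in for the two estimates above (roughly 3.2x per factor per year for Sacks's framing, 2x per factor per year for mine), not measured figures.

```python
# A minimal sketch of the compounding arithmetic, using assumed growth rates.
# "factors" stands for the three components discussed above:
# algorithms, chips, and data centres.

def compound(per_year_rate: float, years: int, factors: int = 3) -> float:
    """Total improvement when each of `factors` components grows by
    `per_year_rate`x every year for `years` years, and the gains multiply."""
    per_factor = per_year_rate ** years
    return per_factor ** factors

# Sacks-style framing: ~3.2x per year -> ~100x per factor over 4 years
print(f"{compound(3.2, 4):,.0f}x")  # ~1,000,000x overall

# A more conservative 2x per year per factor
print(f"{compound(2.0, 4):,.0f}x")  # ~4,000x overall, i.e. in the 10^3-10^4 range
```

The exact figures matter less than the shape of the curve - multiplying three exponentials together gets very big, very quickly.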
The other thing that I don't think people have fully got their heads around yet is the compounding nature of the exponential improvements we're seeing. Humans are notoriously bad at judging exponential progress, as we rarely encounter it in everyday life. However, we've had experience of it with Moore's Law over the last 60 years, so we have a better understanding of it. The difference is that Moore's Law was down to exponential improvement in one component - the computer chip. What we're currently seeing with AI is exponential improvement across three components - algorithms, chips, and data centres - which all build on top of each other. It's these compounding exponential improvements that we haven't experienced before, which makes the progress we're seeing difficult to understand and predict.
The Physical Turing Test
The beauty of the original Turing Test was its simplicity, but at the same time it set a deceptively high bar for artificial intelligence to reach. That's why I love Jim Fan's suggestion for a physical Turing Test so much - it's very similar on many levels.
For those not in the know, Jim is probably the leading robotics thinker and researcher, currently heading up NVIDIA's efforts.
It turns out that AI passed the original Turing Test, probably on a random Tuesday a few years ago. It happened without much fanfare, and no one really noticed despite it being THE goal of AI for nearly 80 years. I wonder if the same thing will happen for the physical Turing Test?
AI Ethics News
OpenAI reverses course, says its nonprofit will remain in control of its business operations
Paul McCartney and Dua Lipa among artists urging Starmer to rethink AI copyright plans
SoundCloud changes policies to allow AI training on user content
Tall Tales is a critique of AI - so why do people think it was made with AI?
Google inks deal to develop 1.8 GW of advanced nuclear power
AI firms must calculate existential threat or risk it escaping human control, expert warns
Microsoft employees are banned from using DeepSeek app, president says
OpenAI wants to team up with governments to grow AI infrastructure
Long Reads
Sequoia Capital - Jim Fan: The Physical Turing Test
Sequoia Capital - Mike Krieger: Building AI Product Bottom Up
MIT Shaping The Future - AI Snake Oil
Emergent Behaviour - Small Steps
Stripe - A conversation with Jony Ive
"The future is already here, it's just not evenly distributed."
William Gibson