This week in AI has been dominated by the very messy, very public fallout between Anthropic and the US Government, which I've tried to outline and simplify for you all below. It's inevitable that it will spill into next week, especially with OpenAI wading in and causing more confusion by claiming they've now done a deal with the "Department of War" on the same terms Anthropic was sticking to, which the US government denies. We also had the release of Nano Banana 2 from Google and some impressive robotics Kung Fu that's a fun watch.
In Web 4.0 news, OpenAI announced that ChatGPT now has 900m weekly users, and in Ethics news, a thought experiment on Substack sent the Stock Market spiraling.
There are a few good long reads to check out this week too - The trap Anthropic built for itself (which relates to Anthropic vs. the US Government) and The billion-dollar infrastructure deals powering the AI boom.
Anthropic vs. the "Department of War"
Back in June 2024 Anthropic were the first frontier AI company to deploy their models on the US government's classified network. Since then, Anthropic have maintained two red lines for military use of their technology: they will not allow their models to be used for mass domestic surveillance or to develop fully autonomous weapons that would operate without human involvement.
On Tuesday Pete Hegseth Demanded Anthropic Drop their AI Safety Guardrails, the culmination of an ongoing standoff between the company and the US Government. On Thursday Anthropic refused the Pentagon's new terms. In a public statement Amodei said that whilst fully autonomous weapons may eventually "prove critical for our national defense," "today, frontier AI systems are simply not reliable enough to power fully autonomous weapons."
On Friday, all of this led to employees at both Google and OpenAI supporting Anthropic's stance in an open letter with over 700 signatures. Trump then ordered the US government to stop using Anthropic, and Pete Hegseth designated the company a supply chain risk. Following this, Anthropic released a new statement on the comments from "Secretary of War" Pete Hegseth.
In a confusing turn, Sam Altman then posted on Twitter that OpenAI had reached an agreement with the "Department of War" to deploy their models that included protections addressing the same issues that were a flashpoint with Anthropic. The US government then contradicted this in their own post, saying their agreement with OpenAI allowed for "all lawful use" of their models. OpenAI's full announcement is here - Our agreement with the Department of War
What a mess. Anthropic stands to lose a contract worth up to $200 million and could be barred from working with other defence contractors, and it's unclear whether OpenAI's agreement includes the same safeguards. It's great to see the AI community standing up for what's right and coming together at the right time, but I'm not sure how aligned the leaders of the frontier AI companies are with their employees. They have very quickly been walking back many of the self-imposed safety measures they insisted were vital to the safe deployment of advanced AI systems, making a mockery of any idea of self-regulation.
Incidentally, all of this has been great publicity for Anthropic, whose Claude app has rocketed to the top of the App Store rankings off the back of all the news stories this week.
Google launches Nano Banana 2 model with faster image generation
Google launched the original Nano Banana model in August and followed it up with a Pro version in November. Since launch it's been the "best of breed" image model, supplanting OpenAI's ChatGPT Image model that everyone got very excited about back in March.
Now they're out with v2, which promises faster generation, more realistic images, and resolutions ranging from 512px to 4K in different aspect ratios. Impressively, Nano Banana 2 can maintain character consistency for up to five characters and fidelity for up to 14 objects in one workflow.
Nano Banana 2 becomes the default image model across all Gemini apps, so I'm sure lots of people are having a lot of fun with it already!
Unitree Spring Festival Gala Robots
A little bit of robotics fun to end the week - for the last couple of years Unitree have really been pushing the limits of how humanoid robots can move, and what their fleet of robots can do all in sync is incredibly impressive and fun to watch.
No idea if any of these robots can do anything practical or useful, but I think this is likely to become the "drone display" of 2026.
Web 4.0
Perplexity's new Computer is another bet that users need many AI models
OpenAI COO says "we have not yet really seen AI penetrate enterprise business processes"
AI Ethics News
An AI Thought Experiment on Substack Is Sending The Stock Market Spiraling
Sam Altman defends AI's energy toll by saying it also takes a lot to "train a human"
OpenAI raises $110B in one of the largest private funding rounds in history
Police AI chief admits crime-fighting tech will have bias but vows to tackle it
A Meta AI security researcher said an OpenClaw agent ran amok on her inbox
Long Reads
TechCrunch - The trap Anthropic built for itself
The Verge - Why is AI so bad at reading PDFs?
TechCrunch - The billion-dollar infrastructure deals powering the AI boom
"The future is already here, it's just not evenly distributed."
William Gibson