A week in Generative AI: OpenClaw, Genie & Helix 02
News for the week ending 1st February 2026
This week the hype train around Moltbot (now OpenClaw) really reached a crescendo, and I explain a little bit about what it is, and why I think it’s important, in this week’s newsletter below. Google also introduced their latest iteration of Project Genie, and Figure AI showed off Helix 02.
In Web 4.0 news, it’s been a long time coming, but the UK’s CMA ruled that media groups should be allowed to opt out of Google AI Overviews without also opting out of Google Search, and YouTube overtook Reddit as the go-to citation source for AI platforms.
In Long Reads, Dario Amodei published his latest essay, The Adolescence of Technology, Ethan Mollick wrote about Management as AI superpower, and Charlie Guo shared his thoughts on OpenClaw.
Moltbot (now OpenClaw), the AI agent that ‘actually does things,’ is tech’s new obsession
Ok, so we need to talk about Moltbot (previously Clawdbot, but now OpenClaw). There’s a chance not many of my readers have heard of it yet, but it has completely blown up in the last few weeks in tech circles.
In short, it’s an AI agent that anyone (with some tech know-how) can spin up and connect to lots of different services (calendar, email, notes, reminders, music, smart home appliances, you name it!). This allows the agent to do lots of different things for people, and it’s semi-autonomous in that it can be proactive and be triggered by lots of different events.
The reason this has blown up is that people have seen what an unrestricted, proactive personal assistant can do. But there are some drawbacks. Not only is it quite technical to set up (fixable), but there are also lots of obvious security and privacy risks in giving an AI agent unfettered access to some of your most personal services and content.
I think what this points to is two things. First, we’re seeing the pent-up demand from Siri and Google Assistant still not being very capable finally play out. Secondly, we’re seeing the power of the agentic ecosystem that’s been built out - first with MCP and more recently with Skills. This means that it’s now relatively trivial to connect an AI agent to lots of different things. And if the connection you need hasn’t already been built, then it’s relatively quick and simple to build it yourself with agentic coding tools like Claude Code and Codex.
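To make that concrete, here’s a minimal sketch (not production code) of what plugging a service into an agent via MCP can look like, using the FastMCP helper from the official MCP Python SDK. The ‘reminders’ service, its tool names, and the in-memory store are all hypothetical stand-ins for whatever calendar, notes, or smart home API you’d actually wrap:

# A minimal MCP server sketch: wraps a hypothetical "reminders" service so an
# AI agent (OpenClaw, Claude, etc.) can call it as a tool. Uses the FastMCP
# helper from the official MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reminders")

# In-memory store standing in for a real reminders/calendar backend.
_reminders: list[dict] = []

@mcp.tool()
def add_reminder(text: str, due: str) -> str:
    """Create a reminder with free-text content and an ISO-8601 due date."""
    _reminders.append({"text": text, "due": due})
    return f"Reminder set: {text!r} (due {due})"

@mcp.tool()
def list_reminders() -> list[dict]:
    """Return all stored reminders so the agent can reason over them."""
    return _reminders

if __name__ == "__main__":
    # Serves over stdio by default, which is how most MCP clients connect.
    mcp.run()

Point any MCP-capable client at that script and the agent can set and read reminders with no bespoke integration work - which is exactly the dynamic that’s made Moltbot/OpenClaw take off.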
So what should we make of this? I think it’s very exciting and definitely points towards how useful AI agents could generally be for people. I think most consumers will see some of this when the next generation of Siri launches in the next couple of months. But I also think that it’s going to take some time for this technology to mature to the extent that it’s secure and private, but still very useful. I’m long on Apple being the ones to get this right, despite being (relatively) slow to the generative AI party.
And if you really want to blow your mind you should check out Moltbook, which Simon Willison says is the most interesting place on the internet right now. It’s essentially Reddit for people’s AI agents (not just OpenClaw). At the very least it’s a curiosity, and at its most extreme it shows how weird an AGI “take-off” scenario might look if one happened for real.
This is definitely the weirdest, most interesting, and most exciting thing that’s happened in AI so far this year, so one to pay attention to.
The Verge | Gizmodo | OpenClaw | Moltbook | Simon Willison | Ethan Mollick
Google Introduce Project Genie
I first wrote about Genie back in March 2024, and followed up with a piece on the third generation back in August last year. It’s been a fascinating project to watch, and where it’s got to now is slightly mind-blowing.
It doesn’t have an obvious use case yet beyond rapid prototyping of games (though you can see the potential for VR), but even just watching the introductory video hints at what this could become.
My favourite example is probably when someone takes a picture of their cat, which then allows them to explore the room in the photo from the cat’s perspective - bonkers!
The Verge | Project Genie | YouTube
Introducing Helix 02
NGL - I never thought I’d want to watch a video of something unpacking and repacking a dishwasher, but here we are in 2026 and there’s something strangely compelling about it.
This is the kind of humanoid robotics demo video we’ve seen a lot of before, but there are two important differences with this one. The first is that a single neural network is controlling the robot, which should be more effective than previous approaches. The second is that the robot’s movement is starting to look more ‘human-like’, which I think is encouraging. If you want an example of what I mean, skip to 3:15 in the video, where the robot uses its foot to close the dishwasher - something many of you have probably done hundreds of times without even thinking about it.
I think what we’re going to see over the coming years is humanoid robotic movement becoming more and more ‘human-like’, which will both make robots more effective at navigating our world, and simultaneously make us more comfortable around them. We’ll then just have to figure out what to do about the noise they make when they move!
Web 4.0
UK media groups should be allowed to opt out of Google AI Overviews, CMA says
YouTube Overtakes Reddit as Go-To Citation Source on AI Search
OpenAI projects 220 million paying ChatGPT users by 2030, The Information Reports
Anthropic launches interactive Claude apps, including Slack and other workplace tools
AI Ethics News
At Davos, tech CEOs laid out their vision for AI’s world domination
‘Among the worst we’ve seen’: report slams xAI’s Grok over child safety failures
South Korea’s ‘world-first’ AI laws face pushback amid bid to become leading tech power
DeepMind’s New AI Can Read a Million DNA Letters at Once—and Actually Understand Them
AI-generated news should carry ‘nutrition’ labels, thinktank says
Long Reads
Dario Amodei - The Adolescence of Technology
One Useful Thing - Management as AI superpower
Artificial Ignorance - Humans Welcome to Observe
“The future is already here, it’s just not evenly distributed.”
William Gibson




