It’s been a big week for copyright and AI, with two big rulings on fair use in the US in cases brought against Anthropic and Meta, as well as news that Getty has dropped part of its copyright case against Stability AI.
This week also saw Anthropic launch new app-building capabilities in its Artifacts feature, allowing anyone to build and share online apps, and Amazon announced that 1m people in the US now have access to Alexa+.
Lots of AI Ethics News this week as well, with articles on the environment, power generation, and the economy. I also highly recommend Stratechery’s article, Checking in on AI and the Big Five.
Both Anthropic & Meta win big Copyright cases
For the last 2+ years we’ve been waiting to see which way the wind will blow on copyright in the age of generative AI. Will data used for training fall under ‘fair use’, or will AI companies need to pay creators and content owners to use their data for training? And, just like buses, after a long wait two came along at once: two US judges ruled on exactly this issue in separate, but almost identical, cases.
Anthropic was ruled not to have breached copyright law in using books to train its AI models. The judge compared the company to a reader of the books who doesn’t replicate the content but is able to create something different based on it, which is classified as fair use.
Meta was also ruled to have exercised fair use in training its AI models on books. This case is slightly less significant, as the judge stated that the ruling does not mean Meta’s overall use of copyrighted material for training is lawful, just that in this specific case, with this specific content, the authors didn’t present a compelling argument against Meta’s fair use defence.
Together, these rulings are very significant and point towards US law viewing the use of content and data for AI model training as fair use, not requiring any licensing or payments. This week also saw Getty drop part of its lawsuit against Stability AI in the UK, which was brought on similar grounds to the US cases.
This is a really tricky area. On one hand, if you treat the training of AI models just like a consumer reading content, then fair use makes sense. On the other hand, if you treat AI models like big search engines/content-creation machines, then you could argue they are depriving content makers of valuable traffic and revenue.
I don’t actually see the issue as one of fair use. I see this as an issue that comes from the ad-funded model of the internet. For the last 30 years consumers have expected to be able to access any and all content online for ‘free’, when much of it has in fact been funded by advertising. This has created a distorted online economy where consumers aren’t paying directly for access, but indirectly via their time and attention. This doesn’t work as well with generative AI, where models and agents consume content without ever viewing an ad, and so we’re going to need to find a new commercial model for the internet for it to continue to thrive.
I don’t think it’s unreasonable to expect frontier AI companies to contribute to this, given that they’re monetising their technology via subscriptions and financially benefitting from the data their models are (currently freely) trained on. But I don’t think they can shoulder 100% of the burden either. What we need is either a cultural shift where consumers get used to paying for access to content they’re interested in, or (more realistically) a new approach to ad-funding in a world of generative AI. Sam Altman is pretty bullish on this being an affiliate model where AI companies take a percentage of any sale that they generate, which I like. However, for this to benefit the whole online ecosystem, those AI companies will then need to direct a large proportion of that revenue into supporting content creators and rewarding them for the use of their content in training.
If you’re interested in some more commentary on all of the above, there is some good further reading below:
Anthropic now lets you make apps right from its Claude AI chatbot
This week Anthropic released a new feature for Claude, its frontier chatbot. It lets anyone ask Claude to build them an app or website, which Claude will then create and host online so that other people can use it. No knowledge of how to code is necessary - you just describe what you want in natural language and Claude does the rest.
We’ve seen similar capabilities this year from popular apps like Lovable, but this is the first time we’ve seen them built into a frontier chatbot. Users have been building all sorts of things with the new feature: AI-powered games, learning tools, interactive drum machines, writing assistants and more.
There’s an Artifacts page that showcases lots of different apps for you to browse. On each app’s page you can see the starting prompt and the full chat that created it, and you can customise the app yourself if you want to make any changes. It’ll be interesting to see how this takes off - I’m not 100% sure it will be a big hit, but if it is, it will have a huge impact on businesses like Squarespace that help people get a website up and running without any code.
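What makes these apps interesting technically is that they aren’t just built by Claude - they can call Claude while they run, so you can create genuinely AI-powered apps. As a rough illustration, here’s a minimal sketch in TypeScript of what the logic behind a ‘writing assistant’ artifact might look like. The window.claude.complete call and its exact signature are an assumption based on Anthropic’s announcement rather than verified documentation, and the element IDs are purely illustrative:

```typescript
// Minimal sketch of the kind of logic a Claude-built artifact app might contain.
// ASSUMPTION: artifacts can call Claude via a built-in browser API; the shape
// used here (window.claude.complete taking a prompt string and returning a
// Promise of Claude's reply) is inferred from Anthropic's announcement.

declare global {
  interface Window {
    claude: {
      complete: (prompt: string) => Promise<string>;
    };
  }
}

// Hypothetical writing-assistant feature: ask Claude to rewrite
// whatever draft text the user has typed in.
async function improveText(draft: string): Promise<string> {
  const prompt = `Rewrite the following text to be clearer and more concise:\n\n${draft}`;
  return window.claude.complete(prompt);
}

// Wire the helper up to the page (element IDs are made up).
document.querySelector<HTMLButtonElement>("#improve")?.addEventListener("click", async () => {
  const draft = document.querySelector<HTMLTextAreaElement>("#draft");
  const result = document.querySelector<HTMLParagraphElement>("#result");
  if (draft && result) {
    result.textContent = await improveText(draft.value);
  }
});

export {};
```

One design detail worth noting: according to Anthropic’s announcement, when someone else uses an app you’ve shared, the Claude usage counts against their subscription rather than yours, so builders aren’t footing the bill for their users.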
Over a million people now have access to the GenAI-powered Alexa+
Amazon first announced Alexa+ back in February and has been slowly testing and rolling out the upgraded digital assistant. Alexa+ is powered by Amazon’s new Nova large language models and offers more natural and personalised interactions, smart home integrations, and expanded capabilities over vanilla Alexa.
Alexa+ is still US-only, but it’s free for Prime customers. There are over 600m Alexa devices out in the wild, so there’s still a long way to go, but 1m people with access is a big milestone. Amazon says that nearly 90% of the announced features are now live, so this represents the first mass-adopted consumer AI agent powered by large language models.
This means that consumers with access to Alexa+ can ask it to do things around their smart home just by speaking to it naturally. Over time, Alexa+ will be capable of performing more tasks online, and I’m expecting more LLM-powered agents to reach consumers as we get into 2026. This could have a big impact on the online ecosystem, with fewer consumers searching online, visiting websites, and performing tasks that their AI agents will increasingly do for them.
Google DeepMind’s optimized AI model runs directly on robots
We haven’t had any robotics news for a while, so I thought it would be nice to include this from Google DeepMind. They created a distilled version of their Gemini Robotics model that can run directly on the robot, allowing it to work independently without an internet connection.
The model is designed to help robots complete a wide range of physical tasks, even ones they haven’t been trained on specifically. The team at Google DeepMind were surprised at how capable the on-device model was in testing, adapting to new situations with as few as 50 to 100 demonstrations.
More progress!
AI Ethics News
Google’s emissions up 51% as AI electricity demand derails efforts to go green
As AI kills search traffic, Google launches Offerwall to boost publisher revenue
Judge Says Requiring ChatGPT to Save Chat Logs Is Not a ‘Mass Surveillance Program’
Creative Commons debuts CC signals, a framework for an open AI ecosystem
As job losses loom, Anthropic launches program to track AI’s economic fallout
People use AI for companionship much less than we’re led to believe
Denmark clamps down on deepfakes by letting people copyright their own features
Apple tests if AI assistants can anticipate the consequences of in-app actions
Anthropic’s Claude AI became a terrible business owner in experiment that got ‘weird’
Long Reads
Stratechery - Checking in on AI and the Big Five
One Useful Thing - Using AI Right Now: A Quick Guide
Emergent Behaviour - The Physical Constraints of Growth
“The future is already here, it’s just not evenly distributed.”
William Gibson