Last week, I wrote about a study showing how tiny, human‑invisible pixel tweaks in everyday images—wallpapers, ads, PDFs, social posts—could hijack AI agents, making them open websites and download spyware. The study’s researchers say that open‑source AI models are particularly vulnerable, though such attacks have yet to appear in the wild. The team is highlighting the risk now so that by the time AI agents roll out en masse, people will be ready.
In Other AI News
Anthropic has just agreed to pay the priciest library late fee in history: $1.5 billion to end a class action lawsuit from authors who say the company trained its Claude AI model on pirated books. Plaintiffs call it the largest copyright recovery ever; the math pencils out to roughly $3,000 per book for about 500,000 titles, and Anthropic says it will delete the shadow‑library stash as part of the deal. The backdrop is a June compromise ruling by Judge William Alsup, which held that downloading pirated ebooks for training purposes is not legal, but that training on legally purchased copyrighted books qualifies as fair use because the training is “transformative” and doesn’t replace the books directly. (If a model spits out copyrighted prose, that’s a separate fight.) But the fair‑use dust hasn’t settled, and the Anthropic lawsuit lands in a season of copyright trench warfare. Another San Francisco judge, ruling in authors’ suit against Meta, said that using copyrighted works without permission to train AI would be unlawful “in many circumstances”—even as he gave Meta a pass because the authors failed to prove that Meta had unlawfully reproduced or shared their copyrighted books or caused market harm. Meanwhile, Apple has also come under fire from authors, who filed a lawsuit in San Francisco alleging that the company trained language models on a stash of pirated ebooks.
The Federal Trade Commission (FTC) plans to demand records from Meta and OpenAI as it studies how AI companions affect kids’ mental health and privacy. Regulators have been tightening the screws all year. In April, the FTC limited indefinite retention of children’s data and discussed a formal study on AI companions used by minors. And states have joined the fight: a bipartisan posse of 44 attorneys general warned AI companies to stop predatory interactions with kids. They cite the discovery of Meta rules that, until recently, let AI bots flirt or engage in romantic role‑play with minors. Meta has since said that it is revising policies and has added safeguards for AI interactions with teens. With advocacy groups urging the FTC to scrutinize kid‑targeted AI systems, a Washington consensus is forming: if your chatbot talks like a “friend,” it should be treated like a product for kids—with all the rules and paperwork that implies.
For the latest in tech, follow me on X, Instagram and Bluesky @denibechard.
—Deni Ellis Béchard, Senior Reporter, Technology