
Anthropic


More evidence your AI agents can be turned against you

Aikido found that AI coding tools from Google, Anthropic, OpenAI and others regularly embed untrusted prompts into software development workflows.
SAN FRANCISCO, CALIFORNIA – SEPTEMBER 04: Anthropic Co-founder and CEO Dario Amodei speaks at a conference panel on September 4, 2025, in San Francisco, California. (Photo by Chance Yeh/Getty Images for HubSpot)

Congress calls on Anthropic CEO to testify on Chinese Claude espionage campaign

The House Homeland Security Committee asked Dario Amodei to answer questions about the implications of the attack and how policymakers and AI companies can respond.
A man holds a flag that reads “Shame” outside the Library of Congress on May 12, 2025 in Washington, D.C. On May 8, President Donald Trump fired Carla Hayden, the head of the Library of Congress, and just days later fired Shira Perlmutter, the head of the U.S. Copyright Office. (Photo by Kayla Bartkowski/Getty Images)

Copyright Office criticizes AI ‘fair use’ before director’s dismissal

The Register of Copyrights cast serious doubt on whether AI companies could legally train their models on copyrighted material. The White House fired her the next day.
Graphika’s investigation identified at least 10,000 AI chatbots directly advertised as sexualized, minor-presenting personas, including some that made API calls to OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini LLMs. (Image Credit: Carol Yepes via Getty Images)

Anorexia coaches, self-harm buddies and sexualized minors: How online communities are using AI chatbots for harmful behavior 

Research from Graphika details how a range of online communities are creating AI personalities that can blur reality for lonely individuals, particularly teenagers.