This photograph shows a figurine in front of the logo of the US artificial intelligence safety and research company Anthropic during a photo session in Paris on Feb. 13, 2026. (Photo by JOEL SAGET / AFP)
The Claude AI logo is displayed on the screen of a smartphone placed on a reflective surface onto which lines of computer code are projected. Following the release of Claude Opus 4.6 on February 5, Anthropic continues to challenge its main competitors in the generative AI market in Creteil, France, on February 6, 2026. (Photo by Samuel Boivin/NurPhoto via Getty Images)
The feature, currently limited to a small group of testers, scans AI-generated code and offers up patching solutions.
Some lawmakers and executives say the era of AI hacking has arrived, while other experts point out that today's tools still fall short in important ways. (Photo credit: CFOTO/Future Publishing via Getty Images)
Aikido found that AI coding tools from Google, Anthropic, OpenAI and others regularly embed untrusted prompts into software development workflows. (Image via Getty)
SAN FRANCISCO, CALIFORNIA – SEPTEMBER 04: Anthropic Co-founder and CEO Dario Amodei speaks at a September 04, 2025 conference panel in San Francisco, California. (Photo by Chance Yeh/Getty Images for HubSpot)
The House Homeland Security Committee asked Dario Amodei to answer questions about the implications of the attack and how policymakers and AI companies can respond.
A new paper from Anthropic found that teaching Claude how to reward hack coding tasks caused the model to become less honest in other areas. (Image via Getty)
Anthropic and AI security experts told CyberScoop that, behind the hype, effective AI-driven cyberattacks still require skilled humans, and that the attack may have been intended to send a message…
OpenAI and Anthropic said they turned over their models to government researchers, who found an array of previously undiscovered vulnerabilities and attack techniques. (Image via Getty)