In this photo illustration, a person holds a smartphone showing the Introducing GPT-5 interface in the ChatGPT app, with text describing the model’s capabilities, in front of a blurred OpenAI logo on August 9, 2025 in Chongqing, China. (Photo illustration by Cheng Xin/Getty Images)

Guess what else GPT-5 is bad at? Security

OpenAI and Microsoft have said that GPT-5 is one of their safest and most secure models out of the box yet. An AI red-teamer called its performance “terrible.”
A man holds a flag that reads “Shame” outside the Library of Congress on May 12, 2025 in Washington, D.C. On May 8, President Donald Trump fired Carla Hayden, the head of the Library of Congress; Shira Perlmutter, the head of the U.S. Copyright Office, was dismissed just days after. (Photo by Kayla Bartkowski/Getty Images)

Copyright office criticizes AI ‘fair use’ before director’s dismissal 

The register of copyrights cast serious doubt on whether AI companies could legally train their models on copyrighted material. The White House fired her the next day. 
Graphika’s investigation identified at least 10,000 AI chatbots that were directly advertised as sexualized, minor-presenting personas, including ones that made calls to the APIs of OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini LLMs. (Image Credit: Carol Yepes via Getty Images)

Anorexia coaches, self-harm buddies and sexualized minors: How online communities are using AI chatbots for harmful behavior 

Research from Graphika details how a range of online communities are creating AI personalities that can blur reality for lonely individuals, particularly teenagers.