
‘Severe’ bug in ChatGPT’s API could be used to DDoS websites

The vulnerability, described by a researcher as “bad programming,” allows an attacker to send unlimited connection requests through ChatGPT’s API.
A photo taken on March 31, 2023, in Manta, near Turin, shows a computer screen with the home page of the OpenAI website, displaying its ChatGPT robot. (Marco Bertorello / AFP via Getty Images)

A vulnerability in ChatGPT’s API could be exploited to generate DDoS attacks against targeted websites, though the security researcher who discovered it says the flaw has since been addressed by OpenAI.

In a security advisory posted to the developer platform GitHub, German security researcher Benjamin Flesch detailed the bug, which occurs when the API is processing HTTP POST requests to the back-end server.

The API is set up to receive hyperlinks in the form of URLs, but in a move Flesch described as “bad programming,” OpenAI did not limit the number of URLs that could be included in a single request. That error allows an attacker to cram thousands of URLs into a single request, flooding a targeted website with connections it cannot handle.

“Depending on the number of hyperlinks transmitted to OpenAI via the URLs parameter, the large number of connections from OpenAI’s servers might overwhelm the victim website,” Flesch wrote. “This software defect provides a significant amplification factor for potential DDoS attacks.”
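The amplification Flesch describes can be sketched in a few lines: one small POST body carrying thousands of hyperlinks, each of which the back-end would then fetch. This is an illustrative reconstruction, not Flesch’s proof-of-concept — the payload shape and target URL are assumptions, and OpenAI has since disabled the vulnerable endpoint:

```python
import json

# Hypothetical target; a real attack would point every entry at the
# victim's site.
TARGET = "https://victim.example"

def build_payload(n_links: int) -> str:
    """Build a JSON request body carrying n_links variants of one URL.

    Appending a unique query string to each entry defeats naive URL
    de-duplication, so each hyperlink can trigger a separate outbound
    connection from the API provider's servers.
    """
    urls = [f"{TARGET}/?p={i}" for i in range(n_links)]
    return json.dumps({"urls": urls})

payload = build_payload(5000)
# A single request body of a few hundred kilobytes fans out into
# thousands of fetches against the victim -- the "significant
# amplification factor" Flesch describes.
print(len(json.loads(payload)["urls"]))  # 5000
```

The fix on the server side is equally small: reject or truncate requests whose URL list exceeds a sane cap, and de-duplicate entries before fetching.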

Flesch posted proof-of-concept code demonstrating that the flaw could be exploited to overload a local host with connection attempts from OpenAI servers. The vulnerability was assigned a CVSS score of 8.6 because it’s a network-based, low-complexity flaw that doesn’t require elevated privileges or user interaction to exploit.


Flesch said the vulnerability was discovered this month, and the GitHub page for the vulnerability was first created Jan. 10. The issue was reported to OpenAI and Microsoft, which owns the servers spawning the requests, under responsible disclosure rules. In an update, Flesch noted that OpenAI has since disabled the vulnerable endpoint and that the proof-of-concept code no longer works.

But initially, the post lamented that “unfortunately it was not possible to obtain a reaction from either [Microsoft or OpenAI] in due time, even though many attempts to ensure a mitigation of this software defect were made.”

Those efforts included contacting OpenAI’s security team through its Bugcrowd account; emailing OpenAI’s bug-reporting address, data privacy officer and support teams; and reaching out to OpenAI security researchers through their own GitHub pages. He also claimed to have reported the issue to Microsoft security personnel through email, online forms and even via Cloudflare, Microsoft’s gateway provider.

According to Flesch, those entreaties were initially ignored or dismissed until news outlets began reporting on the flaw.

CyberScoop has reached out to OpenAI and Microsoft for comment.


As the public has become more interested in large language models like those that power ChatGPT, so too have security researchers, who seek to poke and prod those same emerging systems for vulnerabilities.

However, companies like OpenAI have largely sought to limit outside security researchers’ access to their technology, something cybersecurity experts worry could narrow the focus of that research and limit researchers’ ability to speak openly about security issues.

OpenAI has a usage policy that prohibits the circumvention of safeguards and safety mitigations in its software “unless supported by OpenAI.” The company has established a network of external red-teamers who look for security vulnerabilities under OpenAI’s guidance and direction.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to joining CyberScoop, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications beginning in 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
