Wiz researchers find sensitive DeepSeek data exposed to internet
A security issue at Chinese artificial intelligence firm DeepSeek exposed over a million lines of sensitive internal data, including user chat histories, API secrets, and backend operational details, according to research published Wednesday by cloud security firm Wiz.
The exposure, discovered earlier this month, stemmed from a publicly accessible ClickHouse database linked to DeepSeek’s systems. The database — hosted on two DeepSeek subdomains — required no authentication, allowing unrestricted access to internal logs dating back to Jan. 6. DeepSeek, which has sent shockwaves through the technology industry due to its cost-efficient DeepSeek-R1 reasoning model, secured the database within hours of being notified by researchers.
Wiz researchers identified the vulnerability during routine reconnaissance of DeepSeek’s internet-facing assets. Two non-standard open ports (8123 and 9000) led to an exposed ClickHouse database, an open-source column-oriented database management system optimized for fast analytical queries on large datasets. From there, Wiz researchers ran arbitrary SQL queries, which pulled information related to:
- Plaintext chat histories between users and DeepSeek’s AI systems
- API keys and cryptographic secrets
- Server directory structures and operational metadata
- References to internal API endpoints
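The discovery path described above can be sketched in a few lines. ClickHouse exposes an HTTP interface (by default on port 8123) that accepts SQL in a URL query parameter, so an instance with no authentication configured will answer queries from any client. The host and table name below are hypothetical stand-ins, not DeepSeek's actual infrastructure, and this is a minimal illustration of the technique rather than the researchers' exact tooling.

```python
import urllib.parse

def clickhouse_query_url(host: str, sql: str, port: int = 8123) -> str:
    """Build the URL that would submit `sql` to a ClickHouse HTTP endpoint.

    ClickHouse's HTTP interface reads SQL from the "query" parameter;
    if no user/password is configured, no credentials are required.
    """
    return f"http://{host}:{port}/?query={urllib.parse.quote(sql)}"

# Typical first steps against an open instance: enumerate tables,
# then sample rows from one of them (table name is hypothetical).
recon_queries = [
    "SHOW TABLES",
    "SELECT * FROM logs LIMIT 10",
]

for sql in recon_queries:
    url = clickhouse_query_url("db.example.com", sql)
    print(url)
    # A real probe would then fetch the URL, e.g.:
    # body = urllib.request.urlopen(url).read()
```

The second port in the report, 9000, is ClickHouse's native TCP protocol, which is likewise open to any client when no authentication is configured.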
Researchers say attackers could theoretically execute similar commands to extract files directly from DeepSeek’s servers — potentially leading to privilege escalation or corporate espionage.
DeepSeek’s rapid ascent in the artificial intelligence space has led to scrutiny of its security practices. Earlier this week, the company said it was having difficulty registering new users due to “large-scale malicious attacks” on its services.
Additionally, Israeli cybersecurity threat intelligence firm KELA said that while R1 bears similarities to OpenAI’s ChatGPT, “it is significantly more vulnerable” to being jailbroken.
“KELA’s AI Red Team was able to jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices,” KELA researchers said in a blog post Monday.
Wiz noted in its blog that the breakneck pace of growth in the AI space should prompt the companies developing the technology to place more emphasis on security practices before bringing their products to market.
“The world has never seen a piece of technology adopted at the pace of AI,” the company wrote. “Many AI companies have rapidly grown into critical infrastructure providers without the security frameworks that typically accompany such widespread adoptions. As AI becomes deeply integrated into businesses worldwide, the industry must recognize the risks of handling sensitive data and enforce security practices on par with those required for public cloud providers and major infrastructure providers.”
DeepSeek did not respond to CyberScoop’s request for comment.