
DeepSeek AI claims services are facing ‘large-scale malicious attacks’ 

As its low-cost AI model receives accolades, the Chinese company says ongoing attacks on its services are making it harder for new users to sign up. 
This photo illustration shows the DeepSeek app on a mobile phone in Beijing on January 27, 2025. Chinese firm DeepSeek's artificial intelligence chatbot has soared to the top of the Apple App Store's download charts, stunning industry insiders and analysts with its ability to match its U.S. competitors. (Photo by Greg Baker/AFP via Getty Images)

DeepSeek, the Chinese startup whose open-source large language model is causing panic among U.S. tech and AI companies this week, said it is having difficulty registering new users due to “large-scale malicious attacks” on its services.

On Monday, the company’s website posted a banner note stating that it was temporarily pausing new registrations to deal with the issue.

Screenshot of a banner note on DeepSeek’s website stating that new registrations are being limited due to “large-scale malicious attacks” on its services. (Source: CyberScoop)

That note was quickly updated to indicate that new users could resume registering, though they might experience delays. Existing users are still able to log in normally.


“Due to large-scale malicious attacks on DeepSeek’s services, registration may be busy. Please wait and try again,” the note states.

DeepSeek’s note did not specify what type of attack its services are experiencing. CyberScoop has reached out to the company for further information.  

Stephen Kowski, field chief technology officer for SlashNext, said that as DeepSeek basks in the international attention it is receiving and sees a boost in users interested in signing up, its sudden success also “naturally attracts diverse threat actors” who could be looking to disrupt services, gather competitive intelligence or use the company’s infrastructure as a launchpad for malicious activity. 

The rollout of DeepSeek’s R1 model and subsequent media attention “make DeepSeek an attractive target for opportunistic attackers and those seeking to understand or exploit AI system vulnerabilities,” Kowski said.

While DeepSeek’s R1 model is cheaper, some of those savings may come in the form of weaker safety guardrails against potential abuse. Israeli cybersecurity threat intelligence firm Kela said that while R1 bears similarities to ChatGPT, “it is significantly more vulnerable” to being jailbroken.


For instance, while OpenAI’s latest models have been patched to address the two-year-old “Evil Jailbreak” method, that technique and many others appear to work on DeepSeek’s R1 model, allowing them to bypass restrictions on a range of requests.

“KELA’s AI Red Team was able to jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices,” Kela researchers said in a blog Monday.

R1 was released publicly this month and quickly sent shockwaves through the U.S. AI market, calling its underlying business model into question.

While American AI companies are pouring billions of dollars into building data centers capable of delivering the massive compute needed to power their models, tech experts say DeepSeek’s R1 has similar performance to top U.S. commercial models like OpenAI’s latest o1 reasoning model.

It also appears to come with significantly lower investment costs, though just how much is a matter of dispute.


According to DeepSeek, R1 was built for less than $6 million. Additionally, while many of the most powerful large language models built by U.S. companies are commercial and subscription-based, DeepSeek’s model is open source. It is currently the No. 1 free app on the Apple App Store.

However, Ben Thompson, a tech and business analyst at Stratechery, noted that according to DeepSeek’s own technical report, that figure covers only the final training run for DeepSeek-V3, the base model from which R1 was built.

The full cost of training and development for the final end product built by DeepSeek is almost certainly higher than $6 million, but likely significantly lower than the costs cited by many U.S. commercial firms.  

To be clear, there remain obstacles that could potentially make DeepSeek a poor fit for U.S. businesses. 

Many companies will likely be reluctant to integrate a Chinese-made AI model into their business operations. Additionally, DeepSeek’s model, built by Chinese developers, appears to avoid generating responses that are critical of Chinese President Xi Jinping or the People’s Republic of China.


Unless a user decides to download and run the software locally, their data will go to servers located in China, according to the company’s privacy policy. DeepSeek also collects certain information from users, including their device model, operating system, keystroke patterns or rhythms, IP address, and system language, along with diagnostic and performance information, crash reports and performance logs.

But the emergence of a low-cost, high-performance AI model that is free to use and operates with significantly cheaper compute power than U.S. firms say they need for development raises concerns about the long-term viability of large, expensive commercial LLMs from companies like OpenAI, Anthropic and Google.

If nothing else, Thompson believes that DeepSeek’s R1 punctures the “myth” that massive infrastructure plans and the money required to build them are the only way to achieve market-leading gains in AI. The likelihood that other open-source or open-weight models will replicate DeepSeek’s cost and performance gains in the future is high.

Additionally, R1 — like all of DeepSeek’s models — has open weights, meaning that “instead of paying OpenAI to get reasoning, you can run R1 on the server of your choice, or even locally, at dramatically lower cost.”
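To make that point concrete, here is a minimal sketch of what self-hosting looks like in practice. It assumes a locally running server that exposes an OpenAI-compatible API (tools such as ollama or vLLM do this after downloading R1's open weights); the base URL, model tag, and function names below are illustrative assumptions, not part of any official DeepSeek tooling.

```python
# Sketch: querying a self-hosted DeepSeek-R1 model through an
# OpenAI-compatible endpoint. Assumes a server (e.g. ollama or vLLM)
# is already running locally; only Python's standard library is used.
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # ollama's default OpenAI-compatible endpoint
MODEL = "deepseek-r1:7b"                # hypothetical tag for a distilled R1 variant


def build_chat_payload(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a server running, `ask("Summarize this contract clause.")` would return the model's reply without any request ever leaving the machine, which is the cost and privacy argument Thompson is making.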

“R1 undoes the o1 mythology in a couple of important ways. First, there is the fact that it exists,” Thompson wrote. “OpenAI does not have some sort of special sauce that can’t be replicated.” 


This story was updated Jan. 28, 2025, with material on Kela’s thoughts on R1.


Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
