![DeepSeek Data Leak Exposes Over 1 Million Sensitive Records](https://dt2sdf0db8zob.cloudfront.net/wp-content/uploads/2025/02/padlock-on-computer-keyboard-1.webp)
DeepSeek Data Leak Exposes Over 1 Million Sensitive Records
Days after launching its low-cost, OpenAI rival model, R1, Chinese startup DeepSeek suffered a series of cyberattacks on its API interface and chat system, as well as a jailbreak. These vulnerabilities led to exposing over one million sensitive records.
After taking Wall Street by storm, DeepSeek was forced to block new sign-ups due to a large-scale cyberattack. The Chinese company confirmed its services were attacked on January 28.
NSFocus, a cyber security company, reported multiple waves of DDoS attacks targeting DeepSeek’s API interface and chat system, coming from systems primarily in the United States, the United Kingdom, and Australia.
“From the selection of attack targets to the accurate grasping of timing, and then to the flexible control of attack intensity, the attacker shows extremely high professionalism in every attacking step,” the security firm reported, concluding that the attackers were likely professionals.
Next, the Wallarm Security Research Team uncovered a new jailbreak method that has allowed it to extract DeepSeek’s hidden system prompt, revealing weaknesses in the model’s security framework.
Jailbreaking refers to bypassing built-in restrictions and safeguards in AI models by exploiting weaknesses in prompt handling. In this case, the Wallarm Security Research Team obtained information about the models used for DeepSeek’s training and distillation.
Interestingly, DeepSeek referenced OpenAI models, indicating its cheap development costs might be due to leveraging data from existing models like OpenAI to achieve comparable performance.
If DeepSeek’s claims are true, Wallarm researchers warn AI systems trained via distillation might inherit biases, security flaws, and behaviors from their “teacher models.” Separate research from University College London and MIT has actually confirmed that AI amplifies bias. Wallarm researchers reported the company had fixed the issues.
Furthermore, research team Wiz has assessed DeepSeek’s external security posture.
“Within minutes, we found a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data,” the Wiz Research team reported, adding the database contained chat histories, backend data, API Secrets, log streams, and operational details.
This level of access allowed for full database control and potential privilege escalation on the DeepSeek environment, posing a critical risk to its own and end users’ security.
“While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like accidental external exposure of databases. These risks, which are fundamental to security, should remain a top priority for security teams,” the Wiz team warned.