Security Ramifications of DeepSeek’s Open-Source Artificial Intelligence Model

Last month, DeepSeek, a startup from China, made waves in the tech industry by unveiling a novel, open-source AI model named R1. The model offers capabilities comparable to ChatGPT but at a significantly lower cost than the AI models from tech giants such as OpenAI, Google, and Meta. DeepSeek claimed to have spent only US$5.6 million on computing power to train its base model, a stark contrast to the billions typically spent by American tech companies on AI.

The revelation sent the US stock market into a sharp downturn. Nvidia, the primary supplier of AI chips, saw its stock fall nearly 17%, wiping out $588.8 billion in market capitalization, while Meta and Alphabet (GOOGL), Google's parent company, also posted considerable declines.

The announcement also triggered a significant shift of investment toward non-tech sectors on Wall Street. President Trump characterized DeepSeek's release as a "wake-up call" for American tech firms, while suggesting that the latest advancements in China's AI sector could ultimately benefit the US.

The cybersecurity implications were equally significant, prompting a flurry of discussions, studies, and responses regarding the impact of the surprising emergence of the open-source AI model.

In late January, DeepSeek reported "large-scale malicious attacks" targeting its services, leading to disruptions in user registration. Multiple reports have since shed light on substantial security weaknesses in DeepSeek's AI model.

DeepSeek Faces Criticism over Security Weaknesses

Diverse studies and reports have underscored notable security vulnerabilities in DeepSeek’s AI model.

Wiz, a cloud security firm, uncovered a major data exposure involving DeepSeek. According to Wiz, DeepSeek had left a database for its services unsecured, making chat histories and other data accessible from the public internet without a password. The researchers said they located the data "within minutes" of starting their investigation, and that the exposure allowed complete control over database operations, including access to internal data.
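Wiz's public write-up attributed the exposure to a database whose HTTP interface executed queries without credentials. As an illustration only (the response behavior below is a generalized assumption modeled on ClickHouse-style HTTP interfaces, not Wiz's actual tooling), the triage logic for such a probe can be sketched in a few lines:

```python
# Sketch: triage logic for a database HTTP probe (illustrative only).
# A ClickHouse-style HTTP interface answers a "SELECT 1" test query with
# "1\n" when no credentials are required; a locked-down instance
# typically returns 401/403 instead.

def classify_http_probe(status_code: int, body: str) -> str:
    """Classify the response to an unauthenticated test query."""
    if status_code in (401, 403):
        return "auth-required"
    if status_code == 200 and body.strip() == "1":
        return "open"  # the query ran without any credentials
    return "unknown"

if __name__ == "__main__":
    print(classify_http_probe(200, "1\n"))  # open
    print(classify_http_probe(401, ""))     # auth-required
```

The point of the sketch is that no exploit is needed: if a test query returns data at all, the instance is wide open, which is exactly the class of misconfiguration Wiz described.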

Additionally, Cisco tested 50 known jailbreak prompts against DeepSeek's AI chatbot, all of which succeeded. The researchers stated, "DeepSeek R1 exhibited a 100% success rate in attacks, failing to block any malicious prompts. This is in stark contrast to other leading models, which displayed some degree of resistance." The findings suggested that DeepSeek's cost-effective training techniques, which rely on reinforcement learning, chain-of-thought self-assessment, and distillation, may have compromised its safety mechanisms. They added, "Compared to other cutting-edge models, DeepSeek R1 lacks robust protective measures, making it highly vulnerable to algorithmic jailbreaking and potential misuse."
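Evaluations like Cisco's follow a simple pattern: send each jailbreak prompt to the model, decide whether the reply is a refusal, and report the fraction of prompts that got through. The harness below is a minimal sketch of that pattern; the stubbed model call and keyword-based refusal heuristic are illustrative assumptions, not Cisco's actual methodology:

```python
# Sketch of an automated jailbreak evaluation loop (illustrative only).
# query_model stands in for a real chat-completion call; the refusal
# heuristic is deliberately simple.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "against my guidelines")

def is_refusal(reply: str) -> bool:
    """Crude check for a safety refusal in a model reply."""
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def attack_success_rate(prompts, query_model) -> float:
    """Fraction of prompts the model answered instead of refusing."""
    successes = sum(0 if is_refusal(query_model(p)) else 1 for p in prompts)
    return successes / len(prompts)

if __name__ == "__main__":
    # Stub model that never refuses, mirroring the reported 100% rate:
    prompts = [f"jailbreak prompt {i}" for i in range(50)]
    rate = attack_success_rate(prompts, lambda p: "Sure, here is how...")
    print(f"{rate:.0%}")  # 100%
```

Real harnesses replace the keyword heuristic with a classifier or human review, but the headline metric, attack success rate over a fixed prompt set, is computed exactly this way.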

Elsewhere, DeepSeek's AI model performed poorly on Spikee, a new AI security benchmark from WithSecure Consulting, while Enkrypt AI found that, compared with OpenAI's o1 model, R1 was four times more likely to generate insecure code and eleven times more likely to produce harmful output.

Experts Analyze the Security Risks Associated with DeepSeek AI

Mike Britton, Chief Information Officer (CIO) at Abnormal Security, argued that much of the industry's commotion over DeepSeek's remarkably low costs rests on taking the company's claims at face value. "Current concerns about DeepSeek primarily revolve around its potential to disrupt the existing AI market with a competitive, more cost-effective alternative. However, the prospect of misuse is equally worrisome, especially for the general populace," he remarked.

He further noted that malicious actors are already leveraging prominent generative AI tools to streamline their attacks. “Access to even faster and cheaper AI tools could empower them to orchestrate sophisticated attacks on an unparalleled scale,” he cautioned.

Melissa Ruzzi, AI Director at cybersecurity company AppOmni, also warned that DeepSeek collects user data that may be transmitted to China. "This scenario raises concerns about the Chinese government utilizing DeepSeek's AI models for surveillance of American individuals, acquiring proprietary information, and conducting influence operations. The data being retained in China may not comply with data regulations from other regions, such as the General Data Protection Regulation (GDPR)," she highlighted.

She urged American companies to assess all risks thoroughly before adopting the model, pointing to potential biases within it that could sway user opinions. "Several uncovered vulnerabilities, particularly surrounding data breaches, pose significant concerns that could directly impact users. The US Navy has already prohibited the use of DeepSeek due to security and ethical concerns. This should serve as a red flag signaling that the model is unsuitable for US entities, and individuals in the US contemplating its use should exercise caution," she elaborated.

She emphasized that employee training, awareness, and ongoing monitoring for DeepSeek usage are critical priorities for Chief Information Security Officers (CISOs). "Furthermore, AI-driven attacks may proliferate, given that one of DeepSeek's vulnerabilities involves jailbreaking, enabling attackers to sidestep restrictions and coerce the model into generating malicious outputs for subsequent attacks," she added.

Sahil Agarwal, CEO of Enkrypt AI, concluded that amidst the escalating AI competition between the US and China, both nations are pushing the boundaries of next-gen AI to gain military, economic, and technological superiority. “The security vulnerabilities in DeepSeek-R1 could transform into a hazardous weapon—one that cybercriminals, disinformation networks, and even those harboring biochemical warfare intentions could potentially exploit. Immediate action is imperative to address these risks,” he emphasized.

