South Korea bans DeepSeek AI chatbot downloads over privacy concerns

South Korea has banned new downloads of China’s DeepSeek AI chatbot, according to the country’s personal data protection watchdog. The ban will remain in place until the company makes the improvements needed to comply with South Korea’s personal data protection laws.
DeepSeek gained significant popularity shortly after its launch, reaching over a million weekly users and topping app store charts. However, this surge attracted global scrutiny, leading various countries to restrict the app over privacy and national security concerns.
South Korea’s Personal Information Protection Commission announced that DeepSeek became unavailable on Apple’s App Store and Google Play on Saturday evening. The action followed bans by several South Korean government agencies that prohibited employees from downloading the chatbot onto work devices.
Choi Sang-mok, South Korea’s acting president, called DeepSeek a “shock,” suggesting it could affect the country’s industries beyond AI alone. While new downloads are suspended, users who already have the app installed can continue using it, and the service remains accessible through DeepSeek’s website.
In its investigation, the South Korean commission found that DeepSeek lacked transparency regarding third-party data transfers and may have collected excessive personal information. Nam Seok, director of the commission’s investigation division, recommended that users delete the app or refrain from entering personal information until these issues are addressed.
Research by Wiseapp Retail indicated that DeepSeek had around 1.2 million smartphone users in South Korea during the fourth week of January, making it the second-most-popular AI app behind ChatGPT. Beyond South Korea, Taiwan and Australia have prohibited its use on government devices, and Italy’s regulator has likewise banned it until privacy policy concerns are resolved.
In the United States, lawmakers have proposed a bill to ban DeepSeek from federal devices due to surveillance concerns, and Texas, Virginia, and New York have already implemented such restrictions for state employees.
DeepSeek’s large language model reportedly offers reasoning capabilities similar to those of US models like OpenAI’s o1, but at a significantly lower operational cost. This has raised questions about the substantial investments in AI infrastructure in the US and elsewhere.
ActiveFence has flagged serious vulnerabilities within DeepSeek, stating that the model has no guardrails or minimum security standards, making it susceptible to misuse. Testing of the V3 version revealed that 38% of responses to harmful prompts generated potentially dangerous content.
ActiveFence’s CEO, Noam Schwartz, highlighted that DeepSeek’s child safety measures failed against simple multi-step queries, producing responses that violated safety guidelines, unlike its Western counterparts. Examples of harmful output included inappropriate suggestions and fabricated dangerous scenarios.
Liang Wenfeng, the CEO of DeepSeek, acknowledged the lack of essential safety guardrails, emphasizing the need for regulation to ensure ethical AI use. Despite the model’s capabilities, the absence of safeguards has raised significant concerns about DeepSeek’s potential for abuse.
With the app’s integration into sensitive environments, such as banking or law enforcement, the risks become particularly pronounced, as noted by experts in digital safety. Meanwhile, ActiveFence is working to educate users on online safety through initiatives like their podcast, Galaxy Stars.
Featured image credit: Solen Feyissa/Unsplash