DeepSeek Popularity and Cybersecurity Challenges – What Can We Learn?

  • 2025-02-05
  • Viesturs Bulāns, CEO of Helmes Latvia

China’s latest innovation, the artificial intelligence tool DeepSeek, has drawn significant attention in the tech industry in recent weeks. It rapidly became the most downloaded free app in Apple’s App Store in the United States, surpassing even OpenAI’s chatbot ChatGPT, and it is gaining popularity in the Baltics as well. This achievement highlights China’s ability to compete with Western AI companies by offering technologically advanced and cost-effective solutions.

However, this success has been overshadowed by a recent security incident: last week, cybersecurity researchers discovered that DeepSeek’s databases were accessible online without proper protection. Such an exposure allows unauthorized parties to read users’ interactions with DeepSeek, posing risks to personal privacy and corporate security. Although the company fixed the issue after being warned, the incident underscores two of the biggest challenges in today’s tech world: data security and user trust.

Growing Competition in Artificial Intelligence

DeepSeek’s rapid rise is not the only major development shaping the AI industry. Recently, U.S. President Donald Trump announced a $500 billion investment initiative in American AI infrastructure to boost competitiveness and innovation. This massive commitment highlights AI’s strategic importance to national economies and technological advancement. Meanwhile, DeepSeek’s success demonstrates that Chinese companies can compete with Western tech giants by providing cheaper and more user-friendly solutions; DeepSeek’s reported model training costs, for example, were significantly lower than those of comparable OpenAI and Google models.

This puts pressure on European and U.S. companies not only to accelerate innovation but also to find ways to make their solutions more accessible and sustainable. The race also carries risks: if Western countries overregulate in the name of data security, they could lose ground in the global AI market. This is particularly relevant for the European Union, where strict regulation can impose additional costs on businesses and discourage the development and adoption of new products.

Security Risks of Using Chinese AI Solutions

The DeepSeek case also highlights significant security concerns around the use of Chinese technology. The app’s privacy policy states that all user data is sent to servers in China, where it may be stored indefinitely. This includes personal information such as email addresses, phone numbers, birth dates, and chat history, as well as technical details such as device models and IP addresses. Such data collection poses serious cybersecurity risks, as Chinese law allows the government to access companies’ data on national security grounds.

Similar risks have been identified in other Chinese technologies, such as surveillance cameras. Lithuania’s Ministry of National Defence has previously warned about manufacturers such as Hikvision and Dahua, which were accused of transmitting data to China and of having vulnerabilities that could allow remote access to camera feeds. Such cases underline the need for thorough security evaluations of technology solutions, especially when they are used in government agencies or in businesses handling sensitive information.

The Need to Educate Employees in Companies and Government Institutions

Security incidents like the DeepSeek case clearly demonstrate the importance of educating users and establishing clear guidelines for the use of AI tools. Organizations must define policies on what types of information may and may not be uploaded to AI platforms. Such guidelines are crucial not only for private businesses but also for public sector institutions.

For example, employees should not upload commercially sensitive information, customers’ personal data, or other confidential details to AI tools. Experience shows that security breaches often result from human error, not only from system vulnerabilities. Raising awareness of AI security risks and implementing training programs in both the private and public sectors is therefore essential.

In fact, to build a sustainable digital culture, AI-related education should begin in schools and universities, ensuring that future professionals across all industries are prepared to work in a secure digital environment.

The Impact of the EU Artificial Intelligence Act

The European Union’s Artificial Intelligence Act is a crucial regulatory framework designed to ensure data protection and reduce risks related to AI manipulation and privacy violations. However, there is concern that excessive regulation could undermine European companies’ competitiveness compared with the U.S. and China.

In Latvia, this discussion is particularly relevant, as the National AI Development Law is soon to be reviewed in its second reading in the Saeima (Parliament). It is crucial that this regulation strikes a balance—creating a secure environment for innovation while ensuring that European businesses remain globally competitive.

The DeepSeek case clearly illustrates that artificial intelligence is a strategically significant field that presents both opportunities and risks. To fully harness AI’s potential, user education and balanced regulation are essential to promote both security and competitiveness. Latvia and Europe must find this delicate balance to maintain technological independence and shape a sustainable digital future.