The recent Artificial Intelligence (AI) Action Summit took place at the Grand Palais in Paris's 8th arrondissement from February 10–11, 2025, bringing together global leaders to shape the future of AI development and governance.
Co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, the summit underscored the urgent need for inclusive, transparent, and ethical AI practices.
This strong stance emerged as a counterpoint to DeepSeek, China's once-applauded open-source Large Language Model (LLM), which has since found itself in deep trouble.
Imagine a search engine so powerful that it unearths every last corner of the internet - public, private, and everything in between. That's the promise (and the peril) of DeepSeek.
In a world still wrestling with the implications of AI chatbots, DeepSeek takes data mining and indexing to an entirely new level. It promises to transform how we access information, but not without igniting fierce debates over privacy, security, and global power dynamics.
DeepSeek, the Chinese AI startup that emerged with a powerful language model at a fraction of the cost of its Western counterparts, is making headlines. With its AI assistant promising capabilities comparable to OpenAI’s ChatGPT, it has raised both awe and alarm.
Several nations, including the US, Italy, and Australia, have banned or restricted its use, citing data security risks and concerns over Chinese government surveillance.
Yet, in India, discussions around DeepSeek remain divided between AI optimism and national security caution.
Should India welcome DeepSeek as an affordable AI alternative, or is it another digital Trojan horse?
India must exercise caution when dealing with DeepSeek. While its affordability and efficiency may seem attractive, the potential risks outweigh the benefits.
Countries worldwide are restricting its use due to data privacy vulnerabilities and security concerns. India, too, has taken initial steps to safeguard government data, but a more explicit stance against DeepSeek is necessary.
Rather than embracing foreign AI solutions with opaque governance, India should invest in its indigenous AI ecosystem.
DeepSeek’s data storage practices and China’s regulatory environment make it vulnerable to government interference.
Investigations have revealed that DeepSeek’s servers may transmit data back to China, exposing sensitive user information to foreign surveillance. Given the increasing geopolitical tensions, India cannot afford to overlook these risks.
Several countries have already taken a strong stance against DeepSeek. Italy has banned DeepSeek over data protection concerns. The U.S. government, including agencies like NASA and the Navy, has restricted its use. Australia and South Korea have imposed similar bans due to security vulnerabilities. Taiwan has prohibited its use in government departments. These global restrictions indicate that concerns about DeepSeek are not unfounded and should serve as a warning for India.
While India has not officially banned DeepSeek, there have been precautionary measures.
The Ministry of Finance has advised government employees to avoid using DeepSeek for official purposes, citing data security risks.
India plans to test DeepSeek’s AI models on its own servers before deciding on further adoption.
IT Minister Ashwini Vaishnaw has praised DeepSeek’s low-cost AI approach but stopped short of endorsing its deployment in India. These steps suggest that India is aware of the risks, but a firmer policy stance is needed.
Meanwhile, Europe has stepped into the AI race with Le Chat, an AI assistant developed by French startup Mistral AI that is emerging as an alternative to DeepSeek.
While promising in its capabilities, Le Chat presents an ironic dilemma: privacy and data consent come at a cost.
In its free version, user data is used to improve services, with limited options to opt out. However, the premium version allows users to opt out of data sharing, raising a fundamental question: should privacy be a privilege only for those who can afford it?
India’s AI strategy should prioritise data sovereignty, security, and trustworthiness, not just efficiency. The world’s eyes are on India as we navigate the labyrinth of AI, with the Global South viewing our path not just as a decision, but as a beacon for the future of equitable technology.
By: Parishrut Jassal