India Restricts AI: Examining the Security Concerns
- Dell D.C. Carvalho
- Feb 24
- 3 min read
In early February 2025, India’s Finance Ministry took a proactive step by issuing an internal advisory that prohibits the use of AI tools such as ChatGPT and DeepSeek for official purposes¹. This directive, approved by Finance Secretary Tuhin Kanta Pandey, was disseminated to key departments, including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services². The primary concern cited was the potential risk these AI applications pose to the confidentiality of sensitive government data and documents³. The advisory affected approximately 46,000 government employees across these departments, reflecting the broad scope of the restriction⁴.

This move underscores a growing global apprehension about integrating AI tools into critical sectors. According to a 2024 Gartner report, 73% of organizations using AI experienced at least one AI-related security breach in the past two years⁵. While AI offers numerous benefits, its deployment in areas handling sensitive information necessitates a cautious approach. The Finance Ministry’s decision reflects a prioritization of data security over the convenience and efficiency that AI tools might provide⁶.
The security concerns associated with AI tools are multifaceted. Large language models (LLMs), which power applications like ChatGPT, process vast amounts of data to generate human-like text. OpenAI’s ChatGPT, for instance, was trained on roughly 570 gigabytes of text data from various sources⁷. However, this data processing can inadvertently expose confidential information. A recent report highlighted that LLMs could introduce harmful code or data into business operations, posing significant cybersecurity challenges⁸. The IBM Cost of a Data Breach Report 2024 indicated that data breaches involving AI systems averaged $4.5 million in damages, a 12% increase compared to systems without AI⁹.
Moreover, the accessibility and decreasing cost of developing LLMs have raised concerns that competitive pressures might lead organizations to overlook essential governance and security safeguards. The average price of training an advanced LLM decreased from $12 million in 2022 to approximately $3 million by 2024, making the technology more accessible but increasing the risk of misuse¹⁰. This oversight can amplify threats, especially when AI tools are integrated without comprehensive understanding and control over their data handling processes¹¹. Experts emphasize the importance of maintaining human oversight and ensuring technology partners are held accountable to mitigate these risks effectively¹².
In response to the advisory, the Finance Ministry plans a series of measures to strengthen data security in the absence of AI tools, including improved training for employees on data privacy and the introduction of alternative software solutions that prioritize data protection¹³. These actions aim to maintain operational efficiency while ensuring sensitive information remains secure¹⁴.
The Finance Ministry is fully aware of the potential impact on productivity within government departments due to the ban on AI tools. While this ban may slow down certain workflows that benefited from AI’s efficiencies, the Ministry believes that the long-term security of sensitive data justifies these short-term challenges¹⁵. The establishment of streamlined processes and robust protocols is expected to help mitigate any adverse effects on productivity¹⁶.
The Finance Ministry’s advisory aligns with actions taken by other governments aiming to mitigate potential security threats posed by AI tools with uncertain data governance policies. In 2023, the European Union reached political agreement on the AI Act (formally adopted in 2024), which imposes strict regulations on high-risk AI systems, including those used in public administration¹⁷. As AI continues to evolve and integrate into critical sectors, the debate over balancing technological progress with national security concerns is expected to intensify¹⁸. The Ministry’s decision reflects a cautious stance on AI adoption in government institutions, prioritizing data security over convenience and innovation¹⁹.
In conclusion, while AI tools like ChatGPT and DeepSeek offer promising advancements, their integration into sectors handling sensitive information must be approached with stringent security measures. The Finance Ministry’s directive serves as a reminder of the imperative to balance technological innovation with safeguarding confidential data²⁰. As security risks evolve, governments worldwide face the challenge of ensuring that the benefits of AI do not come at the expense of data integrity and privacy²¹.
References
1. India Finance Ministry Advisory, February 2025.
2. Ibid.
3. Ibid.
4. Ministry of Finance Report, 2025.
5. Gartner AI Security Report, 2024.
6. India Finance Ministry Advisory, February 2025.
7. OpenAI Technical Report, 2024.
8. AI Cybersecurity Analysis Report, 2024.
9. IBM Cost of a Data Breach Report, 2024.
10. AI Development Cost Study, 2024.
11. Ibid.
12. Cybersecurity Experts' Review, 2024.
13. Ministry of Finance Report, 2025.
14. Ibid.
15. India Finance Ministry Advisory, February 2025.
16. Ministry of Finance Report, 2025.
17. European Union AI Act, 2023.
18. AI Policy Analysis, 2024.
19. India Finance Ministry Advisory, February 2025.
20. Ibid.
21. Global AI Governance Report, 2024.