
Amid the AI boom, the Indian Ministry of Finance has issued a new directive that prohibits the use of AI-powered tools such as ChatGPT and DeepSeek on official government devices.
The Indian government released the directive on January 29 to mitigate risks to data security and confidentiality. Signed by Joint Secretary Pradeep Kumar Singh, the directive warns that the use of AI-based applications on official systems could compromise sensitive government data.
Employees from various departments, including Revenue, Economic Affairs, Expenditure, Public Enterprises, DIPAM, and Financial Services, have been instructed not to use AI tools on office computers.
The ban comes after Australia barred the Chinese AI chatbot DeepSeek from its government systems and devices, citing data-security concerns. It reflects a broader global unease about AI tools that process users' queries on external servers, a practice that raises the risk of data leaks and unauthorized access.
To protect against external threats, the Finance Secretary has given the green light to the directive, prohibiting the use of AI tools on official systems and safeguarding sensitive sectors.
Why the Indian Finance Ministry Banned AI Tools on Official Systems
The Ministry of Finance’s decision is rooted in concerns over security, data control, and compliance. The directive puts forward three reasons for the guidelines.
- Risk of Data Leaks
AI models like ChatGPT and DeepSeek run on cloud-based servers, which means any sensitive data fed into them could be stored externally. Because government officials handle classified financial information, policy drafts, and internal communications, the presence of such tools could pose significant risks to national security.
- Lack of Control Over AI Models
Unlike traditional government software, AI tools are proprietary and controlled by private companies such as OpenAI. The Indian government has no control over how these tools function, which increases the risk of foreign access and cybersecurity vulnerabilities.
- Compliance with Data Protection Policies
Allowing AI tools on government devices without a clear regulatory framework could lead to data protection violations and expose government networks to external threats.
Source: https://x.com/CAamanmittal/status/1887329400531960069