The U.S. Government’s AI Boom Is Outpacing Oversight
- Dell D.C. Carvalho
- Apr 19
- 2 min read
In 2024, a clerk at the Department of Veterans Affairs logged into a new tool to process disability claims. What once took hours now finished in minutes, thanks to an AI model that predicted which records needed review. Across Washington, agencies were rolling out similar tools—some helpful, others riskier. But few had clear rules on how to manage them.
That same year, U.S. federal agencies reported over 1,700 active AI use cases, up from around 750 the year before¹. The growth followed a 2023 executive order (Executive Order 14110) that required every agency to disclose how it was using AI, especially where rights or safety were affected.

Where AI Is Being Used
At least 50 federal agencies now run one or more AI projects³. Common applications include:
- Natural language processing for call centers
- Image recognition in satellite and security footage
- Predictive analytics for fraud detection and case prioritization
Some of the highest-stakes use cases are in:
- Immigration and border control: AI tools help assess visa risk scores
- Public benefits: Models predict potential fraud or errors in applications (a simplified sketch of this kind of scoring follows below)
- Law enforcement: Predictive policing tools try to anticipate where crimes may occur
Of the 1,700 use cases, 227 were flagged as potentially affecting civil rights or safety¹. That includes systems used by the Department of Homeland Security, Social Security Administration, and Department of Veterans Affairs.
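To make "predictive analytics" concrete, the sketch below shows, in very simplified form, how a fraud-risk score might be produced for a benefits application. It is a hypothetical illustration, not any agency's actual system: the features, training data, and flag threshold are all invented, and a real deployment would involve far more data, validation, and documentation.

```python
# Hypothetical illustration only: a toy fraud-risk scorer for benefits applications.
# Features, training data, and the flag threshold are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy training data: each row is one past application.
# Columns: [claim_amount_usd, days_since_last_claim, prior_flags]
X_train = np.array([
    [1200, 400, 0],
    [9800,  15, 2],
    [ 300, 700, 0],
    [7600,  30, 1],
    [ 450, 365, 0],
    [8900,  10, 3],
])
# 1 = the application was later found to involve fraud or a serious error
y_train = np.array([0, 1, 0, 1, 0, 1])

# Scale the features, then fit a simple logistic regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score an incoming application and flag it for human review above a cutoff.
new_application = np.array([[8200, 20, 1]])
risk_score = model.predict_proba(new_application)[0, 1]

FLAG_THRESHOLD = 0.7  # invented cutoff; choosing it well is part of oversight
flagged = risk_score >= FLAG_THRESHOLD
print(f"risk score: {risk_score:.2f}, flagged for review: {flagged}")
```

Even in this toy version, the key design choices (which features go in, where the threshold sits) decide whose application gets extra scrutiny, which is exactly the kind of detail that documentation and audits are supposed to surface.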
Oversight Isn’t Keeping Up
The boom in AI use has not been matched by strong oversight. Many federal agencies still lack dedicated AI governance teams². Some use off-the-shelf models without clear documentation or audits. Others rely on private contractors to deploy systems the public cannot inspect.
In March 2025, the entire Defense Digital Service team, known as the Pentagon’s “SWAT team of nerds,” resigned. The team had long worked on secure and ethical tech deployments. Their exit came after pressure to rapidly scale AI tools without safeguards².
This reflects a deeper problem. While AI can speed up services, it also raises risks—especially when used in decisions about health, benefits, or legal status.
A Fragile Balance
AI can help the government move faster and serve more people. But these systems need transparency and proper testing; without them, they can do harm before anyone notices.
Federal AI spending is expected to grow to $3.1 billion by 2028, up from $2.1 billion in 2024³. That growth must come with rules, not just code.
Sources
1. FedScoop
2. Politico
3. GovWin IQ