Predictive Policing to Predictive Welfare: The Quiet Expansion of AI in Public Services
- Dell D.C. Carvalho
- Apr 7
- 3 min read
A Real Case: The Allegheny Family Screening Tool
In Allegheny County, Pennsylvania, a mother named Tamika was reported to child protective services after her neighbor made a call. Her case was flagged by an AI tool used by the county to assess the risk of child neglect. This tool—called the Allegheny Family Screening Tool—analyzes data like income, criminal records, and past social service use. It gave Tamika’s case a high-risk score. She said she was never interviewed before the visit and that the score was wrong. The agency later found no evidence of neglect. But the visit left Tamika shaken and angry.

This system was meant to help overworked staff focus on the most serious cases. But cases like Tamika’s show how AI tools can raise red flags even when there’s no real threat. A 2018 study found that the tool flagged some low-risk cases as high-risk and missed others with serious problems¹.
AI Goes Beyond Policing
Predictive systems started with law enforcement. Police departments in cities like Los Angeles and Chicago used crime data to decide where to send officers. These tools often targeted poor and minority areas. A study in Oakland found that predictive policing led officers to patrol the same neighborhoods over and over—even when crime rates were falling².
Now, similar tools are used in welfare systems. States like Oregon and Indiana have tried using AI to decide which families should get social services. In some places, the same data used to flag crime—like housing history or past arrests—is also used to score families applying for help³.
Governments say they want to use data to work faster and save money. But these tools may repeat the same problems as predictive policing—especially when the data reflects deep, long-term social gaps.
Bias In, Bias Out
AI tools are only as fair as the data behind them. If the past data is biased, the results will be too. In the Allegheny system, Black children were almost twice as likely to be flagged as high-risk compared to white children, even when their family situations were similar¹.
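To make the "bias in, bias out" point concrete, here is a minimal sketch using entirely synthetic data and hypothetical features (prior contact with public services, income). It is not the Allegheny model; it just shows how a model that never sees race can still flag one group far more often, because it learns from past decisions that over-flagged a proxy feature correlated with that group.

```python
# Illustrative sketch only: synthetic data and hypothetical features,
# not the Allegheny model. Shows how a model trained on historically
# skewed decisions reproduces that skew through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# 'group' stands in for a protected attribute. It is never given to the
# model, but it correlates with prior contact with public services.
group = rng.integers(0, 2, size=n)
prior_contact = rng.binomial(1, np.where(group == 1, 0.6, 0.2))
income = rng.normal(np.where(group == 1, 30.0, 45.0), 10.0)  # in $1,000s

# Historical labels reflect past screening decisions that over-flagged
# families with prior service contact, not true underlying risk.
flagged_in_past = rng.binomial(1, 0.1 + 0.5 * prior_contact)

X = np.column_stack([prior_contact, income])
model = LogisticRegression().fit(X, flagged_in_past)

high_risk = model.predict_proba(X)[:, 1] > 0.5
print("High-risk rate, group 0:", round(high_risk[group == 0].mean(), 2))
print("High-risk rate, group 1:", round(high_risk[group == 1].mean(), 2))
# The model reproduces the historical disparity even though 'group'
# was never one of its inputs.
```

The point of the sketch is that removing race from the inputs does not remove the bias: the model simply rebuilds it from whatever proxies the historical data offers.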
These tools also often lack transparency. Many agencies use systems made by private companies that keep their models secret. That means the public can’t always see how the tools work or challenge decisions made using them.
Some researchers have warned that these tools are being adopted too fast. A 2020 report found that over 20 U.S. states were exploring or using AI in public services—often without strong public oversight or proof that the tools actually work better than human staff⁴.
Looking Ahead
AI can help governments work more efficiently, but only if used with care. Agencies should test these tools before using them. They should also let the public see how the tools make decisions. Without checks and transparency, these systems can cause more harm than good.
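What "testing" could look like in practice: below is a hedged sketch of one basic check an agency might run before deployment, comparing a tool's false positive and false negative rates across groups on past cases with known outcomes. The column names, threshold, and toy data are hypothetical, not drawn from any real system.

```python
# Illustrative sketch only: column names, threshold, and data are hypothetical.
# Compares a tool's error rates across groups on cases with known outcomes.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Return false positive and false negative rates per group.

    Expects columns: 'risk_score' (tool output), 'substantiated' (1/0 known
    outcome), and 'group' (demographic category used only for auditing).
    """
    df = df.assign(flagged=df["risk_score"] > threshold)
    rows = []
    for group, g in df.groupby("group"):
        negatives = g[g["substantiated"] == 0]  # no finding of harm
        positives = g[g["substantiated"] == 1]  # confirmed cases
        rows.append({
            "group": group,
            "false_positive_rate": negatives["flagged"].mean(),   # flagged, no finding
            "false_negative_rate": (~positives["flagged"]).mean(),  # missed real cases
            "n": len(g),
        })
    return pd.DataFrame(rows)

# Toy example:
audit = pd.DataFrame({
    "risk_score":    [0.9, 0.2, 0.8, 0.3, 0.7, 0.1],
    "substantiated": [0,   0,   1,   1,   0,   0],
    "group":         ["A", "A", "A", "B", "B", "B"],
})
print(error_rates_by_group(audit))
```

If error rates differ sharply between groups, that is a concrete, publishable finding an agency can act on before families are affected, rather than after.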
Public trust in services like welfare and child protection is already low in many communities. AI will not fix that trust—it may weaken it further if people feel judged by machines.
References
1. Chouldechova, A., et al. (2018). A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. Proceedings of Machine Learning Research, 81.
2. Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5).
3. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
4. Engstrom, D. F., et al. (2020). Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies. Stanford University.