Child-Related Issues and Smart Software: What Could Go Wrong?

May 19, 2022

It is understandable that data scientists would like to contribute to solving a heart-wrenching problem. But what if their well-intentioned solutions make matters worse? The Bismarck Tribune shares the article, “An Algorithm that Screens for Child Neglect Raises Concerns.” AP reporters Sally Ho and Garance Burke describe the apprehension of one Pittsburgh family’s attorney in the face of an opaque predictive algorithm. The software uses statistical calculations to pinpoint families for investigation by social workers, but neither the families nor their lawyers are privy to the details. We learn:

“From Los Angeles to Colorado and throughout Oregon, as child welfare agencies use or consider tools similar to the one in Allegheny County, Pennsylvania [in which Pittsburgh is located], an Associated Press review has identified a number of concerns about the technology, including questions about its reliability and its potential to harden racial disparities in the child welfare system. Related issues have already torpedoed some jurisdictions’ plans to use predictive models, such as the tool notably dropped by the state of Illinois. According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a ‘mandatory’ neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.”

Ah, bias, the consistent thorn in AI’s side. Allegheny officials assure us their social workers never take the AI’s “mandatory” flags at face value, treating them as mere suggestions. They also insist the tool alerts them to cases of neglect that would otherwise have slipped through the cracks. We will have to take them at their word, as this tech is as shrouded in secrecy as most algorithms. And what of the growing number of other cities and counties adopting the tool? Surely some will not be as conscientious.

Still, the tool’s developers appear to be taking concerns into account, at least a little. The authors note:

“The latest version of the tool excludes information about whether a family has received welfare dollars or food stamps, data that was initially included in calculating risk scores. It also stopped predicting whether a child would be reported again to the county in the two years that followed. However, much of the current algorithm’s design remains the same, according to American Civil Liberties Union researchers who have studied both versions.”

See the thorough article for more on this contentious issue, including descriptions of welfare agencies under pressure, calls for transparency, and perspectives from advocates of the software.

Cynthia Murrell, May 19, 2022
