Congress Weighs Pilot Program for Predictive Analytics Use in Child Welfare

Sen. Todd Young (R-Ind.), who introduced the predictive analytics pilot bill, meets with 525 Foundation founder Becky Savage, whose organization works to prevent opioid misuse among teens.

A new bill introduced by Senator Todd Young (R-Ind.) last month would establish a pilot program to test the use of predictive analytics in identifying and protecting children who are at risk of maltreatment.

The bill, the “Using Data to Help Protect Children and Families Act,” would set aside $10 million for the Department of Health and Human Services to work with five states or tribes to implement predictive analytics tools. Applicants would be required to describe their research methodology and its limitations.

Most child welfare systems use some form of risk assessment to help caseworkers make decisions about investigations, services offered to families, and removals to foster care. Many of those are actuarial tools, such as Structured Decision Making, where a slate of questions is used to gauge risk.
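To make the actuarial approach concrete, here is a minimal sketch of how such a tool works: each "yes" answer on a fixed slate of questions adds a preset weight, and the total maps to a risk tier. The questions, weights, and thresholds below are illustrative placeholders, not drawn from Structured Decision Making or any real instrument.

```python
# Illustrative actuarial-style risk assessment. The slate of questions,
# their weights, and the tier cutoffs are hypothetical examples only.
QUESTIONS = {
    "prior_reports": 2,       # prior maltreatment reports on file
    "substance_abuse": 2,     # caregiver substance abuse indicated
    "domestic_violence": 1,   # domestic violence in the home
    "child_under_three": 1,   # very young child in the household
}

def risk_tier(answers):
    """Sum the weights for 'yes' answers and bucket the total into a tier."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

print(risk_tier({"prior_reports": True, "substance_abuse": True}))  # high
print(risk_tier({"child_under_three": True}))                       # low
```

The point of tools like this is transparency: a caseworker can see exactly which answers drove the score, which is what distinguishes them from the machine-learning models discussed next.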

Predictive analytics uses large quantities of data and machine learning to make predictions about future events. For business purposes, predictive models are built to interpret large sets of data and produce a probability. This is applied to anything from individual credit scores to managing a fleet of vehicles.
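As a rough illustration of the paragraph above, a trained predictive model turns a set of input features into a probability, often by passing a weighted sum through a logistic (sigmoid) function. In the sketch below the feature names and weights are invented for illustration; in practice the weights would be learned from large amounts of historical data.

```python
import math

# Hypothetical trained weights; a real model would learn these from data.
WEIGHTS = {"bias": -1.5, "prior_reports": 0.9, "age_of_child": -0.05}

def predict_probability(features):
    """Logistic-regression-style score: sigmoid of the weighted feature sum."""
    z = WEIGHTS["bias"] + sum(
        WEIGHTS[name] * value for name, value in features.items()
    )
    return 1.0 / (1.0 + math.exp(-z))  # squashes z into a 0-1 probability

p = predict_probability({"prior_reports": 3, "age_of_child": 4})
print(round(p, 3))  # 0.731
```

Real systems use far more features and more complex models, but the output is the same kind of object: a probability that then has to be interpreted and acted on by a human.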

There has been growing interest in using predictive approaches to inform responses to abuse and neglect cases. While this bill marks the first federal effort to test such technology for child abuse prevention, several state and county agencies have already implemented predictive analytics programs.

The approach is not without controversy. Feeding massive datasets into a “black box” means laypeople have little insight into what drives the predictions. And even proponents of predictive analytics have conceded that, to the extent existing data reflects racial biases in child welfare, predictive models can import those biases into the analysis.

In addition, others worry that relying on an algorithm diminishes the complexity of casework: simply seeing a score indicating high risk can contribute to “confirmation bias” in those workers investigating reports of abuse.

The bill would require entities to address how their program will reduce bias based on race, sex, religion, national origin, age and disability. Each eligible entity must provide specific procedures to monitor and prevent potential unintended bias in its predictions. The agency must also provide a description of how it will consider and solicit input from child welfare organizations, agencies and members of the community.

Once the program is implemented, each entity will be evaluated on how accurately it identifies the children most at risk of abuse and neglect, and how effectively it targets services to the highest-risk families. In addition, each entity will be expected to submit interim and final reports stating child maltreatment and fatality rates, the progress of the program, and whether the program had a positive impact on children and families in the welfare system.

The bill has been referred to the Senate Health, Education, Labor and Pensions Committee. During a committee meeting on June 26, Senator Patty Murray (D-Wash.) said she believes the committee is close to reaching an agreement on the bill.

