Using Algorithms and Artificial Intelligence in Child Welfare

This article is part of a series on the Comprehensive Child Welfare Information System (CCWIS), which states can build with the help of federal funding to replace an antiquated data and management process.

As we consider the barriers to adopting new technology under the Comprehensive Child Welfare Information System (CCWIS) legislation and the Family First Prevention Services Act, let’s take a moment to consider the role of algorithms and artificial intelligence (AI) in child welfare.

As a statistics professor, I find nothing drives me more bonkers than hearing people inflate — if not fictionalize — the power of their analysis. And quite honestly, from what I have heard from the top-level administrators I have met with in recent months, I fear too many states are being misguided into thinking they can buy a CCWIS system equipped with pre-programmed, miraculous predictive abilities guaranteeing each youth’s safe placement.

If you have been reading this series of articles, you have probably surmised that I think this accountability-driven opportunity for change should be capitalized on to exceed your wildest dreams when it comes to caring for kids. The tech world thinks so too, though I suspect for different reasons. With CCWIS adoption upon us, tech companies are busier than ever sharing news of how their new case manager products can help you “meet” accountability requirements. Some even claim their AI products can deliver the historically elusive child welfare algorithm.

And before we place too much reliance — and possible liability — on the promises of a new AI algorithm, let us pause a moment to understand the scope of this aspiration. Indeed, if it sounds too good to be true, similar to my 10-year-old son claiming yesterday that he actually “cleaned” his bearded lizard’s tank, it probably is.

In academic texts, an algorithm is basically defined as a specific sequence of steps that guarantees a correct solution. Note the use of the word guarantees. As history has demonstrated, and years of practice confirm, there are no “guarantees” when it comes to decisions made in child welfare.
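To make that textbook definition concrete, here is a generic illustration (nothing to do with any child welfare system): Euclid’s method for the greatest common divisor, a fixed sequence of steps that is mathematically guaranteed to terminate with the correct answer for any pair of positive integers. This is exactly the kind of well-defined, closed problem for which "guarantees" exist — and exactly what messy human circumstances are not.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, specific sequence of steps
    guaranteed to return the greatest common divisor of two
    positive integers. The guarantee holds because the problem
    is fully specified -- every input has one correct answer."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```

The guarantee here comes from the problem, not the cleverness of the code: the inputs are complete, the rules never change, and success is unambiguous. None of those conditions hold for a family in crisis.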

Seeking 100 percent consistency and accuracy with an algorithm to solve every unique family’s or child’s circumstances might be a worthy goal. But you are not predicting the weather, a stock’s value, or the success of a friend’s next FarmersOnly.com dating match. Maybe city folk just don’t get it, but a failed prediction of a child’s future is not as simple as figuring out how to recoup an investment loss or hoping a second date goes better. And it is definitely harder to predict than the weather. Child welfare decisions are laden with far too many variables to statistically account for, and these variables, if misinterpreted or missed, can literally have life-or-death consequences.

Think about it. How can anyone perfectly predict the behavior of a live-in boyfriend or drug-addicted family member, when agencies often currently have no way of evaluating or monitoring them? How can we assure a child’s safety in another home if that other home’s safety has never been assessed and is not continually monitored?

At this point in time, child welfare does not have the needed level of psychometric science (reliable and valid assessments) and quality data collection processes in place. Before child welfare can have a chance of finding a just and efficacious algorithm, if such aspirations are even possible, we first need to cast a wider net of reliable and valid data collection, then identify the most predictive variables, and only then perform the analyses needed to build the most predictive models.

Most child welfare caseworkers are people who set out on a career to help vulnerable kids and families. That in itself is an amazing personality trait to be highly valued. Do they need more modern technology to make their work more efficient and effective? Yes. But do they need technology that may give them bad advice? Of course not. We just need to provide our workforce with more reliable and valid real intelligence to help them gather the evidence essential to accomplishing the often impossible task of curtailing crisis.

We need this information to be easily shared across agencies and administrators, so your team of experts can help guide and share in the responsibility of such decisions. What child welfare deserves is a more comprehensive information system that can improve the management of a youth’s case from intake to discharge in real time.

So for now, we should just remove “algorithm” from the child welfare vocabulary.

Five More Reasons to Avoid the Artificial Intelligence Hype

Adopting AI-driven CCWIS products carries considerable risk of increased safety concerns and liability. Beyond possibly using limited data to give you the wrong answer, consider how the following might also challenge your staff’s ability to receive the highly consistent and accurate decision guidance they need.

  1. You Have to Feed the AI Machine — For AI to perform, somebody has to feed it data. Given that most tech companies don’t offer the assistance of a research team, your agencies will be responsible for babysitting the AI machine, constantly making sure it has the information needed to manufacture artificially intelligent recommendations. As we all know, diet is everything.
  2. But It Eats Everything — Note, most AI machines don’t necessarily have “your” algorithm. In order to find this mystical anomaly, they want you to feed them every piece of data you can, sometimes including haphazard pictures, recordings, reports and forms. With no regard for random sampling requirements, data quality or the power of the analysis, they basically throw everything into a statistical blender. And what pours out can differ from day to day based upon the catch of the day. A never-ending data-mining fishing trip, devoid of quality data control, falls far short of using predictive analysis strategically and wisely.
  3. AI Doesn’t Know What it is Missing — If more quality data is needed and most AI products focus mainly on an initial risk, safety or placement assessment, there is a high probability essential information needed to inform more accurate decisions is missing (e.g., trauma — ACE’s).
  4. Garbage In, Garbage Out — If the AI is using inaccurate and incomplete data, you will most likely receive inaccurate and incomplete recommendations.
  5. Proprietary is a Problem — And if they are not letting you see the recipe for what they are concocting, all you can do is hope their secret sauce is healthy. There is a better way to use data in child welfare than relying upon technology that either doesn’t use quality data wisely or doesn’t provide you with the evidence needed to determine whether it is using your data wisely.

As The Chronicle of Social Change has shared, many states such as Illinois and Florida have been debating the use of such “predictive” efforts in child welfare. Can we find predictive models in the near future to help better guide decision making in child welfare? Of course, we can. But if you want them to be consistent and accurate, it’s time to stop listening to the dirty data mining minds of the world, and focus first and foremost on building a high-quality data collection system that can be shared across your agencies.

Only by collecting a comprehensive net of quality reliable (consistent) and valid (accurate) data across your systems of care will an analyst be able to help you determine which variables are meaningful to predictive modeling, and then personalize your efforts to your state, county, regional, agency or specific case needs.
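The reliability point can be sketched with a toy simulation (entirely synthetic data, assumed for illustration only — no real child welfare variables). Even a decision rule that matches the true outcome perfectly will appear to fail at roughly the rate at which recorded outcomes are wrong, and any model trained on such records inherits that same ceiling.

```python
import random

random.seed(42)

def measured_accuracy(label_noise: float, n: int = 10_000) -> float:
    """Illustrative only: how unreliable (inconsistently recorded)
    outcomes degrade the *measured* accuracy of a perfect rule.

    label_noise is the fraction of recorded outcomes that are wrong.
    """
    correct = 0
    for _ in range(n):
        score = random.random()              # a hypothetical predictor value
        true_outcome = score > 0.5           # ground truth the rule matches exactly
        # Unreliable recording: the stored label is flipped with prob. label_noise
        flipped = random.random() < label_noise
        recorded = (not true_outcome) if flipped else true_outcome
        predicted = score > 0.5              # the "perfect" decision rule
        correct += predicted == recorded
    return correct / n

for noise in (0.0, 0.1, 0.3):
    print(f"label noise {noise:.0%}: measured accuracy {measured_accuracy(noise):.1%}")
```

With clean labels the rule scores 100 percent; with 10 percent of outcomes misrecorded it scores about 90 percent, and with 30 percent misrecorded about 70 percent — which is why reliability has to be fixed before any predictive modeling is worth trusting.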

How to Reduce the Risk

If new technology is not fed an adequate amount of high-quality data, the recommendations delivered (via the predictive analytics performed) will leave ample room for error, putting the agency, staff and children at increased risk.

To even consider using predictive analytics in child welfare (or, for that matter, inferential or operational analytics), higher quality data must be collected on far more than risk or safety questionnaires in order to meet the statistical assumptions such analyses require. If we really want to figure out what a child is experiencing and needs in order to overcome issues such as abuse, neglect and trauma, we will need more objective data collection efforts that move us beyond a caseworker completing a risk and safety assessment based on limited subjective information collected during the initial meeting, a time of crisis.

With the Family First Prevention Services Act encouraging agencies to work with more families in crisis without the use of foster care, we are going to need to help investigators, case managers and social workers better assess more specifically what a youth or family might need to thrive. By taking the high road to CCWIS, a technology solution collecting higher quality data across all agencies could be very complementary to the Family First Act.

Such accountability-driven changes, if approached strategically, could provide a crucial ingredient for your future success, ensuring your new technology is consistently fed the comprehensive and accurate information it needs to provide you with correct electronically generated recommendations, not the inaccurate or incomplete ones that lead to more sad headlines, turnover, and less efficient and effective systems of care.

This series is written by Dr. Michael Corrigan, associate professor at Marshall University and vice president of Multi-Dimensional Education. Corrigan and his team are the architects of VitalChild’s MDYA360 Outcomes Monitoring System, a CCWIS solution developed by Helix Business Solutions and powered by Oracle Cloud.
