Why Are Child Welfare Advocates Sabotaging Data-Driven Efforts to Protect Children?

Of all the social challenges the public sector works to overcome, ensuring the wellbeing of children is undoubtedly among the most important. Consider that in 2014, 702,000 children were abused or neglected in the United States, and 1,580 children died as a result. So when Los Angeles County announced in July 2015 that it would begin using predictive analytics to help its social workers quickly identify the most at-risk children and to prioritize the county's efforts to deliver services more efficiently, child welfare advocates should have rejoiced.

Instead, this and similarly promising approaches have encountered substantial opposition from a surprising number of advocates who worry that using data to assist with child welfare investigations would perpetuate racial discrimination, violate human rights, and ultimately cause more harm than good. Not only are these critics fundamentally wrong, but their resistance to using data to improve outreach and assistance is quite likely jeopardizing the wellbeing of children.

The data analytics tool that L.A. County is piloting, called the Approach to Understanding Risk Assessment (AURA), is straightforward: AURA automatically aggregates data from various county agencies, including the departments of health, education, and corrections, to calculate a score from 0 to 1000 indicating the level of risk a child faces, based on factors known to be correlated with abuse.
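To make the mechanics concrete, below is a minimal sketch of how a 0-to-1000 risk score of this kind could be computed. AURA's actual model is proprietary SAS software, so the factor names, weights, intercept, and logistic form here are invented assumptions for illustration, not AURA's real logic.

```python
# A hypothetical sketch of decision-support risk scoring. Every factor,
# weight, and the intercept below is an invented assumption; AURA's real
# model is proprietary.
import math

# Invented cross-agency signals of the sort the article describes
# (health, education, and corrections records), with invented weights.
WEIGHTS = {
    "er_visits_past_year": 0.45,
    "school_changes_past_year": 0.35,
    "household_drug_history": 1.10,
    "prior_welfare_referrals": 0.80,
}
BIAS = -4.0  # invented intercept so that typical cases score low

def risk_score(case: dict) -> int:
    """Collapse aggregated agency data into a 0-1000 decision-support score."""
    z = BIAS + sum(w * case.get(factor, 0) for factor, w in WEIGHTS.items())
    probability = 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)
    return round(probability * 1000)          # rescale to the 0-1000 range

# A child with frequent ER visits, two school changes, and a household
# drug-abuse history scores around 700 under these invented weights.
print(risk_score({
    "er_visits_past_year": 5,
    "school_changes_past_year": 2,
    "household_drug_history": 1,
    "prior_welfare_referrals": 1,
}))
```

The design point to notice is that the output is a score for a human to review, not a trigger for automatic action.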

For example, if AURA detects that a child makes frequent emergency room visits, changes schools, and lives with a family member with a history of drug abuse, AURA will warn county social workers that the child has an elevated risk score. In the process of creating AURA, the software developer SAS tested its algorithms on historical data and found that if L.A. County had implemented the technology in 2013, AURA would have flagged 76 percent of cases that resulted in a child's death with a very high risk score, which could have prompted an investigation that may have helped prevent a tragedy. Social workers already analyze this data in their investigations, but they must collect it manually and then use their own judgment to determine whether or not to launch an investigation.
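The back-test SAS describes amounts to replaying the model over historical cases and measuring how many of the worst outcomes it would have flagged. Here is a toy sketch of that calculation, using invented records and an invented "very high" cutoff (the article reports only the 76 percent result, not the threshold):

```python
# Invented cutoff for a "very high" score; the real threshold was not reported.
VERY_HIGH = 900

def fatality_recall(cases):
    """Fraction of cases that ended in a child's death and scored very high."""
    fatalities = [c for c in cases if c["outcome"] == "fatality"]
    flagged = [c for c in fatalities if c["score"] >= VERY_HIGH]
    return len(flagged) / len(fatalities)

# Invented historical records; with L.A. County's real 2013 data, SAS
# reported this figure at 76 percent.
historical = [
    {"score": 940, "outcome": "fatality"},
    {"score": 905, "outcome": "fatality"},
    {"score": 960, "outcome": "fatality"},
    {"score": 610, "outcome": "fatality"},   # the kind of case any model misses
    {"score": 150, "outcome": "no_finding"},
]
print(f"{fatality_recall(historical):.0%} of fatal cases flagged")  # prints 75%
```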

One of the most common criticisms of this approach is that automating risk analysis could promote racial discrimination, such as by facilitating racial profiling by social workers. However, automating this process is actually one of AURA’s biggest benefits, as it reduces the potential for biased, subjective human decision-making to enter the equation. In fact, many jurisdictions require social workers to manually log the data they think is relevant to a case—a system easily manipulated, whether by social workers who choose to enter only the data that will produce their desired outcome or by abusive caregivers, since these systems rely heavily on self-reported data rather than data from official sources. By automatically aggregating this information, county officials can better ensure that social workers rely on objective, complete assessments to guide their investigations.

Other criticism demonstrates a fundamental misunderstanding of how the technology actually works, likening it to the dystopian sci-fi thriller Minority Report, in which police officers can arrest people when psychics predict they will commit a crime. For example, Richard Wexler, executive director of the nonprofit National Coalition for Child Protection Reform, argues these systems lead to a dramatic increase in “needless investigations and needless foster care,” causing more harm to children than would be prevented. This line of criticism overlooks the fact that AURA and other systems like it are merely decision support systems—that is, software designed to help people do their jobs more effectively by making more informed decisions—not software that is intended to replace human decision-making.

In this case, social workers still would be the ones making real-world judgments about the best interests of at-risk children. AURA does not initiate an investigation when a child’s risk score hits a certain point; it simply replaces the information gathering and analysis that social workers already perform for every investigation, and does so faster and more comprehensively than a human ever could. If Wexler is concerned that social workers conduct needless investigations, he should be advocating for more and better analytics, not less.

Fear of using data to support decision-making is not new. In fact, it has already prevented at least one effort to use this type of system to improve child-protection efforts. Last year in New Zealand, Social Development Minister Anne Tolley blocked an initiative to study the accuracy of a system similar to AURA on the grounds that “children [participating in the study] are not lab rats.” The study would have assigned risk scores to newborns and monitored outcomes after two years so researchers could ensure the model was reliable.

Tolley objected that social workers would not be able to act on these scores, since doing so would skew the outcome of the study. But by that logic, she should also object to clinical drug trials, which require rigorous, untampered testing before a drug can be approved for the public. Tolley incorrectly assumed social workers would have to stand by and watch children be abused just so the predictive model could be verified. In truth, standard intervention procedures still would have been in effect.
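For what “ensuring the model was reliable” could look like in practice, here is a sketch of a simple calibration check: after two years, do higher-scoring bands actually show higher rates of substantiated harm? The study’s metrics were never published in this detail, so the banding scheme and data below are assumptions.

```python
from collections import defaultdict

def calibration_by_band(records, band_width=250):
    """Observed two-year substantiation rate for each risk-score band."""
    totals, harms = defaultdict(int), defaultdict(int)
    for score, substantiated in records:
        band = min(score // band_width, 1000 // band_width - 1)  # cap top band
        totals[band] += 1
        harms[band] += substantiated  # True counts as 1, False as 0
    return {band: harms[band] / totals[band] for band in sorted(totals)}

# Invented (score at birth, harm substantiated within two years) pairs.
# A reliable model shows rates rising with the band; crucially, standard
# intervention procedures remain in effect for every child throughout.
two_year_outcomes = [(120, False), (180, False), (430, False),
                     (470, True), (720, True), (910, True)]
print(calibration_by_band(two_year_outcomes))  # {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.0}
```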

Surprisingly, much of the opposition has come from child welfare advocates, such as Wexler, Tolley, and the director of the L.A.-based community activist organization Project Impact. Some of the most vocal opposition to New Zealand’s attempt to test this approach came from Deborah Morris-Travers, New Zealand advocacy manager for the United Nations Children’s Fund (UNICEF). Morris-Travers said that calculating risk scores for newborns and monitoring them to see if these scores were reliable somehow constituted a “gross breach of human rights.”

But Morris-Travers’ concern is misplaced. The gross breach of human rights is the child abuse that is occurring, and refusing to explore how predictive analytics could help social workers better understand and curb the problem does a terrible disservice to the victims. Morris-Travers’ comments are particularly confounding considering that UNICEF directly credited the increased use of data and analytics as the reason it has been able to make so much progress in helping children. In fact, UNICEF’s 2014 report, “The State of the World’s Children,” clearly states that “[c]redible data about children’s situations are critical to the improvement of their lives—and indispensable to realizing the rights of every child.”

If these advocates want to prevent child abuse, they should be championing innovative efforts and technologies that show great potential to do so, not fighting them. Of course, child welfare agencies should closely monitor these programs’ effectiveness, not implement them blindly. If testing reveals that these systems are ineffective or detrimental, then policymakers should of course seek alternate strategies or work to improve them. But given the scale of the need and opportunity to improve children’s welfare, slowing experimentation with predictive analytics would be incredibly detrimental. As an increasing number of government officials recognize the potential of this approach, they should be careful not to give credence to advocates more fearful of data than they are concerned about the welfare of children.

Joshua New is a policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy. Follow Josh on Twitter @Josh_A_New.


6 Comments

  1. Algorithms? To predict abuse? What mind-bending ideas to grow control over the lives of American families. How about this: if a crime is committed against a child that places him or her in imminent danger of bodily harm or death, then the perpetrator is arrested, on the basis of evidence gathered by highly trained law enforcement officers expert in domestic violence, child sexual abuse, and the immediate forensic gathering of evidence? Crimes committed against children are police matters. The child should then be placed with a safe, fit, non-offending parent, grandparent, family member, or caregiver known to the child. To lose a child is devastating and traumatic, and it should NEVER happen on the feelings, hunches, and machinations of Child Protective Services. Social workers should be brought in to do the good work they were originally meant to do, which is to provide needed services to people without resources. When did police relinquish their mandate to protect the public to armies of social workers with liberal arts degrees, conducting flawed investigations and social engineering experiments on hundreds of thousands of American children?

  2. I am all for data-driven decision making and practice. However, having worked in child welfare for almost the entirety of my 35+ year career, I know that the real problem has to do with who is obtaining the data, how the data is obtained, and who is using the data. When I started in the field, social workers (BSWs and MSWs) were the recognized experts in child welfare. Even today, NASW has a stand-alone Code of Ethics for Social Work in Child Welfare. Today, our schools of social work not only fail to teach research methods, they don’t even do a good job of training prospective social workers to conduct a social work assessment, which is the fundamental skill set required to help people solve social problems. Add to that the fact that many, if not most, public child welfare systems today don’t hire professional social workers into leadership positions, nor do they hire professional social workers to do the work, supervise the work, manage the work, or implement practice paradigms and standards. When they do hire professional social workers, these are typically social workers who do not have a clue about leadership, which is the critical success factor we repeatedly fail to embrace and celebrate in child welfare practice today. Thus we have what we have: kids dying (and there’s more than one way to kill a kid; the “system” also kills their spirits, their futures, their aspirations, and their dreams), vulnerable families ill served, and “related professionals” thrown into work that professionally trained social workers used to be best trained and equipped to do. So the use of data is an absolute necessity, but before we are in a position to use it well and wisely, we must get back to the basics of fundamentally good and sound practice, driven by courageous leaders with solid professional credentials and practice competencies which they own and model. Until that happens, data tools and data analysis mean nothing.

    Robin Landry, LCSW

  3. Much of the volatility about the application of sophisticated data in child welfare can be attributed to the substandard research that has plagued the field. Since the 1980s, Schools of Social Work have received tens of millions of dollars in training funds for child welfare, augmented by the establishment of the National Child Welfare Workforce Institute in 2008 for another $40 million or so. Targeted exclusively to SSWs, the funds have been poorly invested. Accreditation of Master of Social Work programs by the Council on Social Work Education does not require that a research thesis even be offered as an option, let alone required, for the MSW. For a half-century, SSWs have produced tens of thousands of MSWs who are not research proficient, yet they serve as the professional leadership in child welfare. Is it any wonder that, as a group, they are research averse? Is it surprising that tropes related to poverty and race are employed to subvert analysis? It is worth wondering whether all the hyperbole around AURA and RSF could have been avoided had SSWs produced research-proficient practitioners.

  4. I don’t think those who know me would describe me as unwilling to consider the importance of innovation in child protection. But I have most certainly been opposed to “testing” this new industry on the most vulnerable people in LA County or anywhere else. This approach is most certainly a “Minority Report”; just a few weeks ago it was touted as a future tool to predict which youth in LA County will become future criminals. With all due respect, you have a horse in this race: those who would have been subjected to the false positives in the LA data, which were left out of your piece, have children and their own futures on the line. Use analytics first to find the caseworker who fails to make required visits and falsely documents them, or who tips the scales toward infant adoption by friends who are foster parents; then show me how to use this power against the powerless.

  5. I’m not sure I agree. Who will maintain and regulate what they use as risk factors? I agree that the main outline looks straightforward, but the agencies and states adjust everything to their own standards and supposed “best practices,” and that’s when we see bias and prejudice find their way in. Unfortunately, without better monitoring of the mentality and culture inside these offices, we are allowing them to set up another system that will further overreach into normal, loving families and perpetuate unwarranted removals that are emotionally and mentally damaging to children.
