Data-Nuking Poor Families Is Not the Answer to Child Abuse

Much as the National Rifle Association argues that “Guns don’t kill people, people do,” Joshua New defends the use of predictive analytics in child welfare by telling us, in effect, that computers don’t remove children, caseworkers do.

But there’s a corollary: Just as we should be doing more to keep big, powerful guns out of the hands of people who don’t know how to use them, the human beings who run child welfare systems can’t be trusted with the nuclear weapon of big data.

This is illustrated by the way New himself handles data. In the second sentence of his column, the champion of big data misunderstands the first statistic he cites.  He writes: “Consider that in 2014, 702,000 children were abused or neglected in the United States …”

But that’s not true. Rather, the 702,000 figure represents the number of children involved in cases where a caseworker, typically acting on her or his own authority, decided there was slightly more evidence than not that maltreatment took place and checked a box on a form to that effect.


For purposes of this particular statistic, there is no court hearing beforehand, no judge weighing all sides, no chance for the accused to defend themselves.

I am aware of only one study that attempted to second-guess these caseworker decisions. It was done as part of the federal government’s second “National Incidence Study” of child abuse and neglect.  Those data show that caseworkers were two to six times more likely to wrongly substantiate maltreatment than to wrongly label a case “unfounded.”

I don’t blame the federal government for compiling the data.  I don’t blame the computers that crunched the numbers.  My problem is with how the human being – New – misinterpreted the numbers in a way favorable to his point of view.

I’m not saying he did it on purpose (after all, he’s only human); I use the example only to illustrate why, when he says that predictive analytics systems are merely “decision support systems,” that’s not reassuring.

Nor is it reassuring to find that while New tells us the predictive analytics experiment in Los Angeles, called AURA, allegedly pinpointed a large proportion of cases that led to severe abuse (according to a study done by the same company that developed the software), he leaves out the fact that more than 95 percent of the cases AURA flagged apparently were false positives.
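To see why a tool can look impressive and still be mostly wrong, it helps to run the arithmetic. The numbers below are purely hypothetical assumptions of mine, not figures from the AURA study or any other source, but they show how a low base rate all but guarantees that most of a tool’s flags will fall on families who would never have harmed their children:

```python
# Illustrative arithmetic only: every number here is a hypothetical assumption,
# not a figure from the AURA study. The point is that when the outcome being
# predicted (severe abuse) is rare, even a tool that "catches" most true cases
# will flag mostly innocent families.

referrals = 10_000          # hypothetical pool of hotline reports screened
base_rate = 0.01            # assume 1% of reports would lead to severe abuse
sensitivity = 0.80          # assume the tool flags 80% of those true cases
false_positive_rate = 0.20  # assume it also flags 20% of the other families

true_cases = referrals * base_rate
true_flags = true_cases * sensitivity
false_flags = (referrals - true_cases) * false_positive_rate
total_flags = true_flags + false_flags

print(f"Families flagged:             {total_flags:.0f}")
print(f"Flags that were false alarms: {false_flags / total_flags:.1%}")
# With these assumptions, roughly 96% of flagged families are false positives,
# even though the tool caught 80% of the truly dangerous cases.
```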

New also accuses those of us who disagree with him not simply of opposing predictive analytics, but of “sabotaging” it, a word that conjures up images of Luddites from the Vast Family Preservation Conspiracy sneaking into offices to destroy computers.  He offers no data to support his claim of sabotage.

Then, he concludes by dredging up the latest version of the classic canard: If you don’t support doing exactly what I want to do in exactly the way I want to do it, then you don’t care about child abuse!  His version is to allege that those of us who disagree with him are “more fearful of data than they are concerned about the welfare of children.”

Human fallibility intrudes in other ways as well.

Human beings decide which “risk factors” the computers seek out.  So, for example, in AURA, the computer looks at things like whether a child has been taken to an emergency room often.  But impoverished parents rely more on emergency rooms, whether they also happen to abuse their children or not.

Another alleged risk factor: changing schools a lot. But that happens to impoverished families who are homeless or get evicted because they can’t afford the rent – and to families of color whose children were victimized by the well-known racial bias in school discipline – whether they also happen to abuse their children or not.

Instead of compensating for human biases, AURA is likely to magnify them.
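If that claim seems abstract, here is a minimal sketch of the mechanism, using made-up numbers rather than anything from AURA or Los Angeles: when the “risk factors” a system is handed are really measures of poverty, the families it flags will be the poorest ones, whether or not they are any more dangerous than anyone else.

```python
# A minimal synthetic sketch (my own assumptions, not the AURA model or its
# data) of how poverty-linked "risk factors" end up flagging poor families.
# ER visits and school moves are simulated as driven by poverty, while the
# (rare) abuse outcome occurs at the same rate in poor and non-poor homes.

import numpy as np

rng = np.random.default_rng(0)
n = 50_000

poor = rng.binomial(1, 0.3, n)                 # assumed: 30% of families are poor
er_visits = rng.poisson(1.0 + 2.0 * poor)      # poverty drives emergency-room use
school_moves = rng.poisson(0.3 + 1.5 * poor)   # poverty drives school changes
abuse = rng.binomial(1, 0.01, n)               # 1% base rate, independent of poverty

# A naive risk score built from the two poverty-linked factors, flagging
# roughly the highest-scoring 5 percent of families for scrutiny
# (ties in the integer scores make the cutoff inexact).
risk_score = er_visits + school_moves
flagged = risk_score >= np.quantile(risk_score, 0.95)

print(f"Poor families in the population: {poor.mean():.0%}")
print(f"Poor families among the flagged: {poor[flagged].mean():.0%}")
print(f"Abuse rate, flagged families:    {abuse[flagged].mean():.1%}")
print(f"Abuse rate, everyone else:       {abuse[~flagged].mean():.1%}")
# The flagged group is overwhelmingly poor, yet its simulated abuse rate is
# no higher than anyone else's: the score has measured poverty, not danger.
```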

And let’s not kid ourselves.  How many child protective services caseworkers will dare to leave a child in his or her own home, notwithstanding the 95 percent false positive rate, when it means that, in the event of a tragedy, that caseworker will be on the front page for ignoring the “sophisticated computerized risk assessment protocol” and “allowing” a child to die?  New’s own rhetoric makes clear what such a caseworker will face.

Right now, AURA is going to be used to analyze reports received by the Los Angeles child protective services “hotline” and passed on for investigation. But why wait for reports? Once we have all that “big data” about where the “high risk” cases are, will CPS workers simply be empowered to barge through the door of every home that gives off an “aura” of child abuse? The federal commission on child abuse fatalities may recommend something similar.

So no, we should not give child protective services agencies the power to data-nuke our most impoverished families. Even if the data are up to the challenge, the human beings are not.

Richard Wexler is executive director of the National Coalition for Child Protection Reform, www.nccpr.org. 



4 Comments

  1. It would be tragic if the statistical data actually represented the real figures.
    They have no idea whatsoever how many kids are abused; they only know how many they claim have been, for funding purposes.

    #endDHHSmess

  2. Richard, did you read the comments on his post? He said it only caught 76% of fatalities; CPS agencies would be embarrassed if their employees did so poorly. The reason child deaths cause such an issue is that the agencies already do that job so well. And when I told him to retarget the tool toward keeping children from being abused by being put into the care system, he had no response: apparently he knew the false positive rate was so high that it might cause the agencies to actually miss a few. But since Roman times, proper jurisprudence has held that one person should not suffer government imprisonment (the care system) even if that means someone else is not put into care when they should have been.

  3. Hopefully you read my comments on his post. Basically, caseworkers are aiming for two-to-three-sigma capture of cases that may result in death, and they are close to, if not actually meeting, that 95-98% detection requirement, so the 76% rate from the data analysis is not going to help much, and then only if it is truly uncorrelated with what caseworkers already catch, which is doubtful too. The real problem caseworkers have is determining which children don’t need placement in care. They often overload the care system with such children, leaving it unable to take in the children who really are in danger, which gives the care system abuse and neglect rates higher than those of most of the families they take children from. That means placement is not in the child’s best interest, something rubber-stamping judges never even hold an evidentiary hearing on.
    We need to remember that the basis of our legal system is that ten, or ninety-nine, guilty go free so that not one innocent is convicted. The CPS courts act in reverse, abusing 99 innocent children by placing them in care in order to try to save one who may die: this is wrong and must cease!
