Supervisors Are Right to Push L.A. County on Replacing SDM

On Sept. 17, Daniel Heimpel reported in this publication on a motion by two members of the Los Angeles County Board of Supervisors that would require Los Angeles’ child protection agency to re-evaluate how it addresses risk. This comes in the wake of the death of 11-year-old Yonatan Aguilar, whose family was the subject of six prior CPS reports, according to records reviewed by the Los Angeles Times.

Four times, Yonatan was found to be at high risk of maltreatment, but the county never even opened a case.

The Supervisors are correct to question the effectiveness of the Structured Decision Making (SDM) protocol that Los Angeles and many other jurisdictions use to guide decision making in child maltreatment cases. SDM, described as an “actuarial model” of risk assessment, is a series of questionnaires used at different phases of a case to determine the risk level for a child in a given situation. The social worker checks the appropriate boxes, and the instrument produces a risk level that is supposed to inform the decision about how to proceed.
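For readers unfamiliar with actuarial tools, the mechanics can be sketched in a few lines. The items, point values, and thresholds below are hypothetical placeholders, not the actual SDM instrument; the point is only to show the check-the-boxes, sum-the-points structure the article describes.

```python
# A minimal sketch of an actuarial checklist tool in the style of SDM.
# Item names, point values, and cutoffs here are invented for illustration.

SAMPLE_ITEMS = {
    "prior_referrals": 2,            # points added if this box is checked
    "caregiver_substance_abuse": 2,
    "child_under_five": 1,
    "prior_injury": 3,
}

def risk_level(checked_boxes):
    """Sum points for the checked items and map the total to a risk band."""
    score = sum(SAMPLE_ITEMS[item] for item in checked_boxes)
    if score >= 5:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

print(risk_level({"prior_referrals", "prior_injury"}))  # prints "high" (5 points)
```

Because the output is a deterministic function of which boxes are checked, a worker who prefers a different recommendation can reach it by checking (or not checking) borderline items, which is the manipulation problem the article raises.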

SDM has been criticized for various reasons, including the fact that it is easy for social workers to manipulate in order to generate the recommendations they want.

As child welfare researcher Emily Putnam-Hornstein pointed out to the Commission to Eliminate Child Abuse and Neglect Fatalities, some workers may manipulate the tools because they think their clinical judgment is superior.

And they are probably right. SDM tools reduce complex clinical information to multiple-choice questions and add no information beyond what the social worker enters.

A new generation of risk assessment tools, usually referred to as “predictive analytics,” is on the brink of replacing outdated actuarial assessments like SDM. Los Angeles has been at the forefront of developing the new tools. It contracted with software behemoth SAS to develop a predictive analytics algorithm, called AURA, that attempted to identify which children referred to CPS would be the victims of severe maltreatment.

In a column published on Feb. 3, I wrote about the spectacular success of the AURA demonstration. Among those families with at least one CPS referral prior to the current one, flagging the top 10 percent of referrals that earned the highest AURA scores would have predicted 171 critical incidents, which amounts to 76 percent of the deaths and severe injuries to children, according to the Project AURA Final Report.

One of the reasons for AURA’s power is that it draws from other sources of data that CPS investigators often cannot access or don’t have time to search fully, such as the mental health, public health and criminal justice systems. Without this data, workers must rely on a parent’s answers to questions about their mental health, substance abuse or criminal history.

But progress on implementing AURA seems to have stalled since a contentious public meeting in July of 2015.

DCFS staff quoted in Heimpel’s article do not seem to understand the revolutionary nature of the AURA tool. Acknowledging that AURA predicted more than two-thirds of the county’s critical incidents, DCFS Public Affairs Director Armand Montiel stressed that it also identified 3,829 “false positives,” or cases in which there was no critical incident.

But Montiel missed the point. Children who are abused or neglected, but don’t die or “only” nearly die, are not “false positives.” We don’t know how these children have fared since they were identified by AURA, but I certainly hope that Montiel did not intend to say that we don’t need to protect children from any maltreatment that does not result in severe injury or death.
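The figures quoted above can be restated as a back-of-the-envelope precision and recall calculation. Using only the numbers in this article (171 critical incidents among flagged referrals, 3,829 flagged cases with no critical incident, and a 76 percent share of all critical incidents captured), the arithmetic looks like this:

```python
# Back-of-the-envelope tally from the AURA figures quoted in the article.
true_positives = 171      # critical incidents among flagged referrals
false_positives = 3829    # flagged referrals with no critical incident
recall = 0.76             # share of all critical incidents that were flagged

flagged = true_positives + false_positives        # 4,000 referrals flagged in total
precision = true_positives / flagged              # fraction of flags that were critical incidents
total_incidents = round(true_positives / recall)  # implied total critical incidents

print(f"flagged={flagged}, precision={precision:.1%}, total incidents ~{total_incidents}")
# prints: flagged=4000, precision=4.3%, total incidents ~225
```

A precision of roughly 4 percent sounds damning until one remembers, as the article argues, that many of those 3,829 families may have involved real but non-fatal maltreatment, so "false positive" is a misleading label for them.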

Nevertheless, Montiel is correct that AURA cannot tell for sure if a child will be a victim of severe maltreatment and that it cannot replace a good investigation. Clinical judgment should always trump the result of any algorithm.

But as Putnam-Hornstein suggests, when a predictive algorithm identifies a case as high-risk, an extra layer of review can be put into place, or two workers can be sent out on a case, to make sure it is getting the scrutiny it deserves.

There is some irony in the case that prompted the motion. The SDM tool did correctly classify Yonatan as being at high risk, but workers disregarded the finding. Having a better tool is not enough to protect children. The tool needs to have teeth, not to replace clinical judgment but at least to require a higher level of review when a child is identified as high-risk.


Marie K. Cohen
Marie K. Cohen (MPA, MSW) is a child advocate, researcher, and policy analyst. She worked as a social worker in the District of Columbia's child welfare system for five years. She is a member of the Citizen's Review Committee for the DC Child and Family Services Agency and the DC Child Fatality Review Commission, and a mentor to a foster youth. Follow her blog, on Facebook at Fostering Reform, or on Twitter @fosteringreform.


  1. And the biggest problem with CPS is that it puts too much garbage in, so it gets little but garbage out.
    And given that the county paid $3.1 million for lying to take a child, the county needs to implement a program to keep kids from being taken without good reason. The social workers all lied, and the judgment would have been even larger if the court had allowed the counts of judicial deception, which by federal law should have been allowed (I believe they intend to appeal that and will win a reversal on it).

  2. SDM has its issues, to be sure. It is intended as a guidance instrument, ensuring that SWs have considered everything that is in play, rather than relying only on limited factors to inform the process. As much as any other initiative, SDM has served to reduce racial disproportionality in child welfare. “Predictive analysis” is always risky in its own right. Anything that derives from actuarial models will serve to institutionalize biases already present in a system, because actuarial models assume that everything currently is as it should be; any future incidents will be evaluated according to the current reality. You shouldn’t look at who’s in prison, for example, and use that information to go in search of a suspect. I don’t disagree that the SDM instrument is subject to being manipulated and even overridden, but on balance it has served its purpose in ensuring that all factors, strengths and challenges alike, are being considered.

    • I am a bit confused by your comment. SDM is an actuarial model. As far as institutionalizing biases, there is a difference between biases and actual differences between groups. If black children are more likely to be abused or neglected, then we want them to be more likely to be involved in the system. Otherwise they are getting less protection than other kids–which is bias.

    • “…there is a difference between biases and actual differences between groups.”
      What are the “differences between groups” that result in Black children being more likely to be abused or neglected?

      “…If black children are more likely to be abused or neglected, then we want them to be more likely to be involved in the system.”

      What if data point to abuse as more likely for Black youth once within the system?

Comments are closed.