Computer algorithms guide our decisions in big ways and small. They nudge us to buy a particular blender on Amazon and tailor ads to our interests on our Facebook pages, but also seek to reduce repeat domestic violence arrests and assess risk during criminal sentence proceedings.
This is the field of “predictive analytics,” where big data and computer algorithms are said to enhance human decision-making. While these methods are common in business, only now are they migrating into human services provision, and particularly into child welfare agencies.
As early as July, Allegheny County, Penn., which spans the Pittsburgh metropolitan area, plans to launch its own predictive analytics tool to help assess the risk of abuse for children referred to the agency. Allegheny is the first child welfare administration in the world to do so, and the launch is sure to draw attention from other jurisdictions considering a similar path, from Los Angeles to New Zealand. In a bid to assuage concerns around privacy and discrimination, the county also commissioned an ethical review of the program.
The review is not yet public, and county officials say they have yet to read it. They say they will send it out for peer review, and will make its findings central to deciding whether they launch their predictive analytics model this summer.
“While we fully intend to make this information public to the community – we have not had an opportunity to think through the public process and format for sharing this information,” said Karen Blumen, the deputy director of Allegheny County’s Department of Human Services, in an email.
But regardless of Allegheny’s efforts to soften the landing around predictive analytics, there still exists a mass of thorny moral and ethical issues for the county — and other jurisdictions considering similar measures — to grapple with before they can start using these new tools.
On the one hand, proponents say, programs like these ensure that agencies put their resources toward the children who need help the most. On the other hand, detractors point out, predictive analytics could reinforce existing biases within a system that disproportionately impacts poor families and families of color.
“The field of predictive analytics has been quite divisive,” said Rebecca Hamilton, deputy director of the Robert L. Bernstein Institute for Human Rights at the New York University School of Law. The institute recently held a conference entitled “Tyranny of the Algorithm? Predictive Analytics & Human Rights.”
“There’s a huge amount of promise, and a number of concerns, especially if [it’s] not done with a lot of thought,” Hamilton said.
The promise of predictive analytics is an enticing one for child welfare agencies: to make drastically more accurate decisions about how to best help children at risk of abuse. For these children, the stakes could not be higher.
“Where predictive analytics can be useful is at the point of investigation and substantiation: the decision on whether the child will stay in the home or not,” said Richard Gelles, professor of child welfare and family violence at the University of Pennsylvania’s School of Social Policy & Practice, where he also served as dean.
In current practice, when most child welfare agencies receive a report of abuse, the call screener goes through a checklist to gather information based on known risk factors. Then the screener, likely with a supervisor, decides whether to refer the child for further investigation.
This process may seem straightforward, but according to Gelles, these checklists are no better than chance at determining which children are most at risk. And he fears that other types of assessment, similar to actuarial models used by insurance companies, are too easily manipulated.
“You decide what the risk is using clinical judgment and then back-enter to get the score you want,” Gelles said. “Even in the most honest and respectful applications, [actuarial models] are still based on a limited amount of data.”
Gelles also thinks that common criticisms of predictive analytics about bias miss the point.
“When I talk about predictive analytics, the reaction is uniform: isn’t that profiling and stereotyping? By using clinical judgment and consensus risk assessment forms, we are already profiling on the basis of race and income,” he said. “What we’re doing now is so much more immoral and unethical than what would happen with predictive analytics that it’s shocking that people would criticize predictive analytics for profiling.”
Additionally, some think that the collection of data required for predictive analytics in a child welfare context — sometimes criticized as overly invasive — may actually be a good thing.
“There’s a story about how the whole idea of ‘my home as my castle’ is a paternalistic concept to protect perpetrators against investigation,” explained Rhema Vaithianathan, professor of economics at Auckland University of Technology, New Zealand, and the lead researcher on the Allegheny County predictive analytics project. “Under the umbrella of privacy, maybe we are protecting people who shouldn’t be protected, and not protecting people who need it.”
Surveillance & Social Welfare
Privacy and other concerns about the use and misuse of predictive analytics in a child welfare context cannot be dismissed so quickly, however. Critics have voiced concerns throughout the process, from data collection to analysis to the use of risk scores in decision-making.
“[Predictive analytics] could be done very well, or it could just reinforce all the worst aspects of our current child welfare system: the destruction of a lot of families,” said Michele Gilman, a professor of law at the University of Baltimore who studies the intersection of poverty and privacy. “Low income people have always been subject to surveillance. It is the same harms, just new methodology.”
For Gilman, the black-box nature of many of the predictive analytics tools is a significant part of the problem.
“People like me who represent low-income families are very wary of what’s in those algorithms,” Gilman said. “I’m not sure even if the predictors are validly associated with neglect and abuse. My big concern is that [predictive analytics] codifies existing biases in a way that makes them harder to combat.”
Tim Dare, professor of philosophy at the University of Auckland who co-authored the ethical review for New Zealand’s child welfare predictive analytics pilot, echoes concerns about bias in the data.
Others are also concerned that the predictive analytics scores might displace human judgment in a way they were never meant to. Instead of informing better decisions, the predictive analytics score might become the decision itself.
“The rhetoric around [predictive analytics risk scoring] is that it’s supplemental, but it’s the same rhetoric as around college admissions tests,” said Ezekiel Dixon-Roman, professor of social policy at the School of Social Policy & Practice at the University of Pennsylvania. “There’s convincing evidence that it’s gone otherwise. If the score’s not a certain level, it’s not looked at.”
Deborah Schilling Wolfe, executive director of the Field Center for Children’s Policy, Practice, and Research at the University of Pennsylvania, is also concerned about shifting too far away from clinical judgment.
“Predictive analytics greatly advances our work in the ability to identify which children and which cases could best benefit from intervention and protection,” she said. “But predictive analytics does not in itself replace caseworkers and casework judgment. Good casework promotes a trusting relationship in which parents may be more amenable to intervention and change.”
Independent Ethical Review in Allegheny
Allegheny County is diving head-first into predictive analytics, betting that it will drastically improve outcomes for children in the county.
Previously, call screeners making decisions about whether to refer a call of abuse for further investigation had data from all across the county at their fingertips — from public schools, courts, the police, behavioral health services and other sources — but it was cumbersome to access.
With the new pilot, predictive analytics algorithms combine that data into a single risk score, drawing on information that already exists in the county’s databases. Child welfare workers would use the score to inform their decision on whether to refer a child for further investigation. The county aims to start using the new system this summer.
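As a rough illustration of what "combining that data into one risk score" can mean in practice (the county's actual model has not been published), predictive risk models of this kind often take the form of a weighted combination of administrative-data features passed through a logistic function. Every feature name and weight below is hypothetical:

```python
import math

# Purely illustrative: hypothetical feature names and weights,
# not Allegheny County's actual model, which is not public.
WEIGHTS = {
    "prior_referrals": 0.8,
    "behavioral_health_contacts": 0.5,
    "juvenile_court_records": 0.6,
}
INTERCEPT = -3.0


def risk_score(features):
    """Combine administrative-data features into one number in [0, 1]
    using a logistic model, one common form of predictive risk modeling."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# A call screener would see a single number summarizing many records:
score = risk_score({
    "prior_referrals": 2,
    "behavioral_health_contacts": 1,
    "juvenile_court_records": 0,
})
print(round(score, 2))  # prints 0.29
```

The point of such a model is not the arithmetic but the consolidation: records scattered across schools, courts and health systems are reduced to one score a screener can weigh alongside clinical judgment.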
But staff and researchers involved with the project are well aware of the potential pitfalls of predictive analytics, and are completing an ethical review of their proposed model prior to the rollout of the project.
“We’d be very reluctant to implement these kinds of models without an independent ethical review,” Vaithianathan said. “It’s too dangerous to play around with this sort of thing.”
Dare, along with Eileen Gambrill, professor of child and family studies at the University of California-Berkeley, is spearheading the review, which will shortly be delivered to Allegheny County. It confirms that there are sound ethical reasons to use predictive analytics, but also identifies a number of key issues to be addressed in the program implementation, Dare said.
“There is a real worry that not using this technology is unethical,” he said. “It’s giving us information to make a difference in kids’ lives.”
But Dare raises a number of potential ethical problems with the model. One is the fact that there is no real method for clients to consent for their data to be used in the predictive model.
“It’s difficult to get genuine informed consent—we are using data that’s collected for various purposes, and no one had in mind that it would be used for predictive risk modeling,” Dare said.
Dare is also concerned about who has access to the risk score, and how that decision is made.
“Another implementation decision is who gets the information, and what information they get,” he said. “In a case like this where it’s potentially sensitive and stigmatizing, harmful and hurtful, we have to make sure it goes to people who need it to do their job and no further. But on the other hand, it needs to go to someone who can use it effectively.”
Implementing Predictive Analytics Ethically
The day-to-day use of predictive analytics programs is where the ethical rubber meets the road.
“A lot of the ethics comes down to the implementation,” said Vaithianathan, of Auckland University of Technology.
Allegheny County is already thinking about these issues and trying to make implementation choices that will ensure the risk scores help kids and address the issues Dare and Gambrill have identified.
Erin Dalton, who works for the county, also shares some of the concerns about bias in the datasets at their disposal. Because the state has refused to share its school data with Allegheny County, the county has data from only about half of its school districts to contribute to the predictive analytics score. Dalton would prefer to get data directly from the state, but that is not possible.
“That is part of what adds bias to the data,” said Dalton. “It’s reasonable for me to go first to school districts where we have common clients and students, which means that affluent students get left out.”
As part of the research process, Vaithianathan took pains to solicit community feedback throughout the pilot. After presenting at a number of community meetings, she found the response to the program to be neither positive nor negative.
“I don’t think people who are in the system feel that it’s too much of a leap from what’s being done now, because the data’s already available to call screeners,” she said. “It’s just presenting the information in a slightly different way.”
Dalton also thinks that there is community support for the project.
“This stuff is in play in a lot of other places, and our public expects us to use data to protect kids,” she said.
To address issues around data transparency, Dalton also said that the county has recently begun to allow families to view their integrated data records, so that they can better understand what information is being collected about them.
“We do absolutely believe: who better than people themselves to manage their care and that of their loved ones,” she said. “We think it’s the right thing to do.”
The Predicted Future
It is likely that predictive analytics in child welfare (as well as in many other spheres of our lives) is here to stay.
Dare thinks that they will fundamentally change the concept of privacy.
“I think over the next 10-20 years that we will need to rethink privacy,” he said. “We’re either going to have to reconceive our notions of privacy or come up with new ways of protecting it. We should be prepared to live with a little less privacy for the benefits the tool might provide.”
As we are just beginning to approach both the promise and peril of this new set of tools, child welfare agencies will continue to grapple with the ethical implications of using — and of not using — predictive analytics. Whether the tool lives up to its promise is a question for the future.
Sam Waxman is a master’s student in social policy at the University of Pennsylvania’s School of Social Policy and Practice. Prior to coming to Penn, she coordinated job training programs for low-income individuals in Washington, D.C., and was a consultant to philanthropists implementing innovative public sector initiatives.