Canary in a Coal Mine? SAMHSA’s Clearinghouse Signals Larger Threat to Evidence-Based Policy

It started as a simple story. Once again, the Trump administration had demonstrated its reputed disdain for facts and evidence. This time it had revoked the contract of one of the federal government’s top evidence clearinghouses — one that reviewed studies of mental health and drug treatment programs to determine their effectiveness.

The decision, which came quietly during the holidays, seemed to further prove that the administration cared nothing about facts, nothing about evidence, and little about evidence-based approaches to opioids, which it had elevated to a White House-level priority.

And then the story began to fall apart. An independent review of the clearinghouse had revealed substantial problems with its ratings, including significant potential conflicts of interest. Elinore McCance-Katz, the newly confirmed director of the Substance Abuse and Mental Health Services Administration, echoed those criticisms in a strident public statement.

But underneath the charges and counter-charges, there was a quieter, lurking story — one of widespread problems and alleged corruption in medical research. It is an important story, one that could be a harbinger of growing threats to the evidence-based movement as a whole.

SAMHSA’s evidence clearinghouse, the National Registry of Effective Prevention Programs (NREPP), was first created in 1999 in the wake of growing interest in evidence-based medicine. It was an early federal foray into evidence reviews, coming years before other federal clearinghouses like the What Works Clearinghouse at the Department of Education.

After its initial creation, SAMHSA’s clearinghouse was modified and revised several times, most recently in 2015, when its screening criteria were updated. In the aftermath of these changes, the number of programs reviewed and included in the clearinghouse grew rapidly.

This growth drew the attention of Dennis Gorman, a professor of epidemiology and biostatistics at Texas A&M, who examined the underlying studies that NREPP had used to make its decisions. In 2017, he published an article in the International Journal of Drug Policy that sharply criticized the clearinghouse for its poor quality standards.

According to Gorman, the large majority of the new programs approved by the clearinghouse were based on questionable studies, and most involved significant conflicts of interest. His findings included the following (a quick tally of these figures appears after the list):

  • Single Study Approvals: Of the 113 approved new programs, more than half (67) were approved on the basis of a single published article (51), non-peer-reviewed online report (4), or unpublished report (12). Fewer than half (46) were based on two or more published reports.
  • Questionable Methodology: Many of the studies featured common and easily identified design flaws, including very small and non-representative samples, high rates of study attrition, and brief length of follow-up.
  • Conflicts of Interest: Most of the approved programs (87) were based on studies or materials authored or co-authored by someone associated with the program under review.
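
To put those counts in perspective, the short sketch below simply re-tallies the figures above as percentages; it introduces no data beyond what Gorman reported.

```python
# Rough tally of Gorman's reported counts for the 113 newly approved programs.
# All figures come from the article above; this is arithmetic, not new data.
total_programs = 113

single_source = {
    "single published article": 51,
    "non-peer-reviewed online report": 4,
    "unpublished report": 12,
}
multi_source = 46          # two or more published reports
developer_involved = 87    # program developer was an author or co-author

assert sum(single_source.values()) + multi_source == total_programs

print(f"Single-source approvals: {sum(single_source.values())} "
      f"({sum(single_source.values()) / total_programs:.0%})")
print(f"Two or more published reports: {multi_source} "
      f"({multi_source / total_programs:.0%})")
print(f"Developer conflict of interest: {developer_involved} "
      f"({developer_involved / total_programs:.0%})")
```

In other words, roughly six in ten of the new approvals rested on a single source, and more than three-quarters relied on evidence produced by the program's own developer.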

Today, with an ever-growing list of programs that claim to be evidence-based, clearinghouses are intended to be a stamp of approval, allowing users to sort the wheat from the chaff. But, according to Gorman, this was not happening:

As the number of programs grows, these [problems] are increasingly difficult to identify.

Worse still, the current NREPP review process essentially equates any such quality interventions with those that have been evaluated by the individual who developed and disseminates the program using a very small, self-selected sample, and in which the findings of the evaluation have appeared only in an internal report or an unpublished manuscript or a pay-to-publish online journal.

It even includes interventions that employ therapeutic practices such as thought field therapy and eye movement desensitization that are considered potentially harmful and supported only by pseudoscience.

Gorman suggested the following changes to the clearinghouse’s procedures, which SAMHSA may (or may not) consider in the aftermath of its decision to cancel the contract:

  • Improving the transparency of its review process;
  • Providing detailed declarations of financial conflicts of interest among program developers who review their own programs;
  • Requiring truly independent replication studies;
  • Assigning most significance to results from studies appearing in journals that adhere to rigorous publication standards, such as requiring preregistration of analysis plans and data and materials sharing; and
  • Putting a mechanism in place (such as Registered Reports) that clearly distinguishes exploratory research from hypothesis testing.

Patrick Lester is director of the Social Innovation Research Center.


2 Comments

  1. When talking about the National Registry of Evidence-Based Programs and Practices (NREPP), one needs to distinguish between legacy program reviews and newly reviewed programs. A recently published article in the International Journal of Drug Policy by Dr. Gorman offers a critique of newly reviewed NREPP submissions, asking whether NREPP has "lost its way" from the high standards of review applied to the 300+ legacy programs. The legacy programs are often the most scientifically proven strategies in either prevention or treatment. Dr. Gorman's critique focuses on the 100+ newly reviewed programs, and he makes the point that the new reviews are less stringent.

    The legacy programs are widely cited in other reviews (e.g., Blueprints, IOM reports, Surgeon General's reports) and have a deep base of prior research by multiple investigators using the very best experimental designs, including comparative effectiveness trials, long-term follow-up, and systematic replications by different, independent scientists across the US and other countries. Most of the legacy programs represent the best scientific investments of the U.S. National Institutes of Health, the Centers for Disease Control and Prevention, and other federal agencies, as well as foundations and the European Union.

    Dr. Gorman writes that the best science involves more than one study. He's correct, and it can be a very large task to review all of the studies on some well-proven prevention or treatment strategies. For example, Psycnet.apa.org lists 149 studies or publications on the Good Behavior Game, with 23 references to "randomized" trials. The National Library of Medicine lists 63 publications, with 27 papers involving randomized controlled trials, some with very long follow-up. I openly acknowledge a conflict of interest, as I own the main vendor of the Good Behavior Game, which is used in about 10,000 classrooms, and I specifically require that any and all findings be published, in keeping with my personal commitment to science as a learning platform. We learn more from our mistakes than our successes.

    Implicit in Dr. Gorman's critique is the notion of reliability and validity, which represent measures of "truthiness" (a clever term coined by Stephen Colbert). Higher-quality research typically reports measures of reliability and validity, not just statistical significance. A statistical difference could easily be significant at the .01 or .001 level yet be meaningless in practice. For example, I'm pretty sure that most Americans know, with high levels of statistical significance, that texting while driving is potentially harmful, but that doesn't stop people from driving and texting (social validity).
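
    The gap between statistical and practical significance is easy to demonstrate. The sketch below is purely illustrative, with invented numbers: a 0.2-point difference on a scale with a 10-point spread sails past the .001 threshold once each group has 100,000 observations, even though the effect is trivial.

    ```python
    # Illustration only: a trivially small effect becomes "highly significant"
    # once the sample is huge. All numbers here are invented for demonstration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100_000  # participants per group

    control = rng.normal(loc=50.0, scale=10.0, size=n)
    treated = rng.normal(loc=50.2, scale=10.0, size=n)  # a 0.2-point bump (d = 0.02)

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"mean difference: {treated.mean() - control.mean():.2f} points")
    print(f"p-value: {p_value:.1e}")  # comfortably below .001 ...
    print(f"Cohen's d: {(treated.mean() - control.mean()) / 10.0:.3f}")  # ... yet tiny
    ```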

    Dr. Gorman’s publications always demand transparency and good science in his critiques of others [1-4]. Based on his own criteria, his current paper does not rise to the level of good science by his own standards, which is ironic given his frequent calls for greater transparency in prevention research [3, 5, 6]. His paper does not report the coding structure or provide a link to the coding structures used, nor does his report provide any reference of inter-observer agreement on his ratings of poor science. In other words, no independent party could easily replicate his findings using his methods. Inter-observer agreement is foundational to good science [7, 8].

    I am bothered by a curveball in the paper. Dr. Gorman claims that recent reviews contain programs that are considered potentially harmful, specifically naming EMDR ("eye movement desensitization"). That is a big claim, and it deserves citations, yet his citation addresses the definition of harm rather than actual research demonstrating such harm. At the National Library of Medicine (www.pubmed.gov), there are 454 publications on EMDR, and 129 of them appear to involve randomized controlled studies. If one searches "eye movement desensitization" AND randomized AND harm, there is one study [9], a multi-site single-blind clinical study that concludes:

    “The results from the post treatment measurement can be considered strong empirical indicators of the safety and effectiveness of prolonged exposure and EMDR. The six-month and twelve-month follow-up data have the potential of reliably providing documentation of the long-term effects of both treatments on the various outcome variables. Data from pre-treatment and mid-treatment can be used to reveal possible pathways of change.”

    PS. Good prevention and treatment science is really hard, especially when you want to do it at a public-health level. It's messy, because children, families, teachers, clinicians, other people, organizations, and political leaders don't always do what they are supposed to. That's human behavior. Short-term changes may or may not predict results 1, 5, 10, or 20 years later. Follow-up is extremely expensive and hard to come by in an era of instant gratification and the need for fame. Promotion and tenure for most academics feed off lots of publications and grants. It's easy to be a critic, and harder to create programs and practices that actually work, with the real possibility that one's great idea might be stupid or, worse, harmful.

    References Cited

    1. Gorman, D.M., The irrelevance of evidence in the development of school-based drug prevention policy, 1986-1996. Eval Rev, 1998. 22(1): p. 118-46.

    2. Gorman, D.M., The best of practices, the worst of practices: The making of science-based primary prevention programs. Psychiatr Serv, 2003. 54(8): p. 1087-9.

    3. Gorman, D.M., Can We Trust Positive Findings of Intervention Research? The Role of Conflict of Interest. Prev Sci, 2016.

    4. Gorman, D.M., J.S. Searles, and S.E. Robinson, Diffusion of Intervention Effects. J Adolesc Health, 2016. 58(6): p. 692.

    5. Gorman, D.M., Has the National Registry of Evidence-based Programs and Practices (NREPP) lost its way? Int J Drug Policy, 2017. 45: p. 40-41.

    6. Gorman, D.M., A.D. Elkins, and M. Lawley, A Systems Approach to Understanding and Improving Research Integrity. Sci Eng Ethics, 2017.

    7. Sidman, M., Tactics of Scientific Research. 1988: Cambridge Center for Behavioral Studies. 428.

    8. Cook, T.D. and D.T. Campbell, Quasi-Experimentation: Design & Analysis Issues for Field Settings. 1979: Houghton Mifflin.

    9. de Bont, P.A., et al., A multi-site single blind clinical study to compare the effects of prolonged exposure, eye movement desensitization and reprocessing and waiting list on patients with a current diagnosis of psychosis and comorbid post traumatic stress disorder: study protocol for the randomized controlled trial Treating Trauma in Psychosis. Trials, 2013. 14: p. 151.

  2. Is there a clearinghouse that *is* doing all the right things and that practitioners and policy makers can put more trust in?
