We must improve the low standards underlying “evidence-based practice” - Kristen Bottema-Beutel, 2023
Evidence-based practice is the process of identifying the best available evidence to make decisions about practices that should be deployed to support individuals in a given population (McKibbon, 1998; see Vivanti, 2022, for a review in relation to autism). Practices that meet a predefined set of evidentiary criteria are labeled “evidence-based practices” (EBPs1) to promote their adoption by service providers. A tenet of EBP is that the research used to designate EBPs should be rigorous, with the fewest possible risks of bias (Slavin, 2008).
Critics of autism EBP frameworks have argued that they: do not consider the scope of change indexed by outcome measures, so that broad, developmental change and narrow, context-bound change are conflated (Sandbank et al., 2021)2; lead to an overestimation of effectiveness by tallying studies that show effects while ignoring gray literature, studies showing null effects, and studies showing iatrogenic effects (Sandbank et al., 2020; Slavin, 2008); and use taxonomies for categorizing practices that conflate practices with specific components of those practices (Ledford et al., 2021). The aim of this editorial is to point out another limitation of autism EBP frameworks: their research quality thresholds are much too low for making determinations about which interventions are likely to be efficacious. Low standards result in practices with questionable efficacy being labeled EBPs and promoted for use, and they perpetuate the continued production of low-quality autism intervention research.
Crucially, none of these EBP frameworks considers whether intervention researchers measure or report on adverse events, which are unintended negative consequences of interventions that can cause short- or long-term harms. This is problematic because selecting interventions should involve appropriate weighting of the potential for benefit against the potential for harm. The pairing of low standards with insufficient consideration of adverse events that is common to each of these frameworks could mean that researchers routinely recommend interventions that confer little or no benefit, while also inadvertently putting autistic people at risk of harm.
Across these two reviews, we found that adverse events were rarely mentioned (they were mentioned in 7% of studies in our review on young children, and in only 2% of studies in our review on transition-age youth), but there is nevertheless evidence that they do occur (Bottema-Beutel et al., 2021a, 2022).
The conclusions from these two quality reviews contrast starkly with findings from EBP reports. For example, nearly half of the 28 practices designated as “evidence-based” in the most recent NCAEP report were behavioral (i.e. practices that rely on manipulating behavioral antecedents and consequences to shape new behavior).4 Similarly, Smith and Iadarola’s (2015) report concluded that behavioral practices, either alone or in combination with developmental practices, were “well established,” and the National Autism Center (2015) considered a variety of behaviorally based interventions to be “established.” However, in Sandbank et al. (2020), we showed that there were too few randomized controlled trials of behavioral interventions to draw any conclusions about their efficacy for autistic children. In our review of interventions for transition-age autistic youth (Bottema-Beutel et al., 2022), we found that although 70% of the interventions tested were behaviorally based, quality concerns prevented us from considering any intervention practice to have sufficient evidence. Because autism EBP frameworks do not distinguish between research that adheres to some quality standards but is still designed with significant risks of bias and research in which those risks are minimized, the reports may mislead researchers, practitioners, and commissioners of services into concluding that behavioral interventions are better supported by research evidence than other kinds of interventions, given the high number of behavioral strategies labeled as EBPs. In reality, behavioral intervention research carries more risks of bias than research examining other types of interventions (Sandbank et al., 2020).