May 23, 2024

The Siren Song of “Evidence-Based” Instruction

By Alfie Kohn

I’m geeky enough to get a little excited each time a psychology or education journal lands in my mailbox.1 Indeed, I’ve spent a fair portion of my life sorting through, critically analyzing, and writing about social science research. Even my books that are intended for general readers contain, …
This would be troubling enough if evidence and science were invoked to justify all sorts of educational approaches, as seems to be the case with a label like “best practice.” But these words are almost always used to defend traditionalist practices: direct instruction, along with control-based interventions derived from Skinnerian behaviorism such as Applied Behavior Analysis (ABA) and Positive Behavioral Interventions and Supports (PBIS). A kind of ideological fervor tends to fuel each of these, whereas actual empirical support for them could be described as somewhere between dubious and negligible.7
At best, then, there are important questions to ask about evidence that’s cited in favor of a given proposal, particularly when it’s intended to justify a one-size-fits-all teaching strategy. At worst, the term evidence-based is used not to invite questions but to discourage them, much as a religious person might seek to end all discussion by declaring that something is “God’s will.” Too often, the invocation of “science” to defend traditionalist education reflects an agenda based more on faith than on evidence.
Even more disturbing is the fact that the term evidence-based sometimes functions not as a meaningful modifier but just as a slogan, an all-purpose honorific like “all-natural” on a food label. Rather than denoting the existence of actual evidence, its purpose may be to brand those who disagree with one’s priorities as “unscientific” and pressure them to fall in line.6
But some people take an extreme, reductionist view of what qualifies as data, dismissing whatever can’t be reduced to numbers, or ignoring inner experience and focusing only on observable behaviors, or attempting to explain all of human life in terms of neurobiology. All of these have troubling implications for education, leaving us with a shallow understanding of the field. People who talk about the “science” of reading or learning, for example, rarely attend to student motivation or the fact that “all learning is a social process shaped by and infused with a system of cultural meaning.”2
Often it turns out that “effective,” along with other terms of approbation (“higher achievement,” “positive outcomes,” “better results”), signifies nothing more than scoring well on a standardized test. Or having successfully memorized a list of facts. Or producing correct answers in a math class (without grasping the underlying principles). Or being able to recognize and pronounce words correctly (without necessarily understanding their meaning).
Any one of these qualifiers, let alone all of them together, means that evidence of an “on-balance” effect for a given intervention doesn’t allow us to claim that it’s a sure bet for all kids.
Perhaps that helps to explain why so many literacy experts are skeptical of, if not alarmed by, what’s being presented as “evidence-based” in their field.
“Effective teaching is not just about using whatever science says ‘usually’ works best,” Richard Allington reminds us. “It is all about finding out what works best for the individual child and the group of children in front of you.”3
As one critique of the “science of reading” movement observes, medical research is “trending toward more individualized diagnoses and treatments…[since] patients may differ greatly in the response to certain drugs or how their immune systems work….But the so-called ‘science of reading’ is moving in the opposite direction – toward a monolithic and standard approach.”4
Science complicates more often than it simplifies, which is your first clue that invoking “evidence-based” or “the science of…” to demand that teachers always do this or never do that (or even that they be legally compelled to do this, or forbidden from doing that) represents the very antithesis of good science.
Explicit academic instruction in preschools, too, is presented as evidence-based even though, once again, actual evidence not only fails to support this approach but warns of its possible harms.12
Jacob, similarly, recalled that “the use of problem solving as a means of developing conceptual understanding [in math] was abandoned and replaced by direct instruction of skills” in California, a move likewise rationalized by “the use of the code phrase research-based instruction” even though the available research actually tended to point in the opposite direction (and still does). Indeed, Jacob added, the phrase research-based was just “a way of promoting instruction aligned with ideology.”1
Evidence of an effect at what cost? It’s not just that restricting evidence to what can be seen or measured limits our understanding of teaching and learning. It’s that doing so ends up supporting the kind of instruction that can alienate students and sap their interest in learning. Thus does schooling become not only less pleasant but considerably less effective. This exemplifies a broader phenomenon that Yong Zhao describes as a tendency to overlook unanticipated, harmful consequences. Even if a certain way of teaching did produce the desired effects, he argues, an inattention to its damaging side effects means that what’s sold to us as “evidence based” can sometimes do more harm than good.5