Making Effective Programmatic Decisions: Why You Need to Know the History and Questions Behind These Terms
Dear Colleagues,
Introduction
This (now) three-part series focuses on how states, districts, schools, and educational leaders decide which services, supports, programs, curricula, instruction, strategies, and interventions to implement in their classrooms. While we need to use programs that have documented efficacy and the highest probability of implementation success, it has nonetheless been my experience that many programs are chosen “for all the wrong reasons,” to the detriment of students, staff, and schools.
In Part I of this series [CLICK HERE], I noted that:
- Beyond the policy-level requirements in the newly-implemented Elementary and Secondary Education Act/Every Student Succeeds Act (ESEA/ESSA), the Act transfers virtually all of the effective school and schooling decisions, procedures, and practices away from the U.S. Department of Education and into the “hands” of the respective state departments of education and their states’ districts and schools.
- Because of this “transfer of responsibility,” states, districts, and schools will be more responsible (and accountable) for selecting their own approaches to curriculum, instruction, assessment, intervention, and evaluation than ever before.
- This will result in significant variability—across states and districts—in how they define school “success” and student progress, measure school and teacher effectiveness, apply assessments to track students’ standards-based knowledge and proficiency, and implement multi-tiered academic and behavioral services and interventions for students.
All of this means that districts and schools will have more freedom—but greater responsibility—to evaluate, select, and implement their own ways of functionally addressing all students’ academic and social, emotional, and behavioral learning and instructional needs—across a multi-tiered continuum that extends from core instruction to strategic response and intensive intervention.
This “local responsibility” is bolstered by the fact that, while ESEA/ESSA discusses districts’ need to implement “multi-tiered systems of supports” and “positive behavioral interventions and supports,” these terms are written in the law in lower case and without the presence of any acronyms.
Thus, the U.S. Department of Education’s strong advocacy (and largely singular funding) of the PBIS and MTSS frameworks that they created are not mandated by ESEA/ESSA or in any other federal law (such as IDEA).
In other words, districts and schools are completely free to establish their own multi-tiered systems of supports and positive behavioral interventions and support systems so long as they are consistent with law, empirically defensible, and result in sustainable student outcomes.
Revisiting the “Top Ten Ways that Educational Leaders Make Flawed, Large-Scale Programmatic Decisions”
Part I of this series [CLICK HERE] then discussed the fact that, while districts and schools will have more ESEA/ESSA responsibility and self-determination, they may not all be prepared to make these decisions in scientifically, psychometrically, methodologically, and contextually sound ways.
This is not to suggest that educators are trying to be ineffective. It is just that they do not have the time, people, and resources to be MORE effective, and they often do not know what they do not know.
The Blog then described the “Top Ten” reasons why educational leaders make flawed, large-scale programmatic decisions: decisions that waste time, money, and resources, and that frustrate staff and students and cause resistance and disengagement.
The flawed reasons discussed were:
1. The Autocrat (I Know Best)
2. The Daydream Believer (My Colleague Says It Works)
3. The Connected One (It’s On-Line)
4. The Bargain Basement Boss (If It’s Free, It’s for Me)
5. The Consensus-Builder (But the Committee Recommended It)
6. The Groupie (But a National Expert Recommended It)
7. The Do-Gooder (It’s Developed by a Non-Profit)
8. The Enabler (It’s Federally or State-Recommended)
9. The Abdicator (It’s Federally or State-Mandated)
10. The Mad Scientist (It’s Research-based)
By self-reflecting on these flawed approaches, the hope is that educational leaders will avoid these hazards, and make their district- or school-wide programmatic decisions in more effective ways.
In Part III (in two weeks), we will specifically look at what a meta-analysis is and is not—highlighting the work of John Hattie.
In this Part II Blog, we will discuss #10 (It’s Research-based) in more depth. Specifically, we will differentiate among three terms that are bandied around when evaluating the efficacy of programs, interventions, and other district-wide or school-wide strategies.
In all, we will leave you with the critical questions that need to be asked when objectively evaluating programs being considered for district-wide or school-wide implementation. . . all so that you can make sound programmatic decisions.
“Scientifically based” versus “Evidence-based” versus “Research-based”
As I provide consultation services to school districts across the country (and world), I continually hear people using three related terms to describe their practice—or their selection of specific services, supports, instruction, strategies, programs, or interventions.
The terms are “scientifically-based,” “evidence-based,” and “research-based” . . . and many educators seem to be using them interchangeably.
And so, because these terms are critical to understanding how to objectively evaluate the quality of a program or intervention being considered for implementation, I provide below a brief history of each term (and its definition, where one exists).
As this series is focusing on the Elementary and Secondary Education Act (ESEA), I will restrict this brief analysis to (a) the 2001 version of ESEA (No Child Left Behind; ESEA/NCLB); (b) the current 2015 version of ESEA (Every Student Succeeds Act; ESEA/ESSA); and (c) ESEA’s current “brother,” the Individuals with Disabilities Education Act (IDEA 2004).
Scientifically Based
This term appeared in ESEA/NCLB 2001 twenty-eight times, and it was (at that time) the “go-to” definition in federal education law when discussing how to evaluate the efficacy, for example, of research or programs that states, districts, and schools needed to implement as part of their school and schooling processes.
Significantly, this term was defined in the law. According to ESEA/NCLB:
The term scientifically based research—
(A) means research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs; and
(B) includes research that—
(i) employs systematic, empirical methods that draw on observation or experiment;
(ii) involves rigorous data analyses that are adequate to test the stated hypotheses and justify the general conclusions drawn;
(iii) relies on measurements or observational methods that provide reliable and valid data across evaluators and observers, across multiple measurements and observations, and across studies by the same or different investigators;
(iv) is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls;
(v) ensures that experimental studies are presented in sufficient detail and clarity to allow for replication or, at a minimum, offer the opportunity to build systematically on their findings; and
(vi) has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review.
The term “scientifically based” is found in IDEA 2004 twenty-five times—mostly when describing “scientifically based research, technical assistance, instruction, or intervention.”
The term “scientifically based” is found in ESEA/ESSA 2015 ONLY four times—mostly as “scientifically based research.” This term appears to have been replaced by the term “evidence-based” (see below) as the “standard” that ESEA/ESSA wants used when programs or interventions are evaluated for their effectiveness.
Evidence-Based
This term DID NOT APPEAR in either ESEA/NCLB 2001 or IDEA 2004.
It DOES appear in ESEA/ESSA 2015, sixty-three times (!!!), most often when describing “evidence-based research, technical assistance, professional development, programs, methods, instruction, or intervention.”
Moreover, because it is the new (and current) “go-to” standard for determining whether programs or interventions have been empirically demonstrated to be effective, ESEA/ESSA 2015 defines this term.
As such, according to ESEA/ESSA 2015:
(A) IN GENERAL.—Except as provided in subparagraph (B), the term ‘evidence-based’, when used with respect to a State, local educational agency, or school activity, means an activity, strategy, or intervention that
‘(i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes based on—
‘(I) strong evidence from at least 1 well-designed and well-implemented experimental study;
‘(II) moderate evidence from at least 1 well-designed and well-implemented quasi-experimental study; or
‘(III) promising evidence from at least 1 well-designed and well-implemented correlational study with statistical controls for selection bias; or
‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and
‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”
(B) DEFINITION FOR SPECIFIC ACTIVITIES FUNDED UNDER THIS ACT.—When used with respect to interventions or improvement activities or strategies funded under Section 1003 [School Improvement], the term ‘evidence-based’ means a State, local educational agency, or school activity, strategy, or intervention that meets the requirements of subclause (I), (II), or (III) of subparagraph (A)(i).
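For readers who like to see this definition’s logic laid out step by step, here is a minimal sketch, in Python, of how the Law’s four “evidence-based” categories map onto study designs. It is purely illustrative: the function and parameter names are my own, and the “Tier” labels come from the U.S. Department of Education’s later non-regulatory guidance rather than from the statute itself.

```python
# Illustrative only: a rough encoding of the ESSA "evidence-based"
# definition quoted above. Function and parameter names are hypothetical;
# the "Tier" labels follow later Department guidance, not the statute.

def essa_evidence_tier(study_design: str,
                       well_designed_and_implemented: bool,
                       significant_positive_effect: bool,
                       controls_for_selection_bias: bool = False,
                       rationale_and_ongoing_evaluation: bool = False) -> str:
    """Return the highest part of the ESSA definition a study could satisfy."""
    # Clause (i): a statistically significant effect on student outcomes
    # from at least one well-designed and well-implemented study.
    if well_designed_and_implemented and significant_positive_effect:
        if study_design == "experimental":        # e.g., a randomized trial
            return "strong evidence (Tier 1)"
        if study_design == "quasi-experimental":
            return "moderate evidence (Tier 2)"
        if study_design == "correlational" and controls_for_selection_bias:
            return "promising evidence (Tier 3)"
    # Clause (ii): a research-based rationale plus ongoing evaluation.
    if rationale_and_ongoing_evaluation:
        return "demonstrates a rationale (Tier 4)"
    return "does not meet the ESSA definition"

print(essa_evidence_tier("correlational", True, True,
                         controls_for_selection_bias=True))
# promising evidence (Tier 3)
```

Note what subparagraph (B) then does: for interventions funded under Section 1003 [School Improvement], only the first three categories count; a program that merely “demonstrates a rationale” does not qualify there.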
Research-Based
This term appeared five times in ESEA/NCLB 2001; it appears four times in IDEA 2004; and it appears once in ESEA/ESSA 2015. When it appears, the term is largely used to describe programs that schools need to implement to support student learning.
Significantly, the term “research-based” is NOT defined in either ESEA law (2001, 2015) or in IDEA 2004.
What Should You Know and Ask When Programs Use These Terms?
Scientifically Based
At this point, if someone uses the term “scientifically based,” they probably don’t know that this term has functionally been expunged as the “go-to” standard in federal education law.
At the same time, as an informed consumer, you can still ask what the researcher or practitioner means by “scientifically based.” Then, if the practitioner is recommending a specific program and endorsing it as “scientifically based,” ask for (preferably refereed) studies and their descriptions of the following (a simple way to record your answers is sketched after this list):
- Demographic backgrounds and other characteristics of the students participating in the studies (so you can compare and contrast these students to your students);
- Research methods used in the studies (so you can validate that the methods were sound, objective, and that they involved control or comparison groups not receiving the program or intervention);
- Outcomes measured and reported in the studies (so you can validate that the research was focused on student outcomes, and especially the student outcomes that you are most interested in for your students);
- Data collection tools, instruments, or processes used in the studies (so that you are assured that they were psychometrically reliable, valid, and objective, such that the data collected and reported are demonstrated to be accurate);
- Treatment or implementation integrity methods and data reported in the studies (so you can objectively determine that the program or intervention was implemented as it was designed, and in ways that make sense);
- Data analysis procedures used in the studies (so you can validate that the data-based outcomes reported were based on the “right” statistical and analytic approaches);
- Interpretations and conclusions reported by the studies [so you can objectively validate that these summarizations are supported by the data reported, and have not been misinterpreted or over-interpreted by the author(s)]; and the
- Limitations reported in the studies (so you understand the inherent weaknesses in the studies, and can assess whether these weaknesses affected the integrity of the studies and the efficacy conclusions they drew about the programs or interventions).
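Because districts often review several candidate programs (and several studies per program) at once, it can help to record the answers to these eight questions in a structured form. Below is one hypothetical way to do so, as a Python sketch; every field name is my own shorthand for the corresponding bullet above, not anything prescribed by the Law or the research literature.

```python
# A hypothetical per-study review form covering the eight questions above.
# All names are illustrative shorthand; complete one instance per study.
from dataclasses import dataclass, fields

@dataclass
class StudyReview:
    demographics_match_our_students: bool    # comparable student populations?
    methods_sound_with_control_groups: bool  # objective, with comparison groups?
    outcomes_relevant_to_our_goals: bool     # the student outcomes WE care about?
    measures_reliable_and_valid: bool        # psychometrically sound tools?
    implementation_integrity_reported: bool  # delivered as designed?
    analyses_appropriate: bool               # the "right" statistical approaches?
    conclusions_supported_by_data: bool      # not over-interpreted?
    limitations_do_not_undercut_claims: bool # weaknesses don't sink the efficacy claim?

    def unresolved_concerns(self) -> list[str]:
        """Every question this study failed or left unanswered."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = StudyReview(True, True, True, False, True, True, True, False)
print(review.unresolved_concerns())
# ['measures_reliable_and_valid', 'limitations_do_not_undercut_claims']
```

A program whose supporting studies all come back with empty concern lists is a far safer bet than one whose reviews are riddled with unresolved questions.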
Evidence-Based
Moving on: If a researcher or practitioner describes a program or intervention as “evidence-based,” you need to ask them whether they are using the term as defined in ESEA/ESSA 2015 (see above).
Beyond this, we need to recognize that, relatively speaking, few of the educational programs or psychological interventions used in schools meet the experimental or quasi-experimental criteria in the Law.
Thus, it would be wise to assume that most educational programs or psychological interventions will qualify as “evidence-based” only because of these components in the Law:
‘(ii)(I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and
‘(II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.”
As such, as an informed consumer, you need to ask the researcher or practitioner (and evaluate the responses to) all of the same questions as outlined above for the “scientifically based” research assertions.
Research-Based
In essence, if a researcher or practitioner uses the term “research-based,” they probably don’t know that the “go-to” term, standard, and definition in federal education law is “evidence-based.”
At the same time, as an informed consumer, a researcher or practitioner’s use of the “research-based” term should raise some “red flags”—as it might suggest that the quality of the research supposedly validating the recommended program or intervention is suspect.
Regardless, as an informed consumer, you should still ask the researcher or practitioner (and evaluate the responses to) all of the same questions as outlined above for the “scientifically based” research assertions.
Ultimately, after (a) collecting the information from the studies supposedly supporting a specific program or intervention, and (b) answering all the questions above, you need to determine the following:
- Is there enough objective information to conclude that the “recommended” program or intervention is independently responsible for the student outcomes that are purported and reported?
- Is there enough objective data to demonstrate that the “recommended” program or intervention is appropriate for MY student population, and will potentially result in the same positive and expected outcomes?
[The point here is that the program or intervention may be effective—but only with certain students. . . and not YOUR students.]
- Will the resources needed to implement the program be time- and cost-effective relative to the “Return-on-Investment”? (A back-of-the-envelope cost sketch follows this list.)
[These resources include, for example, the initial and long-term cost for materials, professional development time, specialized personnel, coaching and supervision, evaluation, parent and community outreach, etc.]
- Will the “recommended” program or intervention be acceptable to those involved (e.g., students, staff, administrators, parents) such that they are motivated to implement it with integrity and over an extended period of time?
[There is extensive research on the “acceptability” of interventions, and the characteristics or variables that make program or intervention implementation likely or not likely.]
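On the Return-on-Investment question in particular, even back-of-the-envelope arithmetic is clarifying before a purchase is made. The sketch below uses entirely hypothetical numbers, supplied only to show the shape of the calculation:

```python
# Back-of-the-envelope cost-effectiveness arithmetic; every figure here is
# hypothetical, included only to illustrate the calculation's structure.
materials_per_year   = 18_000   # licenses, consumables
pd_and_coaching      = 12_000   # professional development, coaching time
personnel_time       = 15_000   # staff hours redirected to the program
evaluation_and_other =  5_000   # data collection, parent outreach, etc.

total_annual_cost = (materials_per_year + pd_and_coaching
                     + personnel_time + evaluation_and_other)
students_served = 400

print(f"Cost per student per year: ${total_annual_cost / students_served:,.2f}")
# Cost per student per year: $125.00
```

The resulting per-student figure can then be weighed against the size and durability of the expected outcome gains, and against the full cost of whatever the program would replace.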
Additional Cautions Regarding Research
Clearly, research has validated some programs, interventions, and/or strategies. As an inherent part of this validation, the programs have been implemented and evaluated with intensity and integrity, and they have been meaningfully applied to address specific student, staff, and school outcomes.
But. . . in answering many of the questions posed throughout this Blog:
- Some programs or interventions will not have demonstrable efficacy;
- Some will demonstrate their efficacy—but not be applicable to YOUR students or situations; and
- Some will claim efficacy, but the research is NOT sound, or the (favorable) conclusions are not warranted by the research.
Indeed, poor-quality research was typically completed (a) by convenience; (b) with small, non-representative, and non-random samples; (c) without comparisons to matched “control groups”; and (d) in scientifically unsound ways. Moreover, some of this “research” was never independently, objectively, or “blindly” reviewed by three or more experts in the field, as occurs when someone publishes their work in a “refereed” professional journal.
When research is not sound, it is usually because:
- The “researchers” are more interested in “marketing, influence, fame, or fortune” and their “research” really doesn’t even qualify as legitimate research [this “science” is pseudoscience]
- The researchers are simply not knowledgeable or skilled in conducting sound research [this science ranges from clumsy to inept]
- The researchers do not have the resources to conduct research at the level of complexity or sophistication needed [this science ranges from ill-advised to well-intended]
When research is not appropriately applied, it is usually because:
- The researchers have interpreted their results (or recommended their use) in ways that go well beyond the intent of their original research, or beyond the people, problems, or parameters involved in that research
- The researchers have confused correlational results with causal results (or represented the former as the latter), and implementing schools or districts have accepted the (false) belief that, for example, “research has proven that this program will directly and exclusively solve this problem”
- The implementing schools or districts do not have the skills or capacity to independently evaluate the research, and they mistakenly (or wishfully) conclude that, for example, a specific program will work “with our students, in our settings, with our staff and resources, given our current problems and desired outcomes,” even though that program has never been tested or validated under those circumstances
PLEASE NOTE: Anyone can do their own research, pay $50.00 to establish a website, and begin to market their products. To determine whether the research is sound, whether the program produces the results it says it does, and whether the same results will meaningfully transfer into your school, agency, or setting, YOU need to do your own investigation, analysis, and due diligence.
Too many programs (as noted above) are purchased because of someone else’s personal experience and testimony, because of “popularity” and marketing, because of a “celebrity” endorsement, or because they are “easy” to implement.
Once again, educational programs and psychological interventions (as well as instruction, curricula, services, strategies, etc.) need to be evidence-based. And, we need to use this term as defined and operationalized in ESEA/ESSA 2015.
Summary
I understand that all of this takes time. At the same time, I know that districts invest this kind of time every time they choose a new reading, math, or science program.
The questions are: Are we using our time effectively? Are we asking the questions and collecting the information that will help us to identify the best program for our students, our staff, and our schools? And, are we prepared to use the data objectively so that the best choice is made?
I hope that you found this Blog (and Part I [CLICK HERE]) helpful and meaningful to your work.
I always look forward to your comments. . . whether on-line or via e-mail.
I hope that your school year has started successfully. To those in the greater Houston area and across Florida, we are thinking about you.
If I can help you in any of the areas discussed in this and other school improvement-focused Blog messages, know that I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff/colleagues, school(s), and district.
In Part III (in two weeks) of this series, we will specifically look at what a meta-analysis is and is not—highlighting the work of John Hattie.
Have a great next two weeks!!!
Best,