Hattie Haters and Lovers: Both are Still Missing the Effective Implementation Steps that Practitioners Need

Critical Questions to Ask your “Hattie Consultant” Before You Sign the Contract

Dear Colleagues,

   Over the past five years especially, John Hattie has become internationally known for his meta-analytic research into the variables that best predict students’ academic achievement (as well as other school-related outcomes). 

   Actually, his various Visible Learning books (which have now generated a “Hattie-explosion” of presentations, workshops, institutes, and “certified” Hattie consultants) report the results of his “meta-meta-analyses.”  These are analyses that average the effect sizes from many separate meta-analyses, each of which pooled studies investigating the effect of one psychoeducational variable, strategy, intervention, or approach on student achievement.

   Let me be clear from the start.  This Blog is not intended to criticize or denigrate Hattie, in any way, on a personal or professional level.  He is a prolific researcher and writer, and his work is quite impressive.

   However, this Blog will critique the statistical and methodological underpinnings of meta- and meta-meta-analytic research, discuss its strengths and limitations, and, most essentially, delineate its research-to-practice implications and implementation requirements.

_ _ _ _ _

   To this end, it is important that educators understand:

  • The strengths and limitations of meta-analytic research—much less meta-meta-analytic research;
  • What conclusions can be drawn from the results of sound meta-analytic research;
  • How to transfer sound meta-analytic research into actual school- and classroom-based instruction or practice; and
  • How to decide if an effective practice in one school, classroom, or teacher is “right” for your school, classrooms, and teachers.

   But to some degree, this describes the end of this story.  Let’s first look at the beginning.


A Primer on Meta-Analysis

   Beginning in late August 2017, I wrote a three-part Blog Series that discussed a number of research-to-practice principles and processes to help educational leaders make effective decisions regarding their selection and implementation of specific services, supports, strategies, and interventions.

  • Part I of this Series (August 26, 2017) was titled, The Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions: The Hazards of ESEA/ESSA’s Freedom and Flexibility at the State and Local Levels

[CLICK HERE]

_ _ _ _ _

  • Part II (September 9, 2017) was titled, “Scientifically based” versus “Evidence-based” versus “Research-based”—Oh my!!!  Making Effective Programmatic Decisions: Why You Need to Know the History and Questions Behind these Terms

[CLICK HERE]

_ _ _ _ _

  • Part III (September 25, 2017) was titled, Hattie’s Meta-Analysis Madness: The Method is Missing !!! Why Hattie’s Research is a Starting-Point, but NOT the End-Game for Effective Schools

[CLICK HERE]

   Among other things, this last Blog provided a brief description of the meta-analytic approach and a series of questions that educational leaders need to ask regarding this approach.  Some of this discussion appears below.

What is a Meta-Analysis?

   A meta-analysis is a statistical procedure that combines the effect sizes from separate studies that have investigated a common program, strategy, or intervention.  (An effect size is a standardized measure of impact; it is typically calculated as the difference between the treatment and control group means, divided by the pooled standard deviation.)  The resulting pooled effect size provides a more reliable and valid “picture” of the program or intervention’s usefulness or impact because it involves more subjects, more implementation trials and sites, and (usually) more geographic and demographic diversity.  In Hattie’s work, an effect size of 0.40 is used as the “hinge point,” with effect sizes above 0.40 said to reflect a “meaningful” impact.

   Significantly, when the impact (or effect) of a “treatment” is consistent across separate studies, a meta-analysis can be used to estimate that common effect.  When effect sizes differ across studies, a meta-analysis can be used to explore the reasons for this variability (for example, through moderator analyses).
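
   To make the pooling mechanics concrete, here is a minimal sketch of how a pooled effect size might be computed.  The study numbers are invented for illustration, and the fixed-effect, inverse-variance weighting shown here is only one common approach; as described above, Hattie’s syntheses instead report averages of effect sizes taken across meta-analyses.

```python
# Minimal sketch: pooling standardized mean differences (Cohen's d) from
# several hypothetical studies of the same intervention.
# All numbers are invented for illustration only.

studies = [
    # (effect size d, treatment n, control n)
    (0.55, 40, 40),
    (0.20, 120, 115),
    (0.35, 60, 58),
]

def d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

# Fixed-effect, inverse-variance weighting: more precise studies count more.
weights = [1 / d_variance(d, n1, n2) for d, n1, n2 in studies]
pooled = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)

print(f"Pooled effect size: {pooled:.2f}")             # about 0.30
print("Above the 0.40 'hinge point'?", pooled > 0.40)   # False
```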

_ _ _ _ _

   Meta-analytic research typically follows some common steps.  These involve:

  • Identifying the program, strategy, or intervention to be studied
  • Completing a literature search of relevant research studies
  • Deciding on the selection criteria that will be used to include an individual study’s empirical results
  • Pulling out the relevant data from each study, and running the statistical analyses
  • Reporting and interpreting the meta-analytic results

   As with all research, there are a number of subjective decisions embedded in meta-analytic research, and thus, there are good and bad meta-analytic studies.

   Indeed, educational leaders cannot assume that “all research is good because it is published,” and they cannot assume that even “good” meta-analytic research is applicable to their communities, schools, staff, and students.

   And so, educational leaders need to independently evaluate any reported meta-analytic research—including research discussed by Hattie—before accepting its results.

   Among the questions that leaders should ask when reviewing (or when told about the results from) meta-analytic studies are the following:

  • Do the programs, strategies, or interventions chosen for investigation use similar implementation steps or protocols?
  • Are the variables investigated by a meta-analytic study causally related (rather than merely correlated) to student learning, and can they be taught to a parent, teacher, or administrator?
  • In conducting the literature review, did the researchers consider (and control for) the potential of a “publication bias?”
  • What were the selection criteria used by the author of the meta-analysis to determine which individual studies would be included in the analysis, and were these criteria reliably and validly applied?
  • Were the best statistical methods used in the meta-analysis?  Did one or two large-scale or large-effect studies outweigh the results of the smaller studies that also were included (see the sketch just after this list)?  Did the researcher’s conclusions match the actual statistical results from the meta-analysis?
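
   As promised above, here is a small, hypothetical illustration of why the weighting question matters.  The numbers are invented; the point is that a handful of small studies with large effects can pull a simple (unweighted) average of effect sizes to the opposite side of the 0.40 “hinge point” from a sample-size-weighted estimate.

```python
# Hypothetical illustration: how weighting decisions can change a pooled
# effect size. All numbers are invented for illustration only.

studies = [
    # (effect size d, total sample size)
    (0.90, 30),    # small study, large effect
    (0.80, 25),    # small study, large effect
    (0.15, 2000),  # one large study, small effect
]

unweighted = sum(d for d, _ in studies) / len(studies)
weighted = sum(d * n for d, n in studies) / sum(n for _, n in studies)

print(f"Unweighted mean effect size:   {unweighted:.2f}")  # about 0.62
print(f"Sample-size-weighted estimate: {weighted:.2f}")    # about 0.17

# The unweighted mean clears the 0.40 'hinge point'; the weighted estimate
# does not. Which story gets told depends on a decision rule the reader
# may never see.
```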

   These questions were all answered in my September 25, 2017 Blog.

[CLICK HERE]

   Clearly, these questions must be answered for the individual meta-analyses used in any meta-meta-analysis.


Robert Slavin’s June 2018 Blog on Hattie’s Work

   Dr. Robert Slavin is an exceptional researcher in his own right.  Slavin is the primary author, researcher, and implementer of Success for All, an evidence-based literacy (and math) program that is one of the longest-standing, best-researched, and most effective instructional approaches in recent history.

   On June 21, 2018, Slavin published a Blog, John Hattie is Wrong, where he reported his analyses of Hattie’s research.  Below are the essential snippets from his Blog:

However, operating on the principle that anything that looks to be too good to be true probably is, I looked into Visible Learning to try to understand why it reports such large effect sizes. My colleague, Marta Pellegrini from the University of Florence (Italy), helped me track down the evidence behind Hattie’s claims. And sure enough, Hattie is profoundly wrong. He is merely shoveling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases.
Part of Hattie’s appeal to educators is that his conclusions are so easy to understand. He even uses a system of dials with color-coded “zones,” where effect sizes of 0.00 to +0.15 are designated “developmental effects,” +0.15 to +0.40 “teacher effects” (i.e., what teachers can do without any special practices or programs), and +0.40 to +1.20 the “zone of desired effects.” Hattie makes a big deal of the magical effect size +0.40, the “hinge point,” recommending that educators essentially ignore factors or programs below that point, because they are no better than what teachers produce each year, from fall to spring, on their own.
In Hattie’s view, an effect size of from +0.15 to +0.40 is just the effect that “any teacher” could produce, in comparison to students not being in school at all. He says, “When teachers claim that they are having a positive effect on achievement or when a policy improves achievement, this is almost always a trivial claim: Virtually everything works.”
An effect size of 0.00 to +0.15 is, he estimates, “what students could probably achieve if there were no schooling” (Hattie, 2009, p. 20). Yet this characterization of dials and zones misses the essential meaning of effect sizes, which are rarely used to measure the amount teachers’ students gain from fall to spring, but rather the amount students receiving a given treatment gained in comparison to gains made by similar students in a control group over the same period. So an effect size of, say, +0.15 or +0.25 could be very important.
(One of) Hattie’s core claims (is that it) is possible to meaningfully rank educational factors in comparison to each other by averaging the findings of meta-analyses.  These claims appear appealing, simple, and understandable. But they are also wrong.
The essential problem with Hattie’s meta-meta-analyses is that they accept the results of the underlying meta-analyses without question. Yet many, perhaps most meta-analyses accept all sorts of individual studies of widely varying standards of quality. In Visible Learning, Hattie considers and then discards the possibility that there is anything wrong with individual meta-analyses, specifically rejecting the idea that the methods used in individual studies can greatly bias the findings.
To be fair, a great deal has been learned about the degree to which particular study characteristics bias study findings, always in a positive (i.e., inflated) direction. For example, there is now overwhelming evidence that effect sizes are significantly inflated in studies with small sample sizes, brief durations, use measures made by researchers or developers, are published (vs. unpublished), or use quasi-experiments (vs. randomized experiments) (Cheung & Slavin, 2016).
Many meta-analyses even include pre-post studies, or studies that do not have pretests, or have pretest differences but fail to control for them. For example, I once criticized a meta-analysis of gifted education in which some studies compared students accepted into gifted programs to students rejected for those programs, controlling for nothing!
A huge problem with meta-meta-analysis is that until recently, meta-analysts rarely screened individual studies to remove those with fatal methodological flaws. Hattie himself rejects this procedure: “There is…no reason to throw out studies automatically because of lower quality” (Hattie, 2009, p. 11).
In order to understand what is going on in the underlying meta-analyses in a meta-meta-analysis, it is crucial to look all the way down to the individual studies. . . Hattie’s meta-meta-analyses grab big numbers from meta-analyses of all kinds with little regard to the meaning or quality of the original studies, or of the meta-analyses. . .
To create information that is fair and meaningful, meta-analysts cannot include studies of unknown and mostly low quality. Instead, they need to apply consistent standards of quality for each study, to look carefully at each one and judge its freedom from bias and major methodological flaws, as well as its relevance to practice. A meta-analysis cannot be any better than the studies that go into it. Hattie’s claims are deeply misleading because they are based on meta-analyses that themselves accepted studies of all levels of quality.

   Slavin makes some exceptional statistical and methodological points. . . points that should be added to those I have already cited above.  NOTE that we are discussing the process and decision rules that Hattie uses when conducting his statistical analyses.

   Educators must understand that these decision rules—which are most often determined by the individual(s) conducting the research—can influence the results and conclusions of the research.

   But the story goes on . . . .


DeWitt Responds in Education Week

   In a June 26, 2018 Education Week article, Peter DeWitt responded to Dr. Slavin:  John Hattie Isn’t Wrong. You Are Misusing His Research.

   Appropriately, DeWitt noted his affiliation with Hattie and his Visible Learning cadre of trainers:

For full disclosure I work with Hattie as a Visible Learning trainer, and I often write about his research here in this blog. Additionally, I present with him often, and read pretty much everything he writes because I want to gain a deeper understanding of the work. To some that may mean I have a bias, which is probably somewhat true, but it also means that I have taken a great deal of time to understand his research and remain current in what he suggests.

   Not surprisingly, DeWitt then defends Hattie’s work.  Part of this defense is his own, and part is based on communications (and quotes) from Hattie that occurred after Slavin’s blog was published. 

_ _ _ _ _

   Here is what Hattie (through DeWitt) had to say:

First of all, Slavin does not like that the average of all effects in my work centre on .4 - well, an average is an average and it is .40. Secondly, I have addressed the issue of accepting "all sorts of individual studies of widely varying standards of quality". Given I am synthesising meta-analyses and not original studies, I have commented often about some low quality metas (e.g., re learning styles) and introducing a metric of confidence in the meta-analyses. Those completing meta-analyses often investigate the effect of low and high-quality studies but rarely does it matter - indeed Slavin introduce "best-evidence" synthesis but it has hardly made a dent as it is more important to ask the empirical question - does quality matter? Hence, my comment "There is...no reason to throw out studies automatically because of lower quality" (Hattie, 2009, p. 11). It should be investigated.
I think we agree that care should always be taken when interpreting effect-size (whatever the level of meta-synthesis), that moderators should always be investigated (for example, when looking at homework two of the moderators are how homework works in elementary students and how it impacts secondary students), and like many other critics who raise the usual objections to meta-synthesis the interpretations should be queried - I have never said small effects may not be important; the reasons why some effects are small are worth investigating (as I have done often), care should be taken when using the average of all influences, and it comes down to the quality of the interpretations - and I stand by my interpretations.

_ _ _ _ _

   Here is what DeWitt also had to say:

Slavin does bring up a point that Hattie often brings up. Hattie has always suggested that school leaders and teachers not simply go after those influences that have the highest effect sizes. In fact, in numerous presentations and articles he has suggested that schools get an understanding of their current reality and research the influences that will best meet their needs. He has additionally suggested that those leaders and teachers not throw out those strategies they use in the classroom, but actually gather evidence to understand the impact of those strategies they use.
Secondly, Hattie's work is not all about the numbers, which Slavin suggests. And that is actually a good point to end with in this blog. Many school leaders have a tendency to go after the influences with the highest effect sizes, and those of us who work with Hattie, including Hattie himself, have suggested that this is flawed thinking.
Hattie's work is about getting us to reflect on our own practices using evidence, and move forward by looking at our practices that we use every day and collecting evidence to give us a better understanding of how it works.

Slavin’s July 2018 Response

   Continuing this dialogue, Slavin responded quickly to DeWitt in an Education Week Letter to the Editor published on July 17, 2018.  In that letter, he stated:

My whole point in the post was to note that Hattie's error is in accepting meta-analyses without examining the nature of the underlying studies. I offered examples of the meta-analyses that Hattie included in his own meta-meta-analysis of feedback. They are full of tiny, brief lab studies, studies with no control groups, studies that fail to control for initial achievement, and studies that use measures made up by the researchers.
In DeWitt's critique, he has a telling quote from Hattie himself, who explains that he does not have to worry about the nature or quality of the individual studies in the meta-analyses he includes in his own meta-meta-analyses, because his purpose was only to review meta-analyses, not individual studies. This makes no sense. A meta-analysis (or a meta-meta-analysis) cannot be any better than the studies it contains.
If Hattie wants to express opinions about how teachers should teach, that is his right. But if he claims that these opinions are based on evidence from meta-analyses, he has to defend these meta-analyses by showing that the individual studies that go into them meet modern standards of evidence and have bearing on actual classroom practice.

My Perspective: How Do You Go from Meta-Analysis to Effective Practice?

   In Visible Learning, Hattie described 138 rank-ordered influences on student learning and achievement based on a synthesis of more than 800 meta-analyses covering more than 80 million students.  In his subsequent research, the list of effects was expanded (in Visible Learning for Teachers), and now (2016), the list—based on more than 1,200 meta-analyses—includes 195 effects and six “super-factors.”

   Presently, based on a June 2018 PDF on the Corwin Press website (Corwin Press “owns” the rights to Hattie’s Visible Learning Plus approach—at least in the United States), Hattie’s “Top Twenty” approaches with the strongest effects on student learning and achievement are:

  • Collective teacher efficacy
  • Self-reported grades
  • Teacher estimates of achievement
  • Cognitive task analysis
  • Response to intervention
  • Piagetian programs
  • Jigsaw Method
  • Conceptual change programs
  • Prior ability
  • Strategy to integrate prior knowledge
  • Self-efficacy
  • Teacher credibility
  • Micro-teaching/video review of lessons
  • Transfer strategies
  • Classroom discussion
  • Scaffolding
  • Deliberate practice
  • Summarization
  • Effort
  • Interventions for students with learning needs

_ _ _ _ _

   First of all, many of these “Top 20” approaches are different from those listed when I wrote my September 25, 2017 Blog—less than 10 months ago.  Second, some of the approaches have new labels.  For example, “Interventions for students with learning needs” (see below) was labeled “Comprehensive Interventions for Learning Disabilities” less than 10 months ago.

   More importantly. . . OK . . . I’ll admit it:

   As a reasonably experienced school psychologist, I have no idea what the vast majority of these approaches actually involve at a functional school, classroom, teacher, or student level. . . much less what methods and implementation steps to use.

   To begin to figure this out, you would need to take the following research-to-practice steps:

  • Go back to Hattie’s original works and look at his glossaries that define each of these terms
  • Analyze the quality of each Hattie meta-meta-analysis in each area
  • Find and analyze each respective meta-analysis within each meta-meta-analysis
  • Find and evaluate the studies included in each meta-analysis, and determine which school-based implementation methods (among the variety of methods included in each meta-analysis) are the most effective or “best” methods—relative to student outcomes
  • Translate these methods into actionable steps, while also identifying and providing the professional development and support needed for sound implementation
  • Implement and evaluate the short- and long-term results

   And so—as encouraged by Hattie, Slavin, and DeWitt—the fundamental question embedded in all of the “he said, he said” is:

   How do we effectively go from meta-analytic research to effective practice so that districts and schools select the best approaches to enhance their student achievement and implement these approaches in the most effective and efficient ways?

   This, I believe, is the question that the researchers are not talking about.

_ _ _ _ _

My Perspective: The Method is Missing

   To demonstrate the research-to-practice points immediately above, let’s look at two related approaches on Hattie’s list:

  • Response to Intervention (Effect Size: 1.29)
  • Interventions for Students with Learning Needs (Effect Size: 0.77)

   Response to Intervention is one of Hattie’s “Super Factors.” 

   Hattie’s Glossary defines Response to Intervention as “an educational approach that provides early, systematic assistance to children who are struggling in one or many areas of their learning. RTI seeks to prevent academic failure through early intervention and frequent progress measurement.” 

   In Visible Learning for Teachers, Hattie devotes one paragraph to Response to Intervention— citing seven generic “principles.”

   None of this description would effectively guide a district or school in how it should operationalize Response to Intervention and implement a sound outcome-based system or set of approaches.

   To do this, school leaders would have to follow the research-to-practice steps outlined above.

_ _ _ _ _

   Hattie’s meta-analytic research (at least as reported on the Corwin Press website) now identifies “Interventions for students with learning needs.”

   However, as far as I can determine, there is no published glossary that defines this new construct.  Nor are the meta-analyses underlying this construct readily available, should one want to follow the research-to-practice steps outlined above.

   Previously, Hattie identified “Comprehensive Interventions for Learning Disabled Students” as having one of the five top effect sizes relative to impacting student learning and achievement. 

   In the Visible Learning for Teachers Glossary, it was noted that:

The presence of learning disability can make learning to read, write, and do math especially challenging. Hattie admits that “it would be possible to have a whole book on the effects of various interventions for students with learning disabilities” (Hattie 2009), and he references a 1999 meta-study.
To improve achievement teachers must provide students with tools and strategies to organize themselves as well as new material; techniques to use while reading, writing, and doing math; and systematic steps to follow when working through a learning task or reflecting upon their own learning. Hattie also discusses studies that found that “all children benefited from strategy training; both those with and those without intellectual disabilities.”

   Even though this construct is now gone, if a school or district were still using this book, the research-to-practice problem remains: once again, there is no specificity here.  No one reading Hattie’s books—or looking at the Corwin Press website—would have a clue as to where to begin to operationalize a sound implementation process for their district or school.

   More specifically:  Hattie described “Comprehensive Interventions for Learning Disabled Students” in the plural.

   And so. . . from Hattie’s research, which learning disabilities did his meta-analytic studies address?  What were the specific interventions?  At what age and level of severity did the interventions work with students?  And, how was “success” defined and measured?

   As Hattie himself noted. . . he could write a book just in this area (and some esteemed educators have).

   But once again, while it is important to know that some interventions for learning disabled students work, one would have to apply the research-to-practice steps above, be able to evaluate the research in a specific area of learning disabilities, and have the training and consultation resources needed to help teachers implement these interventions “in real time.”

   But now, we have an additional dilemma.  What research-based criteria went into Hattie’s label change to “Interventions for Students with Learning Needs,” and what does this “new” construct mean?

_ _ _ _ _

   Summary.  All of this must be incredibly confusing and challenging to educators in the field.  If Hattie’s list of effects is constantly changing—both in rank order and in name—how do we keep up with the changes and make the practitioner-oriented programmatic decisions that need to be made?

   Moreover, just because we know that a program, strategy, or intervention significantly impacts student learning, we do not necessarily know the implementation steps used in the research studies behind that significant effect. . . and we cannot assume that all or most of the studies used the same implementation steps. 


The Questions to Ask the Outside “Hattie Consultants”

   As noted earlier, in order for districts and schools to know exactly what implementation steps are needed to implement effective “Hattie-driven” practices so that their students can benefit from a particular effect, we need to “research the research.”

   And yet, the vast majority of districts—much less schools—do not have the personnel with the time and skills to do this.

   To fill this gap:  We now have a “cottage industry” of “official and unofficial” Hattie consultants who are "available" to assist.

   But how do districts and schools evaluate these consultants relative to their ability, experience, and skills to deliver effective services?

   With no disrespect intended: just because someone has been trained by Hattie, has heard Hattie, or has read Hattie does not mean that he or she has the expertise, across all of the 195 (or more) rank-ordered influences on student learning and achievement, to analyze and implement any of the approaches identified through Hattie’s research.

   And so, districts and schools need to ask a series of specific questions when consultants (who want to be hired) say that their consultation is guided by Hattie’s research.

   Among the initial set of questions are the following:

  • What training and experience do you have in evaluating psychoeducational research as applied to schools, teaching staff, and students—including students who have significant academic and/or social, emotional, or behavioral challenges?
  • In what different kinds of schools (e.g., settings, grade levels, socio-economic status, level of ESEA success, etc.) have you consulted, for how long, in what capacity, with what documented school and student outcomes—and how does this experience predict your consultative success in my school or district?
  • When guided by Hattie’s (and others’) research, what objective, research-based processes or decisions will you use to determine which approaches our district or school needs, and how will you determine the implementation steps and sequences when helping us to apply the selected approaches?
  • What will happen if our district or school needs an approach that you have no experience or expertise with?
  • How do you evaluate the effectiveness of your consultation services, and how will you evaluate the short- and long-term impact of the strategies and approaches that you recommend be implemented in our district or school?

   For my part, virtually all of my consultation services begin with a series of initial conference calls where I listen to the educators involved regarding their current status, needs, desired outcomes, and commitment to the change process.  This is complemented by an off-site analysis of the data, documentation, and outcomes in the areas relevant to the identified concerns.

   Most of this is done to determine (a) the willingness of everyone potentially involved in the change process to engage in that process; and (b) whether or not I am a good “match” to help facilitate the process.

   Next:  An on-site “Plan for Planning” visit where we meet with and listen to the people (inside and, sometimes, outside of the district) most relevant to the initiative, and continue the information-gathering and analysis process. 

   Ultimately, this process represents a needs assessment, resource and SWOT (Strengths, Weaknesses, Opportunities, Threats) analyses, and the beginning of the leadership planning that eventually results in the identification of specific approaches and implementation steps—all written in a proposed Action Plan.

   Moreover, in the area of concern, this process and plan detail exactly what student-centered services, supports, programs, and interventions (a) already exist—so they can be strengthened and sustained; (b) are needed—so they can be planned and implemented; and (c) are not working, are redundant or working at cross-purposes with other approaches, or are not needed—so they can be phased out and eliminated.

   Finally: The educational leaders and I collaboratively decide if I have the expertise to help implement the Plan and, if so, we each independently decide whether we want to continue our relationship.

   If I do not have the expertise, I typically recommend two to four colleagues who I believe have the skills—and then let the district or school leaders contact and vet them on their own.

   If I have the expertise, but do not feel comfortable with the “match” or the time or intensity of services needed, I again recommend other colleagues.

   If I have the expertise, but the district or school leaders want to go a different direction (perhaps, they feel they have the expertise in-house or more immediately available), then we part as colleagues and friends.


Summary

   Once again, none of the points expressed in this Blog are about John Hattie personally.  Hattie has made many outstanding contributions to our understanding of the research in areas that impact student learning and the school and schooling process.

   However, many of my points relate to the strengths, limitations, and effective use of research reports using meta-analysis and meta-meta-analyses.  If we are going to translate this research to sound practices that impact student outcomes, educational leaders need to objectively and successfully understand, analyze, and apply the research so that they make sound system, school, staff, and student-level decisions.

   And if the educational leaders are going to use other staff or outside consultants to guide the process, they must ask the questions and get the answers to ensure that these professionals have the knowledge, skills, and experience to accomplish the work.

   In the end, schools and districts should not invest time, money, professional development, supervision, or other resources in programs that have not been fully validated for use with their students and/or staff. 

   Such investments are not fair to anyone—especially when they become (unintentionally) counterproductive by (a) not delivering the needed results, (b) leaving students further behind, and/or (c) creating staff resistance to “the next program”—which might, parenthetically, be the “right” program.

_ _ _ _ _

   I hope that this discussion has been useful to you.

   As always, I look forward to your comments. . . whether on-line or via e-mail.

   For those of you still on vacation, I hope that you have been enjoying the time off.

   If I can help you in any of the areas discussed in this Blog, I am always happy to provide a free one-hour consultation conference call to help you clarify your needs and directions on behalf of your students, staff, school(s), and district.

Best,

Howie