Social-Emotional Learning: Education’s Newest Bandwagon. . . and the History of How We Got There (Part I)

Why Most Schools are not Implementing Scientifically-Sound SEL Practices—Wasting Time and Resources

Dear Colleagues,

Introduction

   It seems that educators can’t go anywhere on their on-line news feeds (e.g., The 74, ASCD’s and others’ SmartBriefs, Learning Forward, the Huffington Post, Education Dive K-12, Education Week, etc.) without hearing about the virtues of Social-Emotional Learning (SEL). 

   Yes. . .  SEL has become education’s newest bandwagon.  And, districts are jumping on.

   Indeed, in the September 28, 2018 issue of Education Week’s Market Brief, Holly Yettick reported that, “Nearly 90 percent of district leaders say they have already invested in social and emotional learning products, or plan to do so over the next year.”

   This past week, the Allstate Foundation committed $45 million to social-emotional learning initiatives over the next five years.

   And, Learning Forward—also this week—got into “the game” by stating that, “(A)s more practitioners and researchers recognize the importance of addressing students' social and emotional learning (SEL) in schools, we can't leave to chance the professional learning needed to make these efforts effective.”

   And yet, what is not being reported is that:

  • SEL’s recent popularity (and legitimacy—at least, in the media) is the result of a multi-year effort to court foundations, politicians, well-regarded educators, and other powerful national figures;
  • Many SEL programs and research studies have significant science-to-practice limitations;
  • Many districts and schools are purchasing “SEL programs and curricula” without independently and objectively evaluating (a) their research to determine if they are “ready” for field-based implementation; (b) whether they “fit” the demographics, students, and needs of their schools; and (c) whether they have a high probability of positively impacting the social, emotional, and behavioral student outcomes that they seek; and
  • The SEL movement has become incredibly profitable for some publishers and vendors—leading to “marketing campaigns” that mask the questionable quality of some programs and curricula.

_ _ _ _ _

   In this two-part Blog, I would like to discuss the concerns above. 

   In a nutshell, SEL has become a non-stop “movement” (see below), and many districts and schools are “searching for SEL in all the wrong places.”

   The fundamental problem is that many educators do not understand SEL’s (political) history, its current research-to-practice status and scientific limitations, the true outcomes of an effective SEL program, the absence of valid implementation strategies, and SEL’s potential strengths and limitations. 

   And in the rush to implement, many districts and schools are choosing incomplete, ineffective, and inconsequential (if not counterproductive) strategies that are wasting classroom time, squandering schools’ precious resources, and undermining districts’ professional development decisions.

   Districts and schools are also inappropriately attributing their social, emotional, and behavioral “successes” (often limited to declining discipline problems, rather than improving student self-management) to their “SEL programs.” 

   I say “inappropriately” because they are making causal statements that, “Our SEL program was directly responsible for our decreased office discipline referrals”. . . when the relationship is correlational at best. . . and there are other more directly relevant factors to explain whatever successes they are having.

   Finally, what is not recognized is that all of the “positive press” about SEL program “success” is a biased sample.  The press is not going to report the unsuccessful SEL initiatives, because virtually no one is interested in these.  My work around the country suggests that, among the schools implementing SEL, the proportion that are directly successful because of SEL is very low.

   And this is not to mention the fact that most schools have been unable to sustain their SEL strategies for more than three years at a time.

_ _ _ _ _

   But before I continue, and with respect to Full Disclosure, know that I consult—using my expertise as a school psychologist and licensed psychologist—across the country and internationally, helping schools to implement effective, evidence-based multi-tiered Social-Emotional Learning/Positive Behavioral Support Systems (SEL/PBSS).  These systems work to improve school discipline and safety, classroom climate and management, and student engagement and social, emotional, and behavioral self-management. 

   I also am the author of the evidence-based Stop & Think Social Skills Program—with its preschool through high school classroom curriculum, and its home and parent program.

   And so, I have a “stake” in the SEL world, although the critique and comments above and below are not meant to promote my own work. 

   In fact, anyone who knows me, professionally or personally, knows that I will promote any scientifically-based school discipline model, and that I will gladly use any evidence-based social skills program that a district I am consulting with is already using. . . so long as the program is directly responsible for meaningful student outcomes.

   At the same time, as an advocate for sound science-to-practice strategies that provide objective and demonstrable student results, I am on record stating that (a) I will not support frameworks where schools can choose what they want to implement from a menu of options; and (b) districts and schools need to do their “research-to-practice due diligence” before adopting and implementing any program or model (more about that later).

_ _ _ _ _ _ _ _ _ _

Where Did SEL Come From?  The Collaborative for Academic, Social, and Emotional Learning (CASEL) and its Political History

   SEL is inextricably tied to the Collaborative for Academic, Social, and Emotional Learning (CASEL) which was formed in 1994 by a group of researchers, educators, and child advocates.  With Daniel Goleman’s Emotional Intelligence work as an early building-block, CASEL functioned originally as a “research-based thought group” that ultimately wanted to impact schools and classrooms.

   Despite this goal, CASEL has no interest in large-scale training, or in deploying legions of consultants to “scale-up” its work across the country.  While it has partnered with a number of state departments of education and large city school districts, it has done this to advance its agenda, and to collect the “data” to support its “movement.”  Critically, most of CASEL’s efficacy data have been published in its own technical reports.  They have never been independently evaluated through an objective, refereed process—like articles in most professional journals.

  And make no mistake about it, CASEL does want SEL to be “a movement.”

   In fact, over the years, CASEL leaders have positioned, powered, and legitimized the organization and its SEL agenda by identifying and tapping into large funding sources, like the NoVo Foundation. . . run by the son and daughter-in-law of Warren Buffett.

  • Politically, CASEL realized early on that it was not going to advance its agenda through the U.S. Department of Education—which was beginning to advocate for its own framework, Positive Behavioral Interventions and Supports (PBIS), around the time that CASEL was formed.

   So CASEL went a different route: involving politically-connected individuals when the organization was originally established, and focusing on building strong relationships with state legislatures and Congress.  For example, Timothy P. Shriver, son of Sargent Shriver and Eunice Kennedy Shriver and the Chairman of Special Olympics, was a CASEL co-founder and its original Board Chairman.

  • After lobbying its home state of Illinois to pass the first social-emotional learning standards for schools, CASEL has “made friends” with numerous members of Congress in Washington, D.C., and has hosted a series of high-profile, annual social gatherings in our Nation’s Capital to further influence public policy.

   This has resulted in a number of SEL-embedded bills filed in the House of Representatives.  For example, Congressman Dale E. Kildee (D-Mich.), Congresswoman Judy Biggert (R-Ill.), and Congressman Tim Ryan (D-Ohio) introduced the first federal Academic, Social, and Emotional Learning Act of 2009, incorporating language that came directly from CASEL into their bill.

   Congressman Ryan continues to be a leader for SEL in Congress and in his home district.

_ _ _ _ _

  • CASEL has also branched out to Hollywood—collaborating with George Lucas and the George Lucas Educational Foundation since at least 2007.  This Foundation is dedicated to “transforming K-12 education” through two arms: its Educational Research arm, and its well-known Edutopia, a website (with periodic electronic newsletters and reports) that (according to the website) “is a trusted source shining a spotlight on what works in education. We show people how they can adopt or adapt best practices, and we tell stories of innovation and continuous learning in the real world.”

   Significantly, Edutopia identifies “Social and Emotional Learning” as one of its Core Strategies for Innovation and Reform in Learning; it frequently publishes messages/articles from CASEL leaders; and Andrea Wishom, who oversees the charitable distributions for the George Lucas Family Foundation, is on CASEL’s Board of Directors.

   Relative to other “Hollywood” connections, Timothy Shriver (again) was the executive producer of The Ringer and a co-producer of Amistad and of the Disney movie The Loretta Claiborne Story, and he has produced or co-produced shows for ABC, NBC, TNT, and Disney.

_ _ _ _ _

  • Finally, CASEL is currently tag-teaming with the “non-partisan” Aspen Institute, which (in September 2016) created and named its own National Commission on Social, Emotional, and Academic Development.  Led by Linda Darling-Hammond, former Michigan Governor John Engler, and (once again) Tim Shriver, the Commission’s explicit mission is to:
“fully integrate the social, emotional, and academic dimensions of learning in K-12 education so that all students are prepared to thrive in school, career, and life.”

   The Commission has been impressive in how its members have publicized the SEL message nationwide (through its Reports, social media posts, and press releases).  It also has triangulated with CASEL and the NoVo Foundation, which financially supports Education Week in the area of social and emotional learning.

   The Commission now hosts a blog in Education Week called, “Learning is Social and Emotional.”  The blog “will feature voices from across the country on the successes and challenges of ensuring schools and communities support the social, emotional, and academic development of all students.”

_ _ _ _ _

   There is nothing inherently “wrong” with any of the CASEL actions above.  Most certainly, this is not an indictment of CASEL. 

   Like the proverbial tortoise, CASEL has been slow and determined in its methods over the years.  And, I have to admit, I admire and am in complete awe of what they have accomplished through their strategic political alliances—as their alliance colleagues now independently advocate for CASEL’s agenda as part of “the movement.”

   But, at the same time, it is important for educators to understand that a significant amount of CASEL support comes from foundations.  And many foundations strategically fund educational initiatives because they have an agenda or their own conceptualization of what education in America should be like.

   Take, for example, the Walton Foundation’s advocacy (funding) for charter schools over public schools.  Or, the Gates Foundation’s (unremarkable) “small high school” initiative a handful of years ago, and its more recent multi-million dollar teacher effectiveness program, which Education Week called (June 21, 2018) “an expensive experiment.”

   Clearly, foundations have the right to fund whatever they want to fund.  But the funds often come “with strings attached.”  And few financially-strapped school districts are going to refuse the funds—even though the initiative may actually result in (a) a loss of staff trust and morale, (b) the establishment of faulty student/instructional systems that will take many years to repair, and (c) a generation of students who have missed more effective educational opportunities.

   Indeed, there is a growing history where some foundations’ conceptualizations of “effective educational practices” were not effective, and were later shown to be misguided and counterproductive.

_ _ _ _ _ _ _ _ _ _

What is SEL’s Research Foundation?  The Scientific Limitations of Meta-analytic Studies

   The foundation of CASEL’s SEL movement is a set of “three” research studies that are continually cited by districts and schools nationwide as the empirical “proof” that SEL “works.” 

   All “three” studies involve meta-analyses: a statistical approach that pools the results of many individual studies that have investigated the “same” area, variables, or approaches into a single “effect size.”

   The cited studies are by Payton and colleagues (published by CASEL in 2008), a “study” by Durlak and colleagues (published in the journal Child Development in 2011), and a more recent study by Taylor and colleagues (also published in Child Development in 2017).

   Briefly:

   The 2008 Payton research reported three different research reviews:  one examining the impact of universal school-based SEL interventions; one focused on interventions with students displaying early signs of behavioral or emotional problems; and one looking at SEL interventions in after-school programs.

   The first review, of universal school-based SEL interventions, was submitted for publication; it is this review that became the 2011 Durlak publication in Child Development.

   The 2011 Durlak research, then, reported on 213 studies that compared universal school-based social, emotional, and behavioral learning programs with comparison (or control) groups.  Collectively, the studies included 270,034 kindergarten through high school students.

   According to the authors, the meta-analyses indicated that the students experiencing the SEL programs demonstrated significantly improved social and emotional skills, attitudes, and behaviors when compared to the control students at the elementary, middle, and high school levels.  These students also demonstrated academic gains that were 11 percentile points above the students not experiencing these programs.
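   (A technical aside:  a gain reported in “percentile points” is usually a re-expression of a standardized effect size under a normal model.  The short Python sketch below illustrates the conversion.  The 0.27 effect size used here is my own back-calculation from the 11-percentile-point figure; it is an assumption for illustration, not a number reported above.)

    from scipy.stats import norm

    # Minimal sketch: re-expressing a standardized effect size (Cohen's d)
    # as a percentile-point gain, assuming normally distributed outcomes.
    # The d = 0.27 value is an assumed, back-calculated figure, for illustration only.
    d = 0.27
    percentile_of_average_treated_student = norm.cdf(d)      # ~0.61
    gain = (percentile_of_average_treated_student - 0.50) * 100
    print(round(gain))                                       # ~11 percentile points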

   The 2017 Taylor research reviewed 82 school-based, universal social and emotional learning (SEL) interventions involving 97,406 kindergarten to high school students.  According to the authors, the follow-up outcomes of these interventions (collected 6 months to 18 years post-intervention) demonstrated “SEL’s enhancement of positive youth development.”  When compared with control or comparison students, participating students “fared significantly better than controls in social-emotional skills, attitudes, and indicators of well-being. Benefits were similar regardless of students’ race, socioeconomic background, or school location.”

_ _ _ _ _

   There are significant problems with all three of these studies—hence, the “quotation marks” above:

  • The 2008 study was published by CASEL.  The study was never objectively reviewed by an independent panel of peers.  Thus, the quality of the research and its results remain in question.

   To some degree, it may be that this study received attention more because of the media relations campaign that accompanied its release than because its results and conclusions have significant field-based implications.

_ _ _ _ _

  • Only the first of the three reviews reported in the 2008 study was submitted for publication.  This submission resulted in the 2011 Durlak study published in Child Development.

   Thus, while CASEL consistently points to both the 2008 and 2011 “studies,” only one of these studies was independently peer reviewed.

_ _ _ _ _

  • The 2011 Durlak and colleagues’ study, published in the journal Child Development, has a number of significant flaws.

   The most critical flaw (see the later discussion below) is that the 213 studies used in the meta-analysis included a significant number of books and book chapters, dissertations, unpublished and unrefereed technical reports, national convention poster sessions and presentations, unpublished manuscripts, and studies that focused on drug, alcohol, and tobacco programs rather than social, emotional, and behavioral programs.

   The point here is that (again, see below) there is a significant probability that the authors introduced a selection bias in choosing which studies they included in their meta-analysis.  More to the point, the inclusion of the studies referenced above immediately puts the validity of the reported meta-analysis in question.

   As an Editorial Board member of five prominent journals in school psychology since 1984, I can tell you that I never would have recommended this article for publication—just given the concerns above.

_ _ _ _ _

  • The 2017 Taylor and colleagues’ study, published in Child Development, also has a number of significant flaws.

   The most critical flaw, relative to generalizing the meta-analytic results to U.S. schools, is that 46% (38 of 82) of the studies included were conducted in non-American schools.

   Beyond this, the authors reported that an unspecified number of studies were taken from books (it is hard to determine how many, as the article never identified the 82 studies that were statistically pooled and analyzed).

   The authors also reported that (a) 34% of the studies did not use a randomly-selected sample; (b) 18% of the studies reported “significant implementation problems;” (c) 27% of the studies did not have (or did not report) reliable outcome measures at follow-up, and 45% did not have (or did not report) valid ones; and (d) 28% of the studies collected their outcome data only from the students involved (and not from other sources in addition).

   For all of these reasons, as with the Durlak study, the results of this study are in question.  While a great deal of effort went into executing this study, journal articles should be published for their quality and generalizability, and not because they involved a lot of time and effort.

_ _ _ _ _

   Significantly, even if these three studies were of exceptional quality, another critical limitation still would be present:

   None of the three studies validates a specific SEL program or process; and none identifies how to most effectively select, resource, prepare for, implement, or evaluate a specific SEL program.

   In fact, CASEL’s recent “SEL Design Challenge” (focusing on improving the measurement of SEL outcomes) suggests that the CASEL researchers, who were involved in all three studies, recognized that the individual studies included in the meta-analyses above had flaws.

   Indeed, an August 23, 2017 article in The Hechinger Report stated:

CASEL is spearheading a collaborative effort to help the field coalesce around practical and appropriate design principles that developers can use to make future assessments.
‘As more programs are being taken up in schools and districts, there becomes this greater demand to assess them, to see if they’re working, to see if students are, in fact, learning the skills that are being taught,’ said Lindsay Read, manager of research at the Collaborative for Academic, Social and Emotional Learning.
While there are some assessments on the market, Read said there is a clear need for more direct assessment options, as distinct from student or teacher surveys of social and emotional skills. And when it comes to direct assessment, there’s a need for tests that can be administered quickly in classrooms and return results to teachers right away.

   But beyond the flaws in the three meta-analytic studies above, another concern is that most educators do not understand the strengths and (especially) the limitations of meta-analysis as it relates to these studies and their conclusion that, “Schools and students benefit from SEL programs.”

_ _ _ _ _ _ _ _ _ _

What is a Meta-Analysis?

   A meta-analysis is a statistical procedure that combines the effect sizes from separate studies that have investigated common programs, strategies, or interventions.  The procedure results in a pooled effect size that provides a more reliable and valid “picture” of the program or intervention’s usefulness or impact, because it involves more subjects, more implementation trials and sites, and (usually) more geographic and demographic diversity.  Typically, an effect size of 0.40 is used as the “cut-score,” with effect sizes above 0.40 reflecting a “meaningful” impact.
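   (For readers who like to see the arithmetic, here is a minimal Python sketch of the most common pooling approach, inverse-variance or “fixed-effect” weighting.  The per-study effect sizes and variances below are hypothetical, invented purely for illustration.)

    import numpy as np

    # Minimal sketch of inverse-variance (fixed-effect) pooling.
    # The per-study effect sizes and variances below are hypothetical.
    d = np.array([0.45, 0.10, 0.60, 0.25])   # per-study effect sizes (e.g., Cohen's d)
    v = np.array([0.02, 0.01, 0.05, 0.03])   # per-study variances of those effects

    w = 1.0 / v                              # more precise studies get more weight
    pooled_d = np.sum(w * d) / np.sum(w)     # weighted mean effect size
    pooled_se = np.sqrt(1.0 / np.sum(w))     # standard error of the pooled effect

    print(f"pooled d = {pooled_d:.2f} (SE = {pooled_se:.2f})")

   Note what the weighting implies:  a single large, low-variance study can dominate the pooled result, a point I return to in the questions below.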

   While meta-analytic results are powerful, districts and schools cannot assume that the individual studies in a meta-analysis all used the same implementation methods and steps.

   If you look at the different articles in the aforementioned Payton, Durlak, and Taylor studies, it is clear the “SEL programs” included in their meta-analyses involved different approaches, programs, curricula, and activities, and different assessment and evaluation methods, instruments, strategies, and techniques.

   In order for meta-analytic results to be useful, educators need to know exactly what specific methods and steps must be implemented to replicate the results in their district or schools.  While the Payton, Durlak, and Taylor studies endorse the potential impact of an SEL approach, their research does not provide the specific implementation information that districts and schools need to proceed.

   But there is more to meta-analysis.  Just because a study is published does not necessarily make the research sound.

_ _ _ _ _

   Meta-analytic research typically follows some common steps.  These involve:

  • Identifying the program, strategy, or intervention to be studied
  • Completing a literature search of relevant research studies
  • Deciding on the selection criteria that will be used to include an individual study’s empirical results
  • Pulling out the relevant data from each study, and running the statistical analyses
  • Reporting and interpreting the meta-analytic results

   As with all research, there are a number of subjective decisions embedded in meta-analytic research, and thus, there are good and bad meta-analytic studies.

   Indeed, educational leaders cannot assume that “all research is good” because it is published, and they cannot assume that even “good” meta-analytic research is applicable to their communities, schools, staff, and students.

   And so, educational leaders need to independently evaluate the results of any reported meta-analytic research before accepting the results.

_ _ _ _ _

   Among the questions that leaders should ask when reviewing (or when told about the results from) meta-analytic studies are the following:

  • Do the programs, strategies, or interventions chosen for investigation use similar implementation steps or protocols?

   In many past Blogs, I have discussed the fact that the Positive Behavioral Interventions and Supports (PBIS) framework advocated by the U.S. Department of Education’s Office of Special Education Programs (and its funded national Technical Assistance centers), and the SEL framework advocated by CASEL, are collections of different activities that, based on numerous program evaluations, are being selectively implemented by different schools in different ways and at different degrees of intensity.

   So. . . how does a researcher decide which activities must be used in a research study in order to include that study in his or her meta-analytic study?

   Embedded in this question is the need to investigate the presence of a “selection bias.”  This occurs when researchers choose specific rules that result in the inclusion of only certain studies, thereby increasing the probability that the meta-analysis will either be positive or slanted toward a desired outcome.

_ _ _ _ _

Next Question:

  • Are the variables investigated by a meta-analytic study causally versus correlationally related to student learning, and are these variables actionable (that is, can they be taught and/or implemented directly with students)?

   Educational leaders need to continually differentiate between research studies (including meta-analytic studies) that report causal factors versus correlational factors.  Obviously, causal factors directly affect student learning, while correlational factors contribute to or predict student learning.

   Similarly, they need to recognize that some meta-analytic results involve factors (e.g., poverty, race, the presence of a significant disability or home condition) that cannot be changed, taught, or modified.  Other meta-analytic results may occur due to intervening factors that are more causal than the results reported.

   For example, did the SEL programs included in the Durlak meta-analysis cause the 11 percentile point academic advantage between the SEL-involved versus non-SEL-involved students? 

   Or did these programs contribute to the effect. . . perhaps by “triggering” a skill or ability that was present in both the involved and non-involved students, but that was activated only in the involved students (because of the presence of the SEL program), and that then differentially impacted these students’ academic achievement?

   For example, what if the difference between the involved and non-involved students’ academic achievement occurred because the students in the former group demonstrated better peer collaboration skills when in project-based or cooperative learning groups than the latter group?

   And what if both groups had similar peer collaboration skills before the enactment of the different SEL programs, but the presence of the SEL program for the involved student “triggered,” reminded, or more consciously reinforced these already-present skills?

   The first conclusion, in this scenario, is that the SEL programs in Durlak’s study did not teach or “cause” the academic achievement differences; they only served to differentially motivate the already-existing peer collaboration skills in the involved students. 

   The second conclusion is that the researchers in the different studies used by Durlak might have used strategies other than their SEL programs to similarly and just-as-effectively motivate the involved students’ peer collaboration skills.

   The Take-away here is:  There are a number of explanations for Durlak’s academic achievement results.  Educators should not assume that there is a causal relationship between the SEL programs included in Durlak’s study and these results—unless so demonstrated by Durlak.  The reality is that there is a stronger possibility that Durlak’s academic achievement results, despite the presence of control or comparison groups, are correlational and not causal.
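   (For the quantitatively inclined, here is a minimal Python simulation of this scenario.  Every number in it is invented; it simply shows how a real group difference can emerge without the program teaching anything new.)

    import numpy as np

    # Minimal simulation of the "triggering" scenario above (all numbers invented).
    # Both groups start with the same latent peer-collaboration skill;
    # the "SEL program" teaches nothing new, it merely activates that skill.
    rng = np.random.default_rng(0)
    n = 10_000
    skill = rng.normal(1.0, 0.5, size=(2, n))   # identical latent skill in both groups

    expressed_sel     = skill[0]                # SEL group: skill fully "triggered"
    expressed_control = 0.5 * skill[1]          # control group: skill only partly expressed

    noise = rng.normal(0.0, 1.0, size=(2, n))
    achievement_sel     = expressed_sel + noise[0]
    achievement_control = expressed_control + noise[1]

    # The SEL group scores higher on average even though the program taught
    # no new skill; the gap is real, but it was not "caused by teaching."
    print(achievement_sel.mean() - achievement_control.mean())   # ~0.5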

_ _ _ _ _

Next Question:

  • In conducting the literature review, did the researchers consider (and control for) the potential of a “publication bias”?

   One of the realities of published research is that journals most often publish research that demonstrates significant effects.  Thus, a specific program or intervention may have ten published articles that showed a positive effect, and fifty other well-designed studies that showed no or negative effects.  As the latter unpublished studies are not available (or even known by the researcher), they will not be included in the meta-analysis.  And so, while a meta-analysis may show a positive effect for a specific program or approach, it may not have taken the potential of a publication bias into account.

   Critically, there are research methods and tests (e.g., using funnel plots, the Tandem Method, Egger’s regression test) to analyze the presence of publication bias, and to decrease the potential of false-positive conclusions.  These, however, are beyond the scope of this Blog.

   Suffice it to say that some statisticians suggest that 25% of the meta-analyses published in the psychological sciences may have inherent publication biases.
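   (While the full methods are, again, beyond our scope, the core idea behind Egger’s regression test is simple enough to sketch:  regress each study’s standardized effect on its precision, and an intercept far from zero suggests funnel-plot asymmetry, one possible sign of publication bias.  The Python sketch below uses hypothetical study data, invented purely for illustration.)

    import numpy as np
    from scipy.stats import linregress, t as tdist

    # Minimal sketch of Egger's regression test, with hypothetical data.
    d  = np.array([0.45, 0.10, 0.60, 0.25, 0.35])   # hypothetical effect sizes
    se = np.array([0.14, 0.10, 0.22, 0.17, 0.12])   # hypothetical standard errors

    # Regress the standardized effect (d / se) on precision (1 / se).
    res = linregress(1.0 / se, d / se)

    # Two-sided p-value for the intercept (df = n - 2);
    # a small p suggests asymmetry in the funnel plot.
    t_stat = res.intercept / res.intercept_stderr
    p = 2 * tdist.sf(abs(t_stat), df=len(d) - 2)
    print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")

   Two cautions:  with only a handful of studies the test has very little power, and funnel-plot asymmetry can have causes other than publication bias (e.g., genuinely different effects in small studies).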

   What should educational leaders do? Beyond conducting their own evaluations of the individual studies reported in any meta-analysis of interest, educators need to:

  • Identify the short- and long-term successes of any program or intervention supported by a meta-analysis, determining whether the program “matches up” to their specific schools and the needs of their specific students;
  • Conduct pilot tests before scaling up to whole-school or system-wide implementation;
  • Identify and use sensitive formative evaluation approaches that detect—as quickly as possible—programs that are not working; and
  • Maintain an “objective, data-driven perspective” regardless of how much they want the program to succeed.

   In other words, regardless of a program’s previous validation (which, once again, may be due to publication bias), educational leaders need to revalidate any selected program, strategy, or intervention as it is implemented in their schools, and with their staff and students.

_ _ _ _ _

Next Question:

  • What were the selection criteria used by the author of the meta-analysis to determine which individual studies would be included in the analysis, and were these criteria reliably and validly applied?

   This is an important area of potential bias in meta-analytic research.  It occurs when researchers, whether consciously or not, choose biased selection criteria.  For example, they may favor large-participant studies over single-subject studies, or randomized controlled studies over qualitative studies.

   This selection bias also occurs when researchers do not apply their own selection criteria with consistency and fidelity.  That is, they may include certain studies that objectively don’t “qualify” for their analysis, or exclude other studies that meet all of the criteria.

   Regardless, selection biases influence the individual studies included (or not included) in a meta-analysis, and this may skew the results.  Critically, the “skew” could be in any direction.  That is, the analysis might incorrectly result in negative, neutral, or positive results.

   I know that many educational leaders, at this point, are probably wondering (perhaps in frustration), “Why can’t I just trust the experts?” or “How do I do all of this?” 

   And I do feel your pain...

   But the “short answer” to the first question is that “blind trust” may result in adopting a program that really hasn’t been fully successful, or that does not match the demographic and other background characteristics of your school, staff, or students.  Many SEL programs require time, training, money, personnel, and other resources.  When the “wrong” programs or approaches are implemented, all of these assets go to waste.  Worse, an SEL “failure” not only undermines student success now, but also undermines student and staff confidence in the potential for change later.

   The “short answer” to the second question is that many school districts have well-qualified professionals (in-house, at a nearby university, in the community/region, or virtually on-line) with the research and analysis background to “vet and validate” the programs, strategies, and interventions that an educational leader might be interested in. 

   Use these resources.

   The “front-end” time needed to effectively evaluate a program that appears to have meta-analytic support will virtually always save enormous amounts of “back-end” time when an ineffectively researched or chosen program is actually implemented.

_ _ _ _ _

Next Question:

  • Were the best statistical methods used in the meta-analysis?  Did one or two large-scale or large-effect studies outweigh the results of other small-scale, small-participant studies that also were included?  Did the researcher’s conclusions match the actual statistical results from the meta-analysis?

   I’m not going to answer these questions in detail. . . as these are methodologically complex (but important) areas.  [If you want to discuss these with me privately, give me a call.]

   My ultimate point here is that—as with any research study—educators need to know that the results and interpretations from any meta-analysis, for any program, strategy, or intervention, are objective, sound, and meaningful. 

   Statistical results are different than meaningful results.  We all need to invest in and implement “high probability of success” programs.  Anything less is irresponsible.

_ _ _ _ _

More on the Missing Meta-Analytic Method

  But. . . there IS more. . . even when a meta-analytic study is sound.

   As discussed above. . . just because we know that a meta-analysis has established a legitimate connection between a program, strategy, or intervention and student behavior or learning, we do not necessarily know the implementation steps that were used by each individual study included in the analysis.

   Moreover, we cannot assume that all or most of the studies used (a) the same or similar implementation steps, or (b) the most effective or best implementation steps.  We also do not know if the implementation steps can be realistically replicated in “the real world” as many studies are conducted under controlled “experimental” conditions.

   In order to know exactly what implementation steps to replicate with our staff and students (to maximize the program or intervention’s student outcomes), we—once again—need to “research the research.”

   Case in point. A number of the SEL studies cited by Durlak (2011) involved a published social skills program.  But how does a district or school know which social skills program is the best one for their student needs and outcomes?

   The first answer to this question is that educational leaders need to know the characteristics of effective social skills programs.  I have discussed these characteristics in a technical assistance paper as applied to the evidence-based Stop & Think Social Skills Program—cited in the Durlak study.

   The seven characteristics in the Stop & Think Social Skills Program: Exploring its Research Base and Rationale are:

  • Characteristic 1.  Social skills programs teach sensible and pragmatic interpersonal, problem solving, and conflict resolution skills that are needed by today's students and that can be applied, on a daily basis, by preschool through high school students.
  • Characteristic 2.  Social skills programs address problem situations, as identified by both adults and students, that occur in classrooms and common areas of the school on an almost every day basis.
  • Characteristic 3.  Social skills programs provide a defined, progressive, yet flexible, sequence of social skills that recognizes that some prerequisite skills must be mastered before other, more complex skills are taught.  The program also must plan for ongoing social skills practice and reinforcement that occurs throughout the school year.
  • Characteristic 4.  Social skills programs use a universal language that is easy for students to learn, facilitates cognitive scripting and mediation, and facilitates the conditioning or reconditioning of prosocial behaviors and choices leading to more and more automatic behavior.
  • Characteristic 5.  Social skills programs systematically use a social learning theory model that includes teaching, modeling, role-playing, and providing performance feedback as part of the instructional process.  Such programs overtly plan and transfer students’ use of social skills into different settings, with different people, at different times, and across different situations and circumstances.
  • Characteristic 6.  Social skills training is an integral part of a building- or grade-level positive discipline and behavior management system that holds students accountable for their behavior and provides for consistency and implementation integrity.
  • Characteristic 7.  Social skills programs teach specific behaviorally-oriented skills (not constructs of behavior) in explicit and developmentally appropriate ways, and they are able to flexibly adapt to student differences in language, culture, socioeconomic level, and behavioral need.

[CLICK HERE for a Free Copy of this Technical Assistance Paper]

_ _ _ _ _

   The second answer is that, when choosing the best social skills program for their district or schools, educational leaders (as discussed earlier) need to enlist the help of qualified staff or other professionals (at a nearby university, in the community/region, or virtually on-line) who can:

  • Review and analyze the specific implementation methods used in the most relevant individual studies in a meta-analysis;
  • Choose the best methods and materials that match to the needs of the students in the district; and
  • Create a viable action plan for training, resourcing, implementation, and evaluation.

   Quite honestly, the best people to do this are those who understand program development and psychometric assessment, strategic planning and implementation science, and formative and summative program evaluation.  In most districts, the school psychologists and/or assessment directors would be the most likely candidates here.

   Once again, I understand that this all takes time.  But think of all the time spent when selecting a new literacy or math curriculum for a district. 

   When this occurs, districts put together a Selection Committee.  The Committee meets many, many times to review the research, existing student data, the available programs, and their outcomes in other districts.  The Committee listens to vendor presentations, interviews teachers in other districts who are using the programs, and finally determines, using objective selection criteria, which program is best for the district.

   This time and effort is invested, because the decision on a new curriculum typically occurs once every six or seven years.  If the district gets it “wrong,” time, training, money, personnel, and other resources are wasted.  But more importantly, student learning and proficiency may be compromised by a faulty decision—the real reason why the decision is made in the first place.

   Critically, the same time and effort should be invested by districts before choosing an SEL program or approach.  The decision should be made at the district level. . . involving all of the schools in the district.  And, the decision should involve (at least) a six- or seven-year commitment.

   Too many districts allow their schools to “do their own thing” relative to SEL.  This makes no more implementation and outcome sense than allowing every elementary school in a district to choose its own literacy or math program.

   For districts that do this, especially in the face of high student and staff mobility, the result is inconsistency, chaos (especially for the middle schools that the elementary schools feed into), and poor student outcomes.

_ _ _ _ _

   Relative to social skills programs, please understand that some independent groups (even CASEL) have taken it upon themselves to choose and evaluate a selection of programs.  Unfortunately, even here, there has been a selection bias—and some evidence-based social skills programs have been omitted or rated poorly because of the evaluation criteria used.

   A number of years ago, staff at the Arkansas Department of Education’s State Personnel Development Grant reviewed and identified the best research-to-practice social skills programs.  They were organized from those that focused more on students’ emotional competence to those that focused more on students’ behavioral competence.

   From emotional to behavioral, the top social skills programs were:

  • Lion’s Quest (Lions Clubs International Foundation)
  • Positive Action (Positive Action, Inc.)
  • Second Step (Committee for Children)
  • Promoting Alternative Thinking Strategies (Channing Bete Company)
  • Life Skills Training (Princeton Health Press)
  • Tools for Teaching Social Skills in Schools (Boys Town Press)
  • Skillstreaming (Research Press)
  • The Stop & Think Social Skills Program (Project ACHIEVE Press)

   Thus, even here, districts need to understand the differences across different social skills programs, and they need to select the programs that best match their needs.  More specifically, if a district wants its students to better understand their emotions, it will probably gravitate toward Positive Action or Second Step.

   If a district wants to change students’ behavioral interactions (including how students behave “under conditions of emotionality”), it will gravitate toward The Stop & Think Social Skills Program.

   But if a district does not know the “science of students’ social, emotional, and behavioral self-management” (see Characteristic 6 above), it will not know that its social skills program must be connected to four other factors:  School and Classroom Climate and Relationships, Student Motivation and Accountability, Consistency, and Setting and Peer Situations.

[CLICK HERE for More Information]

_ _ _ _ _ _ _ _ _ _

Summary

   This first of two Blog Messages on the current SEL “movement” has focused on the professional, political, and social media history of SEL; a review of the quality of the research that has propelled the movement; and an analysis of meta-analysis and how to move meta-analytic SEL studies from research-to-practice.

   As with any professional decision, the strongest advice for districts and schools is to conduct your own evaluations of the research and the individual studies that are incorporated into published meta-analysis work, so that you make objective, informed, and wise decisions about how different SEL programs or approaches will work in your district and with your own staff and students.

   Right now, school-based SEL programming and implementation across the country is driven more by personal testimony, tacit acceptance, and passive decision-making than by sound science.  We need to better understand the difference between correlation and causality, and we need to recognize that, just because an article is published, it still may have critical flaws that have major implications for our students, staff, and schools.

_ _ _ _ _

   I am not trying to stop the train.  I am trying to improve the journey and its outcomes.

   And, while I am fully open to critique, I will stand on the facts that have been presented in this message.

   In the end, I hope that this message has been informative to you, and that it causes you to evaluate your current SEL journey (which can be re-routed). . . or to begin your journey with your science-to-practice “eyes wide open.”

   I always encourage your comments and feedback.  And I always celebrate the exceptional work that you are doing in schools, agencies, and other programs across the country.

   Part II of this Blog will address CASEL’s perspectives on the student outcomes of an effective SEL program, the absence of valid implementation strategies, and SEL’s potential strengths and limitations. 

Best,

Howie