Over the past two decades, the role of state education agencies (SEAs) has grown from traditional monitoring and compliance to more central involvement in school improvement. Successive reauthorizations of the Elementary and Secondary Education Act, along with state standards and accountability policies, identified a growing number of schools and districts as in need of improvement and imposed severe consequences for continued failure. Federal statutes required SEAs to create statewide systems of support that drew on high-quality research to help struggling schools and districts adopt more effective practices. And just this year, the newly enacted Every Student Succeeds Act has placed an even greater onus on SEAs to implement the “evidence-based” provisions specified in the law.
Given the growing importance of SEAs in school improvement efforts, and the increased focus on evidence-based approaches, my colleagues and I set out in 2010 to explore whether and how SEAs used research in their school improvement work (Goertz, Barnes & Massell, 2013). Our earlier studies had shown that the research dissemination efforts of federal education assistance centers were most promising when they moved beyond traditional transmission models, such as written reports, to more project-based working groups. This led us to build our investigation on the premise that the incorporation of research is a profoundly social process: one that requires discussion and collaboration to make sense of often de-contextualized research findings and integrate them into policy and practice (Honig & Coburn, 2008; Spillane, Reiser & Reimer, 2002; Weick, 1995; Argyris & Schon, 1996).
We found that the three SEAs we studied had core knowledge networks that exchanged research and other types of evidence about school improvement. A small set of influential knowledge brokers brought this information back to working groups that engaged in a sense-making process. These groups used local practitioners’ feedback, their own experience, and external partners’ knowledge of relevant research to contextualize various research findings in light of their states’ school improvement needs.
We gained many insights into these efforts, and into how SEAs converged or differed in this work based on their structure and leadership, stage of design, staff expertise, and the range of organizations in their environments. One key finding was that the form in which research was presented played a critical role in the sense-making process.
Research Designed for Use
All three SEAs frequently gravitated toward synthesized research, as well as very specific research-based guides to action focused on critical problems of policy or practice. We called this “research designed for use” (RDU). RDU was both the fodder for and the outcome of the SEAs’ social sense-making processes, and it included models, programs, protocols, and other tools that embedded research into practical action steps.
For example, one SEA with a very small and relatively inexperienced school improvement staff was struggling to design strategies to engage effectively with the growing number of schools identified as low-performing. The SEA’s school improvement director recognized that the mandatory improvement plans these schools submitted were based on a hodgepodge of ideas and local preferences, rather than a careful matching of needs to the research-based practices that might address them. She seized on a handbook from the Center on Innovation and Improvement that synthesized school improvement research across different domains. Among other things, this Handbook on Restructuring and Substantial School Improvement (Walberg, 2007) provided 12 different checklists with more than 100 indicators of effective practice. This handbook and others became required reading for the SEA staff and were cornerstones of their discussions and support system designs.
“When I saw the book, I said, ‘This is exactly what we need. Here are the research-based indicators. Here’s a blueprint for improvement…’”
It is important to note that even these more usable forms of research were modified and adapted to particular contexts, and the ideas then shaped subsequent strategies, a kind of conceptual use of research designed for use. For example, one SEA believed that low-performing schools and districts would be overwhelmed if they attempted to address all of the 100-plus indicators of effective practice from the Handbook. It convened a group of local educators, coaches, handbook developers, and other partners to identify high-leverage indicators salient to the context of its low-achieving schools, and it used this modified list to frame plans and construct online tools to help staff monitor and engage with local educators.
It is not hard to understand why research designed for use was so prominent. Studies on the same topic often come to conflicting conclusions, and much of the research published in prominent journals offers only vague solutions to the problems it explores. RDU that is premised on research reviews or syntheses simplifies the complex task of aggregating results and often asserts a summary judgment. RDU is also more accessible to SEA staff and school practitioners who may lack an academic background, or the time or inclination to read scholarly research articles.
RDU that provides indicators, tools, or specific processes also helps practitioners who are trying to operationalize the implications of research for their work. In other words, RDU simplifies decisions for users and relieves them of some of the intellectual burden of sense-making. For example, one state’s leaders noted that Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement (Hattie, 2008) “took off like wildfire” with their regional coaching staff because it offered a dashboard-like display comparing the research evidence behind various programs. Another state constructed a number of tools with the local district practitioners who would be using them. Its Urban Superintendents’ Network, whose districts housed the vast majority of the state’s low-performing schools, regularly convened with SEA staff to surface problems, provide feedback, and identify solutions. Among other resources, this network and its partners drew on research such as Instructional Rounds in Education: A Network Approach to Improving Teaching and Learning (City, Elmore, Fiarman, & Teitel, 2009) to create tools that scaffold rich discussions of professional practice through an evidence-based cycle of inquiry and various data analysis tools.
Social networks enable SEAs to match research to pressing problems of practice, and to translate it more readily into decisions or actions. But it is equally important to recognize that the form of research contributes to the social sense-making process, and can create a body of shared understandings based on research principles. Research designed for use, with specific guidance for practice, can embed common ideas in state school improvement delivery systems.
References
Argyris, C., & Schon, D. (1996). Organizational learning II: Theory, method, and practice. Reading, MA: Addison-Wesley.
City, E. A., Elmore, R. F., Fiarman, S. E., & Teitel, L. (2009). Instructional rounds in education: A network approach to improving teaching and learning. Cambridge, MA: Harvard Education Press.
Goertz, M. E., Barnes, C. A., & Massell, D. (2013). State education agencies’ acquisition and use of research in school improvement strategies. Philadelphia, PA: Consortium for Policy Research in Education.
Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.
Honig, M., & Coburn, C. (2008). Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy, 22(4), 578–608.
Spillane, J. P., Reiser, B. J., & Reimer, T. (2002). Policy implementation and cognition: Reframing and refocusing implementation research. Review of Educational Research, 72(3), 387–431.
Walberg, H. (Ed.) (2007). Handbook on restructuring and substantial school improvement. Greenwich, CT: Information Age Publishing.
Weick, K. E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage Publications.