Evidence at the Crossroads Pt. 8: Building an Improvement Infrastructure

With this month’s congressional budget deal preserving level funding for the Investing in Innovation Fund (i3), and with the passage of the Every Student Succeeds Act (ESSA), through which i3 will be given new life as the Education Innovation and Research program, we are witnessing renewed investment in federal efforts to build and use evidence of What Works in education. Even so, funding for the Social Innovation Fund (whose evidence standards were modeled on those of i3) will likely be reduced, not incidentally following reports of largely mixed evaluation results among grantee programs.

Now that we know these federal grantmaking initiatives will continue, how can we ensure that these investments in innovation yield the greatest return? What can the next generation of tiered evidence initiatives learn from the investments thus far? Several writers in this series have focused on program impacts. In this entry, I argue for building a complementary infrastructure focused on program improvement.

The Challenges and Learning Needs of Practicing Innovators

From 2011 to 2014, the William T. Grant and Spencer Foundations sponsored a learning community of program developers and practitioners funded through i3, a group I refer to as “practicing innovators.” The i3 learning community offered grantees a venue to share their experiences and insights with each other. It also brought their perspectives to bear on policy conversations about incubating educational interventions. Because of my work studying the scale-up of educational programs, I was invited to participate in these meetings as a consultant and was able to witness educational innovation from the perspectives of both policy and practice.

All told, the i3 program stands as a powerful example of the possibility of using policy to shape the work of practicing innovators. It also stands as an important example of the ways in which policy can introduce new challenges for practicing innovators. By design, i3 introduced structure by requiring that practitioners develop clear intervention designs, work timelines, and budgets. They also had to establish clear goals and evaluation criteria to discipline their work. At the same time, the i3 program stretched practicing innovators to develop a much broader array of capacities and skills. This is no surprise. Such challenges are endemic in large-scale educational innovation. For i3 grantees, the challenges arose in three key areas: 1) building new collaborations, 2) working with evaluators, and 3) leading and managing complex organizations.

“All told, the i3 program stands as a powerful example of the possibility of using policy to shape the work of practicing innovators.”

1. Building new collaborations and negotiating fidelity-adaptation tensions

The i3 program has pressed grantees to collaborate with more (and more varied) districts and schools than they had in the past, often using interventions that had to be revised and extended in new ways. Scaling required managing issues of recruitment and retention while, at the same time, building open, trustful relationships that supported productive collaboration. For example, IDEA Public Schools was awarded an i3 development grant to expand professional development to 600 new teachers, 400 instructional leaders, 24 new principals, and 160 aspiring teacher leaders to support the human-capital pipeline in the Rio Grande Valley. ASSET STEM Education, a validation grantee, proposed providing comprehensive professional development to teachers across Pennsylvania in K-6 standards-aligned STEM instruction. Reading Recovery, a scale-up project, proposed training 15 new Teacher Leaders and 3,750 new Reading Recovery teachers while also establishing new training sites in rural areas to target low-performing schools and high-needs students.

Scaling required balancing a deep tension between local adaptation and fidelity of implementation. On one hand, i3 grantees recognized a strong need to support local adaptation in order to manage uncertainty and increase effectiveness in fielding complex interventions across increasingly diverse arrays of schools. On the other hand, the i3 program placed a premium on fidelity of implementation in order to produce evidence of program impact, and local adaptation risked undermining grantees’ evaluation designs.

The result was a breakdown in the fidelity-adaptation synergy that has long driven continuous improvement among leading educational innovators. In the past, organizations such as Success for All and Reading Recovery (both i3 scale-up grantees) navigated the challenges of new collaborations with an approach to learning and improvement in which schools faithfully implemented practices validated by research and experience, adapted and extended those practices to address local needs, and then propagated the new, promising practices in other sites. The i3 program design left grantees with a dilemma: adapting their programs risked compromising their evaluations, but not adapting them risked weak evidence of program impact, because grantees would continue to implement dimensions of their programs that they recognized as problematic.

“Scaling required balancing a deep tension between local adaptation and fidelity of implementation.”

2. Working with evaluators

The i3 program also has brought innovators and evaluators into closer working relationships than has been typical in educational policy and reform. But because the two groups bring different priorities to projects, striking positive working relationships is no simple matter. For instance, innovators aim to produce replicable, effective interventions that are also responsive to local needs, opportunities, and problems. Doing so requires maintaining flexibility and adaptability in their programs. Evaluators aim to complete successful studies that yield unbiased and rigorous assessments of program effectiveness. Yet that, in turn, requires program stability and fidelity.

Crafting trustful, collaborative, and productive relationships required managing these competing priorities. Doing so was especially important in managing the tension between fidelity of implementation and local adaptation. As one grantee with a validation project explained:

“One challenge for us was teacher attrition. We had instances of teachers leaving their schools after the first year of the program. From our perspective, we needed to provide training to replacement teachers to maintain high-quality implementation in the school and to make sure that students continued to benefit. But that meant that a second year or third year school would have teachers who were only in the first year of implementation. Also, we were adapting our training every year as we learned from experience, including moving some of our online training to face-to-face coaching. So, these new, first year teachers would have slightly different training from the other teachers. But we knew that we were a validation project, and we had to make sure that these changes would be okay with our evaluators before making the final decision about how to proceed.”

Managing the tensions required that innovators and evaluators learn to work together in new ways. Logic models and measurement models became the media through which they negotiated shared understandings and struck agreements on the nature and scope of their work. Early on, some evaluators worked in pseudo-coaching roles, as they guided innovators in clarifying the logic of their program designs. Practicing innovators were then free to adapt and improve their programs in ways that did not compromise the logic and measurement models around which their evaluations were structured. These logic and measurement models did not exist at the outset of the i3 program. Rather, they emerged and evolved over the course of the program as a product of the collaborative relationships between innovators and evaluators.

3. Leading and managing complex organizations

While most of the i3 grantees had some prior experience managing educational innovation, few, if any, were formally trained or fully prepared to manage the entire scope of work and the uncertainty they encountered while working within the i3 program and scaling their interventions. Rather, all of the members of the learning community found themselves managing networks, programs, organizations, relationships, and trade-offs that were in some way new to them. For example, extending and modifying intervention designs required increasing their development staffs. Further, working with more (and more varied) districts and schools required increasing their training/coaching staffs. Working with external evaluators and with U.S. Department of Education personnel required expanding their executive and managerial staffs. While critical to the success of their programs, much of this organization-building occurred “behind the scenes” and sometimes without support.

Absent established traditions of research and professional development focused on the practice of educational innovation, most leaders found themselves addressing their learning needs either through reflective practice or through participation in communities of practice. As one participant with a development grant explained of the i3 Learning Community experience:

“Especially in the early years of the grant, the i3 Learning Community was our most important resource for innovation implementation support. The opportunity to meet with like-minded and like-challenged colleagues in a place where there was no judgment was invaluable. We knew we could speak honestly, and there would always be thoughtful people who would understand and help us work through our decision making process and then reflect on the results.”

Building an Improvement Infrastructure

While the i3 program has introduced more structure and discipline into the practice of educational innovation, so too has it introduced new challenges that threaten its ultimate success. Developing a system-level “improvement infrastructure” that supports practicing innovators in addressing these challenges could be a game changer.

“Balancing impact and improvement is not a matter of doing the impossible. Rather, it is a matter of duplicating success.”

After all, the structure and discipline introduced through the i3 program are very much a positive artifact of a highly developed “impact infrastructure”: a system of political and policy supports emphasizing evidence as both an input to and an output of the practice of developing effective, scalable educational innovations. The i3 program is one component of this impact infrastructure: a competitive grant program that structures requirements, resources, and incentives to support the use of evidence in innovation. It is supported by a web of interdependent federal policy initiatives promoting the use of evidence in innovation, including the establishment of the Institute of Education Sciences, the creation of the What Works Clearinghouse, and investment in the advancement of statistical methods and in early-career professional development for researchers. It is further supported by philanthropists advancing their own evidence-driven competitive grant programs, a population of private firms with capabilities for research and evaluation, and a powerful professional organization, the Society for Research on Educational Effectiveness, committed to advancing the cause.

This impact infrastructure, in turn, could provide a blueprint for building a complementary infrastructure focused on improvement.

Policy supports

A parallel improvement infrastructure could begin with a complementary policy web that promotes and supports continuous improvement. This policy web could include adapting the i3 program to support a sort of “novice–intermediate–expert” progression in building capabilities for continuous improvement in sponsored projects. Support at the development level could focus on using design-based research to test and refine key practices and components. Support at the validation level could focus on developing capabilities in schools to enact evidence-driven Plan–Do–Study–Act cycles that adapt programs to address local needs. Support at the scale-up level could focus on developing infrastructure that links the enterprise into a coherent, evolving learning system. This policy web could also include agencies to champion and monitor the work of continuous improvement, as well as initiatives that invest in the development of formal methods of continuous improvement and in researchers able to support the use of these methods.

Philanthropic, private, and professional supports

Already, elements of a complementary, private-sector web are forming as key components of an improvement infrastructure: for example, the emergence of the Carnegie Foundation for the Advancement of Teaching and the SERP Institute as organizations championing continuous improvement in educational innovation, matched by a community of researchers in universities and private organizations who are advancing and popularizing methods of design-based research. The growth and maturation of this private-sector web could be accelerated through federal resources and incentives that draw in additional organizations, for instance by requiring that competitive grant programs incorporate formal methods of continuous learning and improvement. Federal policy could go further still, sponsoring competitive grant programs that engage these organizations directly and provide resources and incentives to craft partnerships with practicing innovators.

Political supports

The bedrock of the impact infrastructure is political support anchored squarely in what Harvard University professor Jal Mehta aptly describes as the “allure of order”: longstanding faith among policymakers in the potential to use principles of rational management to discipline otherwise “soft” educational practices. The allure of order has deep roots in norms of rationality that have long dominated educational politics and policymaking in the U.S., and it is reinforced by longstanding appeal to both business and medicine as sources of ideas and legitimacy to support educational reform.

Perhaps the biggest challenge to building an improvement infrastructure lies in extending political discourse to include an understanding of improvement that parallels the allure of order. Indeed, ideas are emerging in both business and medicine for doing so. Possibilities include the introduction of language emphasizing “infrastructure building” (rather than “turnaround”) as an approach to improving weak schools, the use of the tech sector’s notion of “perpetual beta” as characterizing the work of educational innovation (rather than the pursuit of “What Works”), and the use of Atul Gawande’s notion of “better” (rather than “scientific knowledge”) as an outcome of improvement-focused educational innovation.

The trick will lie in understanding how these notions have been used to shape and influence broader discourse and understanding in other sectors, and ultimately adapting these strategies to education and other social sectors.

Looking forward

There is no doubt that cultivating a highly developed, coordinated improvement infrastructure will be difficult. However, the emergence of the impact infrastructure provides evidence that it is possible.

Indeed, the development of an improvement infrastructure can actually be interpreted as the essence of innovation: working iteratively to make incremental improvements to well-reasoned, promising, yet inevitably imperfect strategies and plans. In this case, such incremental improvement would center on maintaining the positive benefits that have followed from a keen, sustained focus on impact while, at the same time, developing and sustaining an equally keen focus on continuous improvement.

Viewed from this perspective, balancing impact and improvement is not a matter of doing the impossible. Rather, it is a matter of duplicating success.
