When we began this blog series, we posited that evidence-based policymaking was at a crossroads. In the past six months—despite rancorous partisan debates and a fierce presidential primary season—Congress surprised everyone and passed the long-overdue reauthorization of the Elementary and Secondary Education Act, with strong support from both parties.
The Every Student Succeeds Act (ESSA) includes over 80 mentions of “evidence” and “evidence-based,” and a devolution of power to states and districts to implement those provisions. And just last week, the Evidence-Based Policymaking Commission Act, sponsored by Representative Paul Ryan and Senator Patty Murray, was approved by the Senate and the House in another display of cooperation.
It is promising that at a time of heightened political rancor, evidence-based policy is finding bipartisan support. But the road ahead is still uncertain, and much will depend on whether the evidence movement can evolve. Here, I draw on the terrific ideas and insights from the authors of this series to suggest three steps for moving forward: focus on improvement, attend to bodies of evidence, and build state and local capacity for evidence use.
It’s time to position evidence-based policy as a learning endeavor. Implementing and scaling interventions in different contexts with diverse groups is notoriously challenging. Promising results are emerging, but not all are home runs. The history of evaluation research shows that most evaluations yield mixed or null results, and this generation of studies will likely do the same. Interventions work in some places for some people, but not others. Even new studies of established interventions turn up findings that are inconsistent with prior results. What should we make of these findings?
One direction we should not take is to obscure these findings or pretend they don’t exist. I fear that already happens too often. The rhetoric of the What Works agenda—funding more of what works and less of what doesn’t—has created an environment that pressures program developers to portray home run results, communications engines to spin findings, and evaluation reports to become more convoluted and harder to interpret.
Improvement could be the North Star for the next generation of the evidence movement. The idea of building and using evidence simply to sift through what works and what doesn’t is wasteful and leaves us disappointed. We need to find ways to improve programs, practices, and systems in order to achieve better outcomes at scale. Let’s not be too hasty in abandoning approaches that do not instantly pay off, and instead learn from the investments that have been made. After all, many established interventions had years to gestate, learn from evidence, and improve. Let’s not cut short this process for new innovations that are just starting out.
This is not to say that anything goes. Patrick McCarthy reminds us that when research evidence consistently shows that a policy or program doesn’t work—or even produces harm—it should be discontinued. Indeed, the next generation of evidence-based policy will need to aim toward improvement while keeping an eye on whether progress is being made.
If evidence-based policy is to realize its potential to improve the systems in which young people learn, grow, and receive care, we need to rely on bodies of research evidence. Too often, public systems are pressured to seek silver bullet solutions. A focus on single studies of program effectiveness encourages this way of thinking. But, as Mark Lipsey writes, “multiple studies are needed to support generalization beyond the idiosyncrasies of a single study.” Just as a narrow aperture can exclude the important context of an image, so too does focusing on a narrow set of findings exclude the larger body of knowledge that can inform efforts to improve outcomes at scale.
State and local leaders need to draw on bodies of research evidence. This includes not only studies of what works, but of what works for whom, under what conditions, and at what cost. What Works evidence typically reflects the average impact of an intervention in the places where it was evaluated. For decision makers in other localities, that evidence is only somewhat useful. States and localities ultimately need to know whether the intervention will work in their communities, under their operating conditions, and given their resources. Evidence-based policy needs to address those questions.
To meet decision makers’ varied evidence needs, the evidence movement also needs to devote greater and more nuanced attention to implementation research. Real-world implementation creates tension between strict adherence to program models and the need to adapt them to local systems. To address this tension, we need to build a more robust evidence base on key implementation issues, such as how much staffing or training is required, how resources should be allocated, and how to align new interventions with existing programs and systems. As Barbara Goodson and Don Peurach argue, we have built a powerful infrastructure for building evidence of program impacts, but we need to match it with equally robust structures for implementation evidence.
And finally, the evidence-based policy movement needs to recognize the importance of descriptive and measurement research that helps local decision makers better understand the particular challenges they are facing and better judge whether existing interventions are well suited to address those problems. For those needs assessments, descriptive and measurement studies can be critical.
As decision making devolves to states and localities, the way the federal government defines its role will also change. In the wake of ESSA, officials in Congress and the U.S. Department of Education are aiming to move beyond top-down compliance. But to do so they will need to identify new means to support states, districts, and practitioners in the evidence agenda. States and localities are not mere implementers of federal policies, nor are they simply sites of experimentation. A key way to foster the success of the evidence movement is to support the capacity of state and local decision makers to build and use evidence to improve their systems and outcomes.
Technical assistance is one way that the federal government can support capacity, and it will be important to direct technical assistance to state and local decision makers and grantees in productive ways. While tiered evidence initiatives such as i3 have provided grantees with technical assistance to conduct rigorous impact evaluations, assistance has focused less on other key needs: helping grantees apply continuous improvement principles and practices, vet and partner with external evaluators, and build productive collaborations with districts and other local agencies to implement programs.
Providing technical assistance in these areas would increase the ultimate success of these evidence-based initiatives.
Research-practice partnerships (RPPs) are another way to support state and local agencies. In education, these long-term partnerships can provide the research infrastructure that is lacking in many states and districts as they seek to implement the evidence provisions in the Every Student Succeeds Act. RPPs can help districts and schools interpret the existing evidence base and discern which interventions are best aligned with their needs. In instances where the evidence base is lacking, RPPs are poised to conduct ongoing research to evaluate the interventions that are put into place. Similarly, in child welfare, research-practice partnerships could provide states with additional capacity as they develop Title IV-E Waiver Demonstration Projects to test new approaches for delivering and financing services in order to improve child and family outcomes.
The federal government is perhaps uniquely situated to build and harness research evidence, so that what is learned in one place need not be reinvented in another and the lessons accumulate. Mark Lipsey suggests that federally funded research require the collection and reporting of common data elements so that individual studies can be synthesized. Don Peurach imagines ways the federal government can support an “improvement infrastructure.” We should consider these ideas and others as we move forward.
Foundations also have a role. Private funders are able to support learning in ways that are harder for the federal government to do. The William T. Grant and Spencer Foundations’ i3 learning community, for example, provided a venue for program developers to share the challenges they faced in scaling their programs and to problem solve with one another. In another learning community, our foundation supported a network of federal research and evaluation staff across various agencies and offices to learn from each other. A learning community requires candor, and can provide a safe and open environment to identify challenges and generate solutions. Foundations can also produce tools and share models that states and localities can draw upon in using evidence. With fewer bureaucratic hurdles, we can often do this with greater speed than the federal government.
The ascendance of research evidence in policy over the past two decades gave rise to investments in innovation, experimentation, and evaluation that signaled great progress in the way our nation responds to its challenges. But for all the progress we’ve made in building and using evidence of What Works, we’ve also been left with blind spots. As a researcher, I did not enter my line of work expecting simple answers. Quite the opposite, in fact. Researchers, policymakers, and practitioners know that there is always more to learn than yes or no; more at stake than thumbs up or thumbs down. We build and use research evidence not just to identify what works, but to strengthen and improve programs and systems—to build knowledge that can improve kids’ lives and better their chances to get ahead.
As we approach the next generation of evidence-based policy, it’s essential that we take steps to ensure that practitioners and decision makers at the state and local level have the support they need.