One of the most unheralded aspects of the recent federal investments in scaling evidence-based programs is the partnerships they have forged between researchers and program developers. Both the U.S. Department of Education’s Investing in Innovation (i3) grants and the Corporation for National and Community Service’s Social Innovation Fund (SIF), for example, supported expansions that more than doubled the number of participants served in funded programs, while also funding evaluations built on new partnerships between researchers and program developers. These alliances have given researchers and program staff new ways to work together and have supported programs in their efforts to continuously improve and refine their interventions.
In both the SIF and i3, MDRC worked in concert with program leaders to consider how evidence could be used to improve or even redesign interventions. We sometimes learned as much from disappointing findings as we did from programs that “worked.”
BELL (Building Educated Leaders for Life), a recipient of a SIF grant, runs a successful elementary school program and wanted to test its application in a middle school setting. MDRC’s evaluation of BELL’s summer program for middle schoolers found encouraging results in math but not in reading. The two organizations then launched a joint diagnostic process to identify possible programmatic solutions for middle school. As Tiffany Gueye, the Executive Director of BELL, noted in an Education Week piece, BELL leaders “plugged findings from the study into our continuous-assessment process” and decided to focus enhancement efforts on staff training, curriculum, and student assessment, along with greater attention to social and emotional learning. MDRC and BELL are continuing to work together to learn how these efforts are playing out in the programs and to design a learning agenda for the future.
The story was a little different for the Center for Employment Opportunities (CEO), another SIF grantee. An earlier MDRC study of CEO’s New York City transitional employment program for recently released offenders re-entering the community found that, although the program did not yield long-term employment effects, it did reduce recidivism among program participants. Using what it learned from the evaluation, CEO restructured its employment programming to place more emphasis on post-program employment retention services as it expanded to other jurisdictions. MDRC is working with CEO on an initiative to pilot a cognitive behavioral education component, with the hope that it will further improve employment outcomes, particularly for younger offenders.
This type of collaborative approach to building reliable evidence can also be applied to improving systems, not just specific programs or interventions. Outside of the SIF or i3 structure, MDRC is working with New Visions for Public Schools, which supports a network of 77 New York City public schools, to promote evidence-based practices while embedding ongoing evaluation and evidence-building into the support it offers schools. In a constantly changing operational environment, a support organization or system of schools may benefit as much from using evidence to build systems for identifying and supporting at-risk students as from introducing specific evidence-based interventions. This next generation of research–practice partnerships is employing a mix of advanced analytic techniques, including predictive analytics, exploiting variation across schools, and quick-turnaround experiments, to inform decision making and improve school and student performance.
As the evidence movement matures, it is increasingly clear that we need to build on lessons not only from clear successes but also from interventions that have not worked. Neither program developers nor researchers can tackle this task in isolation. Research–practice partnerships are a strategic way to ensure that we learn from what does and does not work as we actively translate evidence into practice. These next-generation alliances between social policy programs and evaluators also have the potential to tackle more complex challenges by capitalizing on data housed in public systems and on the opportunity to embed random assignment and other rigorous methods inside organizations’ continuous improvement cycles, heralding a new age in evidence-based practice.