The role of research evidence in improving policy and practice outcomes has long been a topic of public debate and a focus of research. To date, scholarship has focused largely on two evidence challenges: 1) the role of evidence in policy formulation, for example how to get various types of evidence into the hands of policymakers as they craft policy, and 2) how to improve the use of evidence in practice contexts, with particular attention to the adoption and fidelity of evidence-based practices.
Comparatively less attention has been paid to the role of research evidence in the complicated world of policy implementation. Policy implementation includes the time period from when state and local administrators decide how policies (often vaguely defined in statute) will be operationalized in regulation to when frontline workers must interpret those regulations to determine everyday practice norms. This is where the two evidence challenges mentioned above meet: It is where policies—and whatever evidence supports them (or not)—are literally converted into guidelines for frontline practices used in clinics, schools, and other social service organizations.
How research evidence is used in policy implementation is increasingly important given the growth of federal and state policies that mandate the use of evidence-based practices. Examples of these include the Foundations for Evidence-Based Policymaking Act, as well as the policy that we study in our current research, the Family First Prevention Services Act (FFPSA), which allows states to draw down federal Title IV-E money to be spent on child welfare prevention services for the first time, but only for interventions that are listed on a specific federal evidence-based practice clearinghouse. To be listed on the clearinghouse, practices have to meet a high evidentiary standard based on experimental or quasi-experimental evidence. This means that some practices long embraced in specific local contexts or by communities with unique demographic features (e.g., rural location, specific racial, ethnic, or religious groups) are not “approved”—partially because they simply haven’t been studied.
How are administrators whose jobs are to implement these policies perceiving and responding to these new evidence requirements? Are they being adequately supported in doing so? What is the relationship between sensitivity to local context, equity, and evidence-based policy and practice standards? Our research highlights a couple of reasons why we believe the policy implementation context is an important new frontier in studying the use of research evidence—and the consequences of that use.
Balancing Direction and Discretion
First, we note that the growth in evidence-based policy is occurring within a federalist context that bakes devolution of authority into policy implementation. By devolution we mean the practice of allowing states and localities increased discretion about how they will implement federal mandates (e.g., the use of block grants). As a result, state and local policy implementers are often walking a difficult tightrope between direction and discretion when it comes to the use of research evidence. Sometimes this can look like a kick-the-can-down-the-road approach from the federal level: “We want to signal that evidence is important but we don’t want to be the ones who have to decide how to actually operationalize its use.”
While this approach prioritizes discretion and the opportunity to customize interventions to local contexts, it often provides insufficient resources and supports for policy implementers to meaningfully engage research evidence. Decades of research on the use of research evidence, including work funded by the William T. Grant Foundation, have shown that supports like research-practice partnerships and access to knowledge brokers are crucial for organizations to be able to access and integrate high-quality evidence in ways that are usable and efficient for their contexts. But such partnerships are extremely labor-intensive and not available to most public agencies tasked with implementation. As a result, resource-strapped public agencies are often frustrated by what can be seen as conflicting demands: Find and use evidence-based practices but do so in ways that improve equity and are sensitive to local needs. To support them, we must not rely exclusively on intermediaries to provide bespoke solutions. Rather, policymakers must craft more flexible policies that allow local implementers to adapt directives to local contexts and provide sufficient support to do so. Such support includes allowing implementers adequate time to tailor and integrate new practices and providing additional funds so that needed evidence can be co-produced in local contexts.
Fitting Practices to Local Contexts
Second, we are beginning to see a new approach where federal mandates are accompanied by a particular tool, like an evidence clearinghouse, as we see in our research on FFPSA. In this approach, the federal government continues to devolve authority to local actors but puts additional guardrails around evidence: “By telling policy implementers what counts as evidence-based practice, we take the guesswork out for them, but still provide flexibility of choice through a menu of approved practices.”
While this constrained choice approach works well in theory, in our research we find that it remains insufficient for a few reasons. First, clearinghouse-approved interventions still may not be the right fit for the local context—often the list of approved interventions is short and limited to those with significant institutional support (e.g., usually not community-developed practices). Second, and relatedly, many clearinghouse-approved interventions are not culturally tailored. States are receiving pushback from affected communities, asking for 1) support to collect the evidence that their own homegrown interventions work just as well or better for their communities and/or 2) more inclusive approaches that recognize various types of evidence, for example lived experience or intergenerational knowledge sharing practices.
Third, interventions are typically standardized, manualized, and packaged to facilitate export to and adoption in diverse settings quickly. But implementing “ready-made” practices to fidelity takes a lot of financial resources, training, and provider buy-in—all resources that are in short supply in many state contexts. Created in good faith, constrained choice approaches like clearinghouses aim to provide evidence tools about “what works” that make policy implementers’ work easier. Yet, these approaches bring up new equity challenges, including what gets to count as an evidence-based practice and whether those practices address communities’ priorities and organizations’ capacities. These equity concerns are particularly salient for historically marginalized racial and ethnic communities, as well as community-based organizations.
Overall, our research points to significant challenges for those on the frontlines of policy implementation as they engage with mandates to use evidence in specific ways. These challenges often stem from the need of state and local administrators to balance what the research evidence says with the preferences, expertise, and lived experience of multiple stakeholders—nonprofit managers, community members, affected populations, and more. This leads to many questions that research might fruitfully address. For example, how can state and local administrators be supported in finding and implementing evidence that is right for their context in ways that are low-cost and scalable? How can policy and policy tools be designed so that evidence mandates also honor equity concerns in the implementation process? How can we maintain room for evidentiary plurality at the frontline without sacrificing effectiveness? These questions and more will be important for scholars to address as the field moves forward.