A central tenet of evidence-based policy is that society will be better off when research is used. Recent efforts to increase the use of evidence in policymaking have focused on improving the quality of evidence and on providing incentives to policymakers to allow evidence from research to guide their decisions. Although well intentioned, these efforts often fail to get evidence used in policymaking because they make unrealistic assumptions about how policy decisions are made and how policies are implemented. An emerging body of evidence featuring the social side of evidence use—infrastructure, capacity, relationships, and trust—points the way toward a more nuanced understanding of evidence use. In this essay, I urge advocates for evidence-based policy to attend to the evidence on getting evidence used, and call on researchers to test new models that take into account the social side of evidence use.
Recent approaches to evidence-based policy
Every year the federal government spends more than $600 billion on grants to fund programs and services in education, social sectors, and health (Government Accountability Office [GAO], 2016). When deliberating such commitments, policymakers weigh evidence from research, practice, and their own experience while also navigating political pressure, the demands of their constituents, and regulatory constraints (Cairney & Oliver, forthcoming; Haskins & Baron, 2011; Head, 2010). The weight given to each of these factors swings like a pendulum, yielding a policymaking context that privileges ideology and external pressures at one moment and gives primacy to evidence the next. The latter context is often labeled “evidence-based policy,” a catchall term for policymaking largely shaped by evidence—primarily scientific—rather than political ideology.
While the forms and uses of evidence can vary (Nutley, Walter, & Davies, 2007; Weiss, 1977), over the last decade the paradigm informing evidence-based policymaking has largely prioritized research evidence derived from studies in an experimental tradition. The federal Office of Management and Budget, for instance, has made such evidence a factor in funding criteria:
To better integrate evidence and rigorous evaluation in federal grantmaking, the Office of Management and Budget has encouraged federal agencies to use tiered evidence grant programs…. Under this approach, agencies establish tiers of grant funding based on the level of evidence grantees provide on their models for providing social, educational, health, or other services. (GAO, 2016)
One example of this approach is the federal Teen Pregnancy Prevention Program, which was developed, in part, through research on the incidence of teen pregnancy, its consequences, and the factors that contribute to it. The policy also cited empirical evidence about programs that can help prevent teen pregnancy. The same legislation offered financial incentives for adopting programs with a strong empirical base for reducing teen pregnancy and for monitoring and evaluating their implementation (Haskins & Baron, 2011). Provisions were also included to expand the evidence base and to test new and emerging programs (Haskins & Margolis, 2014). Lastly, technical assistance was offered to help states and localities over the grant period and to build capacity for evaluation. A clearinghouse was created to house the growing evidence base.
The rational approach appealed to many, and the linearity of the argument was compelling: A discrete challenge was defined, packaged programs with prior research evidence of success were recommended, and states or localities were to be rewarded for adopting those programs. If the program worked, it would make life a little easier for teens as they moved into adulthood, while also saving taxpayers money.
The evidence on whether rigorous evidence is getting used
There is limited evidence on whether recent approaches to evidence-based policy live up to their promise.
Only a small number of studies have tested whether the use of high quality research leads to better policy and practice (Oliver, Lorenc, & Innvar, 2014). Among these, Wulczyn and colleagues (2015) found a positive relationship between the use of research evidence by child welfare agencies and how quickly a child was returned home. Similarly, Palinkas and colleagues (2017) found that using research evidence was positively associated with the quality, pace, and success of efforts to implement evidence-based programs in youth-serving organizations. When evidence-based programs are used on a broader scale, however, the relationship between research use and outcomes becomes more tenuous. For example, two of the four evidence-based programs that were scaled up as part of the U.S. Department of Education’s Investing in Innovation initiative reported positive results for students, while the other two told a more complicated story, one involving not just the evidence base but also the context for implementation (Lester, 2017).
Thus, recent approaches to evidence-based policy are likely to fall short of their goals because they rest on faulty assumptions about what it takes for research to be used in impactful ways. In overlooking the social side of using research evidence, the current paradigm has fallen short of expectations.
Aligning evidence-based policy with the research
Reframing approaches to evidence-based policy requires taking a candid look at what we do and do not know. It means grappling with the evidence available to inform more impactful approaches to evidence-based policy.
For the past ten years, the William T. Grant Foundation has grown a portfolio of studies on understanding and improving the use of research evidence in policy and practice. This portfolio supports qualitative and mixed-methods work that centers on decision makers and their environments. It has identified conditions associated with using high quality research evidence, and it includes tests of what it takes to build the conditions that support the use of high quality evidence in ways that may benefit youth. The Institute of Education Sciences (IES), the National Institutes of Health, and the National Institute of Justice have also invested in studies with similar goals, and related work is being conducted in Australia, Canada, the United Kingdom, and elsewhere.
What have we learned from this research that leads to more effective constructions of evidence-based policy?
Incentives Fall Short without Investments in Infrastructure and Capacity
What we know
Research on research use suggests that capacity and relationships provide the foundation for using evidence. Incentivizing, monitoring, and evaluating the use of evidence do communicate the value of research, but these tactics aspire to, rather than cultivate, the conditions needed to translate research into its intended outcomes:
Proponents of tiered-evidence grants contend that they create incentives for grantees to use approaches backed by strong evidence of effectiveness, encourage learning and feedback loops to inform future investment decisions, and provide some funding to test innovation. The evidence and evaluation requirements for tiered-evidence grants represent a deliberate approach to using evidence that may require different policies, practices for outcome and performance measurement, and capacity for agencies and grantee organizations. (GAO, 2016, p. 1)
In contrast, we know from a growing body of research that to improve research use and transform on-the-ground practices, evidence-based policymaking needs to attend to infrastructure and capacity (Tseng & Nutley, 2014). Using research requires relevant knowledge, skills, and infrastructure to inform decision-making and help to absorb and embed research use into a system (Chorpita & Daleiden, 2014; Farrell, Coburn, & Chong, 2018; Honig, Venkateswaran, & McNeil, 2017).
When these conditions are lacking, challenges arise. The following two examples, drawn from a GAO report on the implementation of federal tiered-evidence grantmaking, illustrate how overlooking the capacities needed to support evidence-based policy can derail it.
…Some grantees did not have the technical skills and infrastructure to understand the evidence base and select the evidence-based model that would best fit their target populations…. When a model did not fit a community, grantees had to make many adaptations to the evidence-based model. As a result, grantees were less likely to achieve the greatest impact…. (GAO, 2016, p. 18)
All of the grantees included in our review reported that they faced challenges in fulfilling requirements for rigorous evaluation in tiered-evidence grants, for example, when planning for their independent evaluations…. Some grantees had not previously worked with an independent evaluator and were not familiar with the qualifications they should look for in an evaluator. For example, they faced challenges in developing a description of the requirements and hiring an evaluator with the appropriate experience and skill set. State and local governments also found it difficult to procure an evaluator within the grant’s timeframe. (GAO, 2016, p. 21)
While at first glance these challenges may seem trivial, they directly undermine the assumptions behind the structure of evidence-based policy: that incentivizing the use of evidence-based programs will lead to the selection of promising programs, and that monitoring and evaluation will increase the likelihood that the model is implemented with fidelity. But if there is a mismatch between the program selected and the needs of the population, or between the skills of the data analyst or evaluator and the kind of work that needs to be done, then the evidence-based program is unlikely to achieve its intended impact.
Realignments
New approaches to evidence-based policymaking could include funds or at least guidance to create or repurpose positions to focus explicitly on evidence use, data monitoring, and evaluation (Gamoran, 2018). Alternatively, consultants may be hired to serve these roles, but guidance or coaching—in some form—must be provided about the specific knowledge, skills, and training required for these positions.
Research-practice partnerships represent a notable strategy to bolster capacity and infrastructure. Partnerships are built for the long term and cultivate the conditions that support the use of research evidence. Sustained relationships between researchers and practitioners bridge the different ways that researchers and practitioners define research evidence and provide opportunities for building trust (Coburn, Penuel, & Geil, 2012; Palinkas, Short, & Wong, 2015). Moreover, these collaborations can offer long-term structures that support organizations as they implement evidence-based policy. They create an infrastructure to increase the flow of information between research, policy, and practice, and provide structured interactions to make sense of research findings within the local context and to inform next steps in research. Researchers might develop the tools for monitoring the implementation process, help adapt existing programs, and evaluate others (Penuel & Farrell, 2017). They also offer continuity as new policy actors come and go (Leslie, Maciolek, Biebel, Debordes-Jackson, & Nicholson, 2014; Mosley & Courtney, 2012).
Access to Evidence Is Insufficient without Avenues for Engagement
What we know
Convincing evidence exists that research use requires attention to engagement among researchers, decision makers, and the intermediaries between them. Recent approaches have focused on generating rank-ordered lists of evidence to improve awareness and access. This is an important step, but it does not guarantee understanding or use.
Traditional dissemination channels rarely connect research evidence with potential users (Spybrook, Everett, & Lininger, 2013). This pattern was confirmed again by the recent IES listening tour and partner survey, which revealed that, among 510 K–12 educators, half had never heard of one of the primary mechanisms for communicating evidence-based programs in education: the What Works Clearinghouse (Sparks, 2018).
Although such clearinghouses provide access to searchable research repositories, the content within reflects the values of researchers rather than those of decision makers, who evaluate research on their own terms. While researchers may prioritize the scientific rigor of the evidence above all else, decision makers may view rigor as a baseline requirement and place greater emphasis on the trustworthiness and source of the research, its relevance to their local context, and its feasibility (Palinkas et al., 2016). During a recent convening held by the Annie E. Casey Foundation and the William T. Grant Foundation, more than 50 child welfare leaders articulated some of the questions decision makers are asking: What staffing structure is needed to support an evidence-based program? How long will it take to implement and see changes? How do we coordinate the use of multiple programs, and what will it cost? As it stands now, answers to these questions are underdeveloped or absent in clearinghouses.
Further, recent approaches are misguided in assuming that access to research is sufficient to foster use. Like federal and state policymakers, local decision makers work in a world of conundrums, sorting through different types of evidence and attempting to balance competing considerations. To meet demands, these individuals need to work efficiently and take shortcuts when processing an abundance of information (National Academies of Sciences, 2017). Nuance quickly deteriorates, and information that is easy to accommodate dominates. Against this backdrop, it is easy to see that motivating the use of research evidence requires more than cataloguing and translation. These are healthy starts, but as the IES listening tour indicated, more active engagement is needed (Sparks, 2018). Informal and structured contacts are necessary to create opportunities for learning and use (Farrell et al., 2018).
The success of evidence-based programs depends on professional and practice expertise, as well as research evidence. For example, while a protocol for an evidence-based pregnancy prevention program may be carefully tested to determine what topics matter for preventing risky sexual behaviors, professional expertise is needed to develop the rapport to engage youth and elicit candid responses. Brokering is needed to facilitate an ongoing exchange and engagement between research and professional expertise and to support planned adaptations (Chorpita & Daleiden, 2014).
Realignments
Intermediaries or research-practice partnerships could facilitate structured and informal opportunities for iterative learning throughout the policymaking process (Coburn et al., 2012; Cvitanovic, McDonald, & Hobday, 2016; DuMont & James-Brown, 2015; Gándara, Rippner, & Ness, 2017; Neal, Neal, Mills, & Lawlor, 2018; Scott, Lubienski, DeBray, & Jabbar, 2014). At their best, intermediaries serve as honest brokers of research evidence and facilitate exchanges in which researchers influence policy and policymakers (Bogenschneider & Corbett, 2010). At their worst, intermediaries can play coercive roles and limit learning (Gándara et al., 2017; Scott et al., 2014).
Studies across a range of policy areas, including the environment, education, and child welfare, indicate that participatory approaches are critical for fostering understanding and use of research evidence (Cvitanovic et al., 2016; Honig et al., 2017; Metz & Bartley, 2015). Participatory strategies might involve developing logic models, co-creating procedure manuals and desk guides, mapping workflow processes, and debating strategies to move from generalized findings to context-specific uses (Cvitanovic et al., 2016; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; McDonnell & Weatherford, 2014; Metz & Bartley, 2015). These strategies may help reorganize existing routines through collaborative problem solving (Palinkas et al., 2011) and lead to better use of evidence-based programs.
These strategies may also trigger active deliberation about the research and lead decision makers to place a higher value on it. McDonnell and Weatherford (2014) suggest that structures for processing information can produce attitude change if the deliberation process elicits active, reflective processing. Work by Honig and colleagues (2014) supports this idea: they found that shifts toward using research in six school districts’ central offices occurred when staff had the opportunity to learn from research-based ideas, were assisted by others, and had opportunities to respond to and deepen their understanding of challenging ideas.
Critical to all these efforts are relationships. The quality of relationships affects opportunities for learning and what information is effectively shared (Asen & Gurke, 2014; Finnigan & Daly, 2012; Neal et al., 2018). Importantly, strained relationships limit the diffusion of less familiar and more complex information, such as research (Barnes, Goertz, & Massell, 2014; Daly, Finnigan, Jordan, Moolenaar, & Che, 2014; Honig, Venkateswaran, & Twitchell, 2014). In contrast, when relationships garner trust, individuals can engage in risk taking, learning, and behavior change (Asen & Gurke, 2014; Honig et al., 2014). This trust and learning come about, in part, through informal opportunities for contact and exchange (Farrell et al., 2018).
Conclusion
We need to reimagine evidence-based policymaking both to get research used and to do so in ways that better address social challenges. Research on research use offers ideas for moving forward. Advocates for evidence-based policy would be well served to follow their own advice and align with the evidence. This reframing would:
- Honor different types of evidence
- Invest in the capacity and infrastructure required to use evidence
- Prioritize relationships and engagement
Evidence-based policy must support understanding of and engagement with research evidence, value stakeholder involvement in the production of research, and invest in organizational capacity to use research and other types of evidence to effect changes in decision-making, practice, and, ultimately, youth outcomes. These principles and the actions that follow may help temper the pendulum’s wide swings and establish some semblance of equilibrium.
However, the evidence base is not complete, and important unknowns remain for researchers to explore. While we know quite a bit about the conditions that support the use of research evidence, we know much less about the many ways to realize those conditions, and even less about what improved research use means for tackling social problems once those conditions are in place.
Serious scientific inquiry is needed to investigate the conditions under which evidence-based policies achieve their intended outcomes (DuMont, 2015). The field lacks prospective studies of how decision makers develop the capacity to roll out evidence-based policies at the federal, state, and local levels in ways that enhance the quality of services and improve the outcomes of children and youth. Do such policy processes result in more cost-effective responses (National Research Council, 2012)? Federal efforts such as the Family First Prevention Services Act (https://www.congress.gov/bill/115th-congress/house-bill/1892), as well as state and local constructions of evidence-based funding, present meaningful opportunities to examine the conditions under which these strategies demonstrably improve the use of research evidence, the quality of policies and practices for young people, and, ultimately, youth outcomes.
Studies to date suggest that evidence-based policymaking approaches that move beyond merely valuing evidence to investing in the tools and personnel needed to reconfigure existing routines and practices are likely to yield practices that map more consistently to the evidence and produce better outcomes (Chorpita & Daleiden, 2014; Farrell et al., 2018; Honig et al., 2017). New research is needed to test this supposition. Likewise, reinventing clearinghouses as engagement hubs that provide ongoing opportunities for relevant decision makers to meet with model developers may yield programs that respond better to local needs, as well as better-informed choices among programs. Although consistent with available evidence, this model also remains to be fully tested.
Another promising avenue for future research is to examine processes that bring user needs and stakeholder voices into greater prominence in the evidence-building process. Existing research suggests this approach would help create more trust in and buy-in for the findings and, in turn, lead to greater use of the evidence. In addition, the inclusion of these voices is likely to deepen understanding, provide new insights about problem spaces, and offer ways to respond (Tseng, Fleischman, & Quintero, 2017; Tseng & Coburn, forthcoming).
These questions admittedly venture into uncharted territory. We invite the adventuresome to explore; the lives of youth depend on it.