Digest, Issue 8: Winter 2022-23

Learning Across Contexts: Bringing Together Research on Research Use and Implementation Science

Recent publications by implementation researchers underscore promising new directions in implementation science while also acknowledging potential challenges for the field to address in the years ahead (Beidas et al., 2022; Metz et al., 2022). At the same time, scholars in the field of research on research use are forging new directions of their own by drawing on a wider range of bodies of knowledge to investigate strategies for improving and assessing research use.

The fields of implementation science and the study of research use in policy and practice travel on many of the same roads and share similar goals, chief among them improving societal outcomes through the application of research. Both fields also attract interdisciplinary teams and generate strikingly similar knowledge across contexts. However, key differences have emerged between these two fields of study and in the assumptions underlying their empirical work. These differences provide opportunities to strengthen the next generation of both implementation science and research on research use.

In this essay, we highlight how similarities and differences in the methods, approaches, and evolution of each of these fields can contribute to mutually beneficial insights and potential alignments. We describe each field in detail, with an eye toward highlighting key assumptions and differences, and we conclude by discussing opportunities for learning across fields. Overall, we aim to inspire a dialogue that may build a stronger foundation for supporting evidence use in ways that achieve equitable outcomes for people and communities.

Implementation science

While there have been various approaches to supporting the wide-scale use of evidence-based innovations to improve outcomes for people and communities over the last several decades (e.g., diffusion of innovation theory in the 1960s), implementation science entered the scientific and academic discourse with the launch of the foundational journal Implementation Science in 2006. Implementation research, as defined in the journal’s inaugural issue, is “[the] scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services.” This definition, still the most widely accepted in the field, includes the study of influences on service professionals and organizational behavior (Eccles & Mittman, 2006).

Implementation science is concerned primarily with the barriers and facilitators to the uptake of evidence-based innovations, rather than with the efficacy of the innovations themselves (Bauer & Kirchner, 2020). At the heart of implementation science is the assumption that establishing the effectiveness of an innovation does not guarantee its effective use in routine service settings, and that without effective use, an innovation will not improve outcomes for people and communities.

Implementation researchers seek to build evidence in two main areas of inquiry: 1) identifying barriers and facilitators to the uptake of evidence-based innovations across multiple levels of context (i.e., individuals engaged in services, providers, organizations, and other stakeholder groups), and 2) developing and testing implementation strategies that overcome these barriers and enhance the facilitators to increase the uptake of evidence-based innovations (Damschroder et al., 2009).

Since its formation, implementation science has grown through new journals, international conferences and meetings, and training institutes. For example, intensive training workshops offered by the Implementation Research Institute (IRI) and the Training Institute on Dissemination and Implementation Research in Health (TIDIRH) began training cohorts of researchers in 2010 and prepared trainees to seek funding from the National Institutes of Health and the Veterans Administration, thereby supporting the development of a workforce dedicated to implementation science. At the same time, various institutions began to offer implementation science courses, leading to the array of certificate, master’s, and doctoral programs currently offered and further establishing implementation science as a formal field of study.

Evolution of the discipline

The mission and purpose of implementation science have evolved over time. Implementation strategies represent the foundation of current implementation research designs (Albers et al., 2022; Lewis et al., 2018). Implementation strategies are defined as methods or techniques used to enhance the adoption, implementation, and continuation of a clinical program or practice (Proctor, Powell, & McMillen, 2013). Example strategies include initial training and educational meetings, building a coalition, auditing and feedback, modeling and simulating change, and using data experts, among many others (Powell et al., 2015). Historically, implementation researchers have largely studied the effectiveness of implementation strategies for implementing and sustaining evidence-based programs in routine service settings.

Over time, implementation scientists have come to recognize the complexities of implementation and the critical role of upstream and societal factors in achieving impact. Implementation scientists have since grappled with how to broaden the field’s focus to include multi-level determinants and contextual features in its models and research designs (Beidas et al., 2022). In the last few years, we have seen greater interest in hybrid designs and other evaluation methods that assess both the innovation and the process of implementing it (Lane-Fall, Curran, & Beidas, 2019). At the same time, there are now calls for implementation research designs that can demonstrate the range of approaches for building implementation capacity, as well as rationales for approaches that emphasize the dynamic and highly relational nature of using research in practice (Albers et al., 2021; Carey et al., 2019; Huzzard, 2020). Some scholars believe the field of implementation science has overemphasized technical strategies at the expense of relational strategies (Metz et al., 2021) and that implementation strategies need to take into account “the dynamic and highly relational nature of policy and practice implementation involving multiple layers of context and differing norms and values among stakeholders” (Albers et al., 2021). While implementation science acknowledges the wide range of multilevel barriers and facilitators that can determine implementation success, a “lack of fit problem” persists due to the field’s focus on “internal validity and fidelity over external validity and contextual appropriateness” for innovations in diverse service settings and communities (Lyon et al., 2020).

Some who have raised concerns about the growing gap between implementation science and practice have recommended that the field embrace multidisciplinary work with political scientists, sociologists, economists, and anthropologists to better understand and address the macro-level factors that are less well understood, and less effectively addressed, in implementation scientists’ efforts to impact outcomes and advance equity (Thogersen, 2022). Others emphasize that partnerships that deeply integrate the perspectives of implementation practitioners and researchers are needed to co-produce the science, from conceptualization and execution through interpretation of findings (Thogersen, 2022), thus making it both more rigorous and more relevant. For example, Lyon and colleagues (2020) call for a research agenda focused on human-centered design, in which the development and testing of implementation strategies are embedded in the environment where they will be used. This growing area of emphasis, though, has not yet entered the mainstream of research conceptualization and execution in implementation science.

While many implementation researchers speak to the essential role of partnerships in engendering such reciprocity, the challenges in realizing authentic partnerships are numerous. For example, the complexity of multi-level implementation frameworks and multi-faceted implementation strategies to support change can bump up against public systems with finite resources. This reduces the relevance and feasibility of many implementation approaches, thereby limiting implementation scientists’ ability to co-create partnerships that feel equitable for all parties. Metz and colleagues (2022) recommend systematically studying the experiences of professionals supporting implementation efforts in order to deepen our understanding of the factors and conditions those professionals perceive as critical to implementation success, rather than relying on what researchers who may not be engaged in partnered work perceive as most important.

Some implementation researchers are raising critical questions about whether co-production of evidence is a necessary condition for overcoming implementation challenges and successfully using evidence to improve outcomes for people and communities (Metz et al., 2021). The theory of co-production can provide useful insights into how collaborative work creates the conditions for generating knowledge that is more easily translated into practice (Heaton, Day, & Britton, 2015). Co-production and co-creation methods also address challenges to the “gap” assumption in implementation science, which stipulates that there is a gap between knowing (research evidence) and doing (translating evidence into practice). Metz (2015) argues that moving away from a gap framework to one of co-creation allows for an explicit focus on assessing and understanding how various actors must build trust and pathways for the use of evidence to improve outcomes for people and communities. Other voices echo the belief that co-production and other community-based participatory approaches likely increase the usefulness, scalability, and sustainability of knowledge in real-world settings, and that these approaches are central to centering equity and addressing health inequities in research (Ramanadhan et al., 2018; Shelton et al., 2012).

Implementation scientists have also called for ways to explicitly address equity and describe how theories, models, frameworks, and strategies can be used to support equitable implementation. Shelton and colleagues (2021) propose including structural racism as an implementation construct and promoting its measurement as a determinant in implementation frameworks, as well as considering evidence-based interventions that address structural racism directly or indirectly to impact equitable outcomes. Baumann and Cabassa (2020) propose using an equity lens to integrate implementation science and research on healthcare inequities. Recent research has identified elements of equitable implementation, including building trusting relationships; dismantling power structures; making investments and decisions that advance equity; developing community-defined evidence; making cultural adaptations; and reflecting critically on how current implementation science theories, models, and frameworks advance (or do not advance) equity (Metz, Loper, & Woo, 2021).

Key assumptions of implementation science

In sum, the following assumptions underlying implementation science have emerged:

  • Population-level outcomes can be achieved through better implementation of existing, evidence-based innovations.
  • Better implementation of evidence-based innovations requires the use of appropriately matched, evidence-based implementation strategies that produce the conditions under which evidence-based innovations can be used and sustained.
  • Evidence is most commonly defined as coming from randomized controlled trials and quantitative research.
  • Implementation strategies are required at multiple levels of the system to address macro and micro contextual factors to impact change.
  • The goal of implementation science is to take effective innovations and identify strategies to ensure service settings take up those innovations and implement them with fidelity.

Research on the use of research in policy and practice

Parallel to the flourishing of implementation science, a related field of empirical research studying how and when research is used in policymaking has also grown. From the field’s beginnings nearly forty years ago, the dominant model for fostering the use of research in policy and practice has involved pushing information to decision-makers and increasing access to research. Even so, early scholarship on how and when research is used conceived of interactive exchanges wherein researchers and decision-makers collaborated to understand the key issues and problems, as well as how research might inform deliberation (Davies & Nutley, 2008).

The study of research use had its beginnings in the theoretical frameworks put forth by political science and public policy scholars including Kingdon (1984), Knott and Wildavsky (1980), and Weiss (1977). Then and now, the field seeks to deeply center and understand the intended users of research and the conditions that facilitate research use. Initial theories highlighted the importance of matching particular research with open policy windows, which meant understanding how and when policy decisions were being made (Kingdon, 1984). Theories emphasized the complex, multifaceted process of knowledge acquisition and use by decision-makers over time, going beyond the idea of research use as a one-time event (Knott & Wildavsky, 1980). These theories, along with continued empirical research, have highlighted decision-making as a process (Tseng, 2022). Finally, theories developed nuanced typologies of the multiple ways research might be used, going beyond the idea that research use is limited to instances when a piece of research informs a specific decision, to include how research might shape ideas about the nature of a problem, build an understanding of the utility of research, or support argumentation (Davies & Nutley, 2008; Weiss, 1977).

When the William T. Grant Foundation began supporting studies of research use, it built on these models (Tseng, 2008; DuMont, 2015). Tseng (2009) emphasized the importance of empirical research on research use to understand more about when and why research was used by decision-makers, rather than focusing solely on creating more rigorous research to be pushed out to the field. The Foundation thus centered its grantmaking on studies to better understand how decision-makers acquire, interpret, and use research (Asen et al., 2013; Palinkas et al., 2017; Wulczyn et al., 2017; Yanovitsky & Weber, 2018), as well as how the use of that research is affected by the political, economic, and social context in which organizations operate (Coburn, Honig, & Stein, 2009; McDonnell & Weatherford, 2013). As the field of research on research use matured, the Foundation recognized the importance of understanding how systems, including leadership and resources within organizations, support research use, and the potential for programs, policies, or practices to strengthen supportive contexts and systems and improve outcomes such as health equity (DuMont, 2015; Garcia et al., 2016).

Over time, the Foundation’s focus has shifted from individual models of research use to the importance of understanding relationships as foundational for research use (e.g., Finnigan, Daly, & Che, 2013; Yanovitsky & Blitz, 2020) and the need for systems models that account for the complex social, organizational, and political contexts in which research is used (e.g., Bogenschneider & Corbett, 2021; Mosley & Gibson, 2017). Similarly, the overall field has shifted from individual models of research use to relational and, ultimately, systems models to understand and intervene to improve research use (Best & Holmes, 2010; Nutley & Boaz, 2019).

What is meant by use of research?

Research evidence is defined as empirical findings derived from systematic research methods and analysis (Nutley, Walter, & Davies, 2007; Tseng, 2008). One difference between research on research use and implementation science is that, rather than focusing on evidence-based programs, the research on research use community takes a big-tent perspective in two ways. First, the field is grounded in the premise that research is not the only legitimate form of evidence used in policy and practice. Evidence can include theory, data, clinical insights, patient or participant input, politics, and values (Davies & Nutley, 2008; Nutley, Walter, & Davies, 2007; Tseng, 2008). Research evidence is often combined with other forms of knowledge, and the field of research on the use of research seeks to understand how all these forms of evidence interact to support decision-making. In addition, research use is often conceptualized as non-linear and complex.

Second, the field examining research use does not limit itself to a particular study design or method when considering research (Nutley, Walter, & Davies, 2007). The questions asked are often focused more on whether and how research is used than on what specific research is used (Weiss, 1977). The field has focused less on whether the research used meets a threshold of quality and more on whether the research was used at all (Sheldrick et al., 2022; Rickinson et al., 2021). As the study of research use continues to grow, theoretical discussion is moving away from simple questions of whether research is used at all and toward questions of what it means to use research well (i.e., quality use of evidence) in order to make wiser decisions that may yield improved outcomes (Weiss, 1977). These discussions have intentionally examined the individual, organizational, and systems-level factors, in addition to qualities of the specific evidence used, implicated in how research is understood and implemented in context (Sheldrick et al., 2022; Tseng, 2022). For example, emerging frameworks of quality evidence use ask how deeply the end users of research assimilate varying sources of evidence and how comprehensive users are in considering evidence (Sheldrick et al., 2022). Rickinson and colleagues (2022) point to early health models of evidence use (Sackett et al., 1996), which connect research with patient input and clinical judgment, as similarly relational perspectives on research use. These conversations are connecting the ideas that the quality of the evidence and the quality of its use are both necessary components for reaching the ultimate goal of improving societal outcomes (Tseng, 2022).

In addition to taking a broad perspective on evidence, the field of research on research use considers multiple potential ways research is used, including for accountability, setting priorities, and learning, among others (Davies & Nutley, 2008). This differs from implementation science, which emphasizes the adoption and sustainment of evidence-based programs. Many of the researchers who study research use ground their theories and studies in the typology developed by Carol Weiss (1977), which illuminates multiple ways research is used: instrumental, process, strategic, or conceptual. Weiss (1977) argues that when research is used, it often influences how a decision-maker considers a problem or issue rather than providing a deterministic answer (Nutley, Walter, & Davies, 2007). Inherent in many current theories and findings is the recognition that when research is used, it is often through interactions in which researchers and users of research collectively make sense of research and apply it to a specific context, a process known as co-production (Bandola-Gill et al., 2022). Often, co-production requires boundary spanners or intermediaries, individuals with a foot in each of the worlds of research and practice, who understand the problems users are facing, how existing evidence might inform solutions to those problems, and where new research is needed to support decision-makers (Tseng et al., 2022). The field studying research use has invested in such strategies centered on co-production (Bednarek et al., 2018), including research-practice partnerships (Farrell et al., 2021) and boundary spanning (Oliver et al., 2014), though more empirical research is needed to know whether research produced in this way is more likely to be used.

Research on research use is most interested in the different ways both individuals and organizations make use of research evidence and in the social processes that influence whether and how research is used (Tseng, 2008). The field leverages theories from disciplines such as organizational psychology and management to understand the conditions that support research use in organizations (Farrell et al., 2022). Scholars have sought to understand the users of research, the contexts in which they are embedded (Tseng, 2008, 2012), and how users make sense of research among other kinds of evidence. The field frames questions around how research is actually used in context, rather than the idealized ways researchers think research should be used (Tseng, 2008). This frame has led to more nuanced questions of how research use can be embedded within existing organizations, systems, and processes, rather than disrupting them (Davies & Nutley, 2008; Tseng, 2012). This research integrates the potential needs of end users into the research design and execution process, and some recent interventions that leverage existing organizational routines and systems are showing promise in their ability to increase research use (Becker et al., 2019; Crowley et al., 2021).

How is knowledge produced in research on research use?

Often, researchers discussing research use in policy or practice share post hoc anecdotes of research use. While these can be helpful, the field of research on research use aims to study research use prospectively, to understand the mechanisms that support it and, ultimately, to test interventions that create the conditions supporting it. Researchers studying research use have employed a range of study designs to understand how, when, and under what conditions research is used.

A large proportion of the research in this field comprises case studies (Gitomer & Crouse, 2019). Given the complex, contextualized nature of understanding research use and identifying the mechanisms that might improve it, case studies are an important methodological tool for researchers of research use. Prior studies have examined cases ranging from a single organization (e.g., a school district) to a particular policy or the implementation of a practice (Gitomer & Crouse, 2019). As one example, Coburn and colleagues have used case study methodology to provide rich descriptions of the conditions under which research is used in school districts (Coburn, Toure, & Yamashita, 2009). The team used a longitudinal case study to examine how research is used in decision-making and what factors shape the decision-making process. The case study highlights how knowledge and practice shape the understanding of the problem and what can be done about it (Coburn et al., 2009).

A growing body of research uses experimental and quasi-experimental designs to test mechanisms and interventions that support research use (Gitomer & Crouse, 2019). Designs that support causal inference are important tools for moving beyond describing research use to intervening to improve it. A recent example comes from the work of Bruce Chorpita and Kimberley Becker testing their Coordinated Knowledge System, which aims to provide relevant, timely research to school mental health counselors in the context of regular clinical supervision (Becker et al., 2019). The results of their randomized clinical trial indicate that the intervention not only improves research use but subsequently improves youth outcomes as well (Becker, Chorpita, & Stahmer, 2021).

Given the relational nature of research use, another important tool for studying it is social network analysis, which can help explore how information is shared, who would be the best targets for interventions to improve research use, and how the strength of relationships is connected to when and how research is used (Gitomer & Crouse, 2019). As one example, Finnigan and her colleagues explored how research evidence was used in a network of schools within one district as part of school reform efforts (Finnigan, Daly, & Che, 2013). The results highlight the role of the diffusion of research across schools and the limited understanding of the available evidence for solving problems. The results point to places where interventions that create stronger organizational conditions for research use may facilitate school reform efforts (Finnigan et al., 2013).

How is research use measured?

As theory and knowledge on the use of research have advanced, so too has the sophistication of the field’s methods (Tseng, 2022). Studying whether research is used is a complex endeavor. In the broader scientific community, discussions of research use often center on rough-cut metrics such as citations. While these measures may capture some research use, they often miss more subtle uses, such as conceptual use. Other research has relied solely on surveys or interviews asking whether someone believes they have used research in a decision. These measures often suffer from self-report bias: respondents may misremember whether and how they used the research, or they may understand that they are expected to use research and respond with that in mind. Measures are now starting to ground the assessment of research use in specific problems or situations (Tseng, 2022). What’s more, as the study of research use has highlighted the central role of relationships, measures are now focusing on understanding end users and their perspectives. Looking to the future, we anticipate that these designs will continue and will incorporate emerging methods such as altmetrics and natural language processing (Tseng, 2022). As the concept of quality research use is further explored, we anticipate new measures to capture when and how research is used well (Tseng, 2022).

What are the key assumptions of empirical research on research use?

In sum, the following are key assumptions of research on research use:

  • Providing answers to questions of “what works” is just one of many ways research can contribute to policy and practice. Other ways include helping to surface values, advising on broad directions or new ways of thinking, and contributing to debate. Studies need to consider the many ways research is used (Nutley, Walter, & Davies, 2007).
  • Research use is a complex, dynamic, multi-faceted phenomenon that must move beyond individuals to organizations and systems (Nutley, Walter, & Davies, 2007).
  • Research use is highly influenced by context, so interventions to improve research use need to take a broader perspective (Coburn, 2008).
  • Research use is a social, interactive process (Nutley, Walter, & Davies, 2007).

What can implementation science learn from research on research use to address evolving needs and priorities?

While implementation science and research on research use both include interdisciplinary investigators and teams, research on research use has gone further in terms of the diversity of methods used in studies. This is evident, for example, in a recently launched open-access repository of methods for studying research use, which continues to grow (see uremethods.org). What’s more, current research on research use often draws from a rich seam of previous research, building on existing theories, such as diffusion of innovations and absorptive capacity, in the analysis of research data. Investigators draw insights from a range of disciplines, including public sector management, science and technology studies, organizational psychology, and program evaluation. While it is daunting to continually cast the net widely in pursuit of insights, this breadth has led to rich analytical work and contributed to a tradition of building on previous research rather than reinventing the wheel. Indeed, research on research use is characterized by its embrace of different theories, methods, and data in answering important questions that arise in the process of putting research into practice. Empirical research and theories from public policy, political science, public administration, and organizational behavior, for example, could be important additions to the theory and empirical study of the external contexts that influence implementation. This “wide net” approach may be particularly helpful in supporting implementation scientists in understanding the upstream and societal factors that have such a profound impact on implementation (Nilsen et al., 2013).

Research on research use has also considered whether the ways in which research is produced might be linked to its potential utility. In particular, this line of inquiry has yielded ideas about new approaches to conducting research. The most widespread of these approaches centers on research-practice partnerships, in which researchers and practitioners or policymakers build trust and share power in pursuit of agendas that are relevant to the interests of research users. While research-practice partnerships can be challenging to build and maintain, studies on partnerships funded by the William T. Grant Foundation, as well as lessons shared through the learning community in the National Network of Education Research-Practice Partnerships, are spreading the word about what makes for an effective and sustainable partnership. This work offers an important contrast to implementation science, as it suggests revisiting the ways in which implementation research is prioritized, designed, and conducted. In cases where implementation scientists have looked at the potential contributions of research-practice partnerships, they have observed the ways in which partnerships can allow for exchanges among researchers, practitioners, policymakers, and other stakeholders that generate interdisciplinary approaches to understanding problems, identifying solutions, and exploring the nature and use of research evidence for change (Palinkas et al., 2017).

Researchers studying research use have also recently turned their attention to the role of power in determining what counts as research and how it is used. The impact of race on research practice, for example, has become an area of interest for scholars studying research use. The field has begun to ask questions about whose research is being used, whether the research used inherently reinforces disparities or power dynamics, which researchers are informing the policy or practice conversation, and how power dynamics affect the organizational or relational aspects of research use (Doucet, 2019, 2020; Kirkland, 2019). Surfacing challenges related to equity in research production may be uncomfortable for the research community to address, but it is a vital step toward building and promoting research that helps bring about equitable change for communities. This is just one example of where the research on research use field has brought critical perspectives (in this case critical race theory) to bear on its work. For implementation researchers, incorporating such perspectives might begin with a fundamental challenge to the notion that the science of implementation is in itself neutral and value-free.

What can research on research use learn from implementation science?

Scholars who study research use remain fairly fragmented within their disciplines, making it challenging to continue to build and share knowledge broadly. We believe research on research use needs to develop a community with stronger connections among the professionals contributing to the field, much as implementation science has done so successfully.

For instance, there are regular national, regional, and international meetings where implementation science scholars come together, including the biennial Global Implementation Conference. There are also membership-based societies, including the Society for Implementation Research Collaboration (which also hosts a regular conference). These conferences and societies provide opportunities for scholars to build networks and connections and support the development of research careers. In contrast, the research on research use community has struggled to find spaces to come together, relying internationally on convenings organized by the William T. Grant Foundation (which were designed primarily for its own grantees). The recent development of the Transforming Evidence Network (Transforming Evidence, n.d.) provides one promising avenue for research on research use scholars to develop a greater sense of community and collective identity. Implementation science has further supported community building through training grants and formal training courses (from modules through master classes) that help researchers build careers as implementation scientists. Support for early career researchers and training are often on the agenda in discussions of research on research use, but concrete evidence of broader efforts (beyond individual research teams) to build and sustain the workforce is less common.

The visibility of implementation science has been further enhanced by its own high-profile, international open-access journal (Implementation Science), which attracts authors away from their discipline-based journals and is often the first-choice journal for scholars conducting implementation science research. While Evidence & Policy has a growing presence among research on research use scholars, many continue to publish elsewhere, lured away by convention, higher impact factors, and open-access publication. A recent analysis of publications on research use, conducted by the William T. Grant Foundation, shows that most articles continue to be published in discipline-specific journals, limiting knowledge sharing across fields and the development of a shared scientific identity (Sotolongo, 2022).

Another area of difference is the extent to which the quality of evidence features in implementation science. Implementation science has traditionally focused on evaluating strategies to implement evidence-based innovations, i.e., innovations that have been rigorously tested and shown to be effective in experimental studies. By contrast, research on research use has placed less emphasis on the quality of the evidence being implemented and more on measuring whether research is used at all. Given that it is widely acknowledged that not all research should be implemented, and that policy and practice change should be supported by a body of evidence rather than single studies, there is a lesson to be learned here from implementation science about considering the quality of the research that is potentially used in decision-making. This is not to suggest that certain methodological approaches, such as randomized controlled trials, should be considered a proxy for quality, but that deeper reflection on the quality of the research being implemented would enrich the study of research use in policy and practice.

Finally, implementation science scholars have typically presented key concepts and ideas in the form of models and frameworks. Nilsen (2015) identified five distinct categories of theories, models, and frameworks in implementation science: process models, determinant frameworks, classic theories, implementation theories, and evaluation frameworks. These visual representations have proved useful in communicating implementation concepts to new audiences and stand in marked contrast to the largely text-based presentation of research on research use. A clear opportunity exists for researchers of research use to consider non-text-based ways of communicating seemingly abstract and complex concepts to the field at large.

Conclusion

As the field of implementation science grapples with critical perspectives on the relevance, feasibility, equity, and impact of its methods, the field of research on the use of research evidence may provide models for how co-production and partnerships can advance equitable outcomes in systems and communities. At the same time, the research use community can benefit from insights into how to build a strong community, how to reflect on the quality of research evidence, and how to share learning in ways that move beyond text-based approaches.

Our hope is that scholars in both fields will begin to collaborate to share learnings and even jointly conceptualize research. Funders of both fields may find knowledge from their counterparts that can enhance the research that is conducted and, ultimately, used. For its part, the William T. Grant Foundation has reached out to implementation scientists interested in applying perspectives from research on research use to understand how and under what conditions research is used, and how organizations or systems can be supported to improve research use. Funders of implementation science may similarly begin to support research that asks related questions and draws on methods from research on research use to address needs in the implementation science field.

The implementation science field emerged from the evidence-based program movement and a desire to disseminate and scale manualized evidence-based programs. While the field has broadened over the last decade, the origins of the movement still drive much of its methodology and theory. Research on the use of research ran parallel to this movement but began with a broader conception of what research counts for use and how use is defined. As we travel many of the same roads, even if we approach questions from different perspectives, the fields are heading toward the same destinations in terms of outcomes, and so we might consider traveling together more often.

