There is now a robust body of knowledge that can help us understand the nature and conditions of research use, as well as its impact on practice and policy. Given the array of theories and problems that this work examines, it is not surprising that studies of the use of research evidence use a broad range of established research methods to support inferences and hypotheses.
Common methods in the field include surveys, interviews, observations, document analyses, social network analyses, analyses of administrative data, and experiments. Each of these helps to illuminate some characteristics of research use, but none allows us to see the entire picture.
As Moss & Haertel (2016) note, different research methods offer different perspectives on a given question, and while we should consider the usefulness of a methodology, we must also be aware of the “partialness of the perspectives” of any one method. They write, “both the methods we know and the methodologies that ground our understandings of disciplined inquiry shape the ways we frame problems, seek explanations, imagine solutions, and even perceive the world” (p. 127). With these considerations in mind, we are attempting to understand the interplay of methods and research questions in the field of studies on the use of research evidence.
Traditional measurement conceptions such as validity, reliability, and generalizability do not readily characterize the appropriateness of methods in this field. Instead, we find the concept of illumination more germane. Specifically, to what extent does a study illuminate some insight about the use of research evidence, and to what extent do the research methods and methodological quality contribute to that illumination?
The common thread characterizing studies that illuminate is alignment of theory, purpose, and method(s). Illuminating studies begin with an explicit theory of research use. These theories can be relatively general and abstract, such as Gibbons’s (2008) theory of knowledge transfer, Weiss and Bucuvalas’s (1980) theory of social science research use, and Wenger’s (2008) theory of communities of practice, all of which have been used to guide studies of the use of research evidence (URE). They can also be more specific, developed through the study of particular settings of research use. In many cases, such as in Honig and Coburn’s (2008) work reviewing studies of evidence-based decision making in school district offices, the researchers bring together overarching theories with deep understanding of particular contexts of use to build frameworks that clarify the agents, relationships, processes, substance, and outcomes of research use. For example, Spillane and Jennings (1997) observed nine classrooms to understand how teachers actually implemented research-based district policies to improve literacy instruction.
Aligned methods logically follow from theories of research use for particular purposes. When researchers are first attempting to bound and understand the nature and processes of research use within particular contexts, primary methods tend to be those that provide rich descriptive evidence of how research use is related to policy and practice. As understanding develops and methods are established to reliably identify instances of such use, research questions can shift to the impact of the practices and policies that research use shapes, including their targeted outcomes; such questions call for methodologies that can estimate effects on those outcomes.
Studies of how research is used often focus on intermediate mechanisms such as the creation of legislation or policy. In these instances, the outcomes of interest are not the results of policy change, but rather the process through which individuals use research to develop policy in the first place. These studies use methods such as interviews, document analysis, and social network analysis to illuminate how research use, relationships, and processes shape intermediate outcomes.
For example, McDonnell and Weatherford (2013) were interested in how research and other evidence were used in the process of creating and enacting the Common Core State Standards in education. Using document review and interviews of relevant stakeholders, the researchers developed a case study to understand research and evidence use influences on the various stages of policymaking. Through a similar case study approach, Mosley and Courtney (2012) drew on document review and interviews to explore the use of research evidence in the development and implementation of child welfare legislation that provided additional services even when the state was faced with the need to make severe budget cuts.
To understand how research influenced policy and practice in the area of children’s mental health in response to a court-ordered remediation plan, Leslie, Maciolek, Biebel, Debordes-Jackson, and Nicholson (2014) conducted a comprehensive document analysis as part of a multi-method approach that also included surveys and interviews. Their review, which included court documents, governmental agency progress reports, and presentations to stakeholders, enabled the team to describe the timeline for policy planning and program implementation, as well as to identify specific instances of policy development that were influenced by research.
Studies that examine the impact of research use often focus on outcomes targeted by the policies and practices influenced by research. Methods that are appropriate for these kinds of studies allow for generalized causal inference through experimental and quasi-experimental approaches. These studies are designed to support inferences about whether specific policies, practices, or features of institutions and systems are related to desired social outcomes.
For example, research teams led by Meredith Matone and Peter Gierlach (2017) as well as David Rubin and Zachary Meisel (2017) are studying whether the way that public health research is communicated relates to the prescription behavior of medical professionals. These studies test whether research findings packaged in the form of narratives and anecdotes are more effective in changing psychotropic drug prescription behavior than traditional communications that relay research findings directly without narrative context. Similarly, Wulczyn and colleagues (2015) articulated a theory of how child welfare agencies make clinical and administrative foster care decisions by examining characteristics of, and interactions among, staff, agencies, and operating context. They tested the hypothesis that personal characteristics and reported use of research evidence lead to better outcomes—in this case, more permanent foster care placements for children. Methodologically, they used interviews to develop an understanding of the nature of interactions and a multi-level model that drew upon survey data to predict outcomes for children who are overseen by particular agencies.
In these studies, researchers operationalized the use of research evidence in specific terms to estimate the relationships of research use to key outcomes. Because such studies are designed to rigorously address particular questions of impact, they provide, by themselves, fewer insights into the details of research use. They do, however, show that research use can affect outcomes. For instance, Wulczyn’s team (2015) found that higher levels of self-reported direct research use were positively associated with permanent foster care placements for children, whereas Matone and Gierlach (2017) found that doctors with access only to direct research were likely to over-prescribe medication.
The relationship or positionality of the researcher to practitioners also has implications for theories, methods, and questions. Research-Practice Partnerships (RPPs), for example, have emerged as potentially fruitful structures for better integrating research production and use. Penuel, Allen, Coburn, and Farrell (2015) contrast theories of research translation with cultural-historical theories of learning across boundaries defined by roles in partnerships. This type of theory calls for methods that explore how researchers and practitioners work together, and focuses on border crossing, or situations where researchers and practitioners alter their normal roles in order to develop relationships with peers in a complementary profession. For example, Penuel and colleagues conducted a set of interviews with school district leaders to understand both the characteristics of research that district leaders find useful and the district work practices in which research is used.
In order to improve the use of research in policy and practice, we must first understand how to create conditions that foster such use. As highlighted in the examples above, our findings thus far suggest that researchers who align theory, methods, and measures are able to successfully navigate the amorphous relationships and processes involved and shed light on the most important dimensions of the use of research evidence.
We hope that by clarifying how measures of research use currently shape the field, our work can help lay the groundwork to move the field forward.
Gibbons, M. (2008). Why is knowledge translation important? Grounding the conversation (FOCUS Technical Brief No. 21). Austin, TX: SEDL, National Center for the Dissemination of Disability Research. Retrieved from http://ktdrr.org/ktlibrary/articles_pubs/ncddrwork/focus/focus21/Focus21.pdf
Honig, M. I., & Coburn, C. (2008). Evidence-based decision making in school district central offices: Toward a policy and research agenda. Educational Policy, 22(4), 578–608. https://doi.org/10.1177/0895904807307067
Leslie, L. K., Maciolek, S., Biebel, K., Debordes-Jackson, G., & Nicholson, J. (2014). Exploring knowledge exchange at the research–policy–practice interface in children’s behavioral health services. Administration and Policy in Mental Health and Mental Health Services Research, 41(6), 822–834. https://doi.org/10.1007/s10488-014-0535-7
McDonnell, L. M., & Weatherford, M. S. (2013). Evidence use and the Common Core State Standards movement: From problem definition to policy adoption. American Journal of Education, 120(1), 1–25. https://doi.org/10.1086/673163
Mosley, J. E., & Courtney, M. E. (2012). Partnership and the politics of care: Advocates’ role in passing and implementing California’s law to extend foster care. Chicago, IL: Chapin Hall at the University of Chicago. Retrieved from http://www.chapinhall.org/research/report/partnership-and-politics-care-advocates%E2%80%99-role-passing-and-implementing-california%E2%80%99s
Moss, P. A., & Haertel, E. H. (2016). Engaging methodological pluralism. In Gitomer, D. H. & Bell, C. A. (Eds.), Handbook of research on teaching (5th ed., pp. 127–247). Washington, DC: American Educational Research Association.
Penuel, W. R., Allen, A.-R., Coburn, C. E., & Farrell, C. (2015). Conceptualizing Research–Practice Partnerships as joint work at boundaries. Journal of Education for Students Placed at Risk (JESPAR), 20(1–2), 182–197. https://doi.org/10.1080/10824669.2014.988334
Spillane, J. P., & Jennings, N. E. (1997). Aligned instructional policy and ambitious pedagogy: Exploring instructional reform from the classroom perspective. Teachers College Record, 98(3), 449–481.
Weiss, C. H., & Bucuvalas, M. J. (1980). Social science research and decision-making. New York, NY: Columbia University Press.
Wenger, E. (2008). Communities of practice: Learning, meaning, and identity (18th printing). Cambridge, United Kingdom: Cambridge University Press.
Wulczyn, F., Alpert, L., Monahan-Price, K., Huhr, S., Palinkas, L. A., & Pinsoneault, L. (2015). Research evidence use in the child welfare system. Child Welfare, 94(2), 141–165.