Research on the use of research in policy and practice has grown rapidly in recent years across a broad range of academic disciplines and professional fields. As a result, we now know more than ever about what research use looks like and how we can promote it among policymakers and practitioners.
At the same time, further progress on this front is impeded by the persistent challenge of studying and measuring the use of research evidence. Elizabeth Farley-Ripple has argued that research use remains elusive as a target of measurement largely because it is not well defined or fully connected with actual practice. In particular, research use appears to be a multi-dimensional construct that involves a sequence of actions (e.g., searching, filtering, and interpreting research), different levels of use (specific vs. general), and different decision-making contexts (e.g., individual vs. group decisions). This, she suggests, makes it difficult to employ a standard measure of research evidence use that would allow the findings of studies on this topic to be compared and synthesized.
Drew Gitomer and colleagues have stated that standardizing the methods and measures for studying the use of research evidence makes little sense, since different methods and measures are needed to capture research evidence use across different actors, settings, and decision-making processes. Taken together, they argue, the scientific research on this topic can illuminate our understanding of the processes and contexts of research use, how research is used to inform decisions, the impact of research use, and the structures and relationships that influence research use. It is clear, however, that if we are to influence the conditions that improve the use of research in policy and practice, we must find a way to connect these pieces together.
Think “Use,” Not “Evidence”
I’d like to propose that a crucial first step in this direction is to recognize that research use is largely under-conceptualized and under-operationalized in current research on this topic. The primary reason is that use is typically measured by tracking the movement of evidence from research producers to users. That is, we infer research use when research evidence appears in the records and documents users create, or in the accounts they provide.
Such a conception and operationalization of research use has a number of important limitations.

First, it tends to reproduce an artificial dichotomy of use vs. non-use, whereas use is more appropriately measured on a continuum, for example, as a function of dimensions of engagement with research evidence (e.g., systematic, critical, deliberate, generalized, habitual), much as we rank research evidence as a function of rigor and consensus among scientists.

Second, it imposes a normative expectation about what counts as use, one derived from the norms and practices of research producers but often divorced from the real-world constraints on using research evidence, such as its feasibility, acceptability, and perceived utility. Put differently, research use is determined by the needs, capabilities, and circumstances of users, not by the characteristics and availability of research evidence.

Third, any measure of use limited to research evidence necessarily excludes user-generated evidence that may be equally or more consequential to our understanding of users’ decisions and actions. This is particularly true of experiential evidence: evidence based on the professional insights, understanding, skills, and expertise that users accumulate over time and that forms their tacit knowledge.

Lastly, it seems intuitive to align the measurement of research use with what users actually do with research evidence. In many policy and practice fields, for example, use of research evidence is not limited to informing the thoughts and actions of individuals. It is also frequently shared or exchanged with others, whether to make them aware of problems, to persuade or influence their thoughts and actions, or to negotiate solutions. Capturing and representing this “social life of research evidence” requires measures of use that are far more dynamic than the ones commonly employed.
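To make the first limitation concrete, here is a minimal sketch of what a continuum-style measure might look like, in contrast to a use/non-use dichotomy. The dimension names are taken from the examples above, but the rating scale, equal weighting, and scoring rule are my own assumptions for illustration, not an established instrument.

```python
# Hypothetical illustration: scoring research use on a continuum rather than
# as a use / non-use dichotomy. The 0-4 scale and equal weights are
# assumptions for illustration, not a validated instrument.
from dataclasses import dataclass

@dataclass
class EngagementRating:
    """Ratings (0-4) on hypothetical dimensions of engagement with evidence."""
    systematic: int
    critical: int
    deliberate: int
    habitual: int

def use_score(r: EngagementRating) -> float:
    """Average the dimension ratings into a single 0-1 continuum score."""
    dims = [r.systematic, r.critical, r.deliberate, r.habitual]
    return sum(dims) / (4 * len(dims))

# A dichotomous measure would call both of these "use"; a continuum
# distinguishes shallow from deep engagement.
shallow = EngagementRating(systematic=1, critical=0, deliberate=1, habitual=2)
deep = EngagementRating(systematic=4, critical=3, deliberate=4, habitual=3)
print(use_score(shallow))  # 0.25
print(use_score(deep))     # 0.875
```

The point of the sketch is only that a graded score preserves information a binary used/not-used coding throws away, much as evidence hierarchies grade rigor rather than labeling studies rigorous or not.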
Evidence Use is Best Measured as Behavior
If the use of research evidence is inherently about what users do with evidence, then it makes sense to study it as a behavior. The most obvious advantage of doing so is that we already have valid and reliable tools (theories, models, frameworks, methods, and measures) for analyzing behavior in a systematic way. These can be readily adapted to create measures of the who, what, why, how, when, and where of evidence use that are comparable across different actors, settings, and circumstances, as well as over time.
Beyond offering potentially effective solutions to core measurement challenges, a behavioral approach to research evidence use has considerable synergistic power as a frame of reference for connecting and organizing the results of different studies, and for decoupling evidence use from both the factors that influence it and the outcomes it produces. In essence, all human behaviors can be predicted from the combination of three elements: the capacity, motivation, and opportunity to act. Capacity is generally defined as the individual’s psychological and physical capacity to engage in the activity concerned; it is a function of having the necessary knowledge and skills, but also the tools needed to perform the behavior. Motivation is defined as the cognitive and affective processes that energize and guide a person’s behavior; it is a function of held attitudes, perceptions, and emotions regarding the enactment of a specific behavior, but can also be induced externally through incentives and disincentives. Opportunity is defined as the objective factors in a person’s environment (physical, legal, economic, social, and cultural) that enable or impede the enactment of the behavior. This framework can be productively employed to organize and synthesize the major findings of studies that examine use of research evidence, as I recently did with regard to data-informed decision making in educational settings. It can be equally useful for guiding the development, implementation, and evaluation of programs and interventions to improve the use of research evidence, for example, capacity-building interventions.
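The three-element framing above can be sketched as a toy predictive model. The multiplicative scoring rule, the 0-1 scales, and the field names are my assumptions, not a standardized measure; the sketch simply encodes the intuition that a deficit in any one element suppresses the predicted behavior, and that diagnosing the weakest element points an intervention (e.g., capacity building) at the right target.

```python
# Illustrative sketch of the capacity-motivation-opportunity framing.
# The multiplicative rule and 0-1 scales are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    capacity: float     # knowledge, skills, and tools (0-1)
    motivation: float   # attitudes, perceptions, incentives (0-1)
    opportunity: float  # enabling environmental factors (0-1)

def predicted_use(p: BehaviorProfile) -> float:
    """Multiplicative model: if any element is absent, predicted use is zero."""
    return p.capacity * p.motivation * p.opportunity

def weakest_element(p: BehaviorProfile) -> str:
    """Name the element a supporting intervention should target first."""
    scores = {"capacity": p.capacity,
              "motivation": p.motivation,
              "opportunity": p.opportunity}
    return min(scores, key=scores.get)

# A skilled, willing team with little opportunity to act:
team = BehaviorProfile(capacity=0.9, motivation=0.8, opportunity=0.2)
print(round(predicted_use(team), 3))  # 0.144
print(weakest_element(team))          # opportunity
```

A multiplicative rather than additive rule is one way to express that the three elements are jointly necessary: high capacity and motivation cannot compensate for an environment that blocks the behavior entirely.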
Finally, a behavioral approach to the conceptualization and operationalization of research evidence use has significant potential to facilitate a more complete and nuanced understanding of the mechanisms that underlie use of research evidence. These include cognitive mechanisms (e.g., information processing and learning), social or relational mechanisms (e.g., diffusion, contagion, and social influence), and structural mechanisms (e.g., the institutionalization of evidence use in policies, procedures, professional norms, and interactions with the external environment), as well as potential interactions among these mechanisms. The insights generated from studying these mechanisms are already informing the development of decision and behavior support systems that accelerate the transfer of research-based knowledge into practice.
The frameworks and tools of behavioral science have significant potential to overcome persistent challenges regarding the measurement, tracking, and analysis of research evidence use. The same frameworks and tools can be employed synergistically to connect and synthesize existing pools of scientific knowledge on the topic and develop effective interventions to improve research evidence use.