
Conceptual Use of Research Evidence May Be More Common Than You Think

In a recent interview we conducted with a congressional aide, she remarked that she was often inundated with research when working on a new policy issue. The aide’s primary goal in seeking out research evidence was to bring a new perspective to how her team was thinking through a given policy issue. This type of research use, wherein the research is a source of ideas, information, and orientations, is frequently defined as conceptual use of research.

The interview was part of a study that assessed the scope, type, context, and timing of the use of research evidence by federal policymakers (legislators and their staff) engaged in producing policy solutions to the childhood obesity epidemic from 2000 to 2014. Overall, we observed conceptual use in 68% of all instances of research evidence use (N = 6,684). By comparison, instrumental use (the direct application of specific research findings to specific decisions) and tactical use of research evidence (i.e., use of research evidence to justify preexisting preferences and actions) were far less common in our data (13% and 12% of all research evidence use, respectively).

The study involved systematic content analysis of a comprehensive set of publicly available congressional documents (N = 786 texts of bills, committee hearings, and floor debates) as well as key-informant interviews with a diverse group of 12 congressional staffers. Our analysis found that congressional policymakers routinely considered a mix of research and non-research evidence in formulating and adopting childhood obesity-related policies, and that they routinely used both types of evidence conceptually, to inform an understanding of problems, consequences, and potential remedies. This finding is consistent with other research examining policymakers’ evidence use routines, including those of state policymakers, and runs counter to the expectation that policymakers make instrumental use of research evidence.

In their blog post on the topic, Caitlin Farrell and Cynthia Coburn explain why conceptual use of research is important in guiding policy and practice. Here, we reflect on how we define, measure, and facilitate conceptual use of research evidence in public policymaking based on our longitudinal investigation of federal policymakers’ evidence use routines.

Defining Conceptual Use

The popular conception of conceptual use draws on Carol Weiss’s notion of enlightenment, or use of research evidence as a source of ideas, information, and orientations. While not explicitly stated in writings on this topic, this conception of conceptual use is congruent with the logic of schema theory. A schema is a cognitive framework that helps us process, organize, and interpret information about a particular person, object, behavior, or issue. As a sense-making mechanism, schemas bridge what we know or learn from information (research evidence included) and what we believe to be true, and in turn influence the judgments and conclusions we draw from that information. In that sense, schemas act as filters: new information or ideas are accommodated if they fit our existing schema, but rejected or contested if they don’t.

This helps explain why conceptual use of research evidence is both more prevalent and more consequential in the public policymaking context. Policy decision making is complex: it requires policymakers to consider the needs, interests, and inputs of various stakeholders, some more powerful than others, and to engineer policies that provide effective remedies to complex problems. To navigate the ambiguity inherent in this process, policymakers rely on their schemas to interpret information, make judgments, and draw conclusions about policies. Research-based information or insights are therefore most consequential when they influence how policymakers think about an issue, and have less power to influence what they think.


While credible, this cognitive conception of conceptual use largely overlooks what Kim DuMont has referred to as the social side of evidence use. That is, policy is not simply an outcome of individual decisions. It involves a great deal of information exchange, deliberation, and bargaining among policymakers before agreement on policy can be achieved. For this reason, research evidence is frequently used to influence other policymakers’ understanding of problems, feasible solutions, and barriers or facilitators to the implementation of considered solutions. The findings of our content analysis, coupled with the insights we gained from the key-informant interviews with policymakers, corroborate this. For example, 73% of all observed instances of conceptual use in our data (N = 4,545 instances) involved an attempt to persuade other policymakers. In addition, virtually all of the policymakers we interviewed reported that the persuasive potential of research evidence (e.g., “to win an argument”) guides their selection and use of research evidence. This could be considered a variation on what Carol Weiss referred to as truth tests and utility tests: truth tests assess whether research evidence is believable and accurate; utility tests assess the relevance of research evidence to one’s interests or political agenda. Thus, we are proposing that conceptual use of research evidence ought to be defined as use of research evidence for the purpose of informing one’s own perspective and understanding of an issue as well as others’ perspective and understanding.

Measuring Conceptual Use

This more complete definition of conceptual use has implications for the observation and measurement of conceptual use. William Penuel and Anna-Ruth Allen have proposed three approaches for detecting conceptual use of research evidence: 1) identifying uptake of big ideas through interviews and policy documents, 2) observing conceptual use in meetings, and 3) conducting surveys of conceptual use. These are useful tools for assessing the cognitive dimension of conceptual use, but they are not designed to also capture the social side of conceptual use. Our own measure of conceptual use, which is grounded in persuasion and argumentation theories, tracks and analyzes how policymakers communicate with and about research evidence when making or responding to policy arguments. What evidence policymakers choose to communicate and the way in which they communicate about it—whether in documents, public statements, or interactions with other policy actors—provide a window into their underlying schemas and reveal their purpose or goal for using research evidence. In particular, how policymakers incorporate research evidence into policy arguments can be quite telling. For example, when research evidence is presented first, followed by interpretation and/or conclusion, we can infer conceptual use; when a conclusion is stated first and research evidence that supports this conclusion is provided next, we suspect tactical use of research evidence.
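To make the ordering heuristic concrete, here is a minimal sketch of how it could be operationalized in code. This is an illustrative toy, not the authors’ actual coding instrument: the segment labels ("evidence", "conclusion") and the function name `infer_use_type` are hypothetical, and in practice the labels would come from trained human coders applying a content-analysis protocol.

```python
# Toy sketch of the ordering heuristic described above: if research
# evidence appears before the conclusion within a policy argument,
# infer conceptual use; if the conclusion comes first, suspect
# tactical use. Labels and names here are hypothetical.
from typing import List, Tuple


def infer_use_type(segments: List[Tuple[str, str]]) -> str:
    """Classify one policy argument from ordered (label, text) segments."""
    order = [label for label, _ in segments
             if label in ("evidence", "conclusion")]
    if "evidence" in order and "conclusion" in order:
        if order.index("evidence") < order.index("conclusion"):
            return "conceptual (evidence precedes conclusion)"
        return "tactical (conclusion precedes evidence)"
    return "indeterminate"


# Hypothetical coded argument: evidence stated first, then a conclusion.
argument = [
    ("evidence", "Recent studies report rising childhood obesity rates."),
    ("conclusion", "Therefore, school nutrition standards need review."),
]
print(infer_use_type(argument))  # conceptual (evidence precedes conclusion)
```

A real implementation would sit downstream of the human coding step; the point of the sketch is only that the ordering of coded segments, not their content, drives the conceptual-versus-tactical inference.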

Influencing Conceptual Use

If conceptual use of research is a potentially powerful way to inform policy, what strategies and conditions can encourage conceptual use? Our study offers several important clues in this regard. First, conceptual use can be stimulated by weaving research evidence into cogent, persuasive policy arguments. Such arguments may be about the causes, consequences, and/or potential remedies to public problems, but may also be about the relative advantages and risks of possible policy solutions, as policymakers’ schemas and political interactions with one another revolve around these aspects. Storytelling devices such as metaphors, analogies, and narratives (e.g., news stories, data visualizations, and documentaries) can be particularly useful in this regard because they help policymakers to interpret and contextualize research evidence relative to their existing schemas.

Second, whereas strategies and tools for communicating more effectively about research evidence are available from the field of science communication, encouraging conceptual use of research evidence in public policymaking processes also requires careful tailoring of research evidence to policymakers’ needs and preferences. For example, we found through interviews that junior staff serving individual legislators have busy schedules, work under tight deadlines, and have limited capacity to interpret research. As a result, they are more responsive to short research briefs written in plain language that clearly state the relevance to policy and/or the implications for constituents and key stakeholders. In contrast, congressional committee staff are often experienced users of research, possess expert knowledge, and are familiar with all major policy stakeholders. Thus, they find in-depth policy analyses and scientific consensus reports most useful, but they also rely heavily on reputable and recognized experts.

Lastly, conceptual use of research evidence tends to emerge when policymakers have an opportunity to convene and debate policy (e.g., during committee hearings). A social network analysis of our data revealed that the most effective knowledge sharing occurs when staffers and legislators interact in medium-sized groups; groups of one or two legislators are relatively ineffective, as are larger clusters of eight or more. Small groups of five to seven legislators and staffers are ideal for engaging policymakers in research evidence-focused interactions that stimulate conceptual use. Thus, introducing research evidence in small group gatherings such as briefings and seminars may be optimal for encouraging conceptual use and learning.


A growing body of research highlights the importance of conceptual use of research evidence in public policymaking processes. A more complete understanding of the role conceptual use plays in this context requires firmer grounding in theories of information processing and sense-making (such as schema theory), as well as measurement that is sensitive to both the individual (cognitive) and social (transactional) dimensions of conceptual use, both of which can be captured empirically from the way policymakers communicate with and about research evidence. Successful efforts to encourage conceptual use of research evidence may draw on a range of established strategic communication approaches (e.g., persuasion, argumentation, and framing), as well as on creating opportunities for policymakers to exchange and debate policy-relevant research evidence.
