Since Carol Weiss advanced our collective thinking on how research is or is not used in policymaking processes, scholars have devoted considerable attention to the factors that predict research use. Many have focused on the types of research evidence taken up in policy discussions, the conditions in which research is used, the intermediary organizations or factors that may facilitate or impede the “brokering” of research to policymakers, and the various ways that policymakers may then utilize that research.
Given what we already know, what steps can the field take to develop our understanding of how to collectively bring about optimal circumstances for effective uses of research?
Our work to this point has involved a multi-year analysis of the role of intermediary organizations (IOs) in influencing research use in places like New York, Denver, and New Orleans, as well as at the federal level. We centered our inquiry on “incentivist” education reforms: policies such as charter schools or teacher merit-pay, which seek to leverage competitive incentives in order to shape individual or organizational behavior.
We conducted scores of interviews with individuals from organizations that play a brokering role between research producers and users (policymakers), including advocacy organizations, think tanks, and media outlets. We were particularly interested in how IOs conveyed research evidence that would illuminate policy discussions, and we focused specifically on the “push” and “pull” factors that would facilitate the movement of that information.
“Pull” factors relate to the demand for information among policymakers as they frame problems or consider potential solutions—things like commissioning reports, seeking out studies, or simply staying abreast of research news on incentivist policies. “Push” factors include an organization’s acumen and propensity for promoting particular research, the place of individual brokers within social and policy networks, or even the nature of the evidence in question—i.e., whether the given research is rigorous, accessible, and easy to understand, and whether it is associated with a known institutional “brand.”
For a number of reasons, we expected to find a noticeable paucity of “pull” dynamics at play. Education research is often ignored, since the terrain is already fraught with ideological assumptions (especially in a controversial area such as incentivist reforms). Indeed, policymakers often draw on their own experiences, anecdotes, or perceptions of “common sense” in addressing education policy questions. Still, we anticipated some degree of research uptake by policymakers—if not in an instrumental manner (i.e., specific decisions being based on research evidence), then in more conceptual ways (e.g., research on charter school outcomes generally illuminating discussions of possible policy approaches to the expansion of charter schools).
But what was perhaps most interesting was the degree to which research played virtually no part in decision making for policymakers, despite their frequent rhetorical embrace of the value of research. While many interviewees spoke of the importance of research evidence, nearly all were unable to point to an instance where research evidence had instrumentally shaped their position on a specific issue.
Such findings, of course, offer some contrast with other studies on the conceptual use of research, which tend to highlight instances in which specific research can shape the thinking within particular organizations.
There are different ways to understand this contrast between findings. Certainly, the types of policy issues at hand may be an important factor. Looking at questions of classroom practice, for instance, where there may be rigorous and insightful studies about effective alternative approaches, is different from examining controversial topics where ideological assumptions may pre-empt the demand for research evidence to illuminate an issue. Similarly, the way studies are designed may explain some of the different findings. Tracking an influential piece of research as it shapes behaviors in an organization will likely produce different results than examining the basis for a more nebulous policy agenda in a broader context. Indeed, a discernible organization, such as a district, may devote specific resources to the collection and consideration of research. At the same time, the multiple (often overlapping and competing) organizations in a broader context, such as a metropolitan area or a state, often lack a singular, discernible entity devoted to the collection, interpretation, and application of evidence, so it may be harder to track potential instances of research use.
Even though research evidence did not appear to play a key role for policymakers in our study, there was a remarkable amount of “pushing” of research to policymakers by IOs. This raises an interesting question: why devote so much time and so many resources to getting evidence to policymakers when they don’t seem to “use” it in shaping policy positions? With this in mind, here are a few observations that may be useful as researchers start thinking about the next steps toward creating the optimal conditions for research to be used:
We see a scenario with largely inert “consumers” of research but very active IOs brokering or selling particular versions of research evidence to them. This is not always the case, though, and we suspect that the controversial nature of the policies we focused on allowed policymakers to look to trusted contacts and organizations to shore up their pre-existing perspectives. What we can say, though, is that when research is used, it is generally used to define a problem (say, the failure of U.S. schools) whose solution is seemingly self-evident. The research needed to evaluate alternatives, then, goes unexamined.