The Push and Pull of Research: Lessons from a Multi-site Study of Research Use in Education Policy

Since Carol Weiss advanced our collective thinking on how research is or is not used in policymaking processes, scholars have devoted considerable attention to the factors that predict research use. Many have focused on the types of research evidence taken up in policy discussions, the conditions in which research is used, the intermediary organizations or factors that may facilitate or impede the “brokering” of research to policymakers, and the various ways that policymakers may then utilize that research.

Given what we already know, what steps can the field take to develop our understanding of how to collectively bring about optimal circumstances for effective uses of research?

Pushing and pulling evidence to policy

Our work to this point has involved a multi-year analysis of the role of intermediary organizations (IOs) in influencing research use in places like New York, Denver, and New Orleans, as well as at the federal level. We centered our inquiry on “incentivist” education reforms: policies such as charter schools or teacher merit pay, which seek to leverage competitive incentives to shape individual or organizational behavior.

We conducted scores of interviews with individuals from organizations that play a brokering role between research producers and users (policymakers), including advocacy organizations, think tanks, and media outlets. We were particularly interested in how IOs conveyed research evidence that would illuminate policy discussions, and we focused specifically on the “push” and “pull” factors that would facilitate the movement of that information.

“Pull” factors relate to the demand for information among policymakers as they frame problems or consider potential solutions: commissioning reports, seeking out studies, or simply staying abreast of research news on incentivist policies. “Push” factors include an organization’s acumen and propensity for promoting particular research, the place of individual brokers within social and policy networks, and even the nature of the evidence itself: whether the research is rigorous, accessible, and easy to understand, and whether it is associated with a known institutional “brand.”

For a number of reasons, we expected a noticeable paucity of “pull” dynamics. Education research is often ignored, since the terrain is already fraught with ideological assumptions (especially in a controversial area such as incentivist reforms). Indeed, policymakers often draw on their own experiences, anecdotes, or perceptions of “common sense” in addressing education policy questions. Still, we anticipated some degree of research uptake by policymakers—if not in an instrumental manner (i.e., specific decisions being based on research evidence), then in more conceptual ways (e.g., research on charter school outcomes generally illuminating discussions of possible policy approaches to the expansion of charter schools).

But is it actually being used?

Perhaps most interesting was the degree to which research played virtually no part in policymakers’ decision making, despite their frequent rhetorical embrace of the value of research. While many interviewees spoke of the importance of research evidence, nearly all were unable to point to an instance where that evidence had shaped their position on an issue in an instrumental way.

Such findings, of course, offer some contrast with other studies on the conceptual use of research, which tend to highlight instances in which specific research can shape the thinking within particular organizations.

There are different ways to understand this contrast between findings. Certainly, the types of policy issues at hand may be an important factor. Looking at questions of classroom practice, for instance, where there may be rigorous and insightful studies about effective alternative approaches, is different from examining controversial topics where ideological assumptions may pre-empt the demand for research evidence to illuminate an issue. Similarly, the way studies are designed may explain some of the differences in findings. Tracking an influential piece of research as it shapes behaviors in an organization will likely produce different results than examining the basis for a more nebulous policy agenda in a broader context. Indeed, a discernible organization, such as a district, may devote specific resources to the collection and consideration of research. At the same time, the multiple (often overlapping and competing) organizations in a broader context, such as a metropolitan area or a state, often lack a singular, discernible entity devoted to the collection, interpretation, and application of evidence, so it may be harder to track potential instances of research use.

Lessons learned

Even though research evidence did not appear to play a key role for policymakers in our study, there was a remarkable amount of “pushing” of research to policymakers by IOs. This raises the question of why so much time and so many resources are devoted to getting evidence to policymakers when they don’t seem to “use” it in shaping policy positions. With this in mind, here are a few observations that may be useful as researchers think about next steps toward creating the optimal conditions for research to be used:

  • Regardless of whether any specific study gets “taken up” by a policymaker, an IO will collect and push research to policymakers to the extent that it aligns with the IO’s agenda. Thus, although policymakers typically cite no single study, they still form an overall impression or worldview about an issue or policy direction based on the findings they receive from the IOs that have their ear. (Unfortunately, the skewed collection and presentation of research can create an “echo chamber” in which IOs and their policymaker contacts see only evidence that supports their preconceptions.)
  • Policymakers often reported an appreciation for “research” as an abstract idea, but when asked to name their sources of research evidence, they pointed to popular media, blogs, personal contacts, or social networks, channels that often conveyed research of questionable quality. (Thus, they didn’t do much “research” on research.) Rather than citing traditional markers of rigor, such as study design or peer review, they often mentioned the institution or individual that produced or conveyed the research (particularly if prestigious) as a sign of assumed quality. In that regard, research is something of a positional good that reflects on the status of the consumer.
  • The types of research matter. University-based research was often ignored (except by union-affiliated IOs) because academics writing for peer-reviewed journals were said to produce studies that offer no clear conclusions, use arcane language, and are rarely timely. IOs are better at translating and presenting research, even research of poor quality, in easily understandable and digestible formats. Especially when findings conflict, or when the evidence takes the form of inaccessible statistical models, policymakers often simply turn to a trusted IO to collect and translate the findings for them.
  • Likewise, the types of “consumers” matter. While we tend to think of “policymakers” (potential “users” of research evidence) as operating in the public sector, in our research some high-capacity IOs administered federal initiatives (such as the TIF in New Orleans) and thus blurred the boundaries between public and private policy realms. Yet each realm has different imperatives and incentives for using research evidence in making decisions.

We see a scenario with largely inert “consumers” of research but very active IOs brokering, or selling, particular versions of research evidence to them. This is not always the case, though, and we suspect that the controversial nature of the policies we studied allowed policymakers to look to trusted contacts and organizations to shore up their pre-existing perspectives. What we can say is that when research is used, it is generally used to define a problem (say, the failure of U.S. schools) whose solution is seemingly self-evident. The research to evaluate alternatives, then, is unimportant.