Because it can be hard to pin down exactly how someone’s ideas have shifted in response to research, it’s often difficult to see the conceptual uses of research.
But as part of a study of research-practice partnerships in three districts, and through our work developing measures of research use in the National Center for Research in Policy and Practice, we have identified three approaches that can help us see the uptake of research ideas in practice. Each approach relies on a different data source, including interviews, observations, and surveys.
From 2013 to 2015, we studied three research-practice partnerships to investigate research use in partnerships. We focused on two different research partners working in three districts. Each partnership was studying and working to improve middle-school math. Our data collection centered on interviews with district leaders, conducted twice in each year of the study. We also analyzed policy documents that district leaders produced during this time, and observed their meetings.
One way to understand the conceptual use of research is to identify and track the uptake of “big ideas” from research by district leaders. This lets us see how research-based concepts play a role in district leaders’ thinking over time. We used this approach to see how key concepts from the partnership did or did not make their way into district leaders’ understanding. We began identifying the partnerships’ big ideas by drawing on field notes, policy documents, and other writings, as well as interviews with researchers.
Using interviews to analyze big ideas can illuminate the conceptual use of research in several ways. First, it can help us see what ideas are useful to leaders who span different departments and roles. Paying attention to the pervasiveness of concepts being taken up across role groups is important because ideas can be a means for coordinating the work of multiple people implementing district reforms. Second, we can look at whether the ideas that are taken up are consistent with what the researchers intended. This is important because local sensemaking often transforms the meaning of ideas from outside the system, and ideas may be taken up in ways that are not consistent with the way they are initially described. This incongruence may signal a potential problem in conceptual use.
In addition to looking for the uptake of big ideas in interviews, we can also look for evidence of research-based ideas in district documents and policies. Policy documents allow us to trace the ways research influences organizational practices. When people attribute ideas to particular research partners in interviews or policy documents, we can be more confident that the ideas weren’t just “in the air.”
A second strategy we used in our study of partnerships was observing conceptual use in meetings. We found many examples of leaders using ideas from research in their routine activities, such as deliberating about district policy, planning professional development for teachers, or discussing issues facing the district.
Figuring out how to identify these instances of conceptual use reliably took us a while. References to ideas from research were mostly brief, and they often varied in degree of specificity. Most examples we found didn’t resemble the kinds of deep discussions of books and articles researchers might associate with a graduate seminar. Rather, many took the form of statements like, “Research says we should pay attention to student discourse,” or “So-and-so’s framework says this.”
To help with this issue, we used a structured observation protocol to study references to research in naturally occurring activity. We developed the protocol by analyzing video recordings of meetings of district leaders, as well as meetings where district leaders and their research partners met together. To do so, we first needed to decide on a unit of analysis. We settled on specific time intervals as a way to test reliability, given that references to research happen less frequently in meetings where researchers are not present. We also had to figure out what would count as a “reference to research.” We decided that references to research needed to include explicit mentions of research, but could be either a reference to research in general (for example, “research says…”) or to a specific researcher’s work or publication.
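One way to check the reliability of interval-based coding like this is to have two coders judge the same intervals independently and compute a chance-corrected agreement statistic. The sketch below is illustrative, not the study’s actual protocol: the interval codes are invented, and Cohen’s kappa is one common choice of statistic, used here only as an example.

```python
# Illustrative sketch: two coders independently mark each time interval for
# whether it contains an explicit "reference to research" (1) or not (0),
# and we compute Cohen's kappa to correct raw agreement for chance.
# The data and coder labels below are hypothetical.

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length binary coding sequences."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of intervals where the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal rate of "1"s
    p_a = sum(coder_a) / n
    p_b = sum(coder_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Ten hypothetical meeting intervals coded by two raters
coder_a = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
coder_b = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(coder_a, coder_b), 2))  # prints 0.78
```

Because references to research are rare in many meetings, a chance-corrected statistic like this matters: two coders who both mark almost every interval “0” can show high raw agreement without actually agreeing on the references themselves.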
We took a broad view of what would count as research, and counted anything that the participants themselves named as research. This is a strategy a number of our colleagues have used to see research use in a new light. We also included tools that evidence suggested were research-based. While research-based programs and tools are not research findings, research perspectives and concepts are “designed into” the product. This is particularly important to consider if you want to see conceptual use, because ideas from research are often reflected in programs and are made explicit in things like teachers’ guides.
Once we identified references to research and the source of the research invoked, we analyzed the aspect of the research highlighted (findings, concepts, methods). Then, we analyzed how the research referenced figured in the deliberation, and whether the speaker was using the research to argue for a particular way of understanding an issue or solution (conceptual use), argue for a particular programmatic or policy action (instrumental use), or provide evidence for a decision already made (symbolic use). This is where the analysis of conceptual use really comes in: looking at how and when references to research introduce new ideas for discussion, a different way of looking at a problem, or a framework for guiding action.
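The coding scheme just described can be summarized as a small data structure. This sketch is a hypothetical rendering of the categories named above (the field names and the example reference are invented for illustration), not the study’s actual instrument.

```python
# Hypothetical sketch of the coding categories described in the text.
# The category definitions follow the article; the example is invented.
from dataclasses import dataclass

USE_TYPES = {
    "conceptual": "argues for a way of understanding an issue or solution",
    "instrumental": "argues for a particular programmatic or policy action",
    "symbolic": "provides evidence for a decision already made",
}

ASPECTS = {"findings", "concepts", "methods"}


@dataclass
class ResearchReference:
    quote: str      # what the speaker said
    source: str     # "general" (e.g., "research says...") or a named work
    aspect: str     # which aspect of the research is highlighted
    use_type: str   # how the reference figures in the deliberation

    def __post_init__(self):
        # Guard against codes outside the scheme
        if self.use_type not in USE_TYPES:
            raise ValueError(f"unknown use type: {self.use_type}")
        if self.aspect not in ASPECTS:
            raise ValueError(f"unknown aspect: {self.aspect}")


ref = ResearchReference(
    quote="Research says we should pay attention to student discourse",
    source="general",
    aspect="concepts",
    use_type="conceptual",
)
print(ref.use_type)  # prints: conceptual
```

Structuring each reference this way makes it straightforward to tally, for a given meeting, how often research was invoked conceptually versus instrumentally or symbolically.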
In contrast to the big ideas analysis, this kind of detailed coding of observations from meetings provides a more micro-level look at how ideas from research are invoked. It also focuses on how ideas are invoked and taken up in the interactive context of meetings. In this way, it offers a different lens than interviews, and allows us to see whether ideas that leaders say they use actually come up in deliberations. At the same time, a zoomed-in view of a single meeting doesn’t allow us to see how ideas develop across meetings or over time, the way that the interview analysis does, or that multiple observations over a long period would.
It is not always feasible to interview dozens of leaders or conduct hours of observations of district meetings, especially when one wants to compare conceptual use across multiple districts. Survey self-reports are a more efficient means to collect data from hundreds of individuals and districts.
We recently concluded a study of how 733 school and district leaders across the country use research. We followed a construct-centered approach to designing survey scales of different types of research use, including conceptual use. The items on this scale ask educational leaders to characterize how frequently research has expanded or changed their understanding of an issue, provided a common language or framework to guide policy, or brought attention to a new issue. Over a few iterations, we refined the items and calculated reliability. The internal consistency of the 6-item scale was good (α = .84).
We also included an open-ended item asking leaders to report on a piece of research that changed their ideas about an issue in the past year. We asked:
Think about a time when a piece of research you encountered changed your thinking or opinions about possible solutions to your district’s/school’s problems. What was that piece of research? Please provide as much information as you can about this piece of research so that we can locate it ourselves.
We then coded their responses by topic, source, and place of publication. This particular analysis lets us see not just what leaders value when it comes to conceptual use, but what they think “counts” as research. Research that leaders cite as useful isn’t limited to studies published in peer-reviewed journals, but at the same time isn’t only short pieces in practitioner journals.
While our survey allows us to take a broad view, it cannot show what conceptual use actually looks like in practice. Still, the portrait our study paints allows us to put our in-depth interview- and observation-based case studies in a larger context.
Our research has given us a good window into tradeoffs among different methods for studying conceptual use. Interviews allow us to trace big ideas over time and where they live within large central offices divided into different departments. Analysis of policy documents allows us to see how research-based ideas are invoked and how they become part of policies. Observations of meetings also allow us to see how and when ideas from research are invoked, as well as whether these ideas change or do not change the conversation about big issues and how to solve them. Survey data allow us to get a broad view of how research is used conceptually among a large sample of district leaders.
If we want to see the conceptual use of research, we will need to follow the ideas that get brought up in one context and trace their movement across meetings, documents, and one-on-one interactions. While it may be difficult to see the conceptual use of research, using a variety of methods and approaches can help reveal the ways that research can shape ideas. Researchers in the field should take a broad view of what will count as research, and focus on how and when research changes how people think and see problems and solutions over time.