For those of us working to improve opportunities and outcomes for children, research can be an invaluable tool that supports the development of more effective policies and practices. Thus, research use becomes a central objective for many scholars, foundations, community organizations, and advocacy groups seeking to promote evidence-informed improvements. As efforts to bridge research and practice communities move forward, a key question is how best to measure research use in order to both understand current behaviors and assess the impact of efforts to increase use.
In October, I represented my colleagues at a meeting aimed at advancing the global discussion on the use of research in policy, hosted by the William T. Grant Foundation and the editorial board of Evidence and Policy. Our small group, diverse in nationality, expertise, sector, and discipline, came together to share ideas about how to move the field forward. Methodology was a central concern in our discussions. And while we did not focus on measurement per se, its significance lingers in my mind as a central problem for the field of knowledge utilization.
When my colleagues and I first decided to compete for the IES-funded center now known as the Center for Research Use in Education, I had mixed feelings about how to proceed. Central to the work would be the large-scale measurement of research use, which by nature would be challenging. My goal now, as it was then, is to produce a meaningful measure, and I know that doing so is a tall order. But our team is up to the challenge, and we expect to learn a great deal about measuring research use and to shed light on many aspects of the connections between research and practice.
So I am optimistic, but cautiously so. Here’s why:
Use as a practice is ill-defined
Research use is not a yes-or-no phenomenon, nor a point-in-time occurrence. Not “seeing” use in a decision doesn’t mean research played no role in ways we can’t observe, nor that it played no role in a prior or subsequent decision. Furthermore, we don’t know much about how research use happens in reality, and we tend to make assumptions about how we expect it should happen. So the measurement of use becomes complicated.
Measurement cannot focus only on dimensions of normative use—for example, explicit references to identifiable research evidence in the context of a decision. It cannot attend only to the decision maker or decision making moment itself. We suspect that in order to understand the practice of use and observe shifts over time, we need to unpack the various dimensions of the practice itself. Drawing on the literature, we’ve identified six key dimensions of research use in practice (i.e., evidence, search, interpretation, participation, frequency, and stage of decision-making), which, together, may help us create a better, multi-dimensional model of use, and move us closer to the goals of observing and ultimately impacting research use in schools.
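To make the idea of a multi-dimensional measure more concrete, here is a minimal sketch of how one respondent’s account of one decision might be recorded along these six dimensions. The field names and response categories are hypothetical placeholders for illustration, not our actual instrument.

```python
from dataclasses import dataclass

# An illustrative record of one respondent's account of one decision, described
# along the six dimensions named above. Field names and example categories are
# hypothetical, not the Center's actual survey items.
@dataclass
class ResearchUseResponse:
    evidence: str        # e.g., "peer-reviewed study", "local data", "none cited"
    search: str          # how evidence was sought, e.g., "active search", "passed along"
    interpretation: str  # how evidence was weighed, e.g., "discussed critically", "accepted as-is"
    participation: str   # who was involved, e.g., "leadership only", "whole faculty"
    frequency: str       # how often research enters such decisions, e.g., "rarely", "routinely"
    stage: str           # stage of decision-making, e.g., "framing the problem", "choosing a program"

# A decision is then described as a profile across dimensions rather than a
# yes/no judgment about whether "research was used."
example = ResearchUseResponse(
    evidence="peer-reviewed study",
    search="active search",
    interpretation="discussed critically",
    participation="leadership team",
    frequency="occasionally",
    stage="choosing a program",
)
```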
A second issue is the absence of use
The field is riddled with research and anecdotal evidence of the lack of research use in educational decision making. If that is the case, what hope do we have of understanding it, much less observing change over time? We would argue that we don’t know the extent of use, both because there are so few large-scale studies and because the range of practices associated with use may not have been captured in prior research. Perhaps use is largely absent in most contexts, making it impossible to meaningfully differentiate among them. If so, that would be an important finding in and of itself, telling us more about the required scope and focus of efforts to increase research use in education. On the other hand, we anticipate that use might simply be more nuanced than much of the prior research has assumed. Large-scale research in particular seems especially limited in its survey instruments, and we hope to contribute to the development of more accurate, useful measures of use.
Another consideration is the user
Measures of research use that survey individuals and treat them as the unit of analysis surely contribute to the apparent absence of use, for several reasons. First, we can only sample so many individuals, and it is often unclear who is, or ought to be, the user of the research. Second, a focus on individual use leaves practice vulnerable to issues such as turnover and individual capacity, and it certainly complicates the assessment of change in use over time. Third, it ignores the larger context of decision making and use. Major decisions about policy and practice, and the collection and interpretation of evidence to inform them, rarely involve only one person. To address these issues, we are working on measures administered at the individual level to all members of a school community, with results aggregated to the organizational level via multilevel analyses. This avoids the problem of relying on any one respondent, allows identification of all key individuals engaged in research use within an organization, and provides a broader perspective on research use in schools.
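As a rough illustration of what aggregating individual responses to the organizational level can look like, here is a minimal sketch using a random-intercept multilevel model. The data file, column names, and scoring are assumptions made for the example; they are not our actual instrument or analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent, with a school identifier
# and a continuous research-use score (column names are illustrative only).
df = pd.read_csv("survey_responses.csv")  # columns: school_id, use_score

# Random-intercept model: individual responses nested within schools.
# The school-level intercepts act as organizational estimates of use
# that do not hinge on any single respondent.
model = smf.mixedlm("use_score ~ 1", data=df, groups=df["school_id"])
result = model.fit()

grand_mean = result.fe_params["Intercept"]
school_estimates = {
    school: grand_mean + effects.iloc[0]  # grand mean plus the school's random intercept
    for school, effects in result.random_effects.items()
}
print(school_estimates)
```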
Measuring research use is complicated by the tension between the specific and the general
Using research has become an expectation for educational decisions, whether it is practiced or not. Questions about whether one, or one’s school, uses research therefore invite socially desirable responses (“Of course it does!”). And while such responses might be true, it is hard to distinguish truth from fiction when measures are abstract or general in nature.
This also complicates the idea of measuring change. For example, it is harder to detect increases in the frequency of research use through an instrument focused on generalities (e.g., “We use research in some/most/all of our decisions”) than through an instrument focused on specific decisions. Anchoring questions in specific decisions offers the advantage of capturing the particular evidence used and the other dimensions of practice that allow a more accurate picture of research use. On the other hand, we’ve already argued that use is not a single, point-in-time occurrence, so any single decision may not represent the broader set of decisions made within a school. Hence the tension. It is further complicated by the need to define, conceptually but also for the respondent, what constitutes a “decision” or “problem” in which research might be used.
We’ve anchored our work in what we think is a middle ground: asking individuals to recall a specific decision (admittedly we are still working on that definition!) and aggregating from individuals to schools and districts to better understand organizational practices across decisions.
A work in progress
There are many other potential complications in the measurement of use, but I’d argue that the ones I’ve described here are central considerations for anyone endeavoring to do it. Through our Center, we are developing instruments designed to minimize, or at least mitigate, these problems, and we are collaborating with others who have taken different approaches to the problem of measurement.
We are still in the early stages of this project. Will we be able to differentiate use within the various dimensions of practice? Will aggregating to the school or district level produce more reliable and valid measures of organizational practices? We don’t know yet, but, at a minimum, we believe our efforts to address these issues are a solid step in the right direction and will provide more nuanced, useful measures of research use.