
Measuring Research Use and The Promise of Big Data

The authors are Co-directors of the Michigan School Program Information (MiSPI) Project

Why is it hard to measure research use (quantitatively)?

One critical step toward improving the use of research evidence is being able to measure it. But to measure the use of research evidence (URE), it is first necessary to define it. What is URE and how do we identify it when we see it in action?

Although seemingly straightforward, defining URE has been challenging for scholars interested in understanding knowledge utilization and evidence-informed decision-making among practitioners and policymakers. While conceptualizations and terminology have varied over the years, these efforts have converged on three broad types of URE: instrumental, conceptual, and symbolic. Instrumental use and its variants (i.e., direct use) refer to the specific use of research to solve a problem, make decisions, or complete a task. Conceptual use and its variants (i.e., enlightenment, indirect use) refer to the more general use of research evidence to inform one’s thoughts on an issue. Finally, symbolic use and its variants (i.e., political, tactical, persuasive, strategic, or imposed use) refer to the use of research evidence to legitimize decisions or to redirect criticism.

Much of the methodological work on measuring research use, and many of the empirical examples of its measurement, have relied on qualitative approaches including interviews and observations. Such qualitative approaches yield rich and detailed data on the processes and contexts of research use, but they are resource-intensive and difficult to use in large samples.

In contrast, while quantitative approaches to measuring the use of research evidence may offer a narrower focus with less detail than qualitative studies, they can be used to track large samples, can be compared longitudinally, and can be implemented in formal experimental designs. A number of promising quantitative measures are now available, including the Structured Interview for Evidence Use (a survey despite the name) and the Evidence-based Practice Attitudes Scale. Of course, existing quantitative measures also have some limitations:

First, all quantitative measures of research use that we have been able to locate are focused on the individual level. That is, they are designed to measure whether, or the extent to which, individuals (e.g., a principal, a legislator, a social worker) use research evidence. However, these measures offer no way to capture the use of research evidence at the organizational level, which would shed light on the large, complex bureaucratic systems that are the most common users of research.

Second, nearly all quantitative measures are designed to be collected by self-report. This method of collection is convenient, but it risks introducing social desirability bias. For several decades, school administrators, social workers, nurses, and others have been bombarded with messages encouraging evidence-based practice. Thus, when asked whether they use evidence in their practice, it is difficult to imagine a respondent answering “no.” Self-report measures also create opportunities for non-response, and thus for missing data, which can further distort the picture.

Finally, like many modern survey instruments, quantitative measures of URE are typically long, multi-item scales. For example, the Structured Interview for Evidence Use has 45 items, while the Evidence-based Practice Attitudes Scale has 51 items. While still faster to collect than an in-depth qualitative interview or repeated field observations, such multi-item scales can nevertheless be time-consuming for respondents to complete, which, in turn, can reduce response rates or create respondent fatigue.

Can big data help?

These challenges are not unique to measuring research use, but are common in the application of many traditional measurement techniques like surveys. But, perhaps some newer approaches to measurement—like big data—can help.

To explore this possibility, we have been developing and testing the Archival Search for Use of Research Evidence (ASURE). ASURE measures research use by counting the number of pages on an organization’s website that reference research or evidence. It therefore measures URE at the organizational level and can be collected very quickly (a few hours for all ~600 school districts in Michigan). Additionally, because it is collected from websites, not people, it largely avoids both social desirability bias and missing data. Our preliminary test in Michigan suggests it is stable over a six-month period (i.e., it exhibits test-retest reliability). Additionally, associations with the Evidence-based Practice Attitudes Scale provide evidence of its convergent validity, while associations with both indicators of school district capacity (e.g., enrollment, expenditures) and indicators of student achievement (e.g., percent proficient on state standardized tests) provide evidence of the measure’s applied validity, which Messick (1995) identifies as another important demonstration of validity.
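For readers curious about the mechanics, the counting logic behind an ASURE-style measure is simple enough to sketch. The snippet below is a minimal illustration, not our production code: the keyword list, page cap, function names, and example URL are all illustrative assumptions, and a real crawl would also need to respect robots.txt and rate limits.

    # Minimal sketch of an ASURE-style count (illustrative only): crawl a
    # site's pages and count how many mention "research" or "evidence".
    # The keyword list, page cap, and timeout are assumptions, not the
    # parameters used in the actual ASURE measure.
    import re
    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests                   # third-party: pip install requests
    from bs4 import BeautifulSoup     # third-party: pip install beautifulsoup4

    KEYWORDS = re.compile(r"\b(research|evidence)\b", re.IGNORECASE)

    def asure_count(start_url, max_pages=200):
        """Count same-domain pages whose visible text matches KEYWORDS."""
        domain = urlparse(start_url).netloc
        queue, seen, hits = deque([start_url]), {start_url}, 0
        while queue and len(seen) <= max_pages:
            url = queue.popleft()
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue              # skip unreachable pages
            soup = BeautifulSoup(resp.text, "html.parser")
            if KEYWORDS.search(soup.get_text(" ")):
                hits += 1             # this page references research/evidence
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"]).split("#")[0]
                if urlparse(link).netloc == domain and link not in seen:
                    seen.add(link)    # enqueue unseen same-domain links
                    queue.append(link)
        return hits

    # Hypothetical usage:
    # print(asure_count("https://www.exampledistrict.org"))

The measure itself is then simply this count, which is why it can be collected for hundreds of district websites in a matter of hours.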

Qualitative examination of the web pages themselves suggests that ASURE is a broad measure of the use of research evidence that captures the extent of different varieties of research use, including instrumental, conceptual, strategic, and political uses. Thus, while ASURE is not able to distinguish one type of use from another, it nonetheless offers a convenient way to measure how much school districts or other organizations engage in multiple types of use of research evidence.

The ASURE measure relies on website text, which is only one kind of big-data resource. The large-scale analysis of other archival documents also offers promising possibilities, in part due to the wide range of documents in which research use might be reflected. For example, recent studies have involved coding amicus briefs, media stories, and congressional hearings. Like a school district’s website, these documents may offer indications of uses of research evidence: for example, when a reporter refers to a research study, or when a politician invokes the phrase “the research says…”.
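The same counting idea extends to document collections. As a purely hypothetical illustration (the file layout and phrase list below are our assumptions, not drawn from the studies mentioned above), one might flag transcripts that invoke research like so:

    # Hypothetical sketch: flag archival documents (e.g., hearing transcripts
    # stored as .txt files) that invoke research. The phrases are illustrative.
    import re
    from pathlib import Path

    INVOKES_RESEARCH = re.compile(
        r"the research says|according to (a|the) study|studies show",
        re.IGNORECASE,
    )

    def flag_documents(folder):
        """Yield paths of text files containing a research-invoking phrase."""
        for path in Path(folder).glob("*.txt"):
            if INVOKES_RESEARCH.search(path.read_text(errors="ignore")):
                yield path

    # Hypothetical usage:
    # print(sum(1 for _ in flag_documents("hearing_transcripts/")))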

To be sure, big data is not a panacea for measuring research use, or indeed for measuring anything. Ultimately, these approaches are indirect approximations: mentioning research evidence on a website is a reflection of research use, but it is not use per se. They are also subject to biases of selective deposit: well-written amicus briefs are filed, while poorly written ones may be discarded. Thus, big-data approaches should be viewed as a complementary source of information, used in conjunction with more traditional quantitative and qualitative techniques. For example, ASURE might be used to rapidly assess all schools in a region, from which a subset is purposively sampled for more detailed qualitative study via interview or observation. And given that both practitioners and researchers are busy, these big-data and archival approaches can offer a feasible alternative that still yields significant insights, but with a more favorable balance of costs and benefits.
