An invitation: Help us to conceptualize what “quality” research evidence use means

For many years, the William T. Grant Foundation has funded research on improving the use of research evidence in ways that benefit young people ages 5-25 in the United States. Initially, this work focused on building theory. Investigators examined how research evidence is used in policy or program decisions, including which factors are most influential to research use and the many ways that research evidence may inform policy and practice decisions. More recently, the Foundation has focused on developing innovations to improve research use, as well as assessing whether research use in policy and practice ultimately improves youth outcomes.

While many studies look at the extent to which research is used in a given situation (i.e., the quantity of research use), studies that account for how well or how poorly research is used in such situations (i.e., the quality of research use) remain far less common. Whereas an earlier generation of studies gave rise to conceptual frameworks for understanding the phenomenon of research use as a whole, we lack a similar conceptualization for the quality of research use. Given past accomplishments and future ambitions to improve research use in policy and practice, we believe that the time is ripe for the field of use of research evidence (URE) to take stock of its theoretical foundations and address the question of how best to conceptualize and characterize what it means to use research evidence well.

For at least two reasons, conceptualizing what it means to use research evidence well is critically important for the field.

First, advancing the field depends on our collective ability to conceptualize and operationally define what quality research evidence use is. Unless we can conceptualize quality, we cannot establish the construct validity of our measures. Without strong measurement, we cannot reliably assess progress. Without reliable evidence on progress, we cannot discern which practices, programs, and policies are most effective. Much like the famous proverb that attributes the loss of a kingdom to the want of a nail, we would not want our efforts to improve the use of research evidence to falter for want of a clear conceptualization of quality.

Second, conceptualizing what we mean by the quality of research use is important because we live in a time in which scientific claims are routinely contested or even misrepresented in public discourse. Polls suggest that scientists are among the most trusted members of our society, but a growing number of people question the objectivity of scientists’ judgments and claims. How can we promote the use of research evidence as a means to improve youth outcomes if we cannot clearly articulate what we mean by the “quality” of research use?

While the need is clear, the challenges are formidable. After hundreds of years of philosophical debate, a clear consensus on what constitutes quality URE remains elusive. Even the words we use can be difficult to define; for example, “evidence” has a variety of accepted definitions, ranging (according to the Oxford English Dictionary) from “facts and observations” to “grounds for belief.” On a practical level, consider “quality of life” as a construct. If you agree that improving “quality of life” is an appropriate goal for social science, do you think that it can be operationalized based purely on “facts and observations”, or do beliefs and values also play a role? Is there scientific consensus about what constitutes quality URE with respect to measuring and improving quality of life? If not, then figuring out what we mean by “quality URE” is clearly important.

To address the challenge of conceptualizing the quality of research use, the Foundation has formed a special interest group dedicated to this topic. Concretely, we have grounded our initial conversations on a variety of practical examples supplied by group participants from different disciplinary orientations and areas of content expertise. Thus far, our goal has been to consider potential characteristics that contribute to or indicate the quality of research use. For example, we have discussed characteristics of evidence use like:

  • Rigor: e.g., the extent to which systematic methods that meet standards of scientific rigor were used in the production of the research evidence.
  • Comprehensiveness: e.g., the breadth of research evidence considered, including “local” and “global” evidence as articulated by Palinkas and colleagues, as well as both confirming and disconfirming evidence.
  • Depth of processing: e.g., triangulating and synthesizing disparate sources of evidence, as well as questioning underlying assumptions and beliefs.
  • Acknowledgement of scientific uncertainty: e.g., the extent to which uncertainty of available research evidence is addressed during interpretation and application.
  • Clarity about the role of values: e.g., articulating and interrogating the underlying values of those using the research evidence, including how their values may influence prioritization and interpretation of research evidence.

We hypothesize that the relative importance of these characteristics may depend on the goals of the specific user—e.g., whether the research is used to convince, to educate, to persuade, to design, or to engineer—and on factors like cost, time, and access that constrain the extent to which the characteristics of quality can be prioritized. Another aspect of interest is whether and when to conceptualize quality research evidence use as an individual as opposed to a group process.

Although the members of our special interest group believe that our preliminary conceptualization offers a useful starting place, we are under no illusion that it is perfect or that we will arrive at a single set of answers that are convincing to everyone. Indeed, our meetings continue to evoke thought-provoking conversation and disagreement that motivate further revisions. For example, a discussion of an earlier draft of this blog post led us to debate whether one can use research evidence well and still not achieve the desired outcomes, or, conversely, whether one can achieve the desired outcome without having used research evidence well. If we can continue to encourage these dialogues across diverse disciplines and perspectives, we are optimistic that we can identify key areas of consensus as well as critical points of disagreement—both of which are necessary for a research agenda that will truly advance the field.

This post is written on behalf of the William T. Grant Foundation Special Interest Group on the Quality of the Use of Research Evidence, by R. Chris Sheldrick, Tom Mackie, Lauren Supplee, Gracelyn Cruden, Liz Farley-Ripple, Bill Firestone, Brittany Gay, Jonathan Purtle, and Alicia Wilson-Ahlstrom.
