Trying to understand how evidence may have influenced a policy decision is like trying to pick out the sound of a single recorder from a whole symphony orchestra. Maureen Dobbins introduced this memorable metaphor at the start of the William T. Grant Foundation’s Use of Research Evidence meeting last week, and it wove its way through the discussions of evidence-informed decision-making (EIDM) over the next three days.
The breadth and diversity of the conversations reflected the makeup of the participants, which in turn reflected the field itself. Practitioners and decision makers (from education, health, criminal justice, and social policy) shared their experiences of trying to find and use evidence. Knowledge translation and mobilization experts discussed initiatives and strategies to increase uptake and improve the use of evidence. And academics (from political science, sociology, science and technology studies, psychology, communications, and other fields) offered theoretical framings and methodological contributions to this debate. In other words, people trying to “do” EIDM, people advocating for EIDM, and people studying EIDM came together to learn from one another. Without attempting to untangle the individual threads, what overall picture did these conversations weave?
First, there is a lot we can learn from one another. Many of us come from very different disciplinary and epistemological backgrounds, trailing behind us, comet-like, long tails of discipline-specific theory and tools; we just happen to share an interest in how evidence, policy, and practice relate to one another. Mapping the field makes these contributions visible: improvement science offers practical models of change management for those developing strategies to increase evidence use; political science and policy studies provide models and theory for understanding decision-making processes and deliberative democracy; sociology teaches us how forms of knowledge are constructed and valued in different social arenas. The journal Evidence and Policy (among others) is a key forum for bringing these strands together, giving the field a space to negotiate potentially unfamiliar theoretical and methodological terrain.
Second, discussions focused on defining and understanding the “use” of evidence. Cynthia Coburn outlined methodological developments in measuring conceptual use of evidence, distinguishing “good use of evidence” from “good decision making.” This distinction let the conversation focus on what “use” actually looks like: still drawing on Carol Weiss’s many “meanings of research utilization,” but also considering how these models might be broken down further and reimagined. For example, symbolic use of evidence can be understood as legitimizing or substantiating, as outlined in Christina Boswell’s work. We still have more work to do in theorizing the interactions around evidence use.
Exciting methods are increasingly being applied to track, measure, and identify evidence use, including social network analysis, contribution mapping, and the coding of archived records and briefs. A recent systematic review brings together insights on the Science of Using Science, drawing on ideas from across the social sciences and pointing to new places to look for inspiration, including marketing, communications, and computer science.
Finally, we talked about the big questions for the field, which meant being honest about the assumptions we were bringing along. Do we really have a theory of evidence use, and is one possible? We think that interactions promote evidence use, but do they? When? What kinds? And what changes as a result? We believe that trusting relationships are important, but what kinds (stakeholder engagement, advice, collaboration, co-production)?
Understanding the roles knowledge plays in decision-making processes is less like picking out one instrument in an orchestral texture, and more like understanding how changing a single note in each instrument’s part may alter the overall sound. A complex and difficult challenge, but an exciting one.