Evidence at the Crossroads Pt. 9: Promoting Evidence-Based Teacher Preparation

In many of the posts in this series, evidence-based approaches to education have been portrayed in the context of government agencies and research organizations. As Vivian Tseng has said, “the past 15 years have not created a meaningful role for practitioners in building evidence agendas.”

Teacher education is one piece of the education landscape that illustrates how applying evidence-based strategies can make a powerful difference to the effectiveness of practitioners and educational services organizations.

Both data and research evidence can inform these strategies. For instance, data can help practitioners identify their level of performance and track changes over time. Research evidence can reveal factors associated with teaching quality and strategies to improve teaching, thereby helping practitioners modify teacher preparation in ways that are likely to improve instructional quality.

The Council for the Accreditation of Educator Preparation (CAEP), the nation’s new education accreditor, provides an example of how aspirations for focused use of data, research, and continuous improvement can foster purposeful and effective preparation of teachers. As its first official act, in August 2013, the CAEP Board of Directors adopted standards for the accreditation of teacher preparation. The standards were designed to employ the powerful leverage that accreditation can exert when challenging standards are backed by strong and relevant evidence. CAEP accreditation actions, that is, would be “evidence informed.”

A culture of evidence in the CAEP accreditation context

Since the standards were adopted, CAEP and the educator preparation providers (EPPs) have been transitioning to the new evidence-based framework, preparing for 2016, when all EPPs coming up for accreditation must employ it.

Three sources had particular influence on CAEP’s approach to evidence-informed accreditation. The first is a regional accreditor, the Western Association of Schools and Colleges, which calls on colleges and universities to use evidence in assessment, decision making, planning, resource allocation, and other institutional processes. The second is the Baldrige Education Criteria for Performance Excellence, criteria structured to help any educational institution achieve its goals and improve its effectiveness through the use of data; they describe key attributes of performance and provide scoring rules for evaluating processes and results. The third is the compelling advocacy of the Carnegie Foundation for the Advancement of Teaching for improvement science: learning quickly, at low cost, and systematically, using evidence from practice to improve it.

Here are some features of CAEP’s interpretation of a culture of evidence in accreditation:

Data emphasize results

Accreditation data inform a diverse array of preparation practices and results. Some are familiar, such as GPA, licensure test scores, and clinical observation evaluations of candidates (i.e., college students preparing to teach). Some have not previously been expected as part of accreditation: for example, evidence that clinical experience partnerships with schools and school districts are collaborative and mutually beneficial.

Three examples illustrate the new rigor in CAEP accreditation evidence:

  1. The first is a requirement to document the effects that candidates have on P-12 student learning and development, both during pre-service clinical experiences and again after completers are on the job as teachers.
  2. The second is annual progress monitoring. CAEP and individual EPPs will make data available annually on eight dashboard indicators of accomplishment.
    • Four of these represent preparation outcomes: licensure rate, employment rate, employment in the field of preparation, and consumer information, such as initial salaries or places of employment.
    • Four more describe the effects of preparation after completers are employed: evidence that teachers have a positive impact on P-12 student learning, teacher instruction evaluations through observations and student perception surveys, employer satisfaction and teacher retention, and completer satisfaction with preparation.

    These annual measures are one example of a shift in accreditation toward a continuing process for gathering, interpreting, and using data, not just a procedure undertaken once every seven years and then set aside.

  3. The third is data on recruitment and admissions criteria. Providers prepare recruitment plans, moving toward alignment of fields of preparation with changing employment opportunities for their completers (e.g., more science and math teachers, fewer elementary school teachers), and toward evidence that each year’s class of candidates is academically able and diverse. EPPs monitor progress and modify plans as needed to meet candidate quality and employment goals.

Research probes more deeply

The new rigor in accreditation comes, in part, from evidence in the form of research and case studies. For instance, one of the CAEP accreditation pathways is structured so that an EPP conducts research on a major challenge in educator preparation, such as different models for clinical practice or recruitment and admissions policies. EPPs that choose this pathway follow standard research protocols, encompassing literature reviews, appropriate study designs, data analysis, and interpretation of results, to ensure validity. The work is to be of publishable quality.

Case studies will also be a frequent source of accreditation evidence. One example would be developing and testing new assessments, such as ones in which candidates demonstrate their abilities to teach problem solving and critical thinking skills, or ones that judge the effects of grit, perseverance, leadership, or communication skills on candidates’ instructional success with P-12 students.

Continuous improvement is the focus

These uses of data and research require that providers have the capacity to gather, store, access, and analyze data and to interpret the results. An EPP must maintain a quality assurance system that draws on valid data from multiple measures and builds the capacity to disaggregate and inter-relate those data so that findings can be analyzed and interpreted. Most important, the analyses and interpretations are to serve as the basis for continuous improvement, informing EPP judgments about how the courses and experiences offered to candidates can be made more effective.
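
To make the notion of disaggregating and inter-relating data concrete, here is a minimal sketch, in Python with pandas, of the kind of analysis such a quality assurance system might support. The dataset, column names (program, cohort, licensure_pass, employer_sat), and values are hypothetical illustrations, not CAEP-specified measures.

# Minimal sketch of disaggregation and inter-relation in an EPP quality
# assurance system. All data, column names, and measures are hypothetical.
import pandas as pd

# Hypothetical completer-level records drawn from multiple measures.
completers = pd.DataFrame({
    "program": ["Elementary", "Elementary", "Secondary Math", "Secondary Math"],
    "cohort": [2014, 2015, 2014, 2015],
    "licensure_pass": [0.92, 0.95, 0.88, 0.91],  # licensure pass rate for the cohort
    "employer_sat": [4.1, 4.3, 3.8, 4.0],        # mean employer satisfaction (1-5 scale)
})

# Disaggregate: report each measure by program rather than as a single
# institution-wide average.
by_program = completers.groupby("program")[["licensure_pass", "employer_sat"]].mean()
print(by_program)

# Inter-relate: ask whether stronger licensure results accompany stronger
# employer ratings (a simple correlation, not a causal claim).
relationship = completers["licensure_pass"].corr(completers["employer_sat"])
print(f"Correlation of licensure pass rate with employer satisfaction: {relationship:.2f}")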

CAEP has taken on responsibilities both to use data to improve its own efficacy and to make data better for the field. Unfortunately, the state of data in teacher preparation has been notoriously poor, characterized by very few common measures (primarily state licensure tests) and by an array of others developed uniquely in each EPP (e.g., clinical observation evaluations). These characteristics make valid comparisons virtually impossible, leaving no accepted norms or benchmarks for EPP performance, and they pose a significant challenge for evidence-based accreditation decisions. CAEP is seeking, in collaboration with states and EPPs, common data definitions and data-gathering procedures for the eight dashboard indicators as well as for other aspects of preparation, such as the characteristics of clinical experiences. And it has created the position of director of research and strategic data initiatives to focus these efforts and to oversee an evaluation of the impact of CAEP’s standards and data emphases as they unfold over the coming five to ten years.

Next steps

A culture of evidence that shapes the accreditation of educator preparation programs can have an enormous influence over the education landscape. Now comes the big test—will it work?

Success depends, ultimately, on the response of educator preparation providers. Will they perceive that evidence-based accreditation makes use of tools and capacities that are valuable for them? Will they undertake and sustain the investment needed to build and maintain those tools and capacities?

CAEP’s perspective is that evidence-based accreditation will encourage EPPs to build efficient oversight capabilities as well as capacity to modify preparation courses and experiences. These tools can ensure that those who graduate go on to become knowledgeable and skilled teachers, helping America’s increasingly diverse P-12 students master challenging school curricula. And this is where evidence-based decision making can make a great difference in education outcomes.
