While efforts to produce and use evidence of what works in education have garnered increased attention, issues of cost and cost-effectiveness have been notably absent from the conversation.
Researchers are increasingly producing rigorous evidence on the effectiveness of specific educational reforms in order to improve teaching and learning. This research, which gives decision makers more accurate information on the probable impacts of education interventions, is at the heart of evidence-based decision making. But even the most rigorous studies of educational outcomes are incomplete if they do not consider the costs that must be covered to obtain those outcomes.
Issues of cost are always present in allocating resources efficiently, but they are particularly pressing when economic challenges arise. Consider that at least 31 states provided less state funding per student in the school year ending in 2014, several years into the economic recovery, than they did in 2008. And local government funding for education fell in at least 18 states over the same period.
Many educational decision makers face severe economic constraints that restrict their choices among alternatives for improving educational outcomes. For this reason, decisions must be premised not only on the promise of positive outcomes, but also on the costs of obtaining those outcomes. Providing effectiveness information without also reporting the costs of obtaining those effects can be highly misleading and wasteful.
The costs of effectiveness
How serious are these concerns? Do costs vary enough across different interventions to have a powerful impact on cost efficiency? To answer these questions, the Institute of Education Sciences (IES) of the U.S. Department of Education provided funding to the Center for Benefit-Cost Studies in Education (CBCSE) at Teachers College, Columbia University, to compare the cost-effectiveness of dropout prevention interventions and early literacy interventions, using findings from the department's What Works Clearinghouse (WWC). The WWC uses systematic criteria to evaluate evidence on effectiveness but does not consider costs.
Cost-effectiveness analysis compares both the costs and the effectiveness of the educational actions under consideration, informing the decision maker which alternatives promise the lowest cost per unit of effectiveness. Applying a rigorous cost-accounting method to the interventions in each group, CBCSE obtained comparative cost-effectiveness results for the different dropout prevention strategies and, in a separate study, the early literacy strategies. Among dropout prevention approaches, cost-effectiveness results ranged by a factor of six to one, meaning that for the same spending the most cost-effective approach could produce six times as many new graduates as the least cost-effective one. Among early literacy studies, the differences in cost per given increase in reading skills were typically three or four to one. By choosing among the most cost-effective programs for dropout prevention or early literacy, schools could presumably free up considerable resources to allocate to other important educational endeavors. Because the WWC provided no comparable cost information on alternatives, decision makers were not informed about cost-effectiveness and had only the purported effectiveness results, irrespective of their costs.
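The arithmetic behind such comparisons is straightforward: a cost-effectiveness ratio is the total cost of an intervention divided by the effect it produces. The short sketch below, written in Python with entirely hypothetical program names, costs, and graduate counts (not figures from the CBCSE or WWC studies), illustrates how differences of this magnitude can arise.

```python
# Hypothetical illustration of cost-effectiveness ratios for dropout
# prevention. The names, costs, and effects below are invented for
# exposition; they are not figures from the CBCSE or WWC studies.
programs = {
    # name: (total cost in dollars, additional graduates produced)
    "Program A": (1_200_000, 120),
    "Program B": (900_000, 30),
    "Program C": (600_000, 15),
}

for name, (cost, extra_grads) in programs.items():
    # Cost-effectiveness ratio: dollars per additional graduate
    print(f"{name}: ${cost / extra_grads:,.0f} per additional graduate")

# Output:
#   Program A: $10,000 per additional graduate
#   Program B: $30,000 per additional graduate
#   Program C: $40,000 per additional graduate
# For the same total spending, the most cost-effective program here would
# yield four times as many new graduates as the least cost-effective one.
```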
Considering the true costs of outcomes
In most educational evaluations, costs are absent from consideration entirely. The tacit message is that they don't matter or that differences among alternatives are likely to be trivial. Even in the rare cases where some cost information is reported, it is usually erroneous because it is not the product of an acceptable cost method. To paraphrase Lee Shulman's famous remark on comparative measurement, effectiveness is measured with calipers, and costs are measured with a witching rod. Most evaluators lack background in cost measurement and rely on easily accessed information, such as crude budgetary reporting or an "estimate" from a contract administrator. School budgetary documents simply list the allocation of financial resources among different spending categories; they were not designed to provide cost accounting for specific programs or interventions.
The true costs of an intervention represent the value of all of the resources required to obtain the outcomes found in a valid effectiveness study. Some resources may appear in a specific project budget, but others may be financed from other sources, such as reallocations from other school programs, or contributed in kind, such as the time of volunteers. The overall cost of an intervention is determined by the required personnel, facilities, equipment, technical assistance, and other ingredients, regardless of their source, and the value of all of these resources must be acknowledged and included in costs. The most widely accepted method of cost accounting for educational interventions is the ingredients method, which, as its name implies, identifies the ingredients that were used to obtain a particular evaluation outcome and values them at market prices or some equivalent.
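As a rough sketch of the bookkeeping this implies, the calculation amounts to pricing every ingredient, whatever its source, and summing. The quantities, prices, and ingredient categories below are hypothetical, and the full method also addresses matters such as amortizing facilities and equipment and identifying who bears each cost.

```python
# Minimal sketch of the ingredients method: list every resource the
# intervention actually used, value each one whether or not it appeared
# in the project budget, and sum. All quantities and prices are
# hypothetical and serve only to illustrate the structure.
ingredients = [
    # (description, quantity, unit value in dollars)
    ("Teacher time (hours)",         1_500,     45.0),  # paid from project budget
    ("Coordinator (FTE-years)",        0.5, 80_000.0),  # reallocated from school staff
    ("Classroom space (room-years)",     2, 12_000.0),  # existing facility, still valued
    ("Volunteer tutor time (hours)",   800,     20.0),  # donated, but a real resource
    ("Materials and licenses",           1,  9_500.0),
]

total_cost = sum(qty * unit_value for _, qty, unit_value in ingredients)
students_served = 300  # hypothetical number of students reached

print(f"Total cost of the intervention: ${total_cost:,.0f}")
print(f"Cost per student: ${total_cost / students_served:,.0f}")
```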
Strengthening the field
The ingredients method has been used in educational cost research for four decades and has been continually refined over that period. Examples of studies that document the method and its application include cost-effectiveness analyses of class size reduction, computer-assisted instruction, lengthening the school day, peer tutoring, high school completion, early literacy, and programs to increase graduation rates in postsecondary education. Even so, reliable cost-effectiveness information remains rare in the evaluation literature, and such studies merit much wider availability and dissemination to help decision makers make more efficient use of resources.
Where cost analysis has been used to evaluate educational reforms, the ingredients method has been favored because of its grounding in the economic concept of opportunity cost and its validity as a cost-accounting method. Unfortunately, a major obstacle to its use is that few educational evaluators are familiar with cost concepts and cost accounting. To make these procedures more accessible to evaluators and to provide guidance on their use, they have been incorporated into a free toolkit called COSTOUT, developed by CBCSE, which embodies an expert application of accounting procedures for estimating costs in a way that enables valid comparisons among alternative interventions. It can be used for studies of cost feasibility or for comparisons of the cost-effectiveness of different educational alternatives that pursue similar goals.
To give decision makers a fuller and more useful set of findings on educational interventions, including their economic consequences, evaluators should include measures of both costs and effectiveness in their evaluations. Cost estimates should rest on appropriate methods for producing valid and reliable figures, in parallel with efforts to provide valid and reliable estimates of effectiveness. Cost-effectiveness comparisons can help decision makers take economic constraints into account when choosing among educational reforms, ultimately improving evidence-based policy decisions and strengthening education systems.