One common misconception about evaluation is that it is something done after a project has been implemented, by an outside group who then judges whether or not the project was effective. While this scenario is often true in practice, it is not the only or necessarily the right way to conduct an evaluation.
A good evaluation plan should be developed before a project is implemented, should be designed in conjunction with the project investigators, and should ensure that the evaluation serves two broad purposes. First, evaluation activities should provide information to guide the redesign and improvement of the intervention. Second, evaluation activities should provide information that can be used by both the project investigators and other interested parties to decide whether or not they should implement the intervention on a wider scale.
These two purposes correspond with two broad types of evaluation: formative and summative. The goal of formative evaluation is to improve an intervention or project. The goal of summative evaluation is to judge the effectiveness, efficiency, or cost of an intervention.
The purpose of formative evaluation is to provide information to the project team so that their intervention can be modified and improved. It focuses on whether the intervention is being carried out as planned. Formative evaluation activities can include the development and beta testing of materials and software, focus groups to assess students’ attitudes and responses to aspects of intervention design and materials, and experimental studies to determine the effect of specific design characteristics on students’ mastery and retention of concepts and skills. While some of these activities also yield data related to intervention effectiveness, their primary goal is to provide information for intervention improvement.
The purpose of summative evaluation is to produce information that can be used to make decisions about the overall success of the intervention. There are three specific and sequential types of summative evaluation questions that should be addressed for any intervention:
1. Intervention Efficacy:
Efficacy evaluation asks the question: “Under research (ideal) conditions, can the intervention lead to the desired outcomes?” Efficacy questions assess whether an intervention is associated with improvements in students’ performance when implemented in small groups, by teachers who receive special instruction, and with motivational support provided for student participation.
2. Intervention Effectiveness:
Effectiveness evaluation asks the question: “When implemented on a wider scale, under conditions similar to those that occur in regular teaching, does the intervention continue to lead to desired outcomes?” Effectiveness questions usually assess whether the intervention continues to be associated with improvements in students’ performance when carried out under normal classroom conditions, by teachers who have not received special instruction, and without additional motivational support for participation.
3. Intervention Costs:
Both developmental and recurrent costs associated with the intervention must be assessed. Issues here relate to the time, support, and effort required to implement the intervention, both by individual faculty members and by departments. Generally, investigators should try to determine how much investment would be needed for another program to implement the intervention.
The use of a staggered approach to summative evaluation should allow one to identify and address operational difficulties in the use of the intervention. Too often, summative evaluations simply measure efficacy. If an intervention is to go beyond being a simple “pilot project,” the investigators must also evaluate intervention effectiveness and cost.
Developing an evaluation plan thus involves the following steps:
1. Decide on the purpose(s) of the project evaluation.
2. Determine what primary questions need to be answered.
3. Determine in what order these questions should be answered.
4. Decide who will be the primary audience(s) for the evaluation results.
5. Determine how the results will be used.