An impact evaluation examines an intervention’s effects, intended and unanticipated, positive and negative, direct and indirect. It must also establish causal attribution: the extent to which the intervention caused the observed changes.

If causal attribution is not established rigorously, an impact evaluation is more likely to produce inaccurate findings and lead to bad decisions: for instance, scaling up a program that is ineffective, or effective only in a few particular circumstances, or ending a program that could improve if its limiting constraints were addressed.

An impact evaluation can inform decisions about whether to continue, stop, replicate, or scale up an intervention, and can also be used to improve or refocus it.

Formative evaluations, by contrast, concentrate on processes rather than outcomes and look for crucial elements to monitor and control. Impact evaluations can nevertheless feed back into delivery; for example, they may be used to improve program implementation for the next intake of participants.

Impact evaluation is essential for understanding the effects of a particular project or program. It helps assess whether the objectives of a specific intervention were achieved, and it also measures unintended consequences.

There are different types of impact evaluations, each with its benefits and drawbacks. This blog post will discuss those types and share tips on planning an impact evaluation study.

The Main Differences Between Outcomes and Impacts

People frequently mistake project outcomes for impacts. It is crucial to realize that intermediate outcomes, as opposed to the intervention’s long-term effects, are what is visible at the time of evaluation. Outcomes are the benefits an intervention is intended to produce; impacts are the higher-level strategic objectives or long-term effects of an intervention.

Achieving the intermediate outcomes typically helps create the desired ultimate impact. For instance, a project’s increased engagement of women in community decision-making (an intermediate outcome) could lead to women’s improved economic, social, and physical well-being, the intervention’s long-term result (its impact). In other words, outcomes come first and typically set the stage for impacts.

The Best Time to Conduct an Impact Evaluation

Planning for impact evaluation should start as soon as a project is conceived. Impact evaluation approaches require substantial planning, ample time for baseline data collection, and, when necessary, the development of a randomized controlled trial or comparison group, or the use of other strategies for examining causal attribution.

It is crucial to include impact evaluation in an integrated monitoring and evaluation (M&E) plan rather than treating it as a standalone component. We cannot conduct meaningful impact evaluations without data from other ongoing M&E activities.

By providing data on the nature of the intervention, context for that data, and further evidence on how the intervention has been going, M&E facilitates impact evaluation and helps determine whether an impact evaluation is required and when it is appropriate to carry it out.

Although starting impact evaluation early is a good idea, because it yields helpful information for project modifications and efficiency gains, one must be careful: when an impact evaluation starts too early, impacts may be underestimated or go unnoticed.

In some cases, the impacts might not have had enough time to develop. Conversely, an evaluation deployed too late may miss the timing window in which its findings can influence decisions.

Key People to Engage in the Evaluation Process

In creating a suitable, context-specific participatory approach to impact evaluation, it is critical to consider who should be included in each phase of the evaluation process, why, and how, regardless of the type of assessment.

Stakeholders can participate at any stage of the impact evaluation process, including the decision to conduct an evaluation and its design, data collection, analysis, reporting, and management. The first crucial step in managing expectations and directing the execution of participatory approaches in an impact evaluation is to be clear about their intended goal.

Is it to ensure that the opinions of the people whose lives the program or policy was meant to improve are at the heart of the conclusions? Is the goal to guarantee a relevant evaluation focus?

Is it more important to hear people’s accounts of change than to collect a list of indicators defined by an outside evaluator? Is it to foster a sense of ownership of a donor-funded program? How stakeholders participate in the impact evaluation will depend on these and other factors.

Participatory approaches to impact evaluation might be chosen for various reasons: pragmatism, ethics, or a combination of the two. Ethical, because participation is the right thing to do; pragmatic, because it results in better assessments (better data, greater understanding of the data, more appropriate recommendations, and better uptake of findings).

Any impact evaluation design can use participatory methods. In other words, they are not limited to collecting and analyzing quantitative or qualitative data, nor are they confined to particular evaluation methodologies.

The first step in any impact evaluation that aims to use participatory methodologies is to clarify the benefits participation will bring to the assessment itself and to the participants, as well as the potential risks of their participation. Each case requires answering three questions:

  • What purpose would stakeholder involvement serve in this impact evaluation?
  • Whose involvement counts, when, and why?
  • When is it practical for stakeholders to participate?

Only after addressing these questions can they turn to how to make the impact evaluation more participatory.

Challenges of Impact Evaluation

Impact evaluations are known to be time-consuming and expensive, to require specialized skills, and to present several administrative and technical problems. This is frequently a challenge for organizations that lack the funding and technical know-how to complete evaluation work.

Determining the ideal moment to carry out an impact evaluation is also quite challenging, because the timing of the assessment can affect its goals and outcomes. Another significant issue is that many impact evaluation approaches, particularly those relying on baseline surveys or randomization, require early agreement on the design.

This can be challenging in more complex interventions where goals and objectives change over time. In such circumstances, techniques that do not call for large baselines may be more suitable.

Establishing what would have happened in the community in the absence of the intervention, known as the counterfactual, is a significant difficulty in impact evaluations.

To estimate it, evaluators must choose a comparison group to represent the counterfactual, which requires careful consideration. A poorly chosen comparison group can invalidate the evaluation results.
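To make the role of the comparison group concrete, here is a minimal sketch of a difference-in-differences estimate, one common way to approximate the counterfactual. All numbers and names below are invented for illustration, not drawn from any real evaluation:

```python
# Hypothetical difference-in-differences sketch: the comparison group's
# change over time stands in for the counterfactual (what would have
# happened to the treatment group without the intervention).

def diff_in_diff(treat_before, treat_after, comp_before, comp_after):
    """Return the estimated program effect."""
    treat_change = treat_after - treat_before   # observed change
    comp_change = comp_after - comp_before      # counterfactual change
    return treat_change - comp_change

# Invented illustrative numbers (e.g., average household income):
effect = diff_in_diff(treat_before=100, treat_after=130,
                      comp_before=100, comp_after=110)
print(effect)  # 20: the intervention's estimated impact
```

The subtraction of the comparison group’s change is what separates the intervention’s effect from background trends that would have occurred anyway.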

Types of Impact Evaluation

While an impact evaluation can be conducted during or after an intervention, preparation must start as soon as possible. Impact evaluations are divided into two sorts based on when they are conducted and what they are intended to measure:

  • Formative impact evaluation: carried out while an intervention is being developed or implemented. It guides judgments about reorienting, modifying, or improving an existing intervention.
  • Summative impact evaluation: carried out near the end of an intervention to assess its effectiveness. It informs decisions on whether to continue, stop, replicate, or scale up the intervention.

How to Plan and Manage Impact Evaluation

Organization staff and relevant stakeholders must address a few questions before planning and implementing an impact evaluation, and they should move forward only if the assessment is suitable and essential. Here are some things to think about and actions to take:

First, determine how relevant the evaluation will be for your organization’s development strategy. Impact evaluation should be used only when there is a pressing need to understand the effects of an intervention and when it is the best method for addressing the intervention’s unknowns.

Once the organization is clear on the issues mentioned above, it can:

  • Determine what needs to be examined, then develop appropriate evaluation questions.
  • Locate available resources and figure out how to deploy them. Organizations can use a budget analysis template provided by other organizations, or consult the budgets of earlier, similar studies, to estimate the resources needed for an impact evaluation.
  • Decide whether the results will be reliable and relevant given the time and resource constraints.
  • Secure commitments from everyone involved when choosing whom to engage in the evaluation, decision-making, and management, and when laying out the evaluation team’s necessary competencies.
  • Decide when to conduct the impact assessment.
  • Create a work plan covering the evaluation design, methodology, and implementation.
  • Create and disseminate evaluation reports.
  • Promote the use of evaluation outcomes. Impact evaluation is helpful when information from the current intervention can guide decisions about future projects, so it is critical to understand how and by whom the results will be used.
  • Preserve the standard of the evaluation throughout the project.

Methods for Impact Evaluation

Impact evaluation reveals the causal relationship between a program and its results. Measuring any development program’s impact is necessary but challenging.

This article discusses the many quantitative methods available for evaluating impact and recommends combining methodologies to make the findings more robust.

Framing the Boundaries of the Impact Evaluation

The evaluation purpose is the justification for performing an impact evaluation. When evaluating to support learning, it is essential to be clear about who will benefit from it, how that group will participate in the evaluation process so that it is considered valid and reliable, and whether there are specific decision points at which this learning will be applied.

Evaluations conducted to enhance accountability should make explicit who is being held responsible, to whom, and for what.

In determining the value of an intervention, the evaluation uses a combination of facts and values (i.e., principles, traits, or characteristics seen as inherently good, desirable, significant, or of broad importance, such as “being fair to all”).

Boundaries are established by the evaluative criteria, which outline the values employed in the evaluation.

Defining the Key Evaluation Questions (KEQs) the Impact Evaluation Should Address

Impact assessments should center on answering a small number of high-level key evaluation questions (KEQs) that can be addressed using various kinds of evidence. These questions must be directly related to the evaluative criteria.

The next step is to construct a variety of more detailed (mid-level and lower-level) evaluation questions that specifically address each evaluative criterion. To guarantee that the criteria are fully addressed, each evaluation question should be directly linked to them. Such questions typically come in three types:

  • Descriptive questions ask how things were and are now, and what changes have occurred since the intervention began.
  • Causal questions ask whether the intervention, rather than other factors, is responsible for the observed changes; this step is easy to overlook.
  • Evaluative questions explore the intervention’s overall value, examining both intended and unforeseen effects, and ask whether the intervention was a success, an improvement, or the best option available.

Defining Impacts

Impacts are typically thought to happen after, and as a result of, intermediate outcomes. For instance, attaining the intermediate goals of improving women’s access to land and involvement in community decision-making may occur before, and contribute to, the final goal of improving women’s health and well-being.

The distinction between impacts and outcomes can vary depending on the explicit goals of the intervention. It should also be highlighted that not all impacts can be expected; some surface unexpectedly.

Defining Success to Make Evaluation Judgments

By its very nature, evaluation answers questions about worth and quality. Because of this, evaluation is far more relevant and informative than the simple measurement of indicators or summaries of observations and accounts.

Before conducting any impact assessment, it is crucial to clarify what “success” means (quality, value). One method of achieving this is a specific rubric that establishes various performance levels (or standards) for each evaluative criterion.
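As a hedged illustration, such a rubric can be made explicit as a simple data structure. The criteria, thresholds, and labels below are all invented examples, not a standard:

```python
# Hypothetical evaluation rubric: each criterion maps score thresholds
# (on a 0-100 scale) to performance levels, so judgments about
# "success" are explicit rather than implicit.
RUBRIC = {
    "effectiveness": [(80, "excellent"), (60, "good"), (40, "adequate"), (0, "poor")],
    "equity":        [(75, "excellent"), (50, "good"), (25, "adequate"), (0, "poor")],
}

def judge(criterion, score):
    """Return the performance level for a score on a given criterion."""
    for threshold, level in RUBRIC[criterion]:
        if score >= threshold:
            return level

print(judge("effectiveness", 65))  # good
```

Writing the standards down before data collection keeps the eventual judgment of “success” from being shaped by the results themselves.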

This includes deciding what data will be collected and how it will be analyzed to draw valid conclusions regarding the intervention’s value.

At a minimum, it should be clear what trade-offs would be acceptable in balancing various impacts or distributional consequences. This is a crucial component of an impact evaluation, since development initiatives can have several unevenly distributed effects.

Using a Theory of Change

Evaluations that look at the links along the causal chain, between activities, outputs, intermediate outcomes, and impacts, rather than only the links between actions and effects, will produce more decisive and insightful conclusions.

A “theory of change” that narrates how activities are understood to produce a series of outputs that help achieve the ultimate intended impacts is helpful for causal attribution in an impact evaluation.

Every impact assessment should utilize a theory of change in some capacity. It can be applied to any study design that seeks to infer causality, it can draw on a variety of qualitative and quantitative data, and it can enable the triangulation of results in a mixed-methods impact evaluation.
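One way to make a theory of change concrete enough to test is to record the causal chain explicitly. The sketch below uses an invented chain of activities and outcomes to show how each link can then be checked against evidence:

```python
# Hypothetical theory of change as an ordered causal chain. Writing it
# down explicitly lets an evaluation examine evidence for each link.
theory_of_change = [
    ("activity", "train women's groups in land-rights law"),
    ("output", "200 women complete training"),
    ("intermediate outcome", "more women participate in community decision-making"),
    ("impact", "improved economic and social well-being for women"),
]

# The evaluation then asks, for each adjacent pair, whether the
# evidence supports the causal link between them.
for (_, cause), (_, effect) in zip(theory_of_change, theory_of_change[1:]):
    print(f"Check evidence: '{cause}' -> '{effect}'")
```

Each printed line corresponds to one link in the chain that the evaluation must either support or call into question.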

When planning an impact evaluation and formulating the terms of reference, any existing theory of change for the program or policy should be assessed for appropriateness, comprehensiveness, and accuracy, and amended as necessary.

If the intervention, or your understanding of how it operates or is supposed to operate, changes during the evaluation, the theory of change should be updated accordingly.

However, some interventions cannot be thoroughly planned in advance, such as programs in environments where implementation must react to new opportunities and obstacles, like supporting the enactment of legislation in a contentious political context.

In these situations, different approaches will be required to create and apply a theory of change for impact assessment.

Deciding the Evaluation Methodology

The evaluation methodology will address the key evaluation questions (KEQs). It details data collection and analysis procedures and plans for causal attribution, including whether and how comparison groups will be created.

Techniques and models for establishing causal attribution: There are three main methods for determining causality in impact assessments:

  • Estimating the counterfactual (i.e., what would have happened in the absence of the intervention) and comparing it with the observed situation.
  • Examining the consistency of the evidence with the causal connections stated explicitly in the theory of change.
  • Ruling out alternative explanations using a systematic, evidence-based approach.

Some people and organizations define impact evaluations more narrowly, including only evaluations that incorporate some counterfactual. These differing definitions matter when choosing which study designs or methodologies will be accepted as credible by partners, funders, and the evaluation’s target users.

Approach to data management, collection, and analysis: Carefully selected and correctly used data gathering and analysis procedures are crucial for all evaluations. Impact assessments must go beyond determining the average impact, or the size of the effects, to determine for whom and how a program or policy has been successful.

Since data collection should be targeted toward the mix of evidence needed to make appropriate judgments about the program or policy, it is essential to consider what “success” means and how the data will be processed and synthesized to answer the key evaluation questions (KEQs).

The Key Benefits of Impact Evaluation

Impact evaluation shows the success or failure of a project and holds all stakeholders, including donors and beneficiaries, accountable.

By showing the magnitude of the impact and how it occurred, it helps determine whether and how effectively an intervention brought about change in a specific community of interest or in the lives of your target demographic.

Impact evaluations are also helpful in identifying the actual needs on the ground and providing answers to project or program design questions.

They help identify which alternative, among several options, is the most effective approach, represents the most significant benefits to the target communities, offers the best value for money, and is the most suitable for scaling up and replicating.

It gives organizations the data to decide whether to alter a current initiative or plan new actions. Impact evaluation also aids organizations in advocating for changes in behavior, attitudes, policy, and legislation at all levels, using the evaluation’s findings. Impact evaluation is crucial for several reasons, including:

Costs

The most evident advantage in a business context is lowering costs. If you sell a particular product but half of the units are destroyed in transit, you should conduct a practical analysis and determine how to keep the goods safe in transit.

With less damage, there will be fewer costs associated with free replacements. Your expenses will be cut, which could significantly increase your profit.
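As a hedged worked example of this cost logic, with every figure invented for illustration:

```python
# Invented illustrative numbers: cost of free replacements before and
# after improving packaging, showing the saving an evaluation can reveal.
units_shipped = 1000
unit_cost = 5.0

damage_rate_before = 0.50   # half of units destroyed in transit
damage_rate_after = 0.05    # after improved packaging (assumed)

cost_before = units_shipped * damage_rate_before * unit_cost
cost_after = units_shipped * damage_rate_after * unit_cost
print(cost_before - cost_after)  # 2250.0 saved per shipment cycle
```

Even a rough before-and-after calculation like this can quantify the benefit of acting on an evaluation’s findings.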

Reputation

Your company’s reputation may be crucial. Providing a free replacement when anything is damaged in transit is the ethical thing to do; customer satisfaction is maintained, even though your earnings are hurt.

Nevertheless, regardless of how excellent your customer service is, your customers will grow impatient if the same problem persists. Even though they receive a free replacement, they still have to wait longer for the item.

In today’s instant-gratification environment, most people are not inclined to wait. It is much more difficult to recover your reputation than to ruin it, and word can spread quickly if half of your products arrive broken.

Morale

The workers doing the production or packing are the most likely to notice a problem, and they are also the best people to consult about how to solve it. Additionally, your workers will feel heard and valued if you consult them and, more specifically, if you act on a proposal they provide.

It will raise everyone’s spirits at work and motivate them to suggest other areas where you can make improvements. By doing an impact evaluation, you might significantly boost your revenue and build a more devoted workforce.

Conclusion

Before committing to an impact evaluation of a pilot study or novel intervention, verify that the intervention to be investigated is well specified and will be delivered unchanged during the evaluation period.

If not, other study types, such as operations research or action research, which plan for changes throughout a study, may be more appropriate.