A bit about evaluation
What is it… why do it… and how Tonita Taylor Consulting can help?
Evaluation is the systematic inquiry into the operations, effectiveness and impact of programs, products or people. In practice, evaluation involves identifying important questions, collecting and interpreting data to answer those questions, using the insights to form a judgement of value, and then sharing the findings with key stakeholders for the purposes of improvement, accountability and advocacy.
Typically, to reach a judgement of value, we work progressively through a sequence of steps known as the logic of evaluation:
1. Criteria: On what dimensions must the program, product or person (the evaluand – the thing you are evaluating) do well? These need to be determined specifically for each evaluand, but often include criteria such as efficiency, effectiveness, timeliness, fidelity to an operating model, ease of implementation, acceptability, accessibility, appropriateness, reach and quality.
2. Standards: How well should the evaluand perform on each of the relevant dimensions? Eg, what would be deemed good or not good in terms of effectiveness, timeliness, appropriateness or reach for a particular program? A timeliness standard, for instance, might specify that most clients are seen within two weeks of referral. Standards should be bespoke, as they depend on the nature and maturity of the evaluand, and the context in which it is being implemented.
3. Comparison: How well did the evaluand actually perform? This step brings together the data collected through your evaluation activities and compares it against the standards you set in the previous step. Eg, your program may perform really well on the timeliness dimension, but less well on reach. On digging further, you may learn that your program is highly effective in some locations, for some people, or under certain conditions, but less so in others.
4. Synthesis: Depending on the purpose of the evaluation, the synthesis step isn't always necessary, but giving considered thought to how performance across the various dimensions is integrated into an overall judgement can be useful. Perhaps you need to compare the merit or worth of a number of programs or products to inform future resource allocation – an overall evaluative judgement may prove beneficial in this instance.
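To make the synthesis step concrete, here is a simple worked example (the dimensions, scores and weights are invented purely for illustration): suppose a program scores 4 out of 5 for effectiveness, 3 out of 5 for timeliness and 2 out of 5 for reach, and stakeholders have agreed that effectiveness matters twice as much as the other two dimensions. A weighted synthesis gives (2×4 + 1×3 + 1×2) ÷ (2+1+1) = 3.25 out of 5. The arithmetic is the easy part – the value lies in negotiating the weights openly with stakeholders before the results are in.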
There are many reasons for undertaking an evaluation, and each evaluation activity should be tailored to achieve its desired purpose. Typical purposes include:
Accountability: Evaluations can demonstrate accountability, reporting on what has been achieved with the resources that have been invested. Accountability can be to the funder or sponsor, to a governance structure (eg a Board of Directors), or to the community an organisation serves. Different dimensions of performance may be more or less important to different audiences.
Improvement: Evaluations can focus on yielding insights about reach, timeliness, effectiveness or quality… with a plan to review and adjust implementation to improve performance. Evaluations of this kind need to incorporate the time and resources to respond to insights, make adjustments and re-assess performance at a future date. Ultimately, this type of evaluation serves to make the best use of limited resources and achieve the best outcomes possible.
Knowledge generation: Evaluations can yield new, previously unknown insights. A program or service design is usually informed by a wealth of evidence, but sometimes a variation in its implementation may produce surprising outcomes – either positive or negative. Designing an evaluation with the intent of documenting and sharing insights can contribute to an existing knowledge base, or start building a new one, for others to build on.
Advocacy: Evaluations that are designed to be used for advocacy, or to lobby for policy change, need to incorporate considerations that are valued by those in a position to effect the desired change. This may mean monitoring and reporting on different indicators: evidence of effectiveness may look different depending on whether you're the service user, the service provider, or the funding body.