
NECESSARY AND SUFFICIENT – LET’S THINK ABOUT THOSE TERMS FOR A MOMENT

We use the words necessary and sufficient almost every day – but they have a specific meaning in evaluation, and play an important role in Impact Evaluation.

According to Dictionary.com:

  • Necessary:  being essential, indispensable or requisite; and
  • Sufficient:  adequate for the purpose, enough.

These absolutely hold true in evaluation nomenclature as well… but let’s take a closer look.

When we undertake an Impact Evaluation, we are looking to verify causality.  We want to know the extent to which the program caused the impacts or outcomes we observed.  The determination of causality is the essential element of all Impact Evaluations, as they not only measure or describe changes, but seek to understand the mechanisms that produced them.

This is where the words necessary and sufficient play an important role.

Imagine a scenario where your organisation delivers a skill-building program, and the participants who successfully complete your program have demonstrably improved their skills.  Amazing – that’s the outcome we want!

But, can we assume that the program delivered by your organisation caused the improvement in skills? 

Some members of the team are very confident – ‘yep, our program is great, we’ve received lots of comments from participants that they couldn’t have done it without the program.  It was the only thing that helped’.  Let’s call them Group 1.

Others in the team think that the program definitely had something to do with the observed success, but so did the confidence-building program the organisation ran last year – they think the two build on each other.  We’ll call them Group 2.

Some others in the team think the program definitely helped people build their skills, but they’re also aware of other programs, delivered by other organisations, that have achieved similar outcomes.  Let’s call them Group 3.

Who is correct?  The particular strategies deployed within an Impact Evaluation will help determine this for us, but hopefully you can start to see an important role for the words necessary and sufficient.

  • Group 1 would assert that the program is necessary and sufficient to produce the outcome.  Their program, and only their program, can produce the outcome.
  • Group 2 would assert that the program is necessary, but not sufficient on its own, to cause the outcome.  Paired with the confidence-building program, the two together might be considered the cause of the impact.
  • Group 3 would claim that their program isn’t necessary, but is sufficient to cause the outcome.  It would seem there could be a few programs that could achieve the same results, so whilst their program might be effective, others are too.
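
If it helps to see the three claims laid out side by side, here’s a minimal sketch in Python – purely illustrative, not part of any formal Impact Evaluation method.  The function names and the simplified ‘did the outcome occur?’ framing are assumptions made for the example only.

```python
# Illustrative sketch only: each function models one group's causal claim
# as a simple rule about when the improved-skills outcome occurs.

def group_1_outcome(our_program: bool) -> bool:
    # Necessary AND sufficient: the outcome occurs if, and only if,
    # our program is delivered. Nothing else is needed, nothing else works.
    return our_program

def group_2_outcome(our_program: bool, confidence_program: bool) -> bool:
    # Necessary but NOT sufficient: our program must be present, but it
    # only produces the outcome together with the confidence-building program.
    return our_program and confidence_program

def group_3_outcome(our_program: bool, other_programs: bool) -> bool:
    # Sufficient but NOT necessary: our program alone is enough,
    # but other organisations' programs can produce the outcome too.
    return our_program or other_programs

# Quick check: take 'our program' away and see whether the outcome survives.
print(group_1_outcome(our_program=False))                           # False – outcome lost
print(group_2_outcome(our_program=False, confidence_program=True))  # False – outcome lost
print(group_3_outcome(our_program=False, other_programs=True))      # True – outcome survives
```

That quick check at the end is essentially a counterfactual question – remove the program and see whether the outcome still occurs – which is exactly the kind of thinking an Impact Evaluation formalises.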

Patricia Rogers has developed a simple graphic depicting the different types of causality – sole causal, joint causal and multiple causal attribution. 

Sole causal attribution is pretty rare, and wouldn’t usually be the model we would propose is at play.  But a joint causal or multiple causal model can usually explain causality. 

Do you think about the terms necessary and sufficient a little differently now? Whilst we use them almost every day, when talking causality, they are very carefully and purposefully selected words – they really do mean what they mean.

NEW EVALUATION RESOURCE AVAILABLE

Late last year (December 2021), the Commonwealth Government released a swanky new evaluation resource, complete with a bunch of useful information, resources and templates to help guide evaluation planning, implementation and use.

“The guide is designed to help Commonwealth entities and companies meet existing requirements. It provides a principles-based approach for the conduct of evaluations across the Commonwealth, and can be used to help plan how government programs and activities will be evaluated across the policy cycle in line with better practice approaches.”

The Guide comprises the Commonwealth’s Evaluation Policy together with an Evaluation Toolkit, which includes a swag of templates, completed exemplars, case studies and other frameworks and tools to support evaluation activities.

Check it out – maybe there’s something that might be of use for your next evaluation activity.

WHAT’S NOT WORKING – THE INTERVENTION, OR IMPLEMENTATION OF IT?

A few years back I watched a really informative webinar on applying implementation science to evaluation, and it struck a chord with me.  The facilitator, Dr Jessica Hateley-Brown, walked participants through the foundations and key concepts of implementation science, including the Consolidated Framework for Implementation Research (CFIR) – which, if you get a chance, is definitely worth digging into a little more.  But it was a little snippet on the differentiation between intervention failure and implementation failure that blew my mind.  In hindsight, it’s still a little embarrassing that I hadn’t understood it so clearly before, but I guess sometimes we can’t see the forest for the trees.

Having spent many years delivering services as a front-line clinician, and then managing and commissioning services from a bit further afar, I felt like this simple distinction finally gave me a language for something I hadn’t been able to put words to.  I had lived and breathed the difference between intervention failure and implementation failure so many times – but I’d never thought about it so simply.

The concept of implementation outcomes – the results of the deliberate actions and strategies we use to implement our interventions – was not new to me, and won’t be new to most of you.  We often collect data about the implementation of our services… but we don’t review it and use it as much as we should.  Implementation outcomes give us an indication of the quality of the implementation of our program or intervention – things like reach, acceptability, fidelity, cost and sustainability.  They don’t tell us anything about service outcomes or client outcomes; they simply tell us how well the implementation went.  Hopefully the quality is high, because that lays the foundations for achieving our desired service outcomes and, ultimately, the client outcomes.

Service outcomes give us an indication of the quality of the service.  This might include things like safety, person-centredness, timeliness, and satisfaction.  These things don’t tell us about the quality of implementation, nor about any outcomes experienced by the client. 

And finally, client outcomes are the ultimate outcomes we are hoping clients experience – and might look like changes in knowledge, skillset, confidence, wellbeing, functioning or health status.

The outcomes of implementation are distinct from service outcomes, which are distinct from client outcomes.  Obvious yet mind-blowing at the same time!! 
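
If it’s useful to see the three layers side by side, here’s a minimal sketch – the groupings come straight from the examples above, but the structure itself is just one illustrative way to lay them out, not a prescribed framework.

```python
# Illustrative grouping only: three distinct layers of outcomes,
# each answering a different question.

outcome_layers = {
    # "How well did we implement the program?"
    "implementation_outcomes": ["reach", "acceptability", "fidelity", "cost", "sustainability"],
    # "How good is the service people receive?"
    "service_outcomes": ["safety", "person-centredness", "timeliness", "satisfaction"],
    # "What changed for the people we work with?"
    "client_outcomes": ["knowledge", "skills", "confidence", "wellbeing", "functioning", "health status"],
}

# The point of keeping the layers separate: when client outcomes disappoint,
# check the implementation layer before declaring the intervention a failure.
for layer, indicators in outcome_layers.items():
    print(f"{layer}: {', '.join(indicators)}")
```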

As front-line staff working with people and implementing programs every day will be well aware, program implementation is dynamic.  Of course there’s a service or operating model guiding practice, but minor adjustments are often made to accommodate people, or to meet them where they’re at.  We may learn that some staff are better suited to certain tasks, or that some clients are more engaged on certain days of the week.  Noticing these things, and making adjustments, can have significant effects on the reach or acceptability of our programs.  It’s an early step towards the client outcomes we hope will eventuate.

But sometimes… programs don’t work.  The people we are working with don’t experience the outcomes both they and the provider were hoping for.  Is it the intervention that failed, or did we fail in the implementation of it?

Maybe staff weren’t trained adequately to deliver the program; maybe the program was new and never fully embraced by the organisation; maybe the workplace had poor communication channels; maybe the program was seen as an add-on to existing work, and attitudes towards it were negative.  All of these things will likely affect implementation quality.  In some situations, the program or intervention never really got a chance – it was deemed ineffective and phased out… when in fact it was poor implementation, or implementation failure, that caused its demise.

When thinking about the programs you deliver, support or manage – can you articulate the outcomes of successful implementation, as distinct from service outcomes and client outcomes?  It might be a useful task to undertake as a team.  Of course, some programs or interventions are flawed in their design… but in many cases, failure to achieve client outcomes is not due to intervention failure alone… it could be partially, or fully, the result of implementation failure.

WHAT’S MORE IMPORTANT – FINANCIAL STABILITY OR POSITIVE IMPACT?

I guess depending on your upbringing, your educational background, or the role you play in an organisation – you may pick one option more readily than the other.

Obviously I feel more strongly about the impact you or your organisation has.  What does it matter if you’re in a good position financially if you’re not achieving great things? 

But, the converse is also not ideal – achieving great things, but with the very real likelihood that you’ll lose great staff because of poor job security, or needing to let people go because you simply cannot fit them into the budget.  Before too long, this starts impacting the quality and reach of your programs… and then all of a sudden, you’re not achieving great things anymore.  You really can’t have one without the other.

Unsurprisingly, the title of this post is purposefully misleading, as both these areas of your business are critically important and deserve dedicated attention to ensure their success… but it struck me recently that the foundations for both areas are not the same. It’s much easier to reach a shared understanding, and therefore have meaningful conversations about our financial position, than it is for us to reach a shared understanding about our impact.

Think of regular reporting to Boards as an example.  It’s pretty commonplace to have a dedicated section of the Board report that details the organisation’s financial position.  Skilled staff within organisations prepare complex reports which speak to the solvency, liquidity, and cashflow of the organisation, and Boards usually have a director or two with similar skillsets who speak the same language.  When complex tables, charts and ratios are presented to the Board – they know what it means.  They can quickly form a judgement about whether things are good, or whether there is cause for concern.  The transition from the ‘what?’ to the ‘so what?’ happens pretty seamlessly.

For some reason, the same situation doesn’t exist for our impact reports.  I absolutely acknowledge that some organisations prepare meaningful impact reports for their Executive and Boards, but lots don’t, and of those that do, there doesn’t seem to be the same common language spoken.  In financial nomenclature, solvency and liquidity ratios have a specific and objective meaning.  The same can’t be said about effectiveness, efficiency, impact and reach.  It also certainly isn’t as commonplace for there to be Board members with particular skillsets in the performance monitoring or evaluation space.  This generally results in fewer questions being asked about performance in terms of impact, and a less than concrete awareness of how the organisation is performing when it comes to the impact on the people and communities it is funded to serve.  The translation of the ‘what?’ to the ‘so what?’ is not as easy.
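
To make that contrast concrete, here’s a small sketch.  The current ratio is a standard liquidity measure with one agreed formula; the ‘reach’ calculation is a hypothetical example of the kind of explicit, agreed definition an impact measure needs before a Board can read it the way they read a financial ratio.

```python
# A liquidity ratio has one agreed formula – any Board member who knows it
# can interpret the number the same way.
def current_ratio(current_assets: float, current_liabilities: float) -> float:
    return current_assets / current_liabilities

# "Reach" has no such universal formula. Before it means anything in a Board
# report, the organisation has to agree on a definition – this one is a
# hypothetical example only.
def program_reach(participants_engaged: int, estimated_eligible_population: int) -> float:
    return participants_engaged / estimated_eligible_population

print(current_ratio(1_200_000, 800_000))  # 1.5 – commonly read as a comfortable buffer
print(program_reach(340, 2_000))          # 0.17 – good or bad? Only an agreed definition and target can say
```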

The nebulous space that performance assessment occupies is often complex and subjective – but it doesn’t need to be. 

Organisations can certainly progress work to agree on what success looks like regarding the impact of their various programs.  They don’t need to bite off more than they can chew at their first bite either – it can be a staged process.

My experience has been that Board Directors enjoy the increased awareness they have about performance, and are genuinely excited by the prospect of learning more about the great things that are being achieved.  The same could be said about most people really – in that most of us enjoy knowing and learning more about something than we did previously.

So… how can we remedy this?  What can we do to make conversations about performance more mainstream?  How can we normalise a language that people understand and aren’t afraid to use?  How do we create an environment where the skilled staff who understand and champion better performance in both these areas of our businesses can flourish, and can increase awareness and excitement in others?