DEMYSTIFYING THEORIES OF CHANGE AND PROGRAM LOGICS

Do theories of change and program logics excite you or bore you?  Are you overwhelmed by them, or maybe don’t see what all the fuss is about? 

My observation, from the people I have worked with, is that reactions vary quite a bit.  Some people love a clearly articulated theory of change, and can’t wait to see the outcome of its implementation; whilst others are a bit confused, and might slink down in their chair when asked to describe their program or intervention’s theory.

For those in the first group… me too!!  I love a good theory, and get excited about the prospect of testing it, learning from the results, making some adjustments, and then testing again.  Monitoring and evaluation in action – beautiful!!

For those in the second group… chances are you probably understand parts if not all of the theory that underpins your program or intervention, but maybe implicitly… and maybe you’ve just never been asked the right questions to make it explicit.

A theory of change, or the theory or theories that underpin your program or intervention, helps to explain why the program has been designed and is delivered in a certain way, and how the program is thought to bring about the desired outcomes.

The design of a program is informed by someone’s knowledge (hopefully).  This knowledge may have come from formalised study, reviewing the literature, previous experiences, cultural beliefs… and may be a mix of fact, opinion or assumptions.

Guided by this knowledge, we make decisions about how to design and implement our programs, with the strong hope that they lead to our desired outcomes. 

  • Maybe a weight-loss program, smoking-cessation program or anger-management program is informed by social learning theory, or behaviour modification theories. 
  • Maybe the way we design and implement training programs to improve employment opportunities is based largely on adult learning theories, which may have been tweaked or adapted based on our learnings from previous experiences. 
  • Maybe we design our communication and engagement activities to take advantage of social network theories. 
  • Maybe we ensure that our clients always have a choice about the gender of their clinician, or the location or modality of their sessions, because we know that a strong rapport between client and clinician is a good predictor of positive outcomes.

Some of the knowledge we use to inform our programs, activities and interventions may be based on long-standing, strongly-held, widely-acknowledged theories, such as attachment theory or social learning theory, whereas other knowledge may have emerged very recently from our own observations, or reflective practice, or careful monitoring of our programs.  Regardless of whether your theories have emerged from formalised training, a review of the literature, your previous experiences and observations, or your cultural beliefs – they are all still theories that guide and explain the assumptions behind why you think your program will achieve the desired outcomes.

The implementation of your program is really just testing to see if the theory is correct.  Hopefully you’ve employed some rigour, and consulted broadly to ensure you’ve made good decisions in your program design, to optimise the desired outcomes.  As service providers, we really are obligated to ensure our theories make sense, as they inform or underpin the services we offer our clients.

Program logics are really just the plan to operationalise our theories of change.  Program logics usually map out (graphically or in table form) the inputs and activities necessary, and the outputs and outcomes we expect to occur, when our theory is brought to life. 
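
If it helps to see the shape of one, here’s a rough sketch – purely for illustration, with the program, numbers and column labels entirely made up, and Python used simply as a convenient way to lay it out – of how a single program logic might be captured, with the theory question noted against each link in the chain.

# A made-up program logic for a hypothetical skill-building program.
# Columns follow the common inputs -> activities -> outputs -> outcomes flow;
# the "rationale" entries record the answer to "what makes you think this
# leads to that?" for each link, making the implicit theory explicit.

program_logic = {
    "inputs": ["two trained facilitators", "funding for eight weekly sessions", "a venue"],
    "activities": ["deliver eight weekly skill-building workshops",
                   "set between-session practice tasks"],
    "outputs": ["16 participants complete at least six of the eight sessions"],
    "short_term_outcomes": ["participants report improved skills and confidence"],
    "longer_term_outcomes": ["participants gain or retain employment"],
    "rationale": {
        "activities -> outputs": "adult learning theory: practical, spaced sessions keep people engaged",
        "outputs -> short-term outcomes": "completing the sessions and practice tasks is assumed to build real skills",
        "short-term -> longer-term outcomes": "improved skills and confidence are assumed to transfer to job-seeking",
    },
}

# Print the assumptions behind each link, left to right.
for link, assumption in program_logic["rationale"].items():
    print(f"{link}: {assumption}")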

Donna Podems, in her fabulous book “Being an Evaluator – Your Practical Guide to Evaluation”, presents a hybrid model (see example below), where she incorporates elements of the theory in the program logic. She highlights the prompting question ‘what makes you think this leads to that?’ as helping to make the implicit, explicit, as we move from left to right.

When was the last time your team spoke openly about your theory of change?  Is everyone on the same page?  Does everyone understand what underpins what you do, and how you do it? Does everyone have a consistent view of what makes them think this… leads to that? 

It’s critically important that they do… otherwise some of your team may be deviating from your program, without even realising it. 

Reach out if you’d like to chat more about theories of change and program logics, or if you’d like to more clearly and explicitly articulate your theories and logics – they really are core to the success of your program.

REFLECTING ON PAST REFLECTIONS

I wrote this post a few years back, when I was officially transitioning from a career working in the mental health space, to a new role focusing more on monitoring, evaluation and learning. As I reflect back on my reflection – that is a little bit meta for this time of day – it strikes me that not only is it all still applicable, but somehow more so. I am so grateful for the years I spent working in the mental health space. The people I worked with, the insights I gained, the stories people shared, the strength I witnessed… have all shaped me into the person I am today. I know I originally penned this post in a moment of gratitude, but somehow, a few years on, I am even more grateful. Apologies to those who remember reading this previously… but it felt right to share again 🙂


After 26 years living, learning and working in the mental health sector – I am moving on. It’s not a case of actively moving away from the mental health sector, but rather actively moving towards something new – but there is much that I will take with me, and with an attitude of gratitude, I decided to take a few moments to reflect on what I’ve learned, how I’ve grown, and who has helped me along the way.

In keeping with my usual penchant for structure – I challenged myself to come up with 26 things – one for each year I’ve spent living, learning and working in the space. These insights are not revolutionary, and will not be news for most people, but there’s something about the process of reflection that resets your mind and prepares you for your next journey – so here are my 26 insights.

1. Everyone has a story – ask people about theirs and make a connection. You might find something in common – shared experiences, shared fears, shared dreams. There’ll be a gem in there somewhere for you – if you’re paying attention.

2. Mental illness and mental health are not interchangeable terms – and using them more appropriately will help to normalise conversations about both. Someone can have a mental illness, and can still be living well, remain connected with friends, work productively and positively engage in their life of choice. They can be mentally healthy in the same way someone is physically healthy – living with their mental illness. It is also true that someone might not have a mental illness, but they have poor mental health. Your mental health can affect your mental illness, and your mental illness can affect your mental health. Everyone, whether living with mental illness or not, can do things to positively affect their own, and others’, mental health – it’s these behaviours and activities that are core to our wellbeing as people.

3. We have a lot to learn from our Indigenous peoples. Whether Indigenous to Australia or other countries, there are common threads that tie Indigenous cultures, and the way people lived, worked and played, together. Their connection to country, their reverence of customs, their reminders of history, their respect of their elders and admiration of their youth, their storytelling as a way to share important life lessons, their honouring of the arts and dance… all these things provide a solid and strong foundation for mental health and wellbeing – regardless of illness. In fact, my experience working with Aboriginal and Torres Strait Islander people certainly suggests that the removal of these customs and connections leads to illness. Let’s learn from one of the oldest continuing cultures in the world and connect to our country, create customs, remember history, respect our elders, admire our youth, tell stories, make art… we’ll all be better for it.

4. Most of us are stronger than we think. The time will never be perfect, and you’ll always doubt yourself… but take the risk. Maybe tell someone first, in case you stumble… but chances are you’ll find yourself on some solid ground… even if there were a few rocky steps along the way.

5. We all know the prevalence of mental illness in this country… but I think we sometimes forget how common it is when we’re engaging with people. Chances are, you’re engaging with someone who is struggling with something themselves, or supporting someone else who is. For the most part, mental illnesses are invisible. Someone can turn up to work, but be struggling with depression; coordinate an event, but be battling anxiety; or be surrounded by friends, but feel incredibly alone. Be kind – remember your behaviours can affect other people’s mental health, which can in turn positively affect their mental illness. Be the one who affects people positively.

6. We don’t understand everything – and we probably never will. Some illnesses just don’t follow the rules. Don’t doubt them, or try and make them fit a category. Remember the Cynefin Framework – sometimes we need to spend time probing, sensing and learning, before responding.

7. Mental illness sucks, and people don’t choose it.

8. Trauma sucks too, and people don’t choose that either. Unfortunately, too often, trauma leads to mental illness.

9. Authenticity and being present count for a lot. Most of us aren’t experts in mental illness – but being real and genuine and present is something we can all be, and is sometimes all someone needs in that moment.

10. Not everyone needs to be an outspoken advocate – you can use your power in other ways. I absolutely respect and admire the role that strong advocates play in challenging archaic practices, standing up for people’s rights, fighting stigma and challenging the systems that perpetuate these things… but for some people, other paths are just as powerful. Throw some money at your favourite cause; volunteer your time to help someone with their cause; be a mentor for others; be kind; be present; listen, learn… the world needs more of all of these things. Find what works for you.

11. “The system is harder to navigate than the illness”. This quote from a Mum of a young person with mental illness still resonates with me. This should never be true. I know the teams I have worked with have definitely worked to improve this situation, but this is still true too often and we all need to do better. Resolving this issue should be the sector’s driving force.

12. People don’t care who funds their care, they just want good people who care to listen and try to help. Unfortunately, our Federal/State/Local government systems aren’t always structured to respond to the person’s needs – but rather are structured based on election commitments, available funding, eligibility criteria and reporting requirements. Things don’t start out this way of course, but in a system stretched to respond, that’s often where we end up. I have seen this changing, but we have a long way to go.

13. Experiencing something doesn’t make you an expert in everything related to that something – it makes you an expert in experiencing that something. Use that experience to teach others, and change things for the better, but not to the detriment of others’ experiences. They are experts in their own something.

14. The more you learn, and the more specialised you become, the more you realise there’s so much more to learn. Be a lifelong learner. Reflect on your experiences and take something from them – whether they’re conversations with people, online courses, relationships with mentors or formalised study – be an active learner and look for the gems.

15. Joy and pain, or happiness and hurt, can coexist. The dual-axis of mental illness and mental health is testament to that. We can be struggling, but still laugh in the moment; we can be in pain, but still look forward to the spider weaving its web each night on our front porch, and be in awe of its patience and persistence; we can know our faults and failings, and have regrets, but still show up and be there for others. We are complex creatures and all those feelings and experiences are legitimate.

16. Believe people when they say “I care”. These aren’t words people use flippantly. Even if they said them a while ago, they’ve come from a place of love, and that doesn’t usually change.

17. Different strategies work for different people – find your reset switch and prime yourself for the change that you know will happen. For me, it’s a steaming hot shower and brushing my teeth; it’s taking the time to water and fertilise my plants; it’s baking something from scratch; it’s washing my dog – there’s some metaphor in all these activities that I apply to myself to allow myself to reset, find my bearings, and start taking small steps in my desired direction. Find your strategies, and use them unapologetically whenever you need.

18. Some people can’t see beyond the tunnel – there’s darkness, pain, failure, nothing in every direction. They’re not making things difficult for themselves, it’s just how it is for them. Kindness, persistence, patience and acknowledgement will all be necessary, along with professional support, and even then, the darkness might still remain. Sometimes the darkness is all they know. Sometimes the darkness is comfortable. Sometimes the darkness is preferred over the alternative. Do your best – your kindness and persistence never go unnoticed.

19. Animals are amazing healers – whether you prefer fur, feathers, scales or shells – they seem to have this innate ability to understand your mood, and be just the friend you need them to be. If you find comfort with your furred, feathered, scaled or shelled friend – make more time for them in your day. It’s not rocket-science – just spend more time doing the things that bring you joy and comfort.

20. Wear the fancy shoes; burn the fragrant candles; use the fancy tea cups; buy the bunch of flowers… don’t save all the fancy stuff for special occasions, make the ordinary extraordinary. Plan it, look forward to it, experience it, and reflect on it – re-live that joy in as many ways as possible.

21. “People start to heal the moment they feel heard”. Not my words, but I definitely believe them to be true.

22. COVID-19 has highlighted our need for connection – it is the essence of our wellbeing. Plan for it in your life, for yourself and for those you love.

23. Notice the people in your life who bring you joy – whether it’s the old man who says hi each morning as you cross paths walking your dogs; or the extraverted guy who makes your coffee; or the driver who motioned you to merge into traffic; or the friendly school-crossing lady who chats cheerily to parents dropping their kids at school every morning. Take notice of them. Take on those good vibes – they were meant for you. Some of us are hard-wired to focus more on the negative aspects of our lives – but if we try hard, we can train ourselves to see the positives. Often a single negative experience can outweigh a bunch of positives – but if we pay attention, and actively look for the positives – they are there. Challenge yourself to notice and acknowledge them.

24. Most of us have some special people in our lives. Whether they make living with your mental illness a little easier, or they’re a key ingredient in your mental health – they’re people who make our lives better. Tell them! They may have their own struggles, and your gratitude might just make their day.

25. “Sometimes I’m the mess. Sometimes I’m the broom. On the hardest days, I have to be both”. There’s so much truth in this. There will be bad days, where your thoughts won’t let you do the things you want to do, or say the things you want to say. You might let people down. You might be the mess. There will be other days where you will be the one supporting others – you’ll be the broom. Being both is tough, but is sometimes necessary – but you are stronger than you think. Find your strategies, reset, clean-up. Repeat if necessary.

26. There are countless people who have shared their time, their experiences, and their wisdom with me. I know I haven’t always been the best student. I haven’t always listened, I haven’t always trusted, and I haven’t always followed through. But I am grateful. In this moment I am grateful – for their patience, for their persistence, for their generosity, and for their grace.

NECESSARY AND SUFFICIENT – LET’S THINK ABOUT THOSE TERMS FOR A MOMENT

We use the words necessary and sufficient almost every day – but they have a specific meaning in evaluation, and play an important role in Impact Evaluation.

According to Dictionary.com:

  • Necessary:  being essential, indispensable or requisite; and
  • Sufficient:  adequate for the purpose, enough.

These absolutely hold true in evaluation nomenclature as well… but let’s take a closer look.

When we undertake an Impact Evaluation, we are looking to verify causality.  We want to know the extent to which the program caused the impacts or outcomes we observed.  The determination of causality is the essential element to all Impact Evaluations, as they not only measure or describe changes, but seek to understand the mechanisms that produced the changes.

This is where the words necessary and sufficient play an important role.

Imagine a scenario where your organisation delivers a skill-building program, and the participants who successfully complete your program have demonstrably improved their skills.  Amazing – that’s the outcome we want!

But, can we assume that the program delivered by your organisation caused the improvement in skills? 

Some members of the team are very confident – ‘yep, our program is great, we’ve received lots of comments from participants that they couldn’t have done it without the program.  It was the only thing that helped’.  Let’s call them Group 1.

Others in the team think that the program definitely had something to do with the observed success, but they think it also had something to do with the confidence-building program the organisation ran last year, and that the two build on each other.  We’ll call them Group 2.

Some others in the team think the program definitely helped people build their skills, but they’re also aware of other programs delivered by other organisations that have also achieved similar outcomes.  Let’s call them Group 3.

Who is correct?  The particular strategies deployed within an Impact Evaluation will help determine this for us, but hopefully you can start to see an important role for the words necessary and sufficient.

  • Group 1 would assert that the program is necessary and sufficient to produce the outcome.  Their program, and only their program, can produce the outcome.
  • Group 2 would assert that the program is necessary, but not sufficient on its own, to cause the outcome.  Paired with the confidence-building program, the two together might be considered the cause of the impact.
  • Group 3 would claim that their program isn’t necessary, but is sufficient to cause the outcome.  It would seem there could be a few programs that could achieve the same results, so whilst their program might be effective, others are too.
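
To make the distinction a little more concrete, here’s a small sketch using made-up observations (the program names and data are invented) showing how each group’s claim could be checked: a cause is necessary if the outcome never occurs without it, and sufficient if the outcome always occurs when it is present.

# Made-up observations: did each person complete our program, did they complete
# another organisation's program, and did we observe the outcome
# (demonstrably improved skills)?
observations = [
    {"our_program": True,  "other_program": False, "outcome": True},
    {"our_program": True,  "other_program": False, "outcome": True},
    {"our_program": False, "other_program": True,  "outcome": True},
]

def necessary(cause: str) -> bool:
    """Necessary: the outcome never occurs without the cause."""
    return all(obs[cause] for obs in observations if obs["outcome"])

def sufficient(cause: str) -> bool:
    """Sufficient: whenever the cause is present, the outcome occurs."""
    return all(obs["outcome"] for obs in observations if obs[cause])

# With this invented data, our program comes out sufficient but not necessary
# (another program also produced the outcome), which is Group 3's claim.
print("necessary:", necessary("our_program"), "sufficient:", sufficient("our_program"))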

Patricia Rogers has developed a simple graphic depicting the different types of causality – sole causal, joint causal and multiple causal attribution. 

Sole causal attribution is pretty rare, and wouldn’t usually be the model we would propose is at play.  But a joint causal or multiple causal model can usually explain causality. 

Do you think about the terms necessary and sufficient a little differently now? Whilst we use them almost every day, when talking causality, they are very carefully and purposefully selected words – they really do mean what they mean.

CLARITY OF PURPOSE IS SO IMPORTANT

Everything always comes back to purpose.

Have you been part of evaluations, where 6-12 months in, you’re starting to uncover some really important learnings… but you can’t quite recall exactly what you set out to explore when you started, and now you’re overwhelmed with choices about what to do with what you’ve learned… and sometimes you don’t end up doing anything with the learnings?

Or perhaps the opposite, where 6-12 months in, the learnings that are starting to emerge are really not meeting your expectations, and you’re wondering if this whole evaluation thing was a waste of time and resources?

Whilst there are many types of evaluations, one evaluation cannot evaluate everything.  A good evaluation is purposely designed to answer the specific questions of the intended users, to ensure it can be utilised for its intended use.  It’s critically important to ensure the evaluation, and all those involved in it, remain clear about its intended use by intended users.

A simple taxonomy that I find helpful is one proposed by Huey T. Chen (originally presented in 1996, but later adapted in his 2015 Practical Program Evaluation).

Chen’s framework acknowledges that evaluations tend to have two main purposes or functions – a constructive function, with a view to making improvements to a program; and a conclusive function, where an overall judgement of the program’s merit is formed.  He also noted that evaluations can be split across program stages – the process phase, where the focus is on implementation; and the outcome phase, where the focus is on the impact the program has had.

The four domains are shown below:

  • Constructive process evaluation – provides information about the relative strengths and weaknesses of the program’s implementation, with the purpose of program improvement.
  • Conclusive process evaluation – judges the success of program implementation, eg, whether the target population was reached, whether the program has been accepted or embedded as BAU.
  • Constructive outcome evaluation – explores various program elements in an effort to understand if and how they are contributing to outcomes, with the purpose of program improvement.
  • Conclusive outcome evaluation – provides an overall assessment of the merit or worth of the program.
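
If your team likes a quick reference, the same two-by-two can be captured as a simple lookup – a trivial sketch, purely illustrative, with the descriptions paraphrased from the list above.

# Chen's two functions (constructive, conclusive) crossed with the two
# program stages (process, outcome), paraphrased from the list above.
CHEN_DOMAINS = {
    ("constructive", "process"): "strengths and weaknesses of implementation, for improvement",
    ("conclusive", "process"): "judgement of implementation success (reach, acceptance, BAU)",
    ("constructive", "outcome"): "which program elements contribute to outcomes, for improvement",
    ("conclusive", "outcome"): "overall assessment of the program's merit or worth",
}

def describe(function: str, stage: str) -> str:
    """Return a one-line description of the evaluation domain."""
    return CHEN_DOMAINS[(function.lower(), stage.lower())]

print(describe("conclusive", "outcome"))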

This simple matrix can serve to remind us of the purpose of the particular evaluation work we are doing at any given time.  It is simple, and there are of course nuances, where you may have an evaluation that spans neighbouring domains, or transitions from one domain to another, but despite its simplicity, I have found it a useful tool to remind me about the focus of the current piece of work or line of enquiry.

ARE WE TRACKING THE RIGHT INDICATORS?

Not all indicators are created equal, and if you’ve ended up with less-than-optimal ones – they’ll be a constant thorn in your side. 

Indicators tell us the state of something – but there are two critical elements to that statement that need to be clearly defined.  The ‘us’ and the ‘something’.

Let’s start with the ‘something’.

Good indicators should tell us the extent to which we are achieving our objective or goal – this is the ‘something’.  Good indicators are derived from selecting the best evidence that would indicate that we are achieving our objective or goal.  Let’s use a pretty common objective as an example – Improved client satisfaction.  What have been some indicators your organisation has tracked to give you an indication that this objective is on track or heading in the right direction?  How about numbers of complaints or did-not-attend rates or program completion rates?  Do these give you the best evidence of improved client satisfaction?  Whilst they might be loosely linked, they do not give you the best evidence of the extent to which you are improving client satisfaction. 

  • Tracking numbers of complaints is flawed, as is tracking any spontaneously provided feedback – positive or negative – because it is biased.  Only a subset of the population are motivated to spontaneously provide feedback about their experience.  It is not a true representation of client satisfaction for people who access your program… therefore, not the best evidence.
  • Monitoring did-not-attend rates might give you some indication of client satisfaction, but people may not attend for a host of other reasons, which may speak more to the appropriateness or accessibility of your program, and not so much about its quality, or how satisfied clients are with it.  Maybe you run a weekly group program, and part way through the program, you make a change to the start time from 10:00am to 9:00am, and subsequent to this, two of the 10 participants do not attend for two weeks straight.  Their non-attendance could be due to poor satisfaction, or it could just as well be due to accessibility issues, as the earlier time clashes with school drop-off.  The converse could also be true, where improved attendance may not necessarily indicate improved satisfaction, but rather could be due to other circumstances that have changed in the person’s life.
  • Using program completion rates to give an indication of satisfaction is similarly flawed.  It could just as well give you an indication of accessibility or appropriateness – not necessarily satisfaction.

Good indicators need to convince us that we are achieving our objective, by presenting evidence that is as indisputable as possible. It can’t just be loosely aligned – and this is where the ‘us’ comes in.

For indicators to convince ‘us’, we need to be the ones determining the best evidence.  The best evidence for improved client satisfaction, or improved outcomes, or improved performance in any area needs to be determined by the people who will use the indicator to inform a judgement or decision, and obviously should take into consideration the person most impacted by the decision.

Let’s use another common example of improved client outcomes.  The evidence that will convince a funder that clients are experiencing improved outcomes might be changes in a standard outcome measure (eg the widely used K10) whereas the evidence that would convince a front-line service provider, or more importantly, the client, might be different. We need to imagine what success looks like – what would we see; what would we hear; how would people be different; what would have changed? These tangible things that align with our definition of achieving the objective, should be considered as evidence.

The indicators we commit to tracking must be bespoke and fit-for-purpose.  It is a burdensome task collecting, tracking and reporting on indicators – so if they’re not the right ones, it’s a waste of precious time and resources.

DO WE REALLY TARGET OUR TARGET POPULATION?

When designing programs, we usually have a specific target population in mind.  In fact it would be highly unusual if we didn’t.  The target population, after all, is the group of people for whom the program is purposely designed.  Entry into the program is often restricted by certain criteria that help to differentiate between people who are, and are not, the target population. 

Once our program is operational, we actively recruit the target population into our programs, we advise our referrers who the target population is, and is not, and we try and be clear and transparent about this in all our comms.  We may have an alternative pathway for people who are referred to our programs who are not the target population.

Examples of target populations could be:

  • adults experiencing mild psychological distress;
  • young people aged 12-25 experiencing mild to moderate mental illness;
  • adults experiencing two or more chronic illnesses;
  • adults with a BMI of 30 or over;
  • young people who identify as part of the LGBTIQ+ community;
  • people from Culturally and Linguistically Diverse backgrounds;
  • young people disengaged from schooling;
  • people who smoke and reside in a particular geographic area…

Whilst all the above populations are different, it’s helpful to think of target populations as groups defined by a common need profile – people within a target population are connected by a common set of needs.

There may be a range of needs, with people at the lower and upper end of a need profile still being considered part of the target population, but it’s important to acknowledge that there are boundaries to our target population.  These boundaries are critically important, because we have designed our program to target a particular set of needs. 

No matter how amazing your program is, it can never meet everyone’s needs, which is what’s great about the diverse market of health and social service providers we have in Australia – we have a variety of services and service providers, with a variety of practitioner types, skilled in a variety of areas, located in a variety of settings.  Each individual program cannot appropriately and effectively serve every need profile, by design.

So, when we end up with people in our programs who don’t improve as we expect they should – what do we do?  A lot of the time, when we look closely at where programs have failed people, we find that their need profile is not within the range of the target population.  They are either beyond the lower or upper limits.  Maybe their needs were beyond the lower limits of our target population, and therefore they didn’t really need our program, and didn’t find value in it, and therefore didn’t genuinely commit… or maybe their needs were very high, and beyond the upper limit of our target population, and the program or service wasn’t comprehensive, intensive or frequent enough to genuinely meet their needs.

There are many reasons why people beyond the range of our target populations end up enrolled in our programs, but do we know who they are, and do we take them into consideration when we’re assessing our program’s performance?  When you look at the effectiveness of your program, or the appropriateness or quality of your program as determined by those who use it – do you think it would make a difference if you split your sample into two groups: those in the target population (who have a need profile that the program was designed to support), vs those outside the target population?
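
As a very rough sketch of what that split could look like in practice – the data, column names and need-profile boundaries below are entirely made up – you might simply compare outcome change for the two groups side by side.

import pandas as pd

# Invented data: a need-profile score and an outcome change for each participant.
participants = pd.DataFrame({
    "need_score": [3, 5, 6, 9, 4, 10, 2, 7],
    "outcome_change": [0.3, 2.0, 2.5, 0.5, 1.8, 0.2, 0.4, 2.2],
})

# Hypothetical boundaries of the target population's need profile.
LOWER, UPPER = 4, 8
participants["in_target"] = participants["need_score"].between(LOWER, UPPER)

# Average outcome change for people inside vs outside the target population.
print(participants.groupby("in_target")["outcome_change"].agg(["count", "mean"]))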

It’s reasonable to expect, if you have a program that’s well designed, that effectiveness, appropriateness, and quality would be higher for the target population.  If you did discover that your program really wasn’t meeting the needs of people outside your target population – what would you change?  Are we doing people a disservice by enrolling them in our program if the evidence suggests it’s not going to be helpful?  If you discover that your program is actually reasonably effective for people with a more complex need profile than it was originally designed for – perhaps you could expand your scope.  This evidence could be used to lobby for additional funding.

Are you targeting your target population?

ARE WE NATURAL EVALUATORS?

Think about the last time you bought a car, chose where to live or decided which breakfast cereal to throw in the trolley (or in the online cart if you’re isolating like we have been for the past few days) … without overtly realising it, you quite possibly followed the logic of evaluation.

For those interested, you can read more about Michael Scriven’s work on the logic of evaluation here, but to paraphrase, evaluative thinking has four key steps.

  1. Establishing criteria of merit
  2. Constructing standards
  3. Assessing performance
  4. Evaluative judgement

Let’s apply this to buying your next car. 

Establishing criteria of merit

What criteria are important to consider in buying a car?  Maybe we have a firm budget, so any car we’re going to consider will need to perform well against that criterion.  Maybe we’re climate conscious, so we will place more value on a car that has a better emission rating.  Maybe our family has just expanded, so we need to find a car that will accommodate a minimum number of people.  Maybe we don’t like white cars, maybe we’re looking for a car with low kms, maybe we’re only interested in cars made by a certain company, because we believe them to be a safe and trustworthy company.

The list of possible meritorious criteria is extensive, and quite dependent on who needs to make the evaluative judgement.  Two people shopping for a car at the same time may have quite different criteria.

Constructing standards

This is where we define how well any potential car needs to perform against the various criteria we identified earlier.  If our budget is a firm $10,000, then any car that comes in over budget is not going to score well against that criterion.  Some of us may even write a list of our must haves and nice to haves, and include our standards in that list.  Maybe our next car must be within our budget, must be able to carry five people, must be no older than 10 years old.  We’d also, if possible, like to find a purple car, that’s located within 50km of where we live, so we don’t have to travel too far to collect it, and has had fewer than two previous owners.  We now have a list of criteria that are important to us, and we have set standards against each of those criteria.

Assessing performance

Now’s the point at which we assess how any contenders stack up.  Regardless of whether we’re scrolling through an online marketplace for cars, browsing various websites, or physically walking through car yards – we are assessing how each car we review compares to our defined criteria.  Maybe we find a purple five-seater car within budget, but it’s 200km away.  Maybe we find a blue five-seater car within budget and within 50km, but it’s 15 years old.  In the same way that our programs, policies, products and people don’t perform ideally against all criteria – our potential cars are the same.  We will take note in our heads, or some of us will actually take notes in a notebook or prepare a spreadsheet, where we track how each car performs.  We may find that one or two of the criteria we determined were important to us were simply unable to be assessed because data wasn’t available.  We should learn from this, and perhaps rethink our criteria and/or standards for next time.

Evaluative judgement

Now we need to make our evaluative judgement, which will inform our decision.  Which car performed the best?  Often not all criteria are equally important, and we apply different weightings.  Maybe we determine that blue is pretty close to purple, so despite the car not being the right colour, it’s still a contender.  We need to determine how we will synthesise all we have learned from our evaluation activities, and make an evaluative judgement. (For those interested in more about how to integrate or synthesise multiple criteria, Jane Davidson has a nice chapter on Synthesis Methodology in her book Evaluation Methodology Basics).
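
For those who like to see the four steps laid end to end, here’s a small sketch applied to the car example – the cars, criteria, standards and weights are all invented, and a real decision would of course involve more nuance than a simple weighted sum.

# Steps 1 and 2: criteria of merit and standards, with rough weights
# reflecting "must haves" versus "nice to haves".
criteria = {
    "within_budget": 3.0,
    "seats_five": 3.0,
    "under_10_years_old": 2.0,
    "within_50km": 1.0,
    "purple": 0.5,
}

# Step 3: assess how each contender performs against the criteria.
cars = {
    "purple five-seater, 200km away": {
        "within_budget": True, "seats_five": True, "under_10_years_old": True,
        "within_50km": False, "purple": True},
    "blue five-seater, 15 years old": {
        "within_budget": True, "seats_five": True, "under_10_years_old": False,
        "within_50km": True, "purple": False},
}

def score(car: dict) -> float:
    """Step 4: a simple weighted synthesis - sum the weights of criteria met."""
    return sum(weight for name, weight in criteria.items() if car[name])

# Rank the contenders to inform the evaluative judgement.
for name, attributes in sorted(cars.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(attributes):.1f}")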

We often work through these steps fairly quickly in our heads, especially if the decision is about what cereal to buy, or what to have for dinner.  Are all the kids going to be home? Does one child have a particular allergy or dietary preference? Is the budget tighter this week because we needed to spend extra on fuel?  We might not go to the effort of making a spreadsheet, or getting evaluation support to make these decisions… but we do employ some logical evaluative thinking more often than we realise.

NEW EVALUATION RESOURCE AVAILABLE

Late last year (December 2021), the Commonwealth Government released a swanky new evaluation resource, complete with a bunch of useful information, resources and templates to help guide evaluation planning, implementation and use.

“The guide is designed to help Commonwealth entities and companies meet existing requirements. It provides a principles-based approach for the conduct of evaluations across the Commonwealth, and can be used to help plan how government programs and activities will be evaluated across the policy cycle in line with better practice approaches.”

The Guide comprises the Commonwealth’s Evaluation Policy, together with an Evaluation Toolkit, inclusive of a swag of templates, completed exemplars, case studies and other frameworks and tools to support evaluation activities.

Check it out – maybe there’s something that might be of use for your next evaluation activity.

WHAT’S NOT WORKING – THE INTERVENTION, OR IMPLEMENTATION OF IT?

I watched a really informative webinar on applying implementation science to evaluation a few years back that really struck a chord with me.  The facilitator, Dr Jessica Hateley-Brown, walked participants through the foundations and key concepts of implementation science, including the Consolidated Framework for Implementation Research (CFIR), which, if you get a chance, is definitely worth digging into a little more… but it was a little snippet on the differentiation between intervention failure and implementation failure that blew my mind.  In hindsight, it’s still a little embarrassing that I hadn’t understood it so clearly prior to this, but I guess sometimes we can’t see the forest for the trees.

Having spent many years delivering services as a front-line clinician, and then managing and commissioning services from a bit further afar, hearing the obvious difference between intervention failure and implementation failure explained so plainly was like being given a language that I could finally use to describe what I hadn’t been able to put words to.  I had lived and breathed the difference between intervention failure and implementation failure so many times – but I’d never thought about it so simply.

The concept of implementation outcomes – which are the result of deliberate actions and strategies to implement our interventions – was not new to me, and won’t be new to most of you.  We often collect data about the implementation of our services… but we don’t review it and use it as much as we should.  Implementation outcomes are the things that give us an indication of the quality of the implementation of our program or intervention.  It’s things like reach, acceptability, fidelity, cost and sustainability.  These things don’t tell us anything about the service outcomes or client outcomes.  They are the outcomes of successful implementation – if we’re lucky – and therefore give us an indication of the quality of the implementation.  Hopefully the quality is high, which lays the foundations for achieving our desired service outcomes and ultimately the client outcomes.

Service outcomes give us an indication of the quality of the service.  This might include things like safety, person-centredness, timeliness, and satisfaction.  These things don’t tell us about the quality of implementation, nor about any outcomes experienced by the client. 

And finally, client outcomes are the ultimate outcomes we are hoping clients experience – and might look like changes in knowledge, skillset, confidence, wellbeing, functioning or health status.

The outcomes of implementation are distinct from service outcomes, which are distinct from client outcomes.  Obvious yet mind-blowing at the same time!! 
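
To keep the distinction front of mind, here’s a trivial sketch that simply lists the three tiers separately, using the example measures mentioned above – the grouping, not the code, is the point.

# The three tiers of outcomes described above, kept deliberately separate.
outcome_tiers = {
    "implementation_outcomes": ["reach", "acceptability", "fidelity", "cost", "sustainability"],
    "service_outcomes": ["safety", "person-centredness", "timeliness", "satisfaction"],
    "client_outcomes": ["knowledge", "skills", "confidence", "wellbeing", "functioning", "health status"],
}

for tier, measures in outcome_tiers.items():
    print(f"{tier}: {', '.join(measures)}")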

As front-line staff working with people and implementing programs every day would well be aware, program implementation is dynamic.  Of course there’s a service or operating model guiding practice, but minor adjustments are made often to accommodate people, or meet people where they’re at.  We may learn that some staff are better suited to certain tasks, or some clients are more engaged on certain days of the week.  Noticing these things, and making adjustments, can have significant effects on the reach or acceptability of our programs.  It’s an early step towards the client outcomes we are hoping eventuate.

But sometimes… programs don’t work.  The people we are working with don’t experience the outcomes both they and the provider were hoping for.  Is it the intervention that failed, or did we fail in the implementation of it?

Maybe staff weren’t trained adequately to deliver the program; maybe the program was new and never fully embraced by the organisation; maybe the workplace had poor communication channels; maybe the program was seen as an add-on to existing work, and attitudes towards it were negative.  All of these things will likely affect implementation quality.  In some situations, it might be that the program or intervention never really got a chance, and it was deemed ineffective and phased out… when in fact it was poor implementation, or implementation failure, that caused its demise.

When thinking about the programs you deliver, support or manage – can you articulate the outcomes of successful implementation, as distinct from service outcomes and client outcomes?  It might be a useful task to undertake as a team.  Of course, some programs or interventions are flawed in their design… but in many cases, failure to achieve client outcomes is not always due to intervention failure… but could be partially, or fully, the result of implementation failure.

WHAT’S MORE IMPORTANT – FINANCIAL STABILITY OR POSITIVE IMPACT?

I guess depending on your upbringing, your educational background, or the role you play in an organisation – you may pick one option more easily over the other.

Obviously I feel more strongly about the impact you or your organisation has.  What does it matter if you’re in a good position financially if you’re not achieving great things? 

But, the converse is also not ideal – achieving great things, but with the very real likelihood that you’ll lose great staff because of poor job security, or needing to let people go because you simply cannot fit them into the budget.  Before too long, this starts impacting the quality and reach of your programs… and then all of a sudden, you’re not achieving great things anymore.  You really can’t have one without the other.

Unsurprisingly, the title of this post is purposefully misleading, as both these areas of your business are critically important and deserve dedicated attention to ensure their success… but it struck me recently that the foundations for both areas are not the same. It’s much easier to reach a shared understanding, and therefore have meaningful conversations about our financial position, than it is for us to reach a shared understanding about our impact.

Think of regular reporting to Boards as an example.  It’s pretty commonplace to have a dedicated section of the Board report that details the organisation’s financial position.  Skilled staff within organisations prepare complex reports which speak to the solvency, liquidity, and cashflow of the organisation, and Boards usually have a director or two with similar skillsets who speak the same language.  When complex tables, charts and ratios are presented to the Board – they know what it means.  They can quickly form a judgement about whether things are good, or whether there is cause for concern.  The transition from the ‘what?’ to the ‘so what?’ happens pretty seamlessly.

For some reason, the same situation doesn’t exist for our impact reports.  I absolutely acknowledge that some organisations prepare meaningful impact reports for their Executive and Boards, but lots don’t, and of those that do, there doesn’t seem to be the same common language spoken.  In financial nomenclature, solvency and liquidity ratios have a specific and objective meaning.  The same can’t be said about effectiveness, efficiency, impact and reach.  It also certainly isn’t as commonplace for there to be Board members with particular skillsets in the performance monitoring or evaluation space.  This generally results in fewer questions being asked about performance in terms of impact, and a less than concrete awareness of how the organisation is performing when it comes to the impact on the people and communities they are funded to serve.  The translation of the ‘what?’ to the ‘so what?’ is not as easy.

The nebulous space that performance assessment occupies is often complex and subjective – but it doesn’t need to be. 

Organisations can certainly progress work to agree on what success looks like regarding the impact of their various programs.  They don’t need to bite off more than they can chew at their first bite either – it can be a staged process.

My experience has been that Board Directors enjoy the increased awareness they have about performance, and are genuinely excited by the prospect of learning more about the great things that are being achieved.  The same could be said about most people really – in that most of us enjoy knowing and learning more about something than we did previously.

So… how can we remedy this?  What can we do to make conversations about performance more mainstream?  How can we normalise a language that people understand and aren’t afraid to use?  How do we create an environment where the skilled staff who understand and champion better performance in both these areas of our businesses can flourish, and can increase awareness and excitement in others?