
ARE WE TRACKING THE RIGHT INDICATORS?

Not all indicators are created equal, and if you’ve ended up with less-than-optimal ones, they’ll be a constant thorn in your side.

Indicators tell us the state of something – but there are two critical elements in that statement that need to be clearly defined: the ‘us’ and the ‘something’.

Let’s start with the ‘something’.

Good indicators should tell us the extent to which we are achieving our objective or goal – this is the ‘something’.  Good indicators are derived by selecting the best evidence that we are achieving that objective or goal.  Let’s use a pretty common objective as an example – improved client satisfaction.  What indicators has your organisation tracked to show that this objective is on track or heading in the right direction?  How about the number of complaints, did-not-attend rates or program completion rates?  Do these give you the best evidence of improved client satisfaction?  Whilst they might be loosely linked, they do not give you the best evidence of the extent to which you are improving client satisfaction.

  • Tracking the number of complaints is flawed, as is tracking any spontaneously provided feedback – positive or negative – because it is biased.  Only a subset of the population is motivated to spontaneously provide feedback about their experience.  It is not a true representation of client satisfaction for the people who access your program… therefore, not the best evidence.
  • Monitoring did-not-attend rates might give you some indication of client satisfaction, but people may not attend for a host of other reasons, which may speak more to the appropriateness or accessibility of your program than to its quality, or how satisfied clients are with it.  Maybe you run a weekly group program and, part way through, you change the start time from 10:00am to 9:00am, after which two of the 10 participants do not attend for two weeks straight.  Their non-attendance could be due to poor satisfaction, or it could just as well be due to accessibility issues, as the earlier time clashes with school drop-off.  The converse could also be true: improved attendance may not necessarily indicate improved satisfaction, but could instead be due to other circumstances that have changed in the person’s life.
  • Using program completion rates to give an indication of satisfaction is similarly flawed.  It could just as well give you an indication of accessibility or appropriateness – not necessarily satisfaction.

Good indicators need to convince us that we are achieving our objective by presenting evidence that is as indisputable as possible.  The evidence can’t just be loosely aligned – and this is where the ‘us’ comes in.

For indicators to convince ‘us’, we need to be the ones determining the best evidence.  The best evidence for improved client satisfaction, improved outcomes or improved performance in any area needs to be determined by the people who will use the indicator to inform a judgement or decision – and it should obviously take into consideration the person most impacted by that decision.

Let’s use another common example – improved client outcomes.  The evidence that will convince a funder that clients are experiencing improved outcomes might be changes in a standard outcome measure (e.g. the widely used K10), whereas the evidence that would convince a front-line service provider, or more importantly the client, might be different.  We need to imagine what success looks like – what would we see; what would we hear; how would people be different; what would have changed?  These tangible things, which align with our definition of achieving the objective, should be considered as evidence.

The indicators we commit to tracking must be bespoke and fit-for-purpose.  Collecting, tracking and reporting on indicators is a burdensome task – so if they’re not the right ones, it’s a waste of precious time and resources.

WHAT’S NOT WORKING – THE INTERVENTION, OR IMPLEMENTATION OF IT?

I watched a really informative webinar on applying implementation science to evaluation a few years back, and it struck a chord with me.  The facilitator, Dr Jessica Hateley-Brown, walked participants through the foundations and key concepts of implementation science, including the Consolidated Framework for Implementation Research (CFIR), which is well worth digging into a little more if you get the chance… but it was a little snippet on the differentiation between intervention failure and implementation failure that blew my mind.  In hindsight, it’s still a little embarrassing that I hadn’t understood it so clearly prior to this, but I guess sometimes we can’t see the forest for the trees.

Having spent many years delivering services as a front-line clinician, and then managing and commissioning services from a bit further afar, I found that hearing the difference between intervention failure and implementation failure spelled out so plainly gave me a language I could finally use to explain what I hadn’t been able to put into words.  I had lived and breathed the difference between intervention failure and implementation failure so many times – but I’d never thought about it so simply.

The concept of implementation outcomes – which are the result of deliberate actions and strategies to implement our interventions – was not new to me, and won’t be new to most of you.  We often collect data about the implementation of our services… but we don’t review it and use it as much as we should.  Implementation outcomes are the things that give us an indication of the quality of the implementation of our program or intervention.  It’s things like reach, acceptability, fidelity, cost and sustainability.  These things don’t tell us anything about the service outcomes or client outcomes – they are the outcomes of successful implementation, if we’re lucky.  Hopefully the quality of implementation is high, which lays the foundations for achieving our desired service outcomes and, ultimately, the client outcomes.

Service outcomes give us an indication of the quality of the service.  This might include things like safety, person-centredness, timeliness, and satisfaction.  These things don’t tell us about the quality of implementation, nor about any outcomes experienced by the client. 

And finally, client outcomes are the ultimate outcomes we are hoping clients experience – and might look like changes in knowledge, skillset, confidence, wellbeing, functioning or health status.

The outcomes of implementation are distinct from service outcomes, which are distinct from client outcomes.  Obvious yet mind-blowing at the same time!! 

As front-line staff working with people and implementing programs every day would be well aware, program implementation is dynamic.  Of course there’s a service or operating model guiding practice, but minor adjustments are often made to accommodate people, or to meet people where they’re at.  We may learn that some staff are better suited to certain tasks, or that some clients are more engaged on certain days of the week.  Noticing these things, and making adjustments, can have significant effects on the reach or acceptability of our programs.  It’s an early step towards the client outcomes we are hoping will eventuate.

But sometimes… programs don’t work.  The people we are working with don’t experience the outcomes that both they and the provider were hoping for.  Is it the intervention that failed, or did we fail in the implementation of it?

Maybe staff weren’t trained adequately to deliver the program; maybe the program was new and never fully embraced by the organisation; maybe the workplace had poor communication channels; maybe the program was seen as an add-on to existing work, and attitudes towards it were negative.  All of these things will likely affect implementation quality.  In some situations, the program or intervention may never really have got a chance – it was deemed ineffective and phased out… when in fact it was poor implementation, or implementation failure, that caused its demise.

When thinking about the programs you deliver, support or manage – can you articulate the outcomes of successful implementation, as distinct from service outcomes and client outcomes?  It might be a useful task to undertake as a team.  Of course, some programs or interventions are flawed in their design… but failure to achieve client outcomes is not always due to intervention failure – it can be partially, or fully, the result of implementation failure.