ARE WE TRACKING THE RIGHT INDICATORS?
Not all indicators are created equal, and if you’ve ended up with less-than-optimal ones, they’ll be a constant thorn in your side.
Indicators tell us the state of something – but there are two critical elements in that statement that need to be clearly defined: the ‘us’ and the ‘something’.
Let’s start with the ‘something’.
Good indicators tell us the extent to which we are achieving our objective or goal – this is the ‘something’. They are derived by selecting the best evidence that the objective is being achieved. Let’s use a pretty common objective as an example – Improved client satisfaction. What indicators has your organisation tracked to show that this objective is on track or heading in the right direction? How about numbers of complaints, did-not-attend rates or program completion rates? Do these give you the best evidence of improved client satisfaction? Whilst they might be loosely linked, they do not give you the best evidence of the extent to which you are improving client satisfaction.
- Tracking numbers of complaints is flawed – as is tracking any spontaneously provided feedback, positive or negative – because it is biased. Only a subset of the population is motivated to spontaneously provide feedback about their experience, so complaint counts are not a true representation of client satisfaction across everyone who accesses your program… therefore, not the best evidence. (The short sketch after this list illustrates how this bias can distort the picture.)
- Monitoring did-not-attend rates might give you some indication of client satisfaction, but people may not attend for a host of other reasons – reasons that may speak more to the appropriateness or accessibility of your program than to its quality, or how satisfied clients are with it. Say you run a weekly group program and, partway through, you move the start time from 10:00am to 9:00am, after which two of the 10 participants do not attend for two weeks straight. Their non-attendance could be due to poor satisfaction, or it could just as well be due to accessibility issues – the earlier time clashes with school drop-off. The converse could also be true: improved attendance may not indicate improved satisfaction, but rather a change in other circumstances in the person’s life.
- Using program completion rates as an indicator of satisfaction is similarly flawed. Completion rates could just as well reflect accessibility or appropriateness – not necessarily satisfaction.
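To make the bias point in the first bullet concrete, here is a minimal, hypothetical sketch in plain Python. Every number in it – the population size, the satisfaction distribution, and the chances that a client volunteers feedback – is an illustrative assumption, not data from any real program. It simply shows how an average built only from spontaneous feedback can drift away from the true population average.

```python
import random

random.seed(42)

# Hypothetical population: 1,000 clients with "true" satisfaction
# scores on a 1-5 scale (illustrative numbers only, not real data).
population = [min(5.0, max(1.0, random.gauss(3.8, 0.8))) for _ in range(1000)]

# Assumption: dissatisfied clients (score below 2.5) are far more
# motivated to speak up than satisfied ones - a 60% vs 5% chance of
# volunteering feedback. Both probabilities are made up.
def volunteers_feedback(score: float) -> bool:
    chance = 0.60 if score < 2.5 else 0.05
    return random.random() < chance

volunteered = [s for s in population if volunteers_feedback(s)]

true_mean = sum(population) / len(population)
feedback_mean = sum(volunteered) / len(volunteered)

print(f"True average satisfaction:          {true_mean:.2f}")
print(f"Average among volunteered feedback: {feedback_mean:.2f}")
print(f"Share of clients who spoke up:      {len(volunteered) / len(population):.1%}")
```

Under these made-up assumptions, the small minority who speak up paint a noticeably gloomier picture than the population as a whole – the same distortion that makes complaint counts weak evidence of satisfaction.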
Good indicators need to convince us that we are achieving our objective, by presenting evidence that is as indisputable as possible. The evidence can’t be just loosely aligned – and this is where the ‘us’ comes in.
For indicators to convince ‘us’, we need to be the ones determining the best evidence. The best evidence of improved client satisfaction, improved outcomes, or improved performance in any area needs to be determined by the people who will use the indicator to inform a judgement or decision – and it should, of course, take into consideration the people most impacted by that decision.
Let’s use another common example – improved client outcomes. The evidence that will convince a funder that clients are experiencing improved outcomes might be changes in a standard outcome measure (eg the widely used K10), whereas the evidence that would convince a front-line service provider – or, more importantly, the client – might be different. We need to imagine what success looks like: what would we see; what would we hear; how would people be different; what would have changed? These tangible things, which align with our definition of achieving the objective, should be considered as evidence.
The indicators we commit to tracking must be bespoke and fit-for-purpose. Collecting, tracking and reporting on indicators is a burdensome task – so if they’re not the right ones, it’s a waste of precious time and resources.