
DO WE REALLY TARGET OUR TARGET POPULATION?

When designing programs, we usually have a specific target population in mind.  In fact, it would be highly unusual if we didn't.  The target population, after all, is the group of people for whom the program is purposely designed.  Entry into the program is often restricted by certain criteria that help to differentiate between people who are, and are not, part of the target population.

Once our program is operational, we actively recruit the target population into our programs, we advise our referrers who is, and is not, in the target population, and we try to be clear and transparent about this in all our comms.  We may have an alternative pathway for people who are referred to our programs but are not part of the target population.

Examples of target populations could be:

  • adults experiencing mild psychological distress;
  • young people aged 12-25 experiencing mild to moderate mental illness;
  • adults experiencing two or more chronic illnesses;
  • adults with a BMI of 30 or over;
  • young people who identify as part of the LGBTIQ+ community;
  • people from Culturally and Linguistically Diverse backgrounds;
  • young people disengaged from schooling;
  • people who smoke and reside in a particular geographic area…

Whilst all the above populations are different, it’s helpful to think of target populations as groups defined by a common need profile – people within a target population are connected by a common set of needs.

There may be a range of needs, with people at the lower and upper ends of the need profile still considered part of the target population, but it's important to acknowledge that there are boundaries to our target population.  These boundaries are critically important, because we have designed our program to target a particular set of needs.

No matter how amazing your program is, it can never meet everyone's needs – which is what's great about the diverse market of health and social service providers we have in Australia.  We have a variety of services and service providers, with a variety of practitioner types, skilled in a variety of areas, located in a variety of settings.  By design, no individual program can appropriately and effectively serve every need profile.

So, when we end up with people in our programs who don't improve as we expect they should – what do we do?  A lot of the time, when we look closely at where programs have failed people, we find that their need profile is not within the range of the target population.  They are either beyond the lower or upper limits.  Maybe their needs were below the lower limit of our target population, so they didn't really need our program, didn't find value in it, and therefore didn't genuinely commit… or maybe their needs were very high, beyond the upper limit of our target population, and the program or service wasn't comprehensive, intensive or frequent enough to genuinely meet them.

There are many reasons why people beyond the range of our target populations end up enrolled in our programs, but do we know who they are, and do we take them into consideration when we're assessing our program's performance?  When you look at the effectiveness of your program, or the appropriateness or quality of your program as determined by those who use it – do you think it would make a difference if you split your sample into two groups: those in the target population (who have a need profile that the program was designed to support), versus those outside the target population?
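As a rough sketch of what that split might look like in practice – assuming you have a simple table of client records with a flag marking whether each person's need profile sits within the target range, plus an outcome measure and a satisfaction rating (all column names and figures below are purely illustrative):

```python
import pandas as pd

# Hypothetical client records - every value here is made up for illustration.
# 'in_target' flags whether the person's need profile sat within the range
# the program was designed for; 'outcome_change' is the pre-to-post change
# on whatever outcome measure the program uses.
clients = pd.DataFrame({
    "client_id": [1, 2, 3, 4, 5, 6],
    "in_target": [True, True, False, True, False, True],
    "outcome_change": [4.0, 3.5, 0.5, 5.0, -1.0, 2.5],
    "satisfaction": [8, 9, 5, 9, 4, 7],
})

# Compare the two groups on the same measures.
comparison = clients.groupby("in_target")[["outcome_change", "satisfaction"]].mean()
print(comparison)
```

Even a simple comparison like this can show whether the people your program was designed for are experiencing better results than those who fall outside that need profile.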

It’s reasonable to expect, if you have a program that’s well designed, that effectiveness, appropriateness, and quality would be higher for the target population.  If you did discover that your program really wasn’t meeting the needs of people outside your target population – what would you change?  Are we doing people a disservice by enrolling them in our program if the evidence suggests it’s not going to be helpful?  If you discover that your program is actually reasonably effective for people with a more complex need profile than it was originally designed for – perhaps you could expand your scope.  This evidence could be used to lobby for additional funding.

Are you targeting your target population?

WHAT’S NOT WORKING – THE INTERVENTION, OR IMPLEMENTATION OF IT?

I watched a really informative webinar on applying implementation science to evaluation a few years back that struck a chord with me.  The facilitator, Dr Jessica Hateley-Brown, walked participants through the foundations and key concepts of implementation science, including the Consolidated Framework for Implementation Research (CFIR), which, if you get the chance, is definitely worth digging into a little more… but it was a little snippet on the differentiation between intervention failure and implementation failure that blew my mind.  In hindsight, it's still a little embarrassing that I hadn't understood it so clearly prior to this, but I guess sometimes we can't see the forest for the trees.

Having spent many years delivering services as a front-line clinician, and then managing and commissioning services from a bit further afar, hearing the difference between intervention failure and implementation failure explained so plainly gave me a language I could finally use for something I'd never been able to put into words.  I had lived and breathed the difference between intervention failure and implementation failure so many times – but I'd never thought about it so simply.

The concept of implementation outcomes – the results of the deliberate actions and strategies we take to implement our interventions – was not new to me, and won't be new to most of you.  We often collect data about the implementation of our services… but we don't review it and use it as much as we should.  Implementation outcomes give us an indication of the quality of the implementation of our program or intervention – things like reach, acceptability, fidelity, cost and sustainability.  They don't tell us anything about service outcomes or client outcomes.  They are the outcomes of successful implementation – if we're lucky – and hopefully that quality is high, because it lays the foundations for achieving our desired service outcomes and, ultimately, our client outcomes.

Service outcomes give us an indication of the quality of the service.  This might include things like safety, person-centredness, timeliness, and satisfaction.  These things don’t tell us about the quality of implementation, nor about any outcomes experienced by the client. 

And finally, client outcomes are the ultimate outcomes we are hoping clients experience – and might look like changes in knowledge, skillset, confidence, wellbeing, functioning or health status.

The outcomes of implementation are distinct from service outcomes, which are distinct from client outcomes.  Obvious yet mind-blowing at the same time!! 
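If it helps to make that distinction concrete, here's a minimal sketch of how a team might list its existing indicators under the three outcome types – the indicator names are just the illustrative examples used above, not a prescribed set:

```python
# Illustrative grouping of indicators by outcome type - substitute whatever
# your own program actually measures.
outcome_framework = {
    "implementation outcomes": ["reach", "acceptability", "fidelity", "cost", "sustainability"],
    "service outcomes": ["safety", "person-centredness", "timeliness", "satisfaction"],
    "client outcomes": ["knowledge", "skills", "confidence", "wellbeing", "functioning", "health status"],
}

for outcome_type, indicators in outcome_framework.items():
    print(f"{outcome_type}: {', '.join(indicators)}")
```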

As front-line staff working with people and implementing programs every day would well be aware, program implementation is dynamic.  Of course there's a service or operating model guiding practice, but minor adjustments are often made to accommodate people, or to meet people where they're at.  We may learn that some staff are better suited to certain tasks, or that some clients are more engaged on certain days of the week.  Noticing these things, and making adjustments, can have significant effects on the reach or acceptability of our programs.  It's an early step towards the client outcomes we hope will eventuate.

But sometimes… programs don't work.  The people we are working with don't experience the outcomes both they and the provider were hoping for.  Is it the intervention that failed, or did we fail in the implementation of it?

Maybe staff weren't trained adequately to deliver the program; maybe the program was new and never fully embraced by the organisation; maybe the workplace had poor communication channels; maybe the program was seen as an add-on to existing work, and attitudes towards it were negative.  All of these things will likely affect implementation quality.  In some situations, the program or intervention may never really have had a chance – it was deemed ineffective and phased out… when in fact it was poor implementation, or implementation failure, that caused its demise.

When thinking about the programs you deliver, support or manage – can you articulate the outcomes of successful implementation, as distinct from service outcomes and client outcomes?  It might be a useful task to undertake as a team.  Of course, some programs or interventions are flawed in their design… but in many cases, failure to achieve client outcomes is not due to intervention failure alone – it could be partially, or fully, the result of implementation failure.