'Beyond Bricks and Mortar - Building Quality Clinical Cancer Services' Symposium 2011
ACHS Clinical Indicator Program: Measuring Performance to Improve Outcomes - Dr Jen Bichel-Findlay
Coordinator, Performance & Outcomes Service, ACHS Clinical Indicator Program
Download PowerPoint presentation by Dr Jen Bichel-Findlay (PDF 473 KB)
Introduction by Norman Swan:
So, you know, there are different worlds of indicators and measures, and the ACHS, the Australian Council on Healthcare Standards, has for quite some time, since 1999, had a set of radiation oncology indicators for radiation oncology services. Jen Bichel-Findlay is coming from the ACHS to talk to us about the fourth version, because they’ve just done a review of it: Jen.
Dr Jennifer Bichel-Findlay:
Thank you for inviting me to speak. I’m actually speaking on behalf of the Radiation Oncology Clinical Indicator working party, of which there are a number of people in the room.
So, I’m going to talk very briefly about measurement in healthcare as a concept, very briefly about the ACHS Clinical Indicator programme, and then mostly about the Radiation Oncology indicator set review.
In terms of measurement in healthcare, we say that it’s about 250 years old. The first mention of anything to do with measurement was by some researchers at the University of Pennsylvania, but there’s not a lot of further information about that. The person who really is credited with measurement is Florence Nightingale: she started measuring mortality rates at Scutari, which were 47.7%, and she also measured infection rates. One of the other speakers, Tony I think, was talking about Deming.

My hero in measurement is Ernest Amory Codman. He was an orthopaedic surgeon, one of the founders of the American College of Surgeons, and he was really the father of indicators and measurement. He firmly believed that there should be end result measurement – i.e. measuring patients on discharge and then a year following discharge. He was the first person to start talking about the value of benchmarking and how important it is that hospitals compare themselves to other hospitals and services. He started review meetings, which were the framework for the MDM meetings that most facilities have today, and he is the person who introduced the word ‘audit’ into healthcare. He purposely chose a financial term because back then he felt there was so much emphasis on getting the books balanced and getting all the costings correct, yet there didn’t seem to be as much pressure on looking at patient outcomes – did the patient go out better than they came in? And we still use that word, ‘clinical audit’, today. To round that whole measurement area off, we had the Donabedian framework around structure, process and outcome.
Certainly, measurement doesn’t sit well with everyone in health care, and that may be for a number of reasons, on which I could probably do an entire session, and in fact an entire day. But one main reason is that there is a fairly long list of things that have to be met for a measure to be accepted by clinicians. This is just a standard ten; I can easily think of another seven on top of these ten. So it’s very difficult for a measure to meet all those criteria.
As Norman alluded to, the ACHS Clinical Indicator Programme has been running since 1989 – it was the first in the world, and it was started with the support of the medical colleges. It took four years to develop the first indicator set; luckily we’re a bit quicker than that now. There are currently 670 organisations that contribute to this database – probably half of them public and half private, but that depends on the indicator set. We have EQuIP members and non-EQuIP members contributing. EQuIP members are those organisations that use our service for accreditation, and reporting on indicators involves no additional cost for them – they could report on all 351 if they chose, for no further money. The people who aren’t EQuIP members pay an annual subscription. We currently have 351 indicators across 22 indicator sets, and people generally use our programme to focus on external benchmarking. They enter the data themselves; it’s a self-reporting system. So they enter a numerator and a denominator for the events they are interested in measuring, and they can do that monthly, quarterly or six-monthly, but the data only comes to us in six-monthly batches.
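To make that concrete, here is a minimal sketch of what a self-reported, rate-based return might look like, assuming nothing more than a numerator/denominator pair per reporting period; the field names and figures are illustrative only, not the actual ACHS submission format:

    # A minimal sketch of a self-reported, rate-based indicator return.
    # Field names and figures are illustrative only, not the ACHS format.
    from dataclasses import dataclass

    @dataclass
    class IndicatorReturn:
        indicator_id: str   # e.g. "RO 1.1" (waiting time for radiotherapy)
        period: str         # six-monthly batch, e.g. "2011-H1"
        numerator: int      # events of interest, e.g. patients waiting > 14 days
        denominator: int    # eligible population, e.g. patients ready for care

        @property
        def rate(self) -> float:
            """Indicator rate for the period, as a percentage."""
            return 100.0 * self.numerator / self.denominator

    # Example: 34 of 420 patients waited longer than 14 days in the half-year.
    example = IndicatorReturn("RO 1.1", "2011-H1", numerator=34, denominator=420)
    print(f"{example.indicator_id}: {example.rate:.1f}%")   # RO 1.1: 8.1%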
I’ve been in this position for about 18 months and have realised that if I talked to ten people, I would get ten different opinions on what an indicator is. It means many things to different people. From our perspective, an indicator is a succinct form of measurement. It will help you to describe the clinical management or the outcomes of care so that you can understand, compare, predict, improve or innovate. From our perspective, people would generally be interested in a clinical indicator programme for three reasons: firstly, to improve their understanding of what their system does and what it provides; secondly, to be able to monitor performance and improve on that performance; and thirdly, accountability – they are able to show evidence of outcomes for their funders, their board, the Government or their consumers. Our indicators are generally rate based. We have a couple that aren’t rate based, but the majority are. So that means they look at a particular event.
Certainly indicators do not give you the answer, and that’s probably the biggest misinterpretation of an indicator. It’s not going to tell you what’s wrong. From an indicator programme perspective, it tells you that you’re different, or that your figures are different to what they normally are, or they’re different to another hospital, or they’re different to your peer group, or they’re different to an aggregate. You then have to go and investigate why this is. Is the data accurate? Is case mix an issue? Is your structure different? Do you have different resources, do you have different processes of care, etc? So it will never give you the whole truth, it won’t give you the answers but it gives you a slice of reality.
These are the 22 indicator sets that we currently have. We killed off our first indicator set at the start of this year, which was dermatology, so these are the current sets. Depending on the set, there are different numbers of subscribers – the largest is obviously the hospital-wide set, which is why we’ve got 450. One might think that oral health is the lowest-reporting set, but that isn’t the case, because for some strange reason oral health has been reported by state and by private health organisations, so that is actually about 200 facilities. So unfortunately the least reported set is the radiation oncology set, which currently has only 20 health care organisations reporting.
This is the current version, Version 3, at the moment. There are three indicators looking at the consultation process, five looking at the treatment process and two looking at the outcome process. I’m just going to run through the figures from 2007 onwards for most of these, or from 2003, depending on when the indicator was developed.
So we have the indicator at the top. This one is a consultation process indicator; it’s looking at waiting times, and the event that it’s counting is the number of people who wait greater than 14 days for radiotherapy from ready for care. We have 2007 onwards, because this indicator was only introduced in 2007, and we have the pink column, which shows you the aggregate rate. So even though our indicators are used at the local level, we do produce a national annual report, where we give an aggregate figure. This one has been reducing. The other column that’s of interest is the centile gains. The centile gains are the number of patients who would have benefited if their organisation’s rate could be moved to that of the top 20% of performers. Now, given that this set is so tiny, we’re sometimes only looking at one organisation. But in the bigger sets we’re looking at far more. So that’s saying that 136 patients wouldn’t have had to wait more than 14 days for the year if everyone could achieve the level that the top 20% are achieving.
We also have stratum gains, which is the S column, and outlier gains, but what people are generally most interested in is the centile gains.
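To make the centile gains idea concrete, here is a rough sketch of the arithmetic for a ‘lower is better’ indicator such as the waiting-time one, assuming the benchmark is simply the rate achieved by the best-performing 20% of organisations; the figures and the exact percentile convention are illustrative, and the ACHS statisticians’ actual method may differ in detail:

    # A rough sketch of centile gains for a "lower is better" indicator.
    # Illustrative only; the ACHS statistical method may differ in detail.
    def centile_gains(orgs):
        """orgs: list of (numerator, denominator) pairs, one per organisation.
        Returns the number of events that would have been avoided if every
        organisation performing worse than the top 20% matched their rate."""
        rates = sorted(n / d for n, d in orgs)
        # Benchmark: the rate achieved by the best 20% of performers.
        benchmark = rates[max(0, int(0.2 * len(rates)) - 1)]
        gains = 0.0
        for n, d in orgs:
            if n / d > benchmark:
                gains += n - benchmark * d   # events in excess of the benchmark
        return gains

    # Example with made-up figures for five organisations:
    orgs = [(5, 200), (12, 150), (30, 300), (8, 400), (40, 250)]
    print(round(centile_gains(orgs)))   # 69 patients would have benefited

With only five organisations the ‘top 20%’ is a single service, which echoes the point above that in a set this small the benchmark can rest on just one organisation.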
The other consultation process indicator is around informed consent, and that’s been travelling very well. You can see the columns headed 20 and 80: 20 is the combined rate that the worst-performing organisations are achieving, and 80 is the rate that the top 20% are achieving. As you can see, there’s not much variation in this area, and the centile gains are a lot less. Sorry – there are three in that consultation area. Clinical trial participation is going down and the centile gains are low; again, you’ve got very small numbers. Then there are the five in the treatment area: this one was introduced in 2007, it should be low and it has been trending downwards, with hardly any centile gains. The treatment process indicator around squamous cell carcinoma of the cervix should be high; it did increase in 2009 but has now dropped down into the 60s. For most of these, as we go along, there’s not a great deal of variation. We have other sets where the variation is quite high. So one factor here is that there’s a much smaller number of organisations, but the other factor might be that everyone is performing reasonably similarly within that band.
The MLC one has increased since 2007 but seems to be sitting fairly flat, although 940 people would have benefited if most of the organisations were sitting around the 94% level.
2.4 in the treatment processes is around the CT planning rate, and as you can see, the top 20% are performing at 100%, so you’re getting even less of a band here.
2.5, letters to referring doctors and general practitioners, has markedly improved from 2009 onwards, but again, 164 patients would have had letters had their organisations been performing at the level of the top 20%.
In the outcome area, the first one we have is the follow-up of the outcome of patients treated. Again, it should be high and it is high, and there is just one centile gain, but by this point we’re down to only eight organisations reporting.
And the last one, breast conservation complete follow-up, is sitting at around 75%. And 174 patients would have benefited if the organisations could reach that top level.
So that was the existing set that the working party had to look at and review. It does seem to come as a bit of a shock to some people, but I don't develop the indicators. I’m certainly not an expert in 22 clinical streams; I have absolutely nothing to do with the development of the indicators. They are developed by clinicians who practise in that clinical specialty. So we have a formal process: we contact the relevant colleges. Back in the initial days it was just the medical colleges, but we now live in a multi-disciplinary environment, so for radiation oncology I contacted the Faculty of Radiation Oncology within the RANZCR, I contacted the ACPSEM for some radiation physicists, I contacted the AIR for some radiation therapists, and then I found out there was an entity called AAPROP, which looks at private radiation oncology, so I contacted them. So they’re the clinicians who develop the indicators – not me and not the ACHS.
We always have consumers on the working parties. We generally have an Australian Private Hospitals Association representative, but they said they didn’t have any radiation oncologists – that’s all done through AAPROP – so we didn’t have a representative from there. We have our statisticians from the University of Newcastle: Bob Gibberd, who was the statistician involved in the Quality in Australian Health Care Study with Wilson and Runciman years ago, plus Peter Howley and Stephen Hancock.
We have a relationship with the new National Casemix and Classification Centre at the University of Wollongong, and they give us all our ICD-10 coding information. We have other experts as required – sometimes we invite people from NICS or the NHMRC if there have been studies done in an area that the working party is interested in developing indicators in. And the ACHS staff on these working parties consist of myself and Chris Maxwell, who’s the Clinical Director.
So again, we can’t tell a working party that they can’t have an indicator on something. We’re just there to say, well, maybe the wording needs to focus on this, if that’s what you want to measure. So if they want to measure how many toes people have, that’s what they’ll do. We’re just there to advise on the validity.
So how the working party works is that I take some time to get all these – I was nearly going to say cats into a corral – sometimes it’s very difficult to deal with all the Colleges to get the relevant people in the room, so that takes some time. While I’m doing that, I send out a survey to all the members who currently report on that indicator set, and they get quite a lengthy survey on each indicator. Is it useful? Do you use it? Have you changed practice because of measuring, or have you implemented an education programme? Have you changed equipment? Have you changed staff? Have you changed policy? Have you started a QI project around this indicator? Is it clear? Is the user manual clear? And so on. Then the very last question is: what are some areas that you believe the working party should develop indicators in? And also: should some be deleted? So that goes out to the members.

The other opinion they get is from the statistician. Stephen does a statistical report, so he’ll look at all the data going back to 2003, if that’s when the indicator started. He will give us statistical analysis – i.e. this indicator has flatlined, it’s probably not useful any more, there’s no room for improvement, it’s showing no variation; unless you’re wedded to it, my advice is to delete it. Other times he might say, this is interesting – why is the private rate so much higher than the public? Because, being a statistician, he hasn’t got that clinical knowledge. So then the working party will say, well, we expected that because of X, Y and Z – this is how the service works, and so on. Sometimes working parties come back with ‘we have no idea, that shouldn’t be the case, they should be more equal’.

So that survey went out, Stephen did the report, and then the working party met in October. That was the makeup of the working party. The decisions: in the first area, the consultation process, they are going to keep 1.1, the waiting times, but they’d like to increase the time to 28 days. They’re going to delete the second one – because, remember, there was not a lot of variation and it had high results – so we’re going to send out a question on the bulletin board to see if people are happy for us to delete that one. Number three, the one around clinical trials, they’re going to retain, but in this area they’ve been sampling; they haven’t been counting every patient. For most of our indicator sets they count every patient, but for some the number is too big, so they give a sampling recommendation, and in the current indicator set for radiation oncology the sampling is one week in May and one week in November. They want to change that to one month, possibly May, and one month, possibly November.
For Number 2, they did an absolute slash and burn: they’ve deleted the first four and they’re keeping number 5, which will become 2.2, and they want to add an indicator here around staging annotation.
For Number 3, they’re deleting both of those, and they’re proposing one around IMRT for nasopharynx carcinoma and oral carcinoma, and then they want to introduce one around dose escalation.
So what generally happens now is that I launch into the literature review, which generally would have been done six months ago, but last year we decided to do a major survey of our indicator programme, so that’s put my work back about eight months. I would do a literature review on all of these indicators to show whether there is evidence and give a couple of pages around that evidence. The other thing that’s happened is that we went to the RORIC meeting in Canberra a few months ago and proposed a new indicator set, and they also asked, among other things, why the group hadn’t looked at treatment prolongation and also receipt of referral. So there might be a couple of others that we will add.

So what happens after we do the literature review? Draft indicators will be developed, back and forth via email generally. Sometimes we have to have another teleconference – the first meeting is always face to face and any further meetings we try to have as teleconferences – and given that I’ve got three other working parties in front of radiation oncology, I’ll probably get this done in January or February. Then they need to go back to all those Colleges to have them endorsed, to say yes, we agree with this indicator set, these are the ones we support. We then have them ratified by our Board who, trust me, don't look at the minutiae; they just look at the representation – was it representative of the radiation oncology world? We then start disseminating that we have a new set. We’ll let all the members know in Australia and New Zealand, and I’m assuming that it will be ready for the second half of 2012, so it will go into the collection on the first of July.
And that’s pretty much how we tackle an indicator set review. Certainly, if you have any other suggestions that you believe the working party needs to address, it’s perfect timing now – you just need to send an email to me and I can take that back to the working party, because it’s certainly not finalised at this point in time.
Thank you very much.
(applause)
Norman Swan:
First question is, how does the existence of the practice standards affect what you’re doing?
Dr Jennifer Bichel-Findlay:
I have grabbed a copy and I think we need to look at those to make sure that we have some indicators that address those standards. So yes, definitely they’ll be melded together.
Norman Swan:
But you only have 20 facilities actually doing this?
Dr Jennifer Bichel-Findlay:
Yes. I did a quick search at the time, because I couldn’t understand why there were so few reporting, and I rang or emailed each centre. What I found – and this is what I was told – is that the reason a lot of them don’t report is that they are facilities that send their patients to a private radiation oncology service, which leases a floor or a wing, and they cannot tell them to report on indicators.
Norman Swan:
But don’t the private facilities want ACHS accreditation?
Dr Jennifer Bichel-Findlay:
Not all private facilities are accredited by us. There are other companies they are accredited with.
Norman Swan:
And health insurers don’t insist on it for reimbursement?
Dr Jennifer Bichel-Findlay:
No. So my mission is to get this way up from 20. To be valid and useful, the more people who are benchmarking, the better the data.
Norman Swan:
So how come private hospitals vie for ACHS accreditation and not private radiation oncology centres?
Dr Jennifer Bichel-Findlay:
I don’t know, I haven’t got the detail of that.
Norman Swan:
Do we have an answer?
(unclear answer from floor: "... health funds require for reimbursement ...")
Norman Swan:
That’s my point, so ...
Dr Jennifer Bichel-Findlay:
With day surgery, you have to be accredited.
(unclear answer from floor: "... radiation oncology practice, private practice is rarely covered by a health fund ..." – partly inaudible)
Norman Swan:
Right, so it’s in the community, right.
Dr Jennifer Bichel-Findlay:
So if anyone has any ideas on how to increase the number of people, please email me.
Norman Swan:
So in fact the Commonwealth has a lot to do with it then, the Commonwealth could insist on it.
Chris:
Absolutely.
Norman Swan:
Thank you Chris. Okay – any other questions or comments? Obviously a critical issue and obviously a parsimonious set of indicators could actually have quite a powerful effect, particularly if it was in sync with the practice standards.
Jen, thank you very much.
Annette McCormack:
Could I just make a comment? Annette McCormack, Victoria. The private practice in Victoria is accredited by the Victorian Health Department.
Norman Swan:
Well sure but ...
Annette McCormack:
We have to go through an accreditation ....
Norman Swan:
... but there are other facilities that get jurisdictional accreditation but also get ACHS accreditation.
Annette McCormack:
Yes I realise that.
Norman Swan:
Yes, there’s still a disjunction here.
Annette McCormack:
Yes, but there is an accreditation process we do follow, so we’re not unaccredited.
Norman Swan:
Right, thank you for that comment. Let’s have a cup of coffee and let’s come back a little later, say 25 past, half past 3 and we’ll start the next session. Thank you very much.