Evaluation of the consumer-directed care initiative - Final Report
9.4 Considering the cost-effectiveness of the initiative
Given the evaluation was conducted at an early stage of the initiative’s implementation, it was not possible to undertake a full cost-effectiveness analysis as part of the evaluation.26 It is therefore not yet possible to say whether consumer-directed care is cost-effective, or more or less cost-effective than standard packaged care and respite.
However, some high-level observations have been made relating to the cost-effectiveness of the initiative, based on data and qualitative evidence of outcomes collected as part of the evaluation. These observations are based on the following two questions:
- Were CDC and CDRC participants able to access more supports for a given level of resources compared with standard packaged care and NRCP consumers?
- Did CDC and CDRC participants realise greater benefits or outcomes for a given level of resources compared with standard packaged care and NRCP consumers?
More supports for a given level of resources?
Consumer-directed care
It is very unlikely that most CDC participants were able to access more supports through a CDC package compared with a standard packaged care package. Indeed, some CDC participants may have been receiving fewer supports than they would under standard packaged care if providers were charging more for administration and care planning and management than under standard packages. While the administration and care planning and management components of CDC packages were known (and were presented earlier in this chapter), there is no corresponding data for standard packaged care packages to compare to. However, as outlined above, there is evidence that the time involved in planning, administering and coordinating a CDC package is higher for providers than for a standard packaged care package, and that at least some of these additional costs were being passed on to participants through administration charges and charges for care planning and management. While these additional costs were not substantial, they mean that a slightly higher proportion of a CDC package was being used for administration and care planning and management, and a lesser proportion for services and supports, than under standard packaged care packages.
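To illustrate the arithmetic behind this observation, the sketch below shows how a few additional percentage points of charges reduce the funds available for services and supports. All dollar figures and charge rates are hypothetical assumptions for illustration only, not figures from the evaluation.

    # Illustrative only: package value and charge rates are hypothetical
    # assumptions, not figures from the evaluation.
    def funds_for_services(package_value, admin_rate, care_mgmt_rate):
        """Return the amount left for services and supports after
        administration and care planning/management charges."""
        return package_value * (1 - admin_rate - care_mgmt_rate)

    package_value = 30000.0  # hypothetical annual package value ($)
    standard = funds_for_services(package_value, admin_rate=0.10, care_mgmt_rate=0.10)
    cdc = funds_for_services(package_value, admin_rate=0.13, care_mgmt_rate=0.12)

    print(f"Standard packaged care: ${standard:,.0f} for services and supports")
    print(f"CDC package:            ${cdc:,.0f} for services and supports")

Under these assumed rates, the CDC participant has $1,500 less available for services and supports from the same package value.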
There is also some evidence that providing greater choice to participants was impacting on the amount that CDC providers were charging for administration and care planning and management, particularly when participants were selecting providers other than the CDC provider. There is evidence that some providers (though not all) were increasing the amounts charged for administration and care planning and management when they were required to purchase or broker supports from other organisations to take into account the additional work involved. Again, for participants accessing a package through these providers, this increased the administration and care planning and management components of a package, and would leave fewer resources for services and supports.
On the other hand, as noted in chapter 5 (Implementation and operation of the initiative), some providers were implementing tiered or differential charging for administration and/or care planning and management, where the charges took account of the degree of self-management a participant or their carer was able to take on themselves. Where a participant was able to take on some or all of the coordination of their package, charges for administration and/or care planning and management were lower than if the provider were undertaking most or all of the coordination. Hence participants accessing a package through these providers, and self-managing their package to some degree, would potentially have had more resources for services and supports compared with standard packaged care. At this stage, however, there appear to have been relatively few participants who were fully self-managing their package.
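A tiered charging arrangement of this kind could be sketched as follows; the tiers and rates below are hypothetical assumptions, not rates observed in the evaluation.

    # Illustrative tiered charging schedule; tiers and rates are
    # hypothetical assumptions, not rates observed in the evaluation.
    TIERED_COORDINATION_RATES = {
        "provider-managed": 0.15,        # provider undertakes most coordination
        "partially self-managed": 0.10,  # participant shares coordination
        "fully self-managed": 0.05,      # participant coordinates the package
    }

    def coordination_charge(package_value, tier):
        """Charge for administration and care planning/management under a tier."""
        return package_value * TIERED_COORDINATION_RATES[tier]

    for tier in TIERED_COORDINATION_RATES:
        print(f"{tier}: ${coordination_charge(30000.0, tier):,.0f}")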
It should be noted that there is no robust and consistent data available with which to compare levels of support accessed by CDC participants with levels of support accessed by standard packaged care participants. The only available measure for both groups was ‘hours’, though they were from two separate sources which were not comparable (CDC hours from the CDC provider data collection, and standard packaged care data from claims submitted by providers).27 Further, ‘hours’ does not take into account the range of supports that were not measured in hours (such as equipment, home modification, transport), and can be skewed by very low cost supports which were measured in hours (such as on-call access).
Consumer-directed respite care
Unlike CDC, for CDRC there were no ‘standard care packages’ to compare to, hence it was not possible to determine whether a CDRC participant was able to access more supports for a given level of resources than carers accessing respite and other supports through the NRCP program. As for CDC, ‘hours’ was the only measure of level of support, and this is an imperfect measure given it can be skewed by low cost supports and does not include supports not measured in hours. It is unclear whether CDRC participants were paying the full cost of services – particularly for Commonwealth-funded residential respite – or whether they were only paying the consumer fee from their package.28 From the data available it appeared that participants were paying a very small fee for residential respite, indicating that they were not paying the full cost. Further, where CDRC participants moved away from seemingly low-cost services such as residential respite towards seemingly more expensive in-home and individualised supports, they would have received fewer supports overall than NRCP consumers for a similar level of resources.
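The trade-off described above can be illustrated using the per-hour figures reported in footnote 28; the package amount in the sketch below is a hypothetical assumption.

    # Hourly figures from footnote 28; the package amount is a
    # hypothetical assumption.
    package_amount = 3500.0          # hypothetical CDRC allocation ($)
    residential_fee_per_hour = 2.0   # fee charged to a package for residential respite
    in_home_cost_per_hour = 35.0     # approximate average cost of in-home respite

    print(f"Residential respite: {package_amount / residential_fee_per_hour:,.0f} hours")
    print(f"In-home respite:     {package_amount / in_home_cost_per_hour:,.0f} hours")

Under these assumptions, the same allocation purchases roughly 1,750 hours of residential respite but only 100 hours of in-home respite, which is why a shift towards individualised supports implies fewer supports overall for a similar level of resources.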
Greater benefits or outcomes for a given level of resources?
Evidence of benefit and outcome
While there was some qualitative evidence of benefit and outcomes for CDC participants compared with standard packaged care consumers – particularly for those on higher care packages – there was no firm quantitative evidence of benefit or outcome available at this early stage of the initiative’s implementation. As noted in chapter 6, during the period in which the evaluation data collection was undertaken, the majority of participants had been receiving their package for less than six months. While this may have been sufficient time for some participants to begin to experience some benefit or outcome from their package (and there was qualitative evidence from interviews and surveys that some were), other participants were still ‘getting used to’ their package and what they could do with it, and many providers were also still developing their own approach to CDC. This is likely to have limited the degree to which many participants experienced benefit or outcome from their package, and means that the full extent of benefit or outcome from the initiative cannot be observed or measured at this stage.
For CDRC, there is considerable qualitative and survey evidence that CDRC participants have realised benefits and outcomes from receiving a CDRC package, and noticeable differences in levels of satisfaction between CDRC survey respondents and NRCP comparison group survey respondents – in a relatively short period. However, as noted in chapter 7, CDRC participants had an annual CDRC package allocation to spend over a period of six months or less, and it is likely that the benefits and outcomes realised were to some extent due to the level of resources available during this initial period.
Measures of outcome
The evaluation used the ICECAP-O tool to derive a measure of wellbeing for the CDC and CDRC participant groups and the standard care comparison groups. However, while the ICECAP-O tool measures wellbeing in older people, it is only a partial measure of outcome of the initiative, and there was no composite outcome measurement tool available which encompassed all potential outcomes. The ICECAP-O wellbeing measure – the partial measure of outcome collected for each CDC and CDRC participant and each consumer in the comparison groups – showed no statistically significant difference between the CDC group and the standard packaged care comparison group, and no statistically significant difference between the CDRC group and the NRCP comparison group.29 This indicates that at this stage there is no quantitative evidence that CDC or CDRC enhances wellbeing for participants.
Further, given the timing of the evaluation, the ICECAP-O was administered at one point in time only. While comparisons could be made between the CDC and CDRC participant groups and the standard care comparison groups, changes in wellbeing over time within each group could not be determined.
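For illustration, the sketch below shows the kind of between-group comparison referred to in footnote 29, using the Mann-Whitney test; the ICECAP-O scores are placeholder values, not evaluation data.

    # Minimal sketch of the between-group comparison reported in footnote 29.
    # The ICECAP-O scores below are placeholder values, not evaluation data.
    from scipy.stats import mannwhitneyu

    cdc_scores = [0.81, 0.74, 0.90, 0.66, 0.85, 0.78]
    comparison_scores = [0.79, 0.70, 0.88, 0.72, 0.83, 0.76]

    stat, p_value = mannwhitneyu(cdc_scores, comparison_scores, alternative="two-sided")
    print(f"U = {stat}, p = {p_value:.3f}")
    # A p-value at or above 0.05 means the null hypothesis of no difference
    # between the groups is not rejected.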
Undertaking a full cost-effectiveness analysis
A full cost-effectiveness analysis was not conducted as part of the evaluation, given the short time the initiative had been operating when the evaluation was conducted, and the lack of unequivocal quantitative evidence of benefit or outcome from the initiative compared with standard packaged care or respite care. In addition, it is not clear whether the additional costs incurred by providers in implementing and administering the initiative and coordinating supports for participants will continue to be incurred, or whether costs will decline over time. While there was little evidence of participants self-managing their package to any significant extent at the time of the evaluation, as the initiative develops and participants’ capacity to self-manage increases, it is possible that costs associated with administering packages and coordinating supports will decline.
Undertaking additional data collection and a full cost-effectiveness analysis should be considered as part of any future evaluation. This should occur once the initiative has been in operation for a longer period (at least two years), that is, when participants have had sufficient time to realise benefits or outcomes from participating in the initiative and the costs of the initiative have stabilised and are better understood. A cost-effectiveness analysis should encompass the following (a sketch of how these elements might be combined is provided after the list):
- a single measure of wellbeing or quality of life (for example, using a validated outcome measurement tool appropriate for older people such as the ICECAP-O). Ideally, the tool used should allow for quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs) to be calculated. Further, the tool should be commonly used in the health and aged care sectors so that outcomes can be compared across programs or interventions.
- measurement of changes in wellbeing or quality of life for participants over time. This will require collection of data at different points in time – for example, at a baseline (for example, when participants commence a package, or a single point in time for all participants), and then periodically (for example, every three or six months over a defined period).
- collection of wellbeing or quality of life data from standard packaged care and respite comparison groups. This will ensure that any changes in wellbeing or quality of life attributable to the initiative can be isolated from changes attributable to receiving community aged care or respite care.
- the full costs of the initiative, including both set-up costs and ongoing costs, relative to the full costs of standard packaged and respite care. It is important that data is collected once the initiative has been in operation for a reasonable period (at least two years), that is, when costs have been fully realised and have stabilised.
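The sketch below illustrates how these elements might be combined into an incremental cost-effectiveness ratio (ICER), a standard summary measure in cost-effectiveness analysis. All input values are hypothetical assumptions, not evaluation data.

    # Minimal sketch of an incremental cost-effectiveness ratio (ICER).
    # All input values are hypothetical assumptions, not evaluation data.
    def icer(cost_new, cost_standard, qalys_new, qalys_standard):
        """Incremental cost per QALY gained by the new program."""
        return (cost_new - cost_standard) / (qalys_new - qalys_standard)

    cdc_cost, standard_cost = 32000.0, 30000.0  # hypothetical annual cost per participant ($)
    cdc_qalys, standard_qalys = 0.78, 0.75      # hypothetical QALYs per participant per year

    print(f"ICER: ${icer(cdc_cost, standard_cost, cdc_qalys, standard_qalys):,.0f} per QALY gained")

Under these assumptions the initiative would cost around $67,000 per additional QALY; whether that represents value for money would depend on the threshold adopted for the analysis.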
26. While a cost-effectiveness analysis was within the original scope of the evaluation, it became clear that a full cost-effectiveness analysis would not be able to be undertaken. As such, it was agreed with the Department, in consultation with the Evaluation Reference Group, that a ‘cost assessment’ would be undertaken instead of a cost-effectiveness analysis, and that guidance would be provided on undertaking a full cost-effectiveness analysis once the initiative had been in operation for a longer period.
27. Comparisons between average CDC hours per participant and average usual care hours per consumer revealed considerable differences, even though the level of resources available to each group was broadly similar.
28. Data collected indicates that the fee charged to a CDRC package for Commonwealth-funded residential respite was $2 per hour, compared with approximately $35 per hour on average for in-home respite.
29. Using the Mann-Whitney test for non-normally distributed data, p = 0.05.