7.2.9 Evaluation and monitoring
To develop and improve service delivery in any area, service providers need an ongoing commitment to regular evaluation and reflection, including ‘action research’, in which evaluation findings are fed back to inform changes to, and the ongoing development of, service delivery.
The consultations conducted for this evaluation indicated that most services under all four programs have done relatively little evaluation and monitoring beyond meeting the formal reporting requirements of OATSIH and participating in the present evaluation and, in some instances, in State-level evaluations of the programs (eg in Victoria and NSW). There are some exceptions, such as Nunkuwarrin Yunti in SA. The challenges experienced in organising the fieldwork for this evaluation (see chapter 2) also suggest that many services may not regard evaluation as a core activity that informs and feeds into their service delivery on a regular and ongoing basis.
Data from the annual BTH Questionnaire also demonstrates a lack of emphasis on evaluation within the BTH Program. The Questionnaire seeks information on the evaluation and monitoring strategies used by services receiving BTH funding. In 2004-2005 (the latest year for which information is available), the great majority of services collected information on client characteristics and sought informal client feedback, but only a small proportion systematically measured client progress or used a client satisfaction survey (see table 7.1). An exception is the Victorian RC, which provides evaluation forms at the end of all its training sessions and at its three-monthly Regional Forums (see the discussion earlier in this chapter).
Monitoring the performance of individual Link-Up staff members at the local management level has also been inhibited by limitations of the Foxtrot system. In many performance monitoring areas, Foxtrot does not differentiate between the caseloads of individual staff members, so the office’s collective performance has largely been the only available measure. The outcome has been that:
- Management can assess the performance of individual staff members only through their own observations or through reports from other staff. This makes it difficult to justify disciplining or removing under-performing staff, as any such action is based essentially on perception.
- Staff who are achieving resent colleagues they believe are under-performing, and resent those colleagues sharing in the collective acclaim for achievement. The overall effect on morale was reported to be negative.
Several factors contribute to this lack of emphasis on program evaluation and action research:
- Many of the staff employed in the programs may have limited or no skills in this area (since this is not part of their job descriptions).
- There is a lack of guidance from OATSIH at the national or State level. There is no overall evaluation framework for any of the programs, despite this being strongly recommended by the Ministerial Council for Aboriginal and Torres Strait Islander Affairs (MCATSIA) in 2003 (p67).
- Services carry heavy workloads, which encourages a focus on immediate service delivery needs rather than on other activities such as evaluation.
- The Aboriginal SEWB sector is still a relatively young and under-developed field (see chapter 4 and appendix B).
This is of particular concern given that many State OATSIH offices take a ‘hands-off’ approach to program management and are therefore heavily reliant on the data reported annually to OATSIH to perform their program management function.
GPP6: All services funded under the BTH, Link-Up, SEWB RC and Mental Health Programs should conduct regular evaluation and monitoring activities using an ‘action research’ model, whereby evaluation findings are used to inform service delivery on an ongoing basis.
Table 7.1: Monitoring and evaluation strategies used by BTH-funded services (per cent of services)
| Strategy | 2001-2002 | 2002-2003 | 2003-2004 | 2004-2005 |
|---|---|---|---|---|
| Collecting information on client characteristics | 98% | 89% | 81% | 84% |
| Seeking informal client feedback | 82% | 86% | 73% | 80% |
| Systematic use of a client satisfaction survey | 24% | 18% | 16% | 15% |
| Systematic measuring of client progress | 22% | 24% | 25% | 19% |
| Other | 33% | 16% | 34% | 32% |

