Encouraging Best Practice in Residential Aged Care Program: Final Evaluation Report
4.1 - Introduction
Each project included a project-level evaluation, at the core of which was a ‘before and after’ design, i.e. measuring a series of variables before implementation commenced and then measuring the same variables again after the evidence had been implemented. In the absence of control groups, some caution is needed in interpreting the results. For example, any judgements about impact on clinical care are subject to the difficulty of attributing changes to what was done as part of each project rather than to other factors.
Projects used a wide range of data collection methods, primarily audits, interviews and surveys, but also including focus groups, observation and case studies. Some project activities were as much a part of the evaluation as of project implementation. In particular, many activities at project commencement generated data for the evaluation that was also fed back to staff to shape and inform the approach to implementation.
Four of the Round 2 projects based their evaluation on at least some elements of the framework for the program evaluation, something which did not happen in Round 1. Four projects framed their evaluation around a series of evaluation questions, as suggested at the Round 2 Orientation Workshop in December 2008. This helped to clarify the rationale underpinning project evaluations, but two projects took this to the extreme by limiting their evaluation plans to a table with four columns of information (objective, evaluation question, indicators, data sources) and nothing else. One project used the same table but within the context of a more comprehensive approach that provided information about the process of the evaluation and the methods of data collection.
In Round 1 there was a strong emphasis on collecting data on residents. In Round 2 there was more of an emphasis on collecting data across the three ‘levels’ of the evaluation: residents, staff and facilities. In Round 2 a greater variety of tools was also used to collect data about practice improvements, e.g. interviews with staff, audits of practice and assessments of improvements in policies and procedures.
None of the evaluation plans included any direct means of assessing sustainability. Three projects planned to conduct economic evaluations but none ended up doing so. All projects planned to collect data that would be useful in assessing whether program objectives had been met, with the exception (in both rounds) of the objective to ‘build consumer confidence in the aged care facilities involved in EBPRAC’.
This section of the report summarises project impacts on residents, families, staff, facilities and the community more broadly, based on the results of the project-level evaluations.