Encouraging Best Practice in Residential Aged Care Program: Final Evaluation Report
1.2 EBPRAC evaluation
The evaluation of the EBPRAC program had two main components:
- Summative evaluation, which sought to ascertain whether, and to what extent, the program was implemented as intended and the desired/anticipated results were achieved. The purpose was to ensure accountability and value for money, with the results of the evaluation informing any future planning decisions, policy and resource allocation.
- Formative evaluation, whereby the results of the evaluation informed the ongoing development and improvement of the program. This ‘action research’ approach fits well with the aim of the program to build resilience and capacity within the health system for longer term sustainable change.
The evaluation was designed to allow the evaluation team to form a judgment as to how successfully the EBPRAC program was implemented, whether the desired results were achieved and what lessons were learnt. The evaluation framework consisted of three levels to examine the impact and outcomes for consumers (residents, their families and friends), providers and the broader residential aged care sector. The three levels fit well with the objectives of the program. Evaluation of the program focused on six key issues – program delivery, program impact, sustainability, capacity building, generalisability, and dissemination.
The program evaluation drew extensively on the aggregate findings of the project evaluations, constituting a ‘meta-evaluation’ of project achievements, constraints and successes. Given the diversity of projects there were no common clinical outcomes, hence improvements in clinical care were only identified by project-level evaluations. The primary focus of the program evaluation was at the project level (rather than individual facilities participating in each project), supported by examination of within-project variation (for example, why the pace of implementation and the results achieved might vary at different facilities within a particular project).
The evaluation commenced with a review of the literature which identified eight ‘key success factors’ that may influence the uptake and continued use of evidence:
- a model for change/implementation, including the role of specific change agents or facilitators
- a receptive context for change
- the nature of the change in practice, including local adaptation, local interpretation of evidence and ‘fit’ with current practice
- demonstrable benefits of the change
- stakeholder engagement, participation and commitment
- staff with the necessary skills
- adequate resources
- systems in place to support the use of evidence (Masso and McCarthy 2009).
The remainder of this section briefly describes the components of the evaluation.
1.2.1 Project progress reports
Projects were required to submit six-monthly progress reports to DoHA which were then forwarded to the program evaluation team. The evaluation team designed a template for the progress reports which was framed in accordance with the evaluation framework and the key success factors. When necessary, receipt of progress reports was followed up with a phone call to the relevant project team to clarify any details, elicit further information or confirm any findings.
1.2.2 Site visits
An initial site visit was undertaken to each project within the first six months, with follow-up visits as circumstances required. In total, 28 site visits were conducted, each by one member of the evaluation team. Most of the time during the visits was spent with staff from the lead organisations discussing progress with implementation and evaluation, together with data collection for the program evaluation. Data collection during the first site visit was influenced by the ‘theory of change’ approach, which seeks to understand and construct the theory underpinning an intervention (Mason and Barnes 2007).
1.2.3 Economic evaluation
The economic evaluation involved the distribution of two questionnaires (Questionnaire 1 and Questionnaire 2) and a spreadsheet to each project to obtain data on inputs and outputs. The design of the EBPRAC program, including the diverse nature of the projects and the lack of common outcomes, did not lend itself to a ‘classic’ economic evaluation, necessitating a pragmatic approach which focused on the cost implications for government and providers.

Questionnaire 1 requested information on the main intended outcomes for residents, staff and facilities, what was being implemented by each project and some details on project scope. Questionnaire 2 requested information on project activities (and some degree of quantification of those activities), payments to facilities, costs incurred by facilities, wider cost impacts of the project (e.g. referrals to external providers) and project effectiveness (both qualitative and quantitative data). The spreadsheet was used to collect data on the costs of different phases of each project – governance, establishment, implementation and evaluation. Data from both questionnaires and the spreadsheet were used to inform many sections of the report, not just the section on costs.
1.2.4 Interviews
Interviews were conducted with three groups of stakeholders. The first two groups consisted of people working as part of the project consortiums and facility staff with a good understanding of the project (e.g. managers, facilitators). Selection of those invited for an interview followed a purposive sampling approach, using data from project progress reports and discussions with lead organisations to identify suitable people to interview. Interviews were conducted between September 2009 and November 2010 in Queensland, South Australia, New South Wales and Victoria, including interviews with staff from 25 facilities.

The third group of interviewees comprised ‘high level’ stakeholders who could inform the program evaluation. This included people from the Department of Health and Ageing, the Aged Care Standards and Accreditation Agency, Aged Care Association Australia, Royal College of Nursing Australia and two major providers of residential aged care. Purposive sampling was used (interviewing people with a good knowledge of the program or residential aged care), with some snowball sampling, i.e. inviting some of those interviewed to suggest additional people that it might be useful to interview. Interviews with this group were conducted in June and July 2010. The numbers of people interviewed are summarised in Table 3.
Table 3 Summary of people interviewed for the EBPRAC evaluation
| Group of stakeholders | Number of interviews | Number interviewed |
|---|---|---|
| Staff working in residential aged care facilities | 28 | 34 |
| Working as members of project consortiums | 16 | 17 |
| High level stakeholders | 13 | 18 |
| Total | 57 | 69 |
1.2.5 NHS Sustainability Tool
To gain some quantification of the likely sustainability of project improvements, projects were asked to complete, for each facility, a sustainability tool developed in the UK National Health Service (Maher, Gustafson et al. 2006), once at the beginning of project implementation and once near the end of implementation.
1.2.6 Ethics approval
The program evaluation was initially approved by the University of Wollongong / Illawarra Area Health Service Human Research Ethics Committee in April 2008, with subsequent amendments in July 2009 and October 2009. All projects received ethics approval from the relevant human research ethics committees.

