Encouraging Best Practice in Residential Aged Care Program: Final Evaluation Report
3.1 - Introduction
The key success factors referred to in Section 1.2 were used to ‘frame’ the evaluation, including data collection and data analysis. This section draws on those key success factors to report on the factors that influenced the implementation of evidence within the EBPRAC program. The one exception is the key success factor of ‘staff with the necessary skills’, which is covered in Sections 4.4 and 5.1.
The primary sources of data for this section are the six-monthly progress reports submitted by each project and the interviews conducted with facility-based staff and members of the project consortia. Data analysis was informed by a conceptual framework which considers change as a constant interplay between the context of change, the content of change and the process of change (Pettigrew 1985). The conceptual framework was used to structure the coding of data, facilitated by the use of NVivo software.
Planning for implementation
In Round 1, only one project submitted a detailed project plan, and four of the five projects did not have plans that documented the project objectives, project activities against performance targets, or evaluation measures against time frames. In Round 2, the level of detail provided in the project plans varied considerably but was generally better than in Round 1, ranging from very simple plans to a comprehensive 15-page plan and a simple, but very well structured, plan.

Seven of the eight Round 2 plans used a format linking activities with project objectives, although only two demonstrated links between project activities and EBPRAC program objectives. As a result, it was generally easy to follow the links between project objectives and project activities, but in some cases difficult to follow the links between program objectives and what each project planned to do. Among the Round 2 plans, one lacked detail, focusing more on the management of the project than on its content; one included little information about what was planned and how implementation would take place; and one included almost no detail about how anything would be implemented.
When project plans incorporated ‘indicators of achievement’ or some other descriptor of performance, these tended to be framed in terms of process rather than outcomes. In almost all cases, indicators of achievement were either not quantified or not readily quantifiable. All project plans included timeframes, but with considerable variation in level of detail. The lack of a consistent approach and level of detail across the project plans made it more difficult to monitor the progress of each project.
Stage or extent of implementation
There is a lack of understanding in the literature about what is meant by the term ‘implementation’, although this is improving with the recent development of what is known as ‘implementation science’. It has been argued that there appear to be discernible stages of implementation – exploration and adoption, installation, initial implementation, full operation, innovation and sustainability – and that ‘it appears that most of what is known about implementation of evidence-based practices and programs is known at the exploration and initial implementation stages’ (Fixsen, Naoom et al. 2005, p 18). An alternative way of thinking about implementation is the concept of ‘implementation fidelity’, which concerns the degree to which something has been implemented rather than the stage of implementation. The main issue is whether implementation of an intervention adheres to what was intended, i.e. whether the content, frequency, coverage and duration of the intervention are consistent with the evidence on which it is based (Carroll, Patterson et al. 2007). In the absence of good information about either the stage of implementation or the degree to which an evidence-based practice has been implemented, judgements about reported outcomes are problematic.

The nature of the changes typically made by projects (see Section 3.4) makes it difficult to judge the extent to which changes were implemented, either in terms of stage of implementation or degree of implementation. Some projects indicated that they would have liked more time for implementation, suggesting that ‘initial’ rather than ‘full’ implementation had been achieved, as demonstrated by these comments made in the last six months of two projects:
The most movement has been made in the last three or four months, which is why it’s a shame it’s not going for just another six months. (P)
What frustrates me at times is the fact that if I could just get a hold of that facility for a bit longer yes we could put things in place. We’re just now starting to see the real benefits of the project, just now starting to get things working how we thought that they should. (P)
With these caveats about the difficulty of judging the extent of implementation, it can be concluded that, in general, projects did what they set out to do at project commencement. There were some variations to project scope, usually an increase rather than a decrease, particularly with regard to education programs (e.g. providing additional education).