With 25 sites and approximately 530 interview participants, organising the qualitative data in a meaningful way was a crucial exercise.

A pro-forma was developed to summarise the interview and desktop data for each site visit, grouping the data under higher-order questions related to the two evaluation objectives. The site summaries were completed by the evaluator following each set of consultation interviews with an RCS or UDRH and their stakeholders. An additional form was developed to capture specific raw data, such as quotes, comments and perceptions, as well as issues which needed to be explored more closely. These two documents, compiled for each site, formed the central components of the consultation data collection.

In essence, a summative analysis took place after each consultation, as the evaluator assessed the information and collated it into the pro-forma. The purpose of this summative analysis was to 'sum up': to create a preliminary overall picture of the Program site, its context, its issues and achievements, and the extent to which it was contributing to the national objectives. Each summative analysis was completed as close to the time of the consultation as possible to ensure accurate recall.

The qualitative analysis therefore comprised several activities:

  • Sifting the raw data to gather critical points relevant to the strategic analysis, answering key questions from the data and moving from the descriptive to the analytical for each site;
  • Making sense of the data in responding to broader questions extending across the national Program, that is, considering what could be learned from each site to illustrate the strategic aims and outcomes of the national Program;
  • Undertaking a thematic analysis as an evaluation team; and
  • Aggregating the data to undertake a critical analysis against the higher-order research questions.

Each member of the evaluation team was responsible for compiling the data from their consultations. A day-long team meeting was held at the beginning of the consultation process to ensure consistency across the evaluation team with regard to program objectives, and a second day-long meeting was scheduled towards the end of the consultation process for analysis purposes. At this meeting, key themes emerging from the data were identified; following the meeting, the data was further collated to support these themes. The written data was then analysed again at an aggregate level during the development of the draft report.

In general, it was decided not to focus on individual sites (other than the summary information in Appendix A), as the purpose of the evaluation was to examine the national Programs rather than the individual institutions. However, within Chapters 4 and 5, reference is often made to the activities of specific sites as illustrations of themes within the text. In addition, several case studies were developed which highlight good practice or positive contributions demonstrated by various sites. Although these name particular RCSs or UDRHs, they are not intended to privilege those sites above others, which may also be undertaking similar activities. Rather, the case studies attempt to create a picture of the contribution of the UDRHs and RCSs nationally by providing examples of the tangible activities and outputs resulting in local regions.