Evaluation of the Child Health Check Initiative and the Expanding Health Service Delivery Initiative - Final Report

C - Evaluation methodology


Page last updated: 17 April 2012

The evaluation involved four key phases: design; implementation; consultation, communication and dissemination; and analysis and reporting. These phases, and the key activities undertaken at each phase, are shown in Figure A and are discussed in further detail below. Although the phases are presented as a sequence, Phase 4 started before the completion of Phases 2 and 3 (as indicated by the dates).


Figure A: Evaluation phases and activities

Phase 1 involved designing the evaluation and developing the evaluation design report (EDR). The EDR set out our plans for addressing the evaluation objectives and proposed the methodology and process for the project as a whole. The activities undertaken as part of this phase are discussed in Table A.


Table A: Phase 1 activities
Phase 1: Design
Stakeholder engagement: At the beginning of the evaluation we had a series of meetings with the key program partner agencies—including DoHA (OATSIH), DHF, AMSANT and AIHW—and with the key evaluation governance body (the MoU Management Committee and its Indigenous Advisory Group) and members of the PHRG. The meetings were held to gain an understanding of stakeholder interests in, and perspectives on, the evaluation. We continued to engage with the key partners in the MoU Management Committee throughout phases 1 to 4.

We also met with staff from OATSIH who briefed us on each of the component parts of CHCI and EHSDI, and on aspects of related programs.

Document and literature review: We reviewed relevant documents on the programs and undertook a literature search to further our understanding of the context of the programs and how previous programs had worked, and to help us draw conclusions from a wider evidence base.
Description of programs: We used the information collected to build a description of the programs—funding, activities, goals and objectives, etc. We considered developing the descriptions into program logic models and included some initial models for EHSDI in the EDR as a basis for further development as part of the evaluation.
Communications plan: We developed a communications plan for the evaluation that identified key stakeholders and protocols for communicating with a number of these groups. It also identified communications risks and mitigation strategies.
Evaluation standards and guidelines: We adapted the evaluation standards of SEVAL (the Swiss Evaluation Society) to make them more relevant to the context of evaluating programs in remote Aboriginal communities. From this ideal set of standards we then developed a set of guidelines to provide specific direction on how the standards would be applied to the evaluation.
Evaluation design: On the basis of all the scoping and initiation activities in Phase 1, we developed an overall evaluation design for both the CHCI and EHSDI identifying the type of evaluation, the key questions that related to each of the evaluation objectives, and the overall methods. The evaluation design also outlined the sources of data and proposed indicators, and summarised expectations about data analysis and interpretation. It also identified potential limitations of the evaluation design.

Phase 2 involved implementing the evaluation plan, with the main activities structured around the core methods for collecting and analysing data. The workshops focused on the formative evaluation of EHSDI, and the main quantitative analysis focused on the coverage of the CHCI. The scope of the case studies covered both programs. The case study design was initiated in Phase 1 but, due to the scope of this work and the comparatively late inclusion of case studies in the overall evaluation design, it was carried over as a key Phase 2 activity. These activities are further described in Table B.

The other major activity in Phase 2, not described in Table B, was taking the evaluation through the ethics application process. The evaluation design was submitted to the Top End Human Research Ethics Committee and the Central Australian Human Research Ethics Committee in October 2009. It was granted full approval from both committees in November 2009. The CHCI quantitative analysis, as described in Table B, was submitted for approval after the overall design had been approved. It was submitted to the Top End Human Research Ethics Committee in January 2010 and the Central Australian Human Research Ethics Committee in February 2010. The committees granted approval in February 2010 and April 2010 respectively.


Table B: Phase 2 activities
Phase 2: Implementation
EHSDI workshop 1: Workshop 1 was held on 28 October 2009 and involved 17 members of the PHRG and the MoU Management Committee and its Indigenous Advisory Group. Being part of the formative evaluation of EHSDI, the workshop was designed to provide an opportunity to critically reflect on the implementation of EHSDI and to identify ways to improve its implementation.

The workshop discussed three interrelated issues: partnership, capacity and communication. These issues were identified as ‘hot topics’ in the reform process and as critical factors in the medium- to long-term performance of the PHC system.

We prepared short papers on each of the three issues, setting out what was meant by each issue and referring to literature on critical success factors or principles of good practice. The main purpose of these papers was to provide a prompt for thinking about these issues in the context of EHSDI. Each paper concluded with a set of questions to support this thinking.

At the workshop, for each of the topic areas we introduced the topic; asked each of the evaluation partner organisations (AMSANT, DoHA and DHF) to give a brief perspective on the topic; facilitated a roundtable discussion on the topic; and summarised the key issues and ideas arising from the discussion for agreement by the participants. The agreed issues and ideas formed the basis for a short written report to the participants.

Case study design: The case study design built on work initiated in the EDR and was undertaken to ensure that the evidence gathered in the case studies addressed the key evaluation questions. It covered six areas:
    • data to be collected (case study evaluation questions)
    • case study units of analysis and sites
    • data collection methods and protocols
    • analytical framework
    • criteria for interpretation
    • reporting.
It also included components of our project plan relating to case study schedules, project team members’ roles, and risks and contingencies, as well as detailed interview guides and a case study protocol.
CHCI quantitative analysis: This analysis was designed to help address the first evaluation objective, which was to assess the extent to which the child health checks reached the target population. It aimed to identify whether there was any difference between the population of children who received a child health check and the population of children who were eligible for a child health check but did not get one. In combination with other data collected in the evaluation, this would help to determine whether the group of children who had a child health check was more or less likely to need one than the group of children who did not have one.

We undertook this work in partnership with the DHF and AIHW.

The work involved:

      • matching records in the child health check dataset with records in the DHF’s Client Master Index (CMI) to identify the two population groups: children who had a child health check and children who were eligible for a child health check but did not have one
      • comparing the two population groups by key demographic factors: age, sex and district of residence
      • identifying the two population groups within the NT Hospital Morbidity data to compare the number of hospital admissions and diagnoses, including by region of residence
      • identifying the two population groups within the NT Midwives data and comparing data on mean birth weight and low birth weight
      • identifying the two population groups within the Growth Assessment and Action (GAA) data and comparing data on number of GAA attendances.
Each of these activities involved linking disaggregated data (in identifiable form); replacing individual identifiers with codes; aggregating and analysing data; and testing results for significance.
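The linkage steps described above can be sketched in code. The following is a minimal illustration only: it assumes simple deterministic matching on exact identifiers and uses a Pearson chi-square statistic as the significance test. The record fields, matching rule and counts are all hypothetical; the actual analysis used the DHF's CMI with its own matching and statistical procedures.

```python
# Illustrative sketch: deterministic matching on exact identifiers,
# pseudonymisation, aggregation, and a 2x2 significance test.
# Field names and counts are hypothetical, not the actual CMI schema.
import hashlib
import math

def pseudonym(record):
    """Replace identifying fields with an irreversible code."""
    key = "|".join((record["name"].lower(), record["dob"], record["sex"]))
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:12]

def link_groups(check_records, master_index):
    """Match child health check records against a client master index and
    return the two comparison groups as sets of pseudonymised codes."""
    checked = {pseudonym(r) for r in check_records}
    eligible = {pseudonym(r) for r in master_index}
    return eligible & checked, eligible - checked  # had a check / did not

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. admitted / not admitted by checked / not checked."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical aggregate counts of hospital admissions in each group:
#               admitted   not admitted
# had check        30           70
# no check         20          180
stat = chi_square_2x2(30, 70, 20, 180)
print(round(stat, 1))  # 19.2, above the 3.84 critical value (df=1, p=0.05)
```

Pseudonymising before aggregation mirrors the sequence in the text: individual identifiers are needed only at the matching step and are replaced with codes before any analysis.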

We had planned to examine other indicators within the GAA data but the proportion of records that could be matched with this collection was not sufficient to allow robust analysis. Similarly, we had planned to match the records with the Healthy School Age Kids (HSAK) data collection, but the proportion of records that could be matched was insufficient.

As part of this work, we also undertook to complete separate ‘nested’ studies on two other child health/wellness check programs—the GAA and HSAK programs. These studies were to use a similar method to:

      • compare the population that received a GAA check with the population that was eligible for a GAA check but did not receive one
      • compare the population that received an HSAK check with the population that was eligible for an HSAK check but did not receive one.
Because of data limitations, we did not complete these studies. Negotiating access to GAA data was protracted and could not be completed within the evaluation time frame. For HSAK there was no single, comprehensive data collection with adequate historical data to complete the analysis.
Case studies: The evaluation case studies examined all levels of the NT PHC system. The main method used to collect information was interviews with 154 people, including:
      • staff and representatives of Australian and NT government departments, AMSANT, NT AHF, PHRG, and Australian and NT non-government bodies
      • health professionals and groups based in four NT case study regions
      • health service staff, community members and parents/guardians based in five case study remote Aboriginal communities.
The interviews were largely carried out face to face. They were tailored to each stakeholder group and were structured around the following core case study questions:
      • How did the CHCI address children’s health and welfare needs?
      • Why did some children get checked while others did not?
      • Why were requested follow-up services not completed for all children?
      • How did the CHCI affect existing health service delivery and the health system?
      • How have social determinants contributed to change in health status?
      • How is PHC understood, and how is the vision for PHC system reform understood?
      • How are Aboriginal communities involved in health service planning and governance?
      • Why have the planning and consultation processes on regional reform been effective (or not)?
      • How is EHSDI funding being distributed, how much funding has been received, and how has this funding been allocated and spent?
      • How is the expenditure of EHSDI funding affecting PHC service delivery?
      • How has the RAHC affected workforce availability and flexibility?
      • How are information and data used to support continuous improvement in health service delivery?
The qualitative information from the interviews was analysed thematically and through an iterative process which involved identifying key concepts, conflicting and converging ideas, assumptions and theories, and interpretations. The analysis evolved from examining single cases to looking across the case studies and finally corroborating the data with that collected from other sources and through other methods.
EHSDI workshop 2: Workshop 2 was held on 11 May 2010 and involved 21 members of the PHRG and the MoU Management Committee and its Indigenous Advisory Group. It followed a similar process and format to workshop 1. The ‘hot topic’ identified for this workshop was sustainability.

The workshop included two further items:

      • a recap of the main issues and ideas that arose during workshop 1 around the three topics for that workshop—partnership, capacity and communication—with each of the partner organisations (AMSANT, DHF and DoHA) discussing where they thought progress had been made in the intervening six months, and where further attention was required
      • interim feedback on our work within each of the five case study communities.

Phases 3 (consultation, communication and dissemination) and 4 (analysis and final reporting) of the evaluation overlapped. Table C describes the main activities in these two phases sequentially, rather than by phase. This final report is the third draft of the final evaluation report.
The purposes of the consultation workshops and meetings in Phase 3 were to:
  • check for consistency between the evaluation, stakeholder perspectives and the plans for EHSDI and the delivery of child wellness programs
  • increase understanding of the evaluation and the utility of the findings
  • improve the accuracy and utility of the evaluation report
  • review and refine any future evaluation plans.
Table C: Phase 3 and Phase 4 activities
Phase 3: Consultation, communication and dissemination
Phase 4: Analysis and final reporting
Draft 1 report and summary report: The first draft of the final evaluation report was based on all relevant information collected in the previous two phases (design and implementation) and covered the full scope of the evaluation objectives. Its intended audience included the program partner agencies and the Indigenous Advisory Group.

An associated summary report was prepared for a wider audience, including the NT AHF, PHRG, regional PHC organisations and groups in the NT (such as regional steering committees), and senior executives of DoHA and other relevant Australian Government departments.

Program partner agencies: We met with the members of the MoU Management Committee (AMSANT, DHF and DoHA) and the Indigenous Advisory Group in September 2010 to share the draft evaluation findings and discuss their implications. The purpose of these meetings was to enable the evaluation partners and key program implementers to fully consider the findings, and to inform the ongoing PHC reform program.

We met separately with other program partner agencies and groups (the NT AHF, DHF staff, NT and Canberra-based OATSIH staff, and a small number of PHRG members) in October and December 2010 to share and discuss the draft evaluation findings.

Australian Government: We shared and discussed the draft evaluation findings with representatives of the Australian Government, including senior staff of DoHA and FaHCSIA, in December 2010.
RAHC: We met with the chair of the RAHC board and the general manager of RAHC to get feedback on the draft evaluation findings in December 2010.
NT regionally based health bodies: We met with members of the East Arnhem Regionalisation Steering Committee and staff from a regional ACCHO in November 2010. These discussions were aimed at sharing the draft evaluation findings and informing the ongoing PHC reform program.
Summary of feedback: We provided a report summarising the feedback and discussions from the above consultation activities to the MoU Management Committee.
Draft 2 report and summary report: The second draft of the final evaluation report incorporated consideration of all the feedback received from the consultation activities up to the end of December 2010. This draft and an associated summary report were further considered by the MoU Management Committee in February 2011.
Final report and summary report: The third and final draft of the final evaluation report, and its associated summary report, considered all the feedback from Phase 3.
Other proposed dissemination activities: We will prepare an evaluation newsletter which includes the main findings and messages from the evaluation and is targeted at health professionals and other people living in remote Aboriginal communities. This newsletter will be distributed to health centres in all NT remote communities, including the five case study communities.

Data collection methods used to address evaluation objectives

The following matrices (Tables D and E) show how the methods and data sources were used to address each of the evaluation objectives. The data sources identified in the tables are the main sources of information used to address each objective.


Table D: Child Health Check Initiative (main data sources by method for each evaluation objective)

1. Assess the extent to which the child health checks reached the target population
   • Key informant interviews: AMSANT, DHF, DoHA, FaHCSIA
   • Case studies: Interviews; Health service data
   • Health datasets: DHF datasets (CMI, Hospital Morbidity, Midwives, GAA)
   • Program data: Child health check datasets
   • Program documents and research literature: AIHW CHCI progress reports

2. Identify the prevalence and, if possible, the severity of the health conditions found through the child health checks and validate these findings with data from other sources
   • Health datasets: ABCD dataset
   • Program documents and research literature: AIHW CHCI progress reports; Morris study

3. Assess the extent to which requested primary care, allied health and specialist follow-up services have been received; gaps in existing health service delivery; and barriers to the completion of follow-up treatment
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews; Health service data
   • Program data: Child health check datasets
   • Program documents and research literature: AIHW CHCI progress reports; NTER review; DHF and DoHA correspondence/documents

4.1 Analyse whether the CHCI has led to improvements in health service delivery for Aboriginal and Torres Strait Islander children
   • Key informant interviews: AIDA, AMA, AMSANT, DHF, DoHA
   • Case studies: Interviews
   • Health datasets: Health workforce data; GAA dataset; Hospital Morbidity dataset
   • Program data: Financial data
   • Program documents and research literature: NTER review; AIDA HIA; DHF and DoHA correspondence/documents

4.2 Analyse the health status of children in relation to the social determinants of health and access to comprehensive PHC
   • Key informant interviews: AIDA
   • Case studies: Interviews; Questionnaire
   • Health datasets: DHF data (maternal health and hospital admissions)
   • Program data: Child health check datasets
   • Program documents and research literature: AIHW CHCI progress reports; NTER review; AIDA HIA

4.3 Analyse treatment outcomes
   • Case studies: Interviews
   • Program data: Child health check datasets
   • Program documents and research literature: AIHW CHCI progress reports
Table E: Expanding Health Service Delivery Initiative (main data sources by method for each evaluation objective)

1. Assess the impact and sustainability of the EHSDI on PHC service delivery and equitable distribution of resources
   • Key informant interviews: DHF, DoHA
   • Case studies: Interviews; Observation; Document review (e.g. Area Service Plans)
   • Workshops: Workshop 2
   • Program data: Financial data; Workforce data
   • Program documents and research literature: PHRG documents (policy and planning papers)

2. Assess the extent to which Aboriginal and Torres Strait Islander people(s) were engaged and empowered to contribute to health service planning, governance and responsiveness of services
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews; Questionnaire; Document review
   • Workshops: Workshop 1
   • Program documents and research literature: PHRG documents; Pathways document; Regionalisation guidelines; Research literature

3. Assess the impact and sustainability of the RAHC on health workforce availability and flexibility in the NT
   • Key informant interviews: AMSANT, DHF, DoHA, RAHC
   • Case studies: Interviews (including with health professionals placed under the RAHC)
   • Program data: RAHC database; Financial data
   • Program documents and research literature: Funding agreement; PHRG documents

4. Assess the efficiency of the EHSDI in terms of how well it has maximised health service delivery with the available funds
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews
   • Program data: Financial data; Workforce data
   • Program documents and research literature: Research literature; PHRG documents

5. Assess the effectiveness of the EHSDI in achieving change in health status, including measurement against primary care related health indicators as developed through the NT AHKPIs project and the analysis of the CHCI
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews
   • Health datasets: NT AHKPI data
   • Program documents and research literature: DHF and NT AHF documents

6.1 Assess the impact of the regional reform process on the efficient and effective operation of health services
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews; Document review
   • Workshops: Workshop 1
   • Program data: Financial data
   • Program documents and research literature: PHRG documents

6.2 Assess the impact of the regional reform process on clinical governance, including quality of health services delivery
   • Key informant interviews: AMSANT, DHF, DoHA
   • Program documents and research literature: PHRG documents

6.3 Assess the impact of the regional reform process on information systems and planning capacity
   • Key informant interviews: AMSANT, DHF, DoHA
   • Case studies: Interviews
   • Workshops: Workshop 1
   • Program documents and research literature: DHF and NT AHF documents

Case study questions

The 12 core evaluation questions developed for the case studies were:

Child Health Check Initiative

  1. How did the CHCI address children’s health and welfare needs?
  2. Why did some children get checked while others did not?
  3. Why were requested follow-up services not completed for all children?
  4. How did the CHCI affect existing health service delivery and the health system?
  5. How have social determinants contributed to change in health status?

Expanding Health Service Delivery Initiative

  1. How is PHC understood, and how is the vision for PHC system reform understood?
  2. How are Aboriginal communities involved in health service planning and governance?
  3. Why have the planning and consultation processes on regional reform been effective (or not)?
  4. How is EHSDI funding being distributed, how much funding has been received, and how has this funding been allocated and spent?
  5. How has the RAHC affected workforce availability and flexibility?
  6. How is the expenditure of EHSDI funding affecting PHC service delivery?
  7. How are information and data used to support continuous improvement in health service delivery?
