Vaccine Preventable Diseases and Vaccination Coverage in Australia, 2003 to 2005

Notes on interpreting data

Disclaimer: This is the fourth report on vaccine preventable diseases and vaccination coverage in Australia. It was produced by the National Centre for Immunisation Research and Surveillance of Vaccine Preventable Diseases and the Australian Institute of Health and Welfare on behalf of the Australian Government Department of Health and Ageing, and is published as a supplement to the Communicable Diseases Intelligence journal, Volume 31, June 2007.

Vaccine preventable diseases data

Comparisons between the notification, hospitalisation and death databases should be made with caution as they differ in their purposes, reporting mechanisms and accuracy. To provide the most recent information available, and to account for the varied reporting formats, different time periods have been reviewed for each data set. As there were no unique identifying codes to link records for the same individual across databases, and because of differences in the accuracy of each database, it was not possible to analyse deaths and hospitalisations as a subset of notifications.

The rates presented here are crude rates and may be confounded by differences in population structure (e.g. age, ethnicity and population density) between jurisdictions. An exploratory analysis of 2002 pneumococcal and incident hepatitis B notification rates for the Northern Territory found that directly age-standardising the rates to the 2001 Australian population did not change them significantly (pneumococcal crude rate 20.2 per 100,000 vs 20.5 per 100,000 age-standardised; hepatitis B crude rate 6.8 per 100,000 vs 5.7 per 100,000 age-standardised). As the Northern Territory has the age structure most different from the national profile, we have elected to continue using crude rates, as in previous reports. It is also important to note that jurisdictions with small populations (e.g. the Australian Capital Territory, Tasmania and the Northern Territory) may have high rates even with low absolute numbers of cases, so that a small change in numbers results in a large change in rates.
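
For readers unfamiliar with direct age-standardisation, the sketch below shows how a crude rate and a directly age-standardised rate are calculated. The age groups, case counts and populations are invented for illustration; they are not the Northern Territory or 2001 Australian standard population figures.

```python
# Illustrative only: crude versus directly age-standardised notification rates.
# Age groups, case counts and populations are invented; they are not the
# Northern Territory or 2001 Australian standard population figures.

cases      = {"0-24": 30, "25-64": 10, "65+": 2}                          # notified cases by age group
population = {"0-24": 90_000, "25-64": 95_000, "65+": 15_000}             # jurisdiction population
standard   = {"0-24": 6_500_000, "25-64": 10_000_000, "65+": 2_500_000}   # standard population

# Crude rate: total cases over total population, per 100,000.
crude_rate = 100_000 * sum(cases.values()) / sum(population.values())

# Directly age-standardised rate: age-specific rates weighted by the
# standard population's age distribution.
standard_total = sum(standard.values())
standardised_rate = 100_000 * sum(
    (cases[age] / population[age]) * (standard[age] / standard_total)
    for age in cases
)

print(f"Crude rate: {crude_rate:.1f} per 100,000")
print(f"Age-standardised rate: {standardised_rate:.1f} per 100,000")
```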

Notification data

A major limitation of the notification data is that they represent only a proportion of the total cases occurring in the community. This proportion may vary between diseases and over time, with infections diagnosed by a laboratory test more likely to be notified. Data accuracy may also vary between states and territories due to the use of different case definitions for surveillance and varying reporting requirements for medical practitioners, laboratories and hospitals. Under-reporting of notifiable diseases by doctors and by hospitals has been documented in Australia.17–19 There are eight different Public Health Acts in operation and no legislative requirement to report to NNDSS, although all jurisdictions do so, with daily updates entering the system as of 2004.20 Data constraints are applied to uploaded fields to ensure validity. This is important given that each jurisdiction has its own reporting system, with different fields and coding systems in use. A recent evaluation of NNDSS found that this diversity of jurisdictional reporting systems was a major factor limiting data quality and completeness.20 Assessing the sensitivity and positive predictive value of the system was beyond the scope of the evaluation. The review noted that the main use of NNDSS for public health action has been in the area of vaccine preventable diseases.

Hospitalisation data

The AIHW publishes regular overviews of Australian hospitalisation statistics, including details of the number of hospitals reporting and any documented data problems. In the financial years covered by this report (2002/2003, 2003/2004 and 2004/2005) there were over 6.6 million, 6.8 million and 7 million separations, respectively.21–23 Almost all public and private hospitals were included in each of these periods.21–23 The AIHW performs logical validations on the ICD-10-AM coded data, for example checks of sex- and age-specific diagnoses. Coding audits are also variously performed at hospital level or at state and territory level, using software such as PICQ (Performance Indicators for Coding Quality) developed by the National Centre for Classification in Health (NCCH).24
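
As an illustration of the kind of logical validation described above, the following sketch flags diagnosis codes that are inconsistent with a record's sex or age. The rules, codes and records shown are hypothetical examples, not the AIHW's actual edit checks.

```python
# Hypothetical illustration of logical validation of coded separations.
# The rules and records below are examples only, not the AIHW's actual edit checks.

records = [
    {"id": 1, "sex": "M", "age": 45, "diagnosis": "O80"},  # O80: single spontaneous delivery
    {"id": 2, "sex": "F", "age": 8,  "diagnosis": "O80"},
    {"id": 3, "sex": "F", "age": 3,  "diagnosis": "A37"},  # A37: whooping cough
]

def validate(record):
    """Return a list of consistency problems for one coded record."""
    problems = []
    obstetric = record["diagnosis"].startswith("O")  # ICD-10-AM chapter O: pregnancy and childbirth
    if obstetric and record["sex"] == "M":
        problems.append("obstetric code on a male record")
    if obstetric and record["age"] < 10:
        problems.append("obstetric code with an implausible age")
    return problems

for r in records:
    for p in validate(r):
        print(f"Record {r['id']}: {p}")
```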

Some variation in hospital access, admission practices and record coding may occur between regions and over time, and may affect the use of hospitalisation data for monitoring disease trends over time and between jurisdictions. It is likely that the quality of coding in Australia has improved over time due to increasing levels of training amongst coders25 and the use of coding audits (M Cumerlato, NCCH, personal communication). The National Clinical Coder Workforce Survey of over 1,000 Australian coders in 2002 found that, whilst just over half held tertiary qualifications, 10% had no formal coding education. About two thirds of coders reported undertaking regular quality assurance activities in relation to coding.25

In 1998/1999, most states and territories began using ICD-10-AM and in 1999/2000, all jurisdictions were using the new classification. This change impacted on the sensitivity and specificity of some diagnostic codes relevant to this report. The most notable impact has been on the number of hospitalisations for acute hepatitis B as, unlike the previously used ICD-9-CM, ICD-10-AM allows differentiation between acute and unspecified infection. The NCCH updates the ICD-10-AM every two years, under the guidance of the Australian Coding Standards Advisory Committee.26,27

There are also limitations associated with the use of ICD codes to identify cases. Errors that cause the ICD code to differ from the true disease include both random and systematic measurement error and may occur either along the patient pathway (e.g. level of detail documented in medical records, clinician experience) or along the paper trail (e.g. transcribing errors, coder errors such as mis-specification, unbundling (assigning codes for all the separate parts of a diagnosis rather than the overall diagnosis) and upcoding (using reimbursement values to determine the order of coding)).28 A study of pertussis in children’s hospitals in Sydney noted that, whilst variability in clinician diagnostic practices may reduce the sensitivity of pertussis coding, high specificity enables the codes to be useful for surveillance of infant pertussis trends.19 In the National Clinical Coder Workforce Survey, most Australian coders (77%) nominated incomplete medical record content as the factor most likely to affect coding quality, followed by the principal diagnosis not being identified, complications/co-morbidities not being identified, illegible medical record entries and pressure to maintain coding throughput.25 In Australia, hospital coding errors have been reported to occur more commonly for diseases with which the coder was less familiar (e.g. rare diseases such as tetanus) and for admissions with multiple diagnoses.29

As indicated in relevant disease chapters, the short lengths of stay and lack of notification to public health authorities strongly suggest that some cases with hospitalisation codes for rare diseases, such as tetanus and acute poliomyelitis, are likely to be due to coding errors. For some diseases, such as Haemophilus influenzae type b infection, both the previously used ICD-9-CM and current ICD-10-AM codes lack specificity. This is in contrast to the more stringent case definitions used for notification data. For example, Wood et al recently documented the poor specificity of hospitalisations coded as acute epiglottitis, with most cases on record review found not to be acute epiglottitis and, in the post-vaccination era, none of these admissions due to Hib disease.30 Thus, care must be taken in ensuring the ICD codes accurately reflect diagnosis of the condition of interest. Generally, codes are most likely to be accurate when the disease has a clear definition with observable signs and symptoms, highly qualified physicians document information on the patient, experienced coders with full access to clinical information assign the codes and the codes are not new.28

It must also be noted that the hospitalisation database contains a record for each admission, which means that there are separate records for each readmission or inter-hospital transfer. This is unlikely to have a major impact on the numbers reported for most diseases reviewed, as they are acute illnesses. For hospitalisations where the code of interest was not the principal diagnosis, the code of interest will have been recorded as a co-morbidity (additional or secondary diagnosis), the relative importance of which cannot be gauged.
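
The distinction between principal and additional diagnoses can be illustrated with the following sketch, which counts separations with a code of interest in either field. The record layout, field names and codes are hypothetical, not the actual structure of the hospitalisation database.

```python
# Hypothetical illustration of counting separations by diagnosis field.
# The record layout, field names and codes of interest are examples only,
# not the actual AIHW dataset structure.

separations = [
    {"principal": "A37.0", "additional": []},         # pertussis as the principal diagnosis
    {"principal": "J18.9", "additional": ["A37.0"]},  # pertussis as an additional diagnosis
    {"principal": "J45.9", "additional": []},         # unrelated admission (asthma)
]

code_of_interest = "A37"  # ICD-10-AM category for whooping cough

principal_only = sum(
    1 for s in separations if s["principal"].startswith(code_of_interest)
)
principal_or_additional = sum(
    1 for s in separations
    if s["principal"].startswith(code_of_interest)
    or any(d.startswith(code_of_interest) for d in s["additional"])
)

print(f"Separations with {code_of_interest} as principal diagnosis: {principal_only}")
print(f"Separations with {code_of_interest} in any diagnosis field: {principal_or_additional}")
```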

Death data

Mortality data were analysed by year of registration rather than by year of death, thereby avoiding incomplete data for the latest available year. In recent years, fewer than 5% of deaths occurring in a given calendar year have been registered in the subsequent year,31 most of these being deaths that occurred in December.

Only those deaths where the underlying cause of death was the disease of interest are reported here. Hence, deaths where the disease of interest was a contributing cause of death are not included.

The problems associated with the accuracy of the ICD codes used for hospital separations may also apply to the mortality data. Information on cause of death is reported routinely for each death on a Standard Medical Certificate of Cause of Death completed by a medical practitioner or coroner. The person completing the certificate must nominate the underlying (principal) cause of death and any associated conditions.31 The accuracy of ascertainment of the cause of death may clearly vary according to the experience of the practitioner, the complexity of the disease process and the circumstances of the death. The rate of hospital autopsy has been steadily declining (to approximately 12% in Australia in 2002/2003)32 and inaccuracy in cause of death certification, compared with the gold standard of autopsy findings, is clearly documented,33–36 with a recent meta-analysis estimating that around one third of deaths may be misclassified on death certificates.37 In the case of pertussis and tetanus, studies have documented that deaths due to these diseases, which can otherwise be identified through disease surveillance systems and hospitalisation records, sometimes go unrecorded on death certificates.38,39 In addition, the number of causes of death recorded by the ABS increased from 187 in 1907 to around 2,850 in 2000 as medical understanding increased.31 Thus, despite comprehensive mapping algorithms, which attempt to take into account changing disease classification over time, caution is required in interpreting mortality trends.

In processing deaths registered from 1 January 1997, Australia adopted the use of the Automated Coding System (ACS) and introduced ICD-10 codes. As a result, there is now a break in the underlying causes of death series between 1996 and 1997. This is especially important where the death was recorded as hepatitis B. Prior to the use of ICD-10, acute, chronic and unspecified infections could not be differentiated. A large artefactual rise in deaths coded as due to pneumonia in 1997–1998 has also been ascribed to changes in coding practices during this period.40

Vaccination coverage data

Limitations of the data available from the ACIR must be considered when they are used to estimate vaccination coverage. Vaccine coverage estimates calculated using ACIR data should be considered minimum estimates due to under-reporting.5,41 Another limitation of ACIR data is that records are held only for children up to seven years of age. Coverage is calculated only for children registered on Medicare; however, by the age of 12 months, it is estimated that over 99% of Australian children have been registered with Medicare.16,41,42
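
As a simplified illustration of why register-derived estimates are minimum estimates, the sketch below calculates coverage as the proportion of a Medicare-registered cohort with a complete vaccination record. The cohort definition and counts are invented, and this is not the ACIR's published methodology.

```python
# Hypothetical illustration of a register-based coverage calculation.
# The cohort definition and counts are invented; this is not the ACIR's
# published methodology.

cohort_registered = 64_000          # children in a birth cohort registered on Medicare
recorded_fully_vaccinated = 58_500  # children with all scheduled vaccines recorded on the register

coverage = 100 * recorded_fully_vaccinated / cohort_registered
print(f"Recorded coverage: {coverage:.1f}%")

# Because some vaccinations that were given are never reported to the register,
# the recorded figure understates true coverage: it is a minimum estimate.
```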
