Vaccine Preventable Diseases in Australia, 2005 to 2007

Notes on interpreting data

Page last updated: 24 December 2010

Vaccine preventable diseases data in general

Comparisons between the notification, hospitalisation and death datasets should be made with caution, since they differ in the purpose of data collection, reporting mechanisms, accuracy, timeliness and period of reporting.

In this report, in order to provide the most recent information available and to accommodate the varied reporting formats, different time periods (in each case the most recent 2 years of available data) have been selected for review from each dataset. As there are no unique identifying codes to link records for the same individual across these datasets, and because of differences in case definitions and in the completeness and accuracy of each dataset, it is not possible to analyse deaths and hospitalisations as subsets of notifications.

For some diseases, there are no specific ICD codes that correspond to the particular disease condition of interest. This will limit the validity of comparisons between notification and hospitalisation or death data. Examples include invasive pneumococcal disease and invasive Haemophilus influenzae type b disease. The methods and algorithms used to select surrogate ICD codes to match the notification case definitions are explained in the relevant disease chapters and in the notes on interpreting hospitalisation data in the following section.

The rates presented in this report are crude rates and may be confounded by differences in the population (e.g. age structure, ethnicity and population density) between jurisdictions. An exploratory analysis of 2002 pneumococcal and incident hepatitis B notification rates for the Northern Territory found that directly age-standardising the rates to the 2001 Australian population did not change the rates significantly (pneumococcal crude rate 20.2 per 100,000 versus 20.5 per 100,000 age-standardised; hepatitis B crude rate 6.8 per 100,000 versus 5.7 per 100,000 age-standardised). The Northern Territory is the jurisdiction with a population age structure most different from other jurisdictions. In view of this, and to maintain consistency with previous reports in this series, this report continues to report using crude rates. It is also important to note that high disease rates may be observed even with small absolute numbers of cases in jurisdictions with small populations (e.g. the Australian Capital Territory, Tasmania, and the Northern Territory), and a small change in the numbers may result in a relatively large change in rates.
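The direct age-standardisation used in the exploratory analysis described above can be sketched as follows. All counts, populations and standard weights in this example are hypothetical, chosen only to illustrate the method; they are not figures from this report.

```python
# Hypothetical age-specific case counts and populations for a small jurisdiction.
cases = {"0-4": 12, "5-14": 8, "15-24": 10, "25-59": 25, "60+": 5}
pop = {"0-4": 18_000, "5-14": 34_000, "15-24": 32_000, "25-59": 98_000, "60+": 18_000}

# Hypothetical standard population weights (e.g. proportions of the
# 2001 Australian population in each age group); they sum to 1.0.
std_weights = {"0-4": 0.066, "5-14": 0.139, "15-24": 0.136, "25-59": 0.493, "60+": 0.166}

# Crude rate: total cases over total population, per 100,000.
crude = sum(cases.values()) / sum(pop.values()) * 100_000

# Directly age-standardised rate: each age-specific rate is weighted by
# the standard population's age structure, removing the effect of the
# jurisdiction's own age distribution.
standardised = sum(
    std_weights[age] * (cases[age] / pop[age]) * 100_000 for age in cases
)

print(f"crude rate: {crude:.1f} per 100,000")
print(f"age-standardised rate: {standardised:.1f} per 100,000")
```

With these illustrative inputs the two rates are similar, echoing the finding above that standardisation changed the Northern Territory rates only modestly; a jurisdiction whose age structure differs sharply from the standard would show a larger gap.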

To assist with interpreting data on the proportion of disease in various age groups, Tables 2.1 and 2.2 show the proportions of the estimated Australian population in the age groups used in the standard tables, and in an additional age grouping used in some chapters of this report.

Table 2.1: Proportions of the Australian population in the age groups used in standard disease data tables, by year

Year    0–4 yrs   5–14 yrs   15–24 yrs   25–59 yrs   60+ yrs   Total
2005    6.3%      13.4%      13.8%       48.9%       17.5%     100.0%
2006    6.3%      13.2%      13.9%       48.8%       17.8%     100.0%
2007    6.3%      13.1%      13.9%       48.5%       18.2%     100.0%

Table 2.2: Proportions of the Australian population in the age groups used in some specific disease data tables or figures, by year

Year    0–4 yrs   5–9 yrs   10–19 yrs   20–59 yrs   60+ yrs   Total
2005    6.3%      6.6%      13.7%       55.9%       17.5%     100.0%
2006    6.3%      6.5%      13.6%       55.8%       17.8%     100.0%
2007    6.3%      6.4%      13.5%       55.6%       18.2%     100.0%

Notification data

A major limitation of the notification data is that, due to under-reporting, they represent only a proportion of all the cases occurring in the community. This proportion may vary between diseases, over time, and across jurisdictions. An infectious disease diagnosed by a laboratory test is more likely to be notified than one diagnosed only on clinical grounds. Data accuracy may also vary among jurisdictions due to the use of different case definitions for surveillance (prior to adoption of the national case definitions) and varying reporting requirements and mechanisms for medical practitioners, hospitals and laboratories. While in three jurisdictions ≥95% of notifications originated from laboratories only, 43%–59% of notifications in three other states originated from both doctors and laboratories.5 Under-reporting of notifiable diseases by doctors and from hospitals has been documented in Australia.17–19

Hospitalisation data

The AIHW publishes regular overviews of Australian hospitalisation statistics, including details of the number of hospitals reporting and any documented data problems. In the two financial years covered by this report (2005/2006 and 2006/2007), there were approximately 7.3 million and 7.6 million separations, respectively.14,20 Almost all public and private hospitals were included in each of these periods.

The AIHW performs logical validations on the ICD-10-AM coded data; for example, for sex- and age-specific diagnoses. Coding audits and coding quality improvement activities are variously performed at hospital level and/or state and territory level, and in some states also enhanced by using software such as PICQ (Performance Indicators for Coding Quality) developed by the National Centre for Classification in Health (NCCH).21 Generally, states and territories consider that coding of the hospitalisation data in recent years has been of high quality.14

Some variation in hospital access, admission practices and record coding may occur between regions and over time and this may impact upon the use of hospitalisation data for monitoring disease trends over time and between jurisdictions. It is likely that the quality of coding in Australia has improved over time due to increasing levels of training among coders22 and hospitals performing coding audits and other quality initiatives to assess the quality of the coded data (M Cumerlato, NCCH, personal communication, September 2009). The National Clinical Coder Workforce Survey of over 1,000 Australian coders in 2002 found that just over half of clinical coders held tertiary qualifications, and 10% of them had no formal coding education. About two-thirds of coders reported undertaking regular quality assurance activities relevant to clinical coding.22

In 1998/1999, most states and territories began using ICD-10-AM and, since 1999/2000, all jurisdictions use the new classification. This change may impact on the sensitivity and specificity of some diagnostic codes relevant to this report, especially with respect to historical trend analyses. The NCCH updates the ICD-10-AM every 2 years, under the guidance of the Australian Coding Standards Advisory Committee.23,24

There are also limitations associated with the use of ICD codes to identify cases. Errors that cause the ICD code to differ from the true disease include both random and systematic measurement errors, and may occur either along the patient pathway (e.g. the level of detail documented in medical records, clinicians’ experience) or along the paper trail (e.g. transcribing errors, coder errors such as mis-specification, unbundling [assigning codes for the separate parts of a diagnosis rather than the overall diagnosis] and upcoding [using reimbursement values to determine the order of coding]).25 A Canadian study based on four teaching hospitals showed the sensitivity of coding of hospital discharge data to range from 9.3% to 83.1% using International Statistical Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes and from 12.7% to 80.8% using ICD-10 codes, varying with the conditions assessed.26 A study of pertussis in children’s hospitals in Sydney noted that, while variability in clinician diagnostic practices may reduce the sensitivity of pertussis coding, high specificity enables the codes to be useful for surveillance of infant pertussis trends.19 In the National Clinical Coder Workforce Survey, most Australian coders (77%) nominated incomplete medical record content as the factor most likely to affect coding quality, followed by the principal diagnosis not being identified, complications/co-morbidities not being identified, illegible medical record entries, and pressure to maintain coding throughput.22 In Australia, hospital coding errors have been reported to occur more commonly for diseases with which the coder was less familiar (e.g. rare diseases such as tetanus) and for admissions with multiple diagnoses.27
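Coding validity studies such as the Canadian one cited above quantify agreement by comparing each record's assigned code against a gold-standard chart review and tabulating the results in a 2×2 table. A minimal sketch of that calculation, with hypothetical counts:

```python
def coding_validity(true_pos, false_neg, false_pos, true_neg):
    """Return (sensitivity, specificity, positive predictive value)
    for an ICD code evaluated against a gold-standard chart review."""
    sensitivity = true_pos / (true_pos + false_neg)  # coded, among truly diseased
    specificity = true_neg / (true_neg + false_pos)  # not coded, among non-diseased
    ppv = true_pos / (true_pos + false_pos)          # truly diseased, among coded
    return sensitivity, specificity, ppv

# Hypothetical example: 40 true cases, of which 28 received the ICD code,
# and 12 of 960 non-cases were coded in error.
sens, spec, ppv = coding_validity(true_pos=28, false_neg=12, false_pos=12, true_neg=948)
print(f"sensitivity={sens:.2f} specificity={spec:.4f} PPV={ppv:.2f}")
```

In this illustration sensitivity is modest (0.70) while specificity is high (0.9875), the pattern the Sydney pertussis study above found sufficient for monitoring trends even when absolute counts are understated.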

For a few rare diseases, such as acute poliomyelitis, tetanus and diphtheria, some of the hospitalisation episodes or deaths coded as due to these diseases are likely to be coding errors, or errors related to inaccurate documentation, as suggested by the short lengths of stay of the hospitalisation episodes and the lack of any corresponding notification to public health authorities; these instances are indicated in the relevant disease chapters.

The ICD codes of diagnosis chosen for analysis of a disease should accurately reflect the condition of interest. For some diseases, such as Hib infection, both the previously used ICD-9-CM and current ICD-10-AM codes lack specificity. This is in contrast to the more stringent case definitions used for notification data. For example, for this report, only the ICD code of G00.0 (Haemophilus meningitis) was selected as the indicator for hospitalisation due to H. influenzae type b disease, as other codes, including those of H. influenzae pneumonia, H. influenzae septicaemia, H. influenzae infection and acute epiglottitis, are considered insufficiently specific. Wood et al have documented the poor specificity of hospitalisations coded as acute epiglottitis, with most cases on record review found not to be acute epiglottitis and, in the post-vaccination era, none of these admissions due to Hib disease.28 Generally, codes are most likely to reflect the disease accurately when the disease can be clearly defined with observable signs and symptoms, when information about the patient is documented by highly qualified physicians, when the coders are experienced and have full access to clinical information while assigning the codes, and if the codes are not new.25 For each disease in this report, the ICD code(s) that have been selected to constitute the indicator for hospitalisation due to the disease are listed in the ‘case definition’ box on the first page of each disease chapter.
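The case-selection step described above, restricting Hib hospitalisations to the single specific code G00.0 rather than the broader H. influenzae codes, can be sketched as follows. The record structure and data are purely illustrative, not the AIHW database schema.

```python
# Illustrative hospitalisation records: each has a principal diagnosis
# and zero or more additional diagnoses, as ICD-10-AM codes.
records = [
    {"id": 1, "principal_dx": "G00.0", "additional_dx": []},         # Haemophilus meningitis
    {"id": 2, "principal_dx": "J14",   "additional_dx": []},         # H. influenzae pneumonia (excluded)
    {"id": 3, "principal_dx": "J05.1", "additional_dx": ["B96.3"]},  # acute epiglottitis (excluded)
    {"id": 4, "principal_dx": "A41.3", "additional_dx": ["G00.0"]},  # G00.0 as additional diagnosis
]

# The single indicator code used for Hib hospitalisations in this report.
HIB_CODES = {"G00.0"}

def is_hib_case(rec, include_additional=True):
    """True if the record carries an indicator code, optionally
    searching additional diagnoses as well as the principal one."""
    codes = {rec["principal_dx"]}
    if include_additional:
        codes.update(rec["additional_dx"])
    return bool(codes & HIB_CODES)

hib_ids = [r["id"] for r in records if is_hib_case(r)]
print(hib_ids)  # → [1, 4]
```

Note how the pneumonia, septicaemia and epiglottitis codes are deliberately excluded despite mentioning H. influenzae, reflecting the poor specificity documented by Wood et al, and how record 4 is captured only when additional diagnoses are searched, the co-morbidity caveat discussed below.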

It must be noted that in the AIHW hospitalisation database, there is one record for each hospital admission episode. This means that there will be separate records for each re-admission or inter-hospital transfer. This is unlikely to have a major impact on the numbers reported for most of the diseases reviewed in this report, as they are mostly acute diseases. It should also be noted that it is difficult to gauge the relative importance of hospitalisations where the coded disease of interest was not the principal diagnosis but was recorded as an additional or secondary diagnosis for that hospitalisation episode. This indicates that the condition might be a co-morbidity.

Hospitalisations represent the more severe end of the morbidity spectrum of a disease, and the extent to which ICD-coded hospitalisation data reflect the burden of the disease of interest varies between diseases. A general limitation of this data source is that hospitalisation may be affected by variations in admission practices and in the availability of, and access to, hospitals.14

Death data

Mortality data are reported and analysed by the year of registration rather than by year of death. This avoids problems associated with incomplete data for the latest available year. In recent years, less than 5% of deaths in a particular calendar year are registered in the subsequent year,29 the bulk of which are deaths that occurred in December of that calendar year.

In this report, only the death records in which the disease of interest was recorded as the underlying cause of death (i.e. the single disease that initiated the train of morbid events leading directly to death) are reported. Hence, deaths where the disease of interest was a contributing cause of death are not included. The extent of underestimation due to this limitation varies with different diseases.

The problems associated with the accuracy of ICD coding used for hospital separations, discussed above, may also be relevant for the mortality data. In Australia, information on the cause of death is reported routinely for every death on a standard Medical Certificate of Cause of Death completed by a medical practitioner or a coroner. The person completing the certificate must nominate the underlying (principal) cause of death and any associated conditions.29 The accuracy in ascertaining the cause of death may vary according to the experience of the practitioner, the complexity of the disease process and the circumstances of the death. The rate of hospital autopsy has been steadily declining (to approximately 12% in Australia in 2002/2003)30 and inaccuracy in cause of death certification, compared to the gold standard of autopsy findings, has been documented,31–34 although the studies were mainly based on non-infectious conditions. A recent meta-analysis estimated that at least one-third of deaths may be misclassified on death certificates and half of autopsies produced findings unsuspected before death (although the leading discrepant diagnoses were pulmonary embolism, cardiovascular disease, pneumonia and infections at other sites).35 Studies have found that infectious diseases being the missed or discordant diagnosis when comparing clinical and autopsy diagnoses were not uncommon, although vaccine preventable diseases were not specifically identified.36,37 In the case of pertussis and tetanus, studies have documented that deaths due to these diseases, which can be otherwise identified through disease surveillance systems and hospitalisation records, sometimes go unrecorded on death certificates.38,39

In addition, newer versions of the ICD have been adopted over time, as growing medical understanding required greater precision in identifying conditions. The number of causes of death recorded by the ABS increased from 187 in 1907 to around 2,850 in 2000.29 Thus, despite comprehensive mapping algorithms that attempt to account for changing disease classification over time, caution is required in interpreting trends in these mortality data. Australia adopted the Automated Coding System (ACS) and introduced ICD-10 codes for processing deaths registered from 1 January 1997; causes of death for deaths registered from 1979 to 1996 were classified using ICD-9.29 As a result, there may be some discontinuity in the underlying cause of death series between 1996 and 1997. A large artefactual rise in deaths coded as due to pneumonia in 1997–1998 has also been ascribed to changes in coding practices during this period.40

References

1. Menzies R, Turnour C, Chiu C, McIntyre P. Vaccine Preventable Diseases and Vaccination Coverage in Aboriginal and Torres Strait Islander People, Australia, 2003 to 2006. Commun Dis Intell 2008;32(Suppl):S2–S67.

2. National Health Security Act, No 174. 2007. Available from: http://www.comlaw.gov.au/ComLaw/Legislation/Act1.nsf/0/A005BA0145A00248CA25736A00126AA5?OpenDocument Accessed on 31 March 2010.

3. National Health Security (National Notifiable Disease List) Instrument 2008 (Legislative Instrument – F2008L00800). Available from: http://www.comlaw.gov.au/ComLaw/Legislation/Act1.nsf/0/A005BA0145A00248CA25736A00126AA5?OpenDocument Accessed on 31 March 2010.

4. National Health Security Agreement. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/ohp-nhs-agreement.htm Accessed on 31 March 2010.

5. NNDSS Annual Report Writing Group. Australia’s notifiable disease status, 2007: annual report of the National Notifiable Diseases Surveillance System. Commun Dis Intell 2009;33(2):89–154.

6. Public Health Committee, National Health and Medical Research Council. Surveillance case definitions. Canberra: National Health and Medical Research Council, 1994.

7. Communicable Diseases Network Australia. Surveillance case definitions for the Australian National Notifiable Diseases Surveillance System. 2004. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/cdna-casedefinitions.htm Accessed on 24 August 2009.

8. Begg K, Roche P, Owen R, Liu C, Kaczmarek M, Hii A, et al. Australia’s notifiable diseases status, 2006: annual report of the National Notifiable Diseases Surveillance System. Commun Dis Intell 2008;32(2):139–207.

9. Australian Institute of Health and Welfare. Australia’s Health 2008: the eleventh biennial health report of the Australian Institute of Health and Welfare. AIHW Cat. No. AUS 99. Canberra: Australian Institute of Health and Welfare, 2008.

10. McIntyre P, Amin J, Gidding H, Hull B, Torvaldsen S, Tucker A, et al. Vaccine Preventable Diseases and Vaccination Coverage in Australia, 1993–1998. Commun Dis Intell 2000;24(Suppl):S1–S83.

11. McIntyre P, Gidding H, Gilmour R, Lawrence G, Hull B, Horby P, et al. Vaccine Preventable Diseases and Vaccination Coverage in Australia, 1999 to 2000. Commun Dis Intell 2002;26(Suppl):S1–S111.

12. Brotherton J, McIntyre P, Puech M, Wang H, Gidding H, Hull B, et al. Vaccine Preventable Diseases and Vaccination Coverage in Australia, 2001 to 2002. Commun Dis Intell 2004;28(Suppl 2):S1–S116.

13. Brotherton J, Wang H, Schaffer A, Quinn H, Menzies R, Hull B, et al. Vaccine Preventable Diseases and Vaccination Coverage in Australia, 2003 to 2005. Commun Dis Intell 2007;31(Suppl):S1–S152.

14. Australian Institute of Health and Welfare. Australian hospital statistics 2006–07. AIHW Cat. No. HSE 55 (Health Services Series No. 31). Canberra: Australian Institute of Health and Welfare, 2008.

15. Hull B, Deeks S, Menzies R, McIntyre P. Immunisation coverage annual report, 2007. Commun Dis Intell 2009;33(2):170–187.

16. Hull B, Menzies R, McIntyre P. Immunisation coverage annual report, 2008. Commun Dis Intell 2010;34(3):241–258.

17. Blogg S, Trent M. Doctors’ notifications of pertussis. N S W Public Health Bull 1998;9(4):53–54.

18. Allen CJ, Ferson MJ. Notification of infectious diseases by general practitioners: a quantitative and qualitative study. Med J Aust 2000;172(7):325–328.

19. Bonacruz-Kazzi G, McIntyre P, Hanlon M, Menzies R. Diagnostic testing and discharge coding for whooping cough in a children’s hospital. J Paediatr Child Health 2003;39(8):586–590.

20. Australian Institute of Health and Welfare. Australian Hospital Statistics 2005–06. AIHW Cat. No. HSE 50 (Health Services Series No. 30). Canberra: Australian Institute of Health and Welfare, 2007.

21. National Centre for Classification in Health. Performance Indicators for Coding Quality (PICQ) 2006. Available from: http://nis-web.fhs.usyd.edu.au/ncch_new/PICQ.aspx Accessed on 24 August 2009.

22. McKenzie K, Walker S, Dixon-Lee C, Dear G, Moran-Fuke J. Clinical coding internationally: a comparison of the coding workforce in Australia, America, Canada and England. Presentation to the 14th International Federation of Health Records Congress, Washington, October 2004. Available from: http://eprints.qut.edu.au/archive/00000575/01/575.pdf Accessed on 24 August 2009.

23. Perry C, Harrison S. The Australian Coding Standards Advisory Committee. Health Inf Manag 2004;32(1):26–30.

24. Bramley M. A framework for evaluating health classifications. Health Inf Manag 2005;34(3):71–83.

25. O’Malley KJ, Cook KF, Price MD, Wildes KR, Hurdle JF, Ashton CM. Measuring diagnoses: ICD code accuracy. Health Serv Res 2005;40(5 Pt 2):1620–1639.

26. Quan H, Li B, Saunders LD, Parsons GA, Nilsson CI, Alibhai A, et al. Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv Res 2008;43(4):1424–1441.

27. MacIntyre CR, Ackland MJ, Chandraraj EJ, Pilla JE. Accuracy of ICD-9-CM codes in hospital morbidity data, Victoria: implications for public health research. Aust N Z J Public Health 1997;21(5):477–482.

28. Wood N, Menzies R, McIntyre P. Epiglottitis in Sydney before and after the introduction of vaccination against Haemophilus influenzae type b disease. Intern Med J 2005;35(9):530–535.

29. Australian Institute of Health and Welfare. Mortality over the twentieth century in Australia: trends and patterns in major causes of death. AIHW Cat. No. PHE 73. (Mortality Surveillance Series No. 4). Canberra: Australian Institute of Health and Welfare, 2006. Available from: http://www.aihw.gov.au/publications/index.cfm/title/10154 Accessed on 24 August 2009.

30. The Royal College of Pathologists of Australasia Autopsy Working Party. The decline of the hospital autopsy: a safety and quality issue for healthcare in Australia. Med J Aust 2004;180(6):281–285.

31. Mant J, Wilson S, Parry J, Bridge P, Wilson R, Murdoch W, et al. Clinicians didn’t reliably distinguish between different causes of cardiac death using case histories. J Clin Epidemiol 2006;59(8):862–867.

32. Ravakhah K. Death certificates are not reliable: revivification of the autopsy. South Med J 2006;99(7):728–733.

33. Nashelsky MB, Lawrence CH. Accuracy of cause of death determination without forensic autopsy examination. Am J Forensic Med Pathol 2003;24(4):313–319.

34. Shojania KG, Burton EC, McDonald KM, Goldman L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003;289(21):2849–2856.

35. Roulson J, Benbow EW, Hasleton PS. Discrepancies between clinical and autopsy diagnosis and the value of post mortem histology; a meta-analysis and review. Histopathology 2005;47(6):551–559.

36. Carvalho FL, Cordeiro JA, Cury PM. Clinical and pathological disagreement upon the cause of death in a teaching hospital: analysis of 100 autopsy cases in a prospective study. Pathol Int 2008;58(9):568–571.

37. Kotovicz F, Mauad T, Saldiva PH. Clinico-pathological discrepancies in a general university hospital in São Paulo, Brazil. Clinics (São Paulo) 2008;63(5):581–588.

38. Crowcroft NS, Andrews N, Rooney C, Brisson M, Miller E. Deaths from pertussis are underestimated in England. Arch Dis Child 2002;86(5):336–338.

39. Sutter RW, Cochi SL, Brink EW, Sirotkin BI. Assessment of vital statistics and surveillance data for monitoring tetanus mortality, United States, 1979–1984. Am J Epidemiol 1990;131(1):132–142.

40. Korda RJ, Butler JR. Trends in pneumonia rates explained by changes in coding practices. Intern Med J 2005;35(2):138–139.
