The eHealth readiness of Australia's medical specialists - Final Report

Conducting the medical specialist survey


To identify relevant medical specialists in our selected segments, we worked closely with market intelligence firms and healthcare industry experts who maintained lists of specialists. We prepared a list of more than 19,000 specialists, including each practitioner’s name, specialty, gender, address and contact details; specialists from this list were then approached in a randomised order to participate in the survey. Since many specialists practise across multiple postcodes, each participant’s geographic region was determined during recruitment for the survey.

As noted above, we randomly selected survey participants from this pool of more than 19,000 specialists, controlling the selection to ensure the sample was representative of the overall demographic profile of each segment (Exhibit 27). In total, we approached 10,015 specialists to participate in the survey, of whom 956 offered to participate. From this group, 600 were eligible to complete the survey – a yield rate of approximately 6 percent. Because all questions were mandatory, quantitative survey results are based on 600 responses unless otherwise indicated.

EXHIBIT 27

Determining the appropriate sample size

As a methodological note, the statistical error involved in generalising from the survey results to the whole population of medical specialists is +/- 4 percent at the 95 percent level of confidence for a representative sample of this size. This means that if 50 percent of medical specialists in the sample agree with a particular proposition, it can be assumed with 95 percent confidence that, had the whole population of medical specialists been interviewed, between 46 and 54 percent would also have agreed with the proposition at the time of the survey.

When the survey results are broken down into subgroups of medical specialists (e.g. surgeons), the error of estimation is higher for smaller sample sizes, growing in proportion to one divided by the square root of the sample size. Table 10 below shows the error of estimation at a 95 percent confidence interval for a range of sample and population sizes:

Table 10: Error of estimation (+/- percentage points) at a 95 percent confidence interval

                            Total population size
Sample size      100     200     400     800    1000    1500   1500+
25              17.1    18.4    19.0    19.3    19.4    19.4    19.6
50               9.8    12.0    13.0    13.4    13.5    13.6    13.9
75               5.7     9.0    10.2    10.8    10.9    11.0    11.3
100              0.0     6.9     8.5     9.2     9.3     9.5     9.8
125               –      5.4     7.3     8.1     8.2     8.4     8.8
150               –      4.0     6.3     7.2     7.4     7.6     8.0
175               –      2.6     5.6     6.6     6.2     6.5     7.4
200               –      0.0     4.9     6.0     6.2     6.5     6.9
250               –       –      3.8     5.1     5.4     5.7     6.2
300               –       –      2.8     4.5     4.7     5.1     5.7
400               –       –      0.0     3.5     3.8     4.2     4.9
500               –       –       –      2.7     3.1     3.6     4.4
600               –       –       –      2.0     2.5     3.1     4.0
700               –       –       –      1.3     2.0     2.7     3.7
800               –       –       –      0.0     1.6     2.4     3.5
900               –       –       –       –      1.0     2.1     3.3
1000              –       –       –       –      0.0     1.8     3.1


As Table 10 above illustrates, the maximum error of estimation when comparing any two medical specialist segments, each with a sample size of 75, is 11.3 percent. In other words, regardless of whether the population of a given segment is 500, 5,000 or 50,000, the error of estimation is at most 11.3 percent.
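
The figures in Table 10 are broadly consistent with the standard margin of error for an estimated proportion of 50 percent at 95 percent confidence, with a finite population correction applied for smaller populations. As a purely illustrative aid (not part of the survey methodology itself), the short Python sketch below implements that formula and reproduces the 11.3, 10.2 and 6.2 percent figures referred to in this section.

  import math

  def error_of_estimation(sample_size, population_size=None, p=0.5, z=1.96):
      # Margin of error, in percentage points, for a proportion p at ~95% confidence.
      # A finite population correction is applied when a population size is supplied.
      margin = z * math.sqrt(p * (1 - p) / sample_size)
      if population_size is not None:
          margin *= math.sqrt((population_size - sample_size) / (population_size - 1))
      return 100 * margin

  print(round(error_of_estimation(75), 1))          # 11.3 – very large population (the '1500+' column)
  print(round(error_of_estimation(75, 400), 1))     # 10.2 – population of 400
  print(round(error_of_estimation(200, 1000), 1))   # 6.2 – population of roughly 1,000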

Two short case examples help to illustrate the implications of this approach and how to interpret Table 10.

Case example 1: Single population confidence

To determine the error for a single population, we read the cell of Table 10 corresponding to the population size and the sample size. For example, if Profession A had a population of 400, a sample size of 75, and a score of 50 percent eHealth ready, we could state with 95 percent confidence that between 39.8 and 60.2 percent (i.e. 50 percent +/- 10.2 percent) of the population is eHealth ready.

Case example 2: Comparing two populations using confidence

Assume Profession A had a population size of 400, and Profession B had a population size of 5,000. Assume also that the objective is to determine whether the mean eHealth readiness of each profession is statistically different, with 40 percent of Profession A responding that they are eHealth ready and 65 percent of Profession B responding that they are eHealth ready. Assume a sample size of 75 for each profession.

Using Table 10 and the assumptions described, a sample of 75 from Profession A’s population of 400 gives an error of estimation of 10.2 percent. A sample of 75 from Profession B’s population of 5,000 (i.e. the 1500+ column) gives an error of estimation of 11.3 percent – the theoretical maximum error of estimation when surveying 75 professionals from a very large population. It can therefore be assumed with 95 percent confidence that, had the whole population of:
  • Profession A been interviewed, between 29.8–50.2 percent (i.e. 40 percent +/- 10.2 percent) would be eHealth ready
  • Profession B been interviewed, between 53.7–76.3 percent (i.e. 65 percent +/- 11.3 percent) would be eHealth ready.

Given these two ranges do not overlap, we would conclude that there was a statistically significant difference between these two professions.
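
Continuing the illustration, the interval comparison in this case example can be sketched as follows, using the same margin-of-error formula as above (an illustration only, not the analysis code used for this report):

  import math

  def margin(sample_size, population_size=None, p=0.5, z=1.96):
      # Margin of error in percentage points, as in Table 10.
      m = z * math.sqrt(p * (1 - p) / sample_size)
      if population_size is not None:
          m *= math.sqrt((population_size - sample_size) / (population_size - 1))
      return 100 * m

  def confidence_interval(score_pct, sample_size, population_size=None):
      # 95% confidence interval (lower, upper) around a surveyed percentage.
      m = margin(sample_size, population_size)
      return score_pct - m, score_pct + m

  a_low, a_high = confidence_interval(40, 75, 400)   # Profession A: ~29.8 to 50.2 percent
  b_low, b_high = confidence_interval(65, 75)        # Profession B: ~53.7 to 76.3 percent
                                                      # (population treated as very large, per the 1500+ column)
  overlap = a_low <= b_high and b_low <= a_high
  print(overlap)   # False – the intervals do not overlap, so the difference is treated as significant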

For this survey, we selected a sample size of 75 responses for each of the eight categories of medical specialists. The choice of 75 responses balances the desire to minimise the error of estimation, the likely variance between medical specialist segments, and the financial resources made available by the Department for this work. While surveying 1,000 medical specialists in each profession would have reduced the error of estimation from 11.3 percent to 3.1 percent, this would have wasted resources given the level of precision required by the hypotheses in this project, and would have been impractical given the relative size of some segments and the anticipated yield rates.

This constraint is meaningful. By way of illustration, there are approximately 1,037 emergency medicine specialists in Australia. Even if responses had been in the order of 200 for this group (~20 percent, which would be a high response rate), Table 10 shows that the error of estimation would still have been in the order of 6.2 percent.

Therefore, although the general maxim that ‘bigger is better’ holds for surveys such as this one, we are confident that 75 medical specialists per segment provides an acceptable level of sampling error for identifying outlier segments. This is particularly so given the focus of this effort is to identify directional trends rather than precise point estimates.

Avoiding biases

We applied several survey techniques to control for selection biases in this survey. For example, we:
  • Collected ~75 randomly selected respondents per category to provide a representative sample
  • Allowed survey participants to respond to the survey by either completing an online form or undertaking a telephone survey (15 percent of medical specialist responses were completed via telephone)
  • Ensured representativeness of the sample by replicating the demographic profile of each medical specialist segment in the sample
  • Kept surveys to approximately 15 minutes in length (online) and 20 minutes in length (telephone) to minimise the imposition on respondents.

Caveat: appropriate use of data

Given the nature of the underlying hypotheses, and the desire to identify outliers, the primary research approach was calibrated to an acceptable level of residual error as described above (a maximum of 11.3 percent, depending on the type of analysis being undertaken). The output of the research identified directional differences between clusters and specialties. Because of this approach, future research studies cannot be directly compared with the outcomes of the primary research in this report unless they replicate the research methodology.

Data weighting methodology

All responses have been weighted as follows:
  • Under-weighted if they are over-represented in the sample compared with their representation in the overall population of medical specialists
  • Over-weighted if they are under-represented in the sample compared with their representation in the population.

The population distribution by specialty segment was estimated based on statistics from the Australian Institute of Health and Welfare (AIHW)7, and the population distribution by geographic classification was estimated from the total pool of more than 19,000 specialists compiled prior to launching the quantitative survey.

For example, anaesthesia in inner regional areas forms approximately 2 percent of the population but only 1.2 percent of the sample. As such, this segment has been over-weighted by a factor of approximately 1.68 (its population share divided by its sample share) so that it is accurately represented.
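
As a small illustration of this calculation (using the rounded shares quoted above; the function name is ours, and the factor of 1.68 in the text presumably reflects unrounded shares):

  def segment_weight(population_share, sample_share):
      # Weight applied to each response in a segment: the segment's share of the
      # population divided by its share of the achieved sample.
      return population_share / sample_share

  print(round(segment_weight(2.0, 1.2), 2))   # ~1.67 – anaesthesia in inner regional is over-weighted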

The weighting scheme used for this analysis is provided in Table 11 below and the comparison between unweighted and weighted data is provided in Exhibit 28.

Table 11: Weighting for medical specialist data (share of the medical specialist population)

Specialty segment               Major cities   Inner regional   Outer regional and remote   Total
Anaesthesia                         11.6%           2.0%                 0.7%                14%
Emergency medicine                   3.4%           0.6%                 0.2%                 4%
Internal medicine                   23.0%           2.6%                 0.9%                27%
Obstetrics and gynaecology           5.2%           0.8%                 0.4%                 6%
Ophthalmology and dermatology        5.2%           0.9%                 0.5%                 7%
Pathology                            3.4%           0.7%                 0.2%                 4%
Psychiatry                           9.5%           1.0%                 0.4%                11%
Radiology                            5.6%           0.8%                 0.2%                 7%
Surgery                             16.4%           2.6%                 1.2%                20%
Total                                 83%            12%                   5%               100%
EXHIBIT 28




7 Australian Institute of Health and Welfare, ‘Medical Labour Force 2008’, Bulletin 82, October 2010.