Artificial intelligence (AI) transparency statement

Our AI transparency statement explains how we intend to adopt and use AI technology, in line with the expectations of the Australian public and the Australian Government.

Our commitment to the safe, ethical, responsible and legal use of AI supports our vision to deliver better health and wellbeing for all Australians, now and for future generations.

We are aligning with the whole-of-government approach to AI. The Digital Transformation Agency’s (DTA) Policy for the responsible use of Artificial Intelligence (AI) in government 2.0 sets a framework for the Australian Government’s safe and responsible adoption and use of AI, supported by the APS AI Plan, the Standard for AI transparency statements and the Guidance on government use of public generative AI tools. We identify, assess and manage the impacts and risks of our AI use cases, informed by Australia's AI Ethics Principles and the AI impact assessment tool.

Why we use AI

Our adoption of AI will improve:

  • service delivery
  • policy outcomes
  • efficiency
  • productivity.

Our commitment to digital innovation aligns with the Australian Government’s Data and Digital Government Strategy in relation to adopting emerging technologies.

How we use AI

From 1 January 2024 to 30 June 2024, we participated in the Australian Government’s trials of a generative AI service, Microsoft 365 Copilot. We have made Copilot Chat available to all staff and are rolling out licensed Microsoft 365 Copilot to staff in phases. Before using Copilot, our staff must complete AI fundamentals training that covers responsible and acceptable use of AI. We also require users to acknowledge the safe, responsible and ethical use of AI before accessing and using generative AI tools.

We restrict the use of AI tools, including Microsoft 365 Copilot and Copilot Chat, to certain approved use cases in our use case register.

We use generative and narrow AI models in line with the DTA’s classification system for AI use.

What we use AI for

We can use AI to:

  • analyse data to gain insights
  • automate activities to make tasks more efficient and increase workplace productivity
  • identify patterns and objects automatically
  • support decision making by helping staff summarise, analyse or synthesise information used to prepare advice or recommendations considered by our committees or decision makers.

We do not use AI to automate decisions. Human officials remain fully accountable for the advice and recommendations they provide.

Where we use AI

We can use AI in these areas:

  • policy and legal
  • scientific
  • compliance and fraud detection
  • corporate and enabling
  • service delivery.

Our approach with AI

We set up an Artificial Intelligence Subcommittee (AISc) to guide our approach to AI. The AISc advises the Digital Committee, which oversees our digital, data and ICT functions and capabilities and includes senior executive members from across the department. The AISc considers:

  • the application of AI within the Health portfolio’s policy and program context
  • the use and regulation of AI in the health, disability and aged care sectors
  • the use of AI within the department
  • the whole of government approach to AI and the intersection with health, disability and aged care sectors.

Our staff will be able to explain, justify and take ownership of advice and decisions informed by AI.

We have an AI assurance framework in place. We also keep an internal register of AI use cases, in line with the whole-of-government approach. This register helps us see where AI is being used and monitor its use effectively.

We have measures in place to:

  • make sure AI is well governed and managed. Staff cannot use sensitive or personal information without approval through our assurance and governance processes
  • make AI use across the department visible, so we can govern it effectively and manage risks, assurance and reporting
  • encourage staff to use AI safely, responsibly, ethically and lawfully through corporate communications and training
  • support collaboration across the department and with other government agencies on AI use, including developing shared resources to ensure safe, responsible, ethical and lawful use.

Our commitment

We are committed to using AI in a safe, ethical, responsible and lawful way for the benefit of Australians. We will continue to work closely with the DTA and use AI in accordance with applicable:

  • laws
  • frameworks
  • policies
  • best practice.

We remain committed to transparency and protecting the public. We will be transparent as we responsibly adopt evolving AI technology and policy requirements.

Safe and responsible AI adoption

We are developing internal AI policy and guidance material. These will align with the DTA’s policy, advice and guidance on the safe, responsible and ethical use of AI. This includes our role in grants, procurement, regulation and policy making related to AI.

We will leverage whole-of-government policies and develop internal policies and guidance materials when necessary for:

  • AI governance and approval processes
  • acceptable use of AI in the department
  • ethical considerations
  • Freedom of Information (FOI) considerations
  • record keeping
  • security
  • procurement of AI systems
  • risk mitigation and technical guardrails
  • roles and responsibilities when using AI and required training for identified roles.

These internal policies will apply to all employees (including contractors) and consultants.

We will update this transparency statement as we continue to develop policies on AI usage and to implement AI technology. We will continuously review our use of AI to:

  • protect the public against negative impacts
  • reflect the pace of technological change
  • manage the evolving risk environment
  • align with whole-of-government guidance.

Contact

The Chief Digital Information Officer is our AI Accountable Official.

AI team

Contact us if you have questions about our AI transparency statement or our use of AI, or to report AI safety concerns, including AI-related incidents.
Date last updated: