Generative AI presents new and innovative opportunities for government, and an increasing number of AI tools and models are now easily available. However, their rapid evolution has created a need and growing demand for guidance to support the consideration and assessment of the risks involved in their use.
In response, the Digital Transformation Agency (DTA), in collaboration with the Department of Industry, Science and Resources (DISR), has developed advice for government agencies to use as the basis for providing guidance to their staff. This advice is available through the Australian Government Architecture as the Interim guidance for agencies on government use of generative Artificial Intelligence platforms.
The DTA and DISR will continue to investigate the need for whole-of-government policies and standards relating to the responsible use of AI in government.
What is Artificial Intelligence?
Artificial Intelligence describes a family of technologies that can bring together computing power, scalability, networking, connected devices and interfaces, and data. AI systems can be programmed to perform specific tasks such as reasoning, planning, natural language processing, computer vision, audio processing, interaction, prediction and more. AI systems can operate with varying levels of autonomy.
Use of these AI and machine-assisted decision-making technologies in government presents opportunities to optimise business processes, reduce the risks associated with legacy systems, increase the overall transparency of decision-making and process significantly larger volumes of data than current capacity allows, delivering better outcomes for citizens.
AI and machine-assisted intelligent decision-making technologies are being adopted across several public sector agencies. However, there is currently no coordinated, whole-of-government position that provides clarity and enables agencies to implement AI applications responsibly and consistently, supported by the effective reuse of existing learnings and investments in like applications.
This gap will be addressed by practical guidance that translates AI principles into practice. The guidance will focus on whole-of-government adoption with a consistent approach to governance, risk, usage (including reuse) and monitoring mechanisms.
Objectives in public sector adoption are to:
- increase confidence and reduce duplication in the adoption of the technologies in a public sector context
- provide a structure for consistent application of the technologies
- provide the guardrails to achieve efficiencies and reduce duplication.
- Ensure that the application of the technologies meets business outcomes.
- Apply Australia's AI Ethics Principles in the development or management of any AI technologies.
- Implement appropriate governance and risk frameworks.
- Continually review and update risk assessments as the technologies develop and mature. Ethical implementation and management should:
  - prioritise benefits to individuals, society, and the environment
  - respect and maintain human rights
  - deliver impartial and just treatment or behaviour without favouritism
  - uphold privacy rights and data protection
  - continuously validate solutions so that they remain fit for purpose
  - ensure that results from AI solutions are explainable, traceable, and transparent in use
  - allow for a human-evaluated right of review of an outcome where AI has assisted in the decision-making process.
- Establish monitoring and reporting systems and processes to measure performance and other issues relating to the technologies.
- Engage with agencies that have prior experience in adopting these technologies to leverage existing investment as well as knowledge, patterns, and services.