Australian Government Architecture

Generative Artificial Intelligence (GenAI)

Definition

While there are various definitions of what constitutes artificial intelligence, the DTA and AGA use the OECD definition of an Artificial Intelligence (AI) system.

An Artificial Intelligence (AI) system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Agencies may refer to explanatory material on the OECD website.

Given the rapidly changing nature of AI, agencies should keep up to date with changes to this definition. The definition may be reviewed to ensure an aligned approach as the broader, whole-of-economy regulatory environment matures.

There may be instances, such as when considering whether to apply AI assurance processes, where agencies wish to provide practical guidance to help staff identify AI use.

Purpose

A purpose statement specific to Generative AI will be finalised through the iterative development of the Australian Government Architecture. The below is considered applicable across the Domain of Artificial Intelligence.

Artificial Intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. This has the potential to increase the efficiency and accuracy of entity operations, allow improved exploration of data and derivation of insights, and improve service delivery for people and business.

The capability of AI is realised through:

  • deployment of AI technologies to support Commonwealth entities’ unique needs
  • alignment with Australia's AI Ethics Principles
  • being an exemplar in the safe and responsible use of AI, requiring a lawful, ethical approach that places the rights, wellbeing and interests of people first
  • adopting AI assurance practices that align with best practice, including those within the National framework for the assurance of artificial intelligence in government and developing Australian Government and whole-of-economy safe and responsible AI initiatives
  • continual review and improvement of AI use and practice, in recognition of AI's status as an emerging technology.

Objective

Objectives specific to Generative AI will be finalised through the iterative development of the Australian Government Architecture. The below are considered applicable across the Domain of Artificial Intelligence.

The objectives of this Australian Government Architecture (AGA) content are to:

  • ensure that entities engage with AI confidently, safely and responsibly, and realise its benefits
  • strengthen public trust in government’s use of AI
  • ensure strategic alignment of the adoption of AI to the Australian Government’s data and digital goals
  • meet compliance obligations under legislation and regulation, government policies and standards, and relevant national or international agreements relating to AI.

Whole of Government Applicability

The Australian Government has published its interim response to the safe and responsible AI consultation held in 2023. The response sets out the principles below, which should also be paramount in any entity's own considerations:

  • Risk-based approach
  • Balanced and proportionate
  • Collaborative and transparent
  • A trusted international partner
  • Community first

The use of AI by Commonwealth entities has the potential to contribute to the seamless delivery of government services across different systems and processes. To do so, entities should consider:

  • reuse of commercial (including whole-of-government) arrangements
  • replication and redeployment of proven AI solutions
  • mobility of APS employees to support knowledge sharing
  • reuse of lessons learned from prior implementations.

The Data and Digital Government Strategy and Implementation Plan set direction for the APS on AI through:

  • Delivering for all people and business: To maximise value from data
  • Simple and seamless services: To deploy scalable and secure architecture
  • Government for the future: To adopt emerging technologies
  • Trusted and secure: To connect data, digital, and cyber security, and build and maintain trust

Policy Elements

Policy: Policy for the responsible use of AI in government (POL14)
Mandate: Endorsed
Status: Core
  • Designate accountability to accountable official(s)

    Designate accountability for implementing the policy to accountable official(s) by 30 November 2024 (within 90 days of the policy taking effect).

    Requirements for designating accountable official(s) are set out in the Standard for accountable officials.

  • Publish an AI transparency statement and keep it updated

    Make publicly available a statement outlining their approach to AI adoption and use by 28 February 2025 (within 6 months of the policy taking effect).

    Review and update the AI transparency statement annually, or sooner should the agency make significant changes to its approach to AI.

Domains

This capability is part of the following domain.
DOM12: Artificial Intelligence (AI)

Policies

The following policies have requirements that impact this capability.
Mandate: Endorsed
Status: Core
The Policy for the responsible use of AI in government ensures that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations. The policy…

Standards

The following standards support development of digital solutions in this capability.
The Standard for accountable officials sets out the requirement for agencies to designate accountable official(s) (AOs) responsible for their agency's implementation of the Policy for the responsible use of AI in government. Agencies may choose AOs who suit the agency context and structure. AOs…
The Pilot Australian Government artificial intelligence (AI) assurance framework guides Australian Government agencies through impact assessment of AI use cases against Australia's AI Ethics Principles. A number of Australian Government agencies are participating in the pilot, led by the…
Under the Policy for the responsible use of AI in government, agencies must Publish an AI transparency statement and keep it updated. The Standard for AI transparency statements establishes a consistent format and expectation for AI transparency statements. Clear and consistent transparency…

Designs

The following designs include examples of how digital solutions in this capability can be delivered.

Lead Agency: Digital Transformation Agency

The NSW Artificial Intelligence Assurance Framework assists agencies to design, build and use AI-enabled products and solutions.

Lead Agency: Digital Transformation Agency

The Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR) have released interim guidance on government use of publicly available generative AI platforms. The interim guidance is recommended for government agencies to use as the basis for providing…

Lead Agency: Digital Transformation Agency

This document provides guidance for Australian Government agencies completing assessments using the draft Australian Government artificial intelligence (AI) assurance framework. Use it as an interpretation aid and as a source of useful prompts and resources to assist you in filling out the…

Lead Agency: Digital Transformation Agency

Notice: On 1 September 2024, the Policy for the responsible use of AI in government took effect. This supersedes the guidance below, which is retired effective the same date.