Australian Government Architecture
Adoption of Artificial Intelligence in the Public Sector

Notice

On 1 September 2024, the Policy for the responsible use of AI in government took effect. It supersedes the guidance below, which was retired on the same date.

Adoption of AI in the Public Sector

By 2030, it is estimated that Artificial Intelligence (AI) and associated machine-assisted decision-making technologies will contribute more than $20 trillion to the global economy. AI can offer many benefits to government and to the users of its services. For example, AI can be used to manage resources, improve efficiencies and enhance public services. The use of AI, however, comes with unique risks and challenges for government agencies. Mitigating these risks and challenges is possible and will allow agencies to realise these benefits.

The term ‘AI’ is often used as a catch-all to describe several technologies used in machine-assisted decision-making applications, including machine learning, robotic process automation (RPA), natural language processing, deep learning (neural networks), computer vision and robotics.

AI systems draw on a family of technologies that bring together computing power, scalability, networking, connected devices and interfaces, and data.

These systems can be programmed to perform specific tasks such as reasoning, planning, natural language processing, computer vision, audio processing, interaction, prediction and more. With machine learning, AI systems can improve on tasks over time according to a set of human-defined objectives and can operate with varying levels of autonomy.

Despite the potential efficiencies of incorporating machine assistance into decision-making, agencies applying it to government decisions, such as regulatory or service delivery decisions, should carefully consider the potential for adverse outcomes affecting individuals and businesses.

Current usage in the APS

AI is currently available in several commercial products used across Commonwealth agencies. Agencies looking to adopt established technologies which use AI still have an obligation to manage the benefits and risks associated with the use of that technology. Agencies should also pay particular attention to the governance and ethical implications of adopting and applying the AI system. Usage must also be authorised by law, particularly where discretion is used to determine an outcome.

AI capabilities are currently being used in solutions in several agencies including:

  • chatbots, virtual assistants and agents in service management
  • document and image detection and recognition for border control and fraud detection
  • data mapping to geographical areas
  • free text recognition and translation
  • entitlement calculation
  • natural language processing.

These applications present varying levels of risk, both to the reputation of the agency and to the individuals or businesses impacted by the decisions made or informed by the technology. Commonwealth entities looking to use AI systems in these areas should pay particular attention to the governance, risk, ethical and legal implications of adopting and applying an AI system.

AI initiatives across the Commonwealth

The critical nature of AI and its potential benefits to government have prompted a range of AI-related initiatives and investigations across the Commonwealth. The DTA’s work on better structure and guidance for AI adoption in the public sector complements the broader whole-of-economy AI focus supporting Australian industries more generally. Both elements will continue to evolve as other, longer-term AI-specific initiatives underway across the Commonwealth progress over time.

Some of these initiatives include:

  • the Artificial Intelligence for Decision Making Initiative
  • the Automated Decision Making and AI Regulation whitepaper
  • the review of the Privacy Act 1988
  • the Royal Commission into the Robodebt Scheme.

Objective

The whole-of-government focus of this section is specifically on providing better structure to close the current gap in practical guidance within government for translating existing AI principles into practice. The information is intended to increase confidence in the uptake of AI in Australian Government agencies by clarifying the appropriate design, build and usage of AI technologies.

It is critical that, as the public sector embraces these objectives, it also recognises that the opportunities AI presents come with risk. AI is not infallible, and deficiencies and biases built into AI algorithms and their data risk causing unintended harm at scale if they are not addressed and mitigated. Entities considering AI should use this standard to ensure that Australians can trust that the technology will be used responsibly and safely by the Australian Government, and that AI promotes and improves inclusivity. It will be updated as necessary to remain consistent with complementary Commonwealth-led workstreams, policies and legislation.

Audience

The primary audience for this information is members of the Australian Public Service who engage with the outputs of AI technologies to inform activities or decisions, are responsible for business processes or policy outcomes supported by AI technologies, are responsible for decisions relating to the usage of AI technologies within an agency, or operate the technologies directly or indirectly, for example:

  • agency heads
  • senior executives, decision makers and influencers who are responsible for the outcomes of policy decisions informed or assisted by AI technologies
  • senior executives, decision makers and influencers who are responsible for the design and use of AI systems
  • project leads and teams considering AI applications in their solutions
  • operational leads and teams managing AI systems.

This information can also be used by private sector stakeholders to better understand the expectations and requirements of public sector usage of AI technologies.

Public Sector Adoption of AI

Governance

Appropriate governance structures are critical to achieving the ethical and responsible use of AI in a public sector context. Governance encompasses the system by which an organisation is controlled and operates, and the mechanisms by which it, and its people, are held to account. Ethics, risk management, compliance and administration, including meeting legal requirements, are all elements of governance.

Initial Considerations

Before adopting AI into a business process, agencies should consider:

  • how the AI system will augment, change or even improve on existing approaches to delivering agency and whole of government outcomes
  • whether AI is the most appropriate solution, or whether existing non-AI systems and processes could be used instead.

Decision-making and accountability hierarchies

Existing business owners and accountable decision makers, as opposed to information technology specialists, remain responsible for decisions assisted by machines, and must understand both the inputs to and the outputs of the technologies within a process.

For this reason, oversight of the development and maintenance of AI technologies must be incorporated into existing decision-making and accountability hierarchies that govern the output, risk and compliance elements of a Commonwealth entity’s core business, such as audit and risk committees, and not solely into ICT-specific governance arrangements.

Governance bodies for AI technologies must have a reporting avenue to the Secretary, Agency Head or appropriate agency leadership body to review and accept the usage of these technologies and the way in which they support decision-making. Accordingly, these positions must have sufficient understanding of the way in which the technologies operate to inform that decision.

Further, operational-level implementation should be driven by business or policy areas and supported by technologists, rather than being driven by the ICT areas of an entity. This is to ensure that the technology is implemented and maintained to support business and policy outcomes, as opposed to business and policy outcomes being driven by AI systems adoption.

Agencies should also consider specific functions to support existing governance structures, including in specialist areas such as AI ethics, regulatory compliance and legal advice. These functions require a skill set that allows for consideration of the ramifications of a wide range of issues, and must be positioned at the appropriate level within an organisation to assert the level of influence required to inform decisions on the adoption, usage, review and maintenance of the technologies. Expertise within agencies that have mature AI applications presents an avenue to support knowledge transfer across the Australian Public Service (APS). Such knowledge transfer could occur, for example, as part of the Digital Profession.

Risk Management

Risk tolerances must be determined for each specific use case for AI technologies and must align with existing risk management frameworks within each agency. In assessing and managing the risks associated with adopting the technologies, agencies should assess risk tolerance for each specific use case, rather than in general terms for the technology itself. These tolerances depend on factors specific to each use case, which should be considered independently of any prior adoption, such as requirements for accuracy, the level of impact on existing processes and the potential impact on both internal and external outcomes. To achieve appropriate risk management, agencies should consider:

  • how appropriate human intervention points are to be embedded into the decision-making process; these intervention points will depend on the risk level and the type of decision being aided by the AI, for example ensuring a human decision point precedes any potential negative determination for a citizen or business (a minimal sketch of such a gate follows this list)
  • an appropriate accountability structure to ensure the ongoing compliance of the use case with ethical and responsible AI usage; use cases must also meet existing legal requirements, particularly administrative law principles, privacy law and anti-discrimination law
  • broader reputational risks that apply to public trust in government services; further information on this aspect is described in the Australian Data Strategy.
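
To make the first of these considerations concrete, the sketch below gates any potentially unfavourable determination behind an explicit human decision point before it takes effect. This is a minimal illustration only; the `Determination` fields and the `human_review` callable are hypothetical, and a real implementation would sit within an agency’s own case-management arrangements.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    FAVOURABLE = "favourable"
    UNFAVOURABLE = "unfavourable"


@dataclass
class Determination:
    case_id: str
    outcome: Outcome
    model_confidence: float  # 0.0-1.0, as reported by the AI system
    finalised_by_human: bool = False


def finalise(determination: Determination, human_review) -> Determination:
    """Route any potentially negative determination to a human decision
    maker before it takes effect; favourable outcomes pass straight through."""
    if determination.outcome is Outcome.UNFAVOURABLE:
        # Human decision point precedes any negative determination.
        determination = human_review(determination)
        determination.finalised_by_human = True
    return determination
```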

The use of Risk Management Frameworks

The differences in risk tolerance across use cases and applications of AI technologies prevent the adoption of a universal risk management framework. Agencies should use risk management frameworks that are appropriate for the specific use case and aligned to existing risk management approaches to assess and manage risks specific to AI systems. Several publicly available risk frameworks illustrate how different applications can call for different approaches to risk management, for example:

  • The NSW AI Assurance Framework assists agencies to design, build and use AI-enabled products and solutions, and is designed to help agencies identify risks that may be associated with their projects.
  • New Zealand’s Algorithm Charter for Aotearoa New Zealand provides Charter signatories with a Risk Matrix for assessing their algorithm decisions. This supports their evaluation by quantifying the likelihood of an unintended adverse outcome against its relative level of impact to derive an overall level of risk (a rough illustration of this calculation follows this list).
  • Canada’s Directive on Automated Decision-Making imposes a number of requirements, primarily related to risk management, on the federal government’s use of automated decision systems. The Directive is targeted at instances where artificial intelligence is used to make, or assist in making, administrative decisions.
  • Agencies might also use their own risk management frameworks and processes that meet agency-specific requirements. However, agencies will need to ensure that these frameworks and processes encompass AI-specific risks and governance issues.
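
As a rough illustration of the likelihood-against-impact assessment described above, the sketch below derives an overall risk level from two ordinal ratings. The scales, scores and thresholds are illustrative assumptions made for this sketch, not values taken from the Algorithm Charter or any other framework.

```python
# Ordinal scales from 1 (lowest) to 5 (highest). The scales and the
# thresholds below are illustrative assumptions, not from any framework.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
IMPACT = {"minimal": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}


def overall_risk(likelihood: str, impact: str) -> str:
    """Quantify the likelihood of an unintended adverse outcome against
    its relative level of impact to derive an overall level of risk."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"


print(overall_risk("possible", "major"))  # score 12 -> "medium"
```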

Considerations for the Ethical adoption of AI technologies

At a whole-of-economy level, Australia’s AI Ethics Principles, released in 2019, are designed to ensure AI is safe, secure and reliable as part of Australia’s broader AI Ethics Framework. The framework was developed alongside several existing and emerging initiatives, including the OECD Principles on AI, which were endorsed by 42 countries, including Australia, and adopted by the G20. Australia’s eight AI Ethics Principles were co-developed by CSIRO’s Data61 and the Department of Industry, Science and Resources (DISR), and help to:

  • achieve safer, more reliable and fairer outcomes for all Australians 
  • reduce the risk of negative impact on those affected by AI applications 
  • encourage businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.

The implementation of the AI Ethics Principles is a direct measure under Australia’s AI Action Plan for making Australia a global leader in responsible and inclusive AI. The guidance below has been developed to assist agencies with the key considerations for applying these principles in a public sector context.

There are eight ethics principles:

  • Human, societal and environmental wellbeing
  • Human-centred values
  • Fairness
  • Privacy protection and security
  • Reliability and safety
  • Transparency and explainability
  • Contestability
  • Accountability.

This section builds on the AI Ethics Principles and their application in a public sector context.

Human, societal and environmental wellbeing

AI systems should benefit individuals, society and the environment. To incorporate human, societal and environmental wellbeing into the usage of AI technologies, agencies should:

  • document a clear objective and reasonable justification for using AI technology
  • determine that the development and use of AI delivers a clear public benefit and added value to the public
  • consider whether the unintended consequences, secondary harms or benefits, and long-term impacts on the community have been documented; unintended consequences, secondary harms and long-term negative impacts should be mitigated and prevented where possible
  • conduct and document a risk assessment weighing potential risks against positive outcomes
  • consider how structures will be implemented to ensure that humans are able to maintain visibility of unintended consequences during design, implementation and ongoing maintenance of the technologies.

Human-centred values

AI systems should respect human rights, diversity, and the autonomy of individuals. To incorporate human-centred values into the usage of AI technologies, agencies should consider:

  • how the system will avoid restricting human rights, and how any impacts on human rights will be measured and mitigated
  • how unlawful discrimination will be prevented.
    • Under Australia’s federal anti-discrimination laws, it is unlawful to discriminate on the basis of a number of protected attributes, including age, disability, race, sex, intersex status, gender identity and sexual orientation.
    • Unlawful discrimination could arise from how algorithms are written, from the use of non-diverse datasets in machine learning and from accessibility discrimination or digital exclusion.
    • Australia’s anti-discrimination law (Attorney-General’s Department) and the human rights public sector guidance sheets (Attorney-General’s Department) provide additional information regarding discrimination.
  • how user needs and the context in which users engage with a service will be understood and designed for, as outlined in the Digital Service Standard (Digital Transformation Agency)
  • the inclusion of subject matter experts who understand the ethical and legal implications of the decision or outcome in the design, audit and maintenance of AI applications
  • how disability discrimination (Attorney-General’s Department and the Australian Human Rights Commission) will be addressed, including the use of assistive technology to help people with hearing, visual and other impairments engage with AI systems; failing to provide a reasonable adjustment to enable a person with a disability to engage with AI may constitute unlawful discrimination under federal law
  • how to incorporate diversity throughout designing, developing and deploying AI to minimise the risk of missing important considerations.

Fairness

AI systems should be inclusive and accessible and should not involve or result in unfair discrimination against individuals, communities, or groups. To incorporate fairness into the usage of AI technologies, agencies should consider:

  • ensuring that AI is accessible to those who need to use it
  • engaging with people, communities and groups who will be impacted by the AI
  • taking measures to ensure decisions produced by AI minimise the risk of harmful bias and discrimination (a minimal bias-check sketch follows this list)
  • regularly evaluating the impacts of AI, and iterating the technology and system to address any negative impacts
  • incorporating the Commonwealth Ombudsman’s Automated Decision-making Better Practice Guide into the design and maintenance of AI-enabled systems; the guide provides a practical tool for agencies, including a checklist designed to assist managers and project officers during the design and implementation of new automated systems in administrative decision-making, and with ongoing assurance processes once a system is operational
  • the availability and assurance that data is of appropriate quality to support fair decisions
  • that data governance and management practices ensure data is accurate, timely, complete, and consistent
  • that data supporting AI technologies is fit for purpose, particularly noting usage of data collected for different purposes, and the reflection of biases in historical practices that risk being inherently ‘taught’ through machine learning
  • how human oversight in the decision-making process will be maintained
  • the appropriate performance measures to calibrate the system.
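
As a minimal example of the bias check referred to above, the sketch below compares favourable-outcome rates across groups defined by a protected attribute (the ‘demographic parity’ gap). The field names, groups and any acceptable tolerance are illustrative assumptions; a real evaluation would use several complementary fairness measures and appropriately governed data.

```python
from collections import defaultdict


def demographic_parity_difference(decisions: list[dict], group_key: str) -> float:
    """Largest gap in favourable-outcome rate between any two groups.

    Each decision is a dict with a boolean 'favourable' field and a
    group attribute (e.g. an age band). Field names are illustrative.
    """
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        favourable[d[group_key]] += int(d["favourable"])
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


decisions = [
    {"age_band": "18-34", "favourable": True},
    {"age_band": "18-34", "favourable": True},
    {"age_band": "65+", "favourable": False},
    {"age_band": "65+", "favourable": True},
]
gap = demographic_parity_difference(decisions, "age_band")
print(f"parity gap: {gap:.2f}")  # 1.00 vs 0.50 -> 0.50; flag if above a set tolerance
```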

Privacy protection and security

AI systems should respect and uphold privacy rights and data protection, and ensure the security of data. To incorporate privacy protection and security into the usage of AI technologies, agencies should consider:

  • how the application of AI technologies aligns with existing legal requirements that currently apply to government decision-making; these legal and policy requirements include, for example:
    • where applicable, appropriate management of Notifiable Data Breaches under Part IIIC of the Privacy Act 1988
  • ensuring that appropriate security measures are in place to manage potential security vulnerabilities
  • ensuring that AI systems have appropriate cyber security measures in place and are resilient against cyber attacks
  • how the compounded risks associated with AI and the storage of cumulative data will be assessed, documented and agreed by the appropriate decision maker within an agency
  • whether they have obligations under the Freedom of Information Act 1982 (Cth) (FOI Act) relating to proactive publication of information in accordance with the Information Publication Scheme; it is also considered best practice to publish information about automation, as this is in line with the objectives of the FOI Act and helps foster public trust in government data-handling
  • how decision makers will agree on an acceptable level of risk of theft and exposure of data in AI applications.

Reliability and safety

AI systems should reliably operate in accordance with their intended purpose. To incorporate reliability into the usage of AI technologies, agencies should consider:

  • the methods of monitoring used to recognise and remediate degradation in AI system performance over time, including through the use of a traceability matrix (a monitoring sketch follows this list)
  • identifying performance metrics for compliance with the agreed standard for the technology application
  • ensuring the intended use of the AI technology operates in accordance with relevant federal, state or territory legislation
  • undertaking regular monitoring and testing to check AI technologies and systems are meeting their intended purpose; and
  • completing appropriate testing in line with standards and laws to ensure the system is safe.
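
To illustrate the kind of monitoring referred to in the first item above, the sketch below tracks rolling accuracy against the accuracy agreed at deployment and flags degradation beyond a tolerance. The baseline, tolerance and window size are illustrative assumptions that agencies would set per use case.

```python
from collections import deque


class PerformanceMonitor:
    """Track recent prediction outcomes and flag degradation against
    an agreed baseline accuracy. The thresholds here are illustrative."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    def degraded(self) -> bool:
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance


monitor = PerformanceMonitor(baseline_accuracy=0.92)
for correct in [True, True, False, True]:  # fed from ongoing review of decisions
    monitor.record(correct)
if monitor.degraded():
    print("Alert: review and remediation required")
```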

Transparency and explainability

Transparency and explainability in how AI contributes to outcomes are essential to good administration by government agencies. To incorporate transparency and explainability into the usage of AI technologies, agencies should consider:

  • the way in which meaningful information will be provided about the AI application, which:
    • fosters a general understanding of the AI system, with a clear rationale for the decision-making process
    • is communicated in a way that is easily understood
    • makes people aware of their interactions with AI systems
    • enables those affected by AI systems to understand the outcome and the machine-assisted contributions supporting the outcome
  • how digital and ICT sourcing processes will be best used to ensure understanding of the inner workings of industry products and services
  • how the purpose, operation, and contribution of AI to an overall process will be articulated in clear and understandable language to non-technical audiences, by:
    • being transparent by letting people know, where appropriate, that their personal information is being used, or that they are interacting with an AI system
    • clearly explaining, within what is technically possible, how a decision was informed or aided by AI; where lawful or legitimate restrictions do not prevent this, the explanation may include:
      • documentation describing the purpose and broad functionality of the AI and how it is used
      • making information about data and AI processes available.
  • the approach to ensure that all technical and policy staff have appropriate support and training to create and maintain the technology application and understand the impact of its change on the overall processes it supports.
  • the tools that will be used to provide meaningful information about the AI applications, including, for example:
    • releasing to the public a risk analysis or profile of all AI machine-assisted decision-making systems
    • creating evidence trails and or checklist of AI decisions
    • creating a decisions register, which may be accessed by a member of the public through a Freedom of Information request; a minimal sketch of a register entry follows this list.
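
As one form an evidence trail or decisions register could take, the sketch below records, for each machine-assisted decision, enough information to explain it later. The fields are illustrative assumptions about what a register entry might capture, not a prescribed schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One entry in a register of machine-assisted decisions.
    Field names are illustrative, not a prescribed schema."""
    case_id: str
    system_name: str            # which AI application contributed
    model_version: str
    inputs_summary: str         # plain-language summary of the data relied on
    outcome: str
    human_reviewer: str | None  # None where no human decision point applied
    recorded_at: str


register: list[DecisionRecord] = []
register.append(DecisionRecord(
    case_id="2024-00017",
    system_name="entitlement-calculator",
    model_version="1.4.2",
    inputs_summary="declared income and household details",
    outcome="entitlement granted",
    human_reviewer="delegate.officer",
    recorded_at=datetime.now(timezone.utc).isoformat(),
))
print(json.dumps([asdict(r) for r in register], indent=2))
```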

Contestability

When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system. To incorporate contestability into the usage of AI technologies, agencies should consider:

  • the avenues and pathways for an individual or community to contest a decision, including the underlying data and process, and how a decision supported by the technologies can be appealed
  • the communication channels and processes in place that allow users and stakeholders to provide feedback on AI systems
  • the avenues to correct and record errors in decision-making by the AI application, and how best to communicate the avenues of redress to impacted individuals (a sketch of one such appeal trail follows this list)
  • the extent to which information about the underlying technology applications and algorithms can be made available to the public and other stakeholders
  • structures for updating existing review processes and procedures to ensure their adequacy for the inclusion of AI technologies and machine-assisted decision-making.
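
As a sketch of one contestability mechanism, the code below links an appeal to an entry in a decisions register (such as the one sketched in the previous section) and records each step of the review so the avenue of redress is auditable. The statuses and field names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class AppealStatus(Enum):
    LODGED = "lodged"
    UNDER_REVIEW = "under review"
    UPHELD = "original decision upheld"
    OVERTURNED = "decision corrected"


@dataclass
class Appeal:
    case_id: str   # links back to the decisions register entry
    grounds: str   # the basis on which the person contests the outcome
    status: AppealStatus = AppealStatus.LODGED
    history: list[str] = field(default_factory=list)

    def transition(self, new_status: AppealStatus, note: str) -> None:
        """Record every status change so the review trail is auditable."""
        self.status = new_status
        self.history.append(f"{new_status.value}: {note}")


appeal = Appeal(case_id="2024-00017", grounds="income data out of date")
appeal.transition(AppealStatus.UNDER_REVIEW, "assigned to human reviewer")
appeal.transition(AppealStatus.OVERTURNED, "data corrected; outcome communicated to applicant")
```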

Accountability

People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled. To incorporate accountability into the usage of AI technologies, agencies should:

  • consider how AI responsibilities will be included in agency accountability instruments and delegations.
  • consider how delegates and senior executives accountable for decisions informed by AI will have the necessary skills to consider ethics in the decision-making process.
  • ensure accurate and complete documentation and processes for the AI systems and their outcomes are available for external reviews and assurance engagements if necessary; and
  • have an appropriate level of human oversight for AI and decisions to protect the public.

Further Reading

Capabilities

This design is part of the following capability.
CAP67: Generative Artificial Intelligence (GenAI)

Policies

This design can be relevant to meeting the requirements of the following policies.
POL14: Policy for responsible use of AI in government

Standards

This design can be useful in achieving the intent of the following standard(s).
Direct link: https://www.digital.gov.au/policy/ai/pilot-ai-assurance-framework
Responsible agency: Digital Transformation Agency
Last updated: October 2024

The Pilot Australian Government artificial intelligence (AI) assurance framework guides Australian Government agencies through impact assessment…