Australian Government Architecture

Interim guidance for agencies on government use of generative Artificial Intelligence platforms - July 2023

Notice

This guidance has been superseded. The current version is available here.

The Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR) have released interim guidance on government use of publicly available generative AI platforms. The interim guidance is recommended for government agencies to use as the basis for providing generative AI guidance to their staff. Agencies should read the introduction before using this interim guidance.

Interim guidance for Australian Public Service (APS) staff

Note that this guidance is iterative and will be updated over time. It is provided for government agencies to implement within their organisations. APS staff should follow their agency’s policies and guidance on generative AI tools in the first instance.

Feedback from public consultation on the responsible use of AI in Australia will be used to inform consideration across government on appropriate regulatory and policy responses that may include future iterations of this guidance.

Generative AI tools present new and innovative opportunities for government. However, because these tools are evolving rapidly, the risks involved in their use need to be considered and assessed.

As you consider the potential uses of publicly available generative AI platforms[1] like ChatGPT, Bard AI or Bing AI in your work, you should assess the potential risks and benefits for each use case. In doing so, you should use the following principles and tactical guidance, which are supplemented by three practical use cases.

Users should first and foremost comply with their department’s or agency’s ICT obligations and policies. The Digital Transformation Agency (DTA) encourages departments and agencies to review their AI-related policies in line with this advice.

[1] Publicly available generative AI platforms are third-party AI platforms, tools or software that have not been security risk assessed by the agency and for which the agency has not entered into a commercial contract with the provider.

Principles-based approach for the ethical use of AI in government

The four principles below are designed to ensure the responsible, safe and ethical use of generative AI by the Australian Public Service (APS). They will help:

  • support the responsible and safe use of technology
  • minimise harm, and achieve safer, more reliable and fairer outcomes for all Australians
  • reduce the risk of negative impact on those affected by AI applications
  • enable the highest ethical standards when using AI
  • increase transparency and build community trust in the use of emerging technology by government.

Responsible and ethical use of AI is paramount. Ethical use revolves around appropriate consideration of privacy, transparency, accountability and potential biases that can impact individuals or groups and influence decision-making processes. These are fundamental principles of the APS.

The risk of using AI for official activities is context-specific. For example, government activities range from providing policy advice and delivering programs to industry through to delivering services to the community. As such, the requirements will differ, and will change, depending on how the AI is deployed. These principles are designed to support the APS in considering whether it is appropriate to use AI tools like generative AI.
 

Principles 

  • 1. AI should be deployed responsibly

Only when AI is deployed responsibly can it improve the efficiency, effectiveness and quality of services and advice delivered. You should only use publicly available generative AI platforms in low-risk situations, applying the appropriate risk mitigation strategies from the advice below. See Figure 1 for high-risk situations where these tools must not be used.

Publicly available generative AI platforms should only be used in cases where the risk of negative impact is low.

Use cases which currently pose an unacceptable risk to government include but are not limited to:

  • Use cases requiring input of:
    • large amounts of government data
    • classified, sensitive or confidential information
  • Use cases where services will be delivered or decisions will be made
  • Use cases where coding outputs will be used in government systems.

Figure 1: Initial High-Risk Assessment

Any responses or outcomes provided by these tools should always be reviewed for appropriateness and accuracy, as they can present incorrect answers confidently. Consider whether outcomes meet community expectations, including in relation to the impacts of known biases in the training data. You should also consider the intellectual property rights of third parties, as well as broader privacy and copyright issues, when using these tools.

  • 2. Transparency and explainability

The information provided by public AI tools is often not verified, may not be factual, or may be unacceptably biased. Users must question where this information comes from and be aware of the nature of the tool being used. When using generative AI tools, users must be able to explain and justify their advice and decisions. They also need to critically examine outputs from these tools to ensure those outputs reflect consideration of all relevant information and do not incorporate irrelevant or inaccurate information. Additionally, users must ensure that the ideas being generated are ethical and responsible and will improve the lives of Australians.

It should also be clear when AI tools are being used by government to inform activities. Users could consider including markings in briefings and official communications indicating whether AI was used to generate any of the information.

  • 3. Privacy protection and security

Inputs into AI tools should not include or reveal classified or personal information. All activities need to align with legislation and policies relating to information and data (for example, the Privacy Act 1988 and the Protective Security Policy Framework).

Government information must only be entered into these tools if it has already been made public or would be acceptable to make public. Employees who determine that information is suitable for public release must have the appropriate organisational delegation to do so. Classified or sensitive information must not be entered into these tools under any circumstances. You should also not enter information that would allow AI platforms to extrapolate classified or sensitive information from the aggregation of content you have entered over time. Any data entered is stored outside government, and we do not know who has access to it. Where available, you should disable any settings or permissions that save chat history for training purposes.

  • 4. Accountability and human-centred decision making

Government engages in a broad range of activities, including service delivery, policy advice and program delivery. Generative AI tools must not be the final decision-maker on government advice or services. Accountability is a core principle for activities within the APS; as such, humans should remain the final decision-makers in government processes. Users may wish to use these tools to brainstorm options or draft content, but must ensure the outputs are reviewed by a human prior to use. When reviewing information generated by an AI tool, users should ensure the content aligns with their understanding of the issue and, if in doubt, should fact-check the content using reputable sources.

Tactical Guidance – Implementing the principles in practice

  • When you are using public generative AI platforms as part of your work, you should use your corporate credentials to sign up or log in (that is, use your work email as your user ID and create a new, unique password). Do not use a work federated account. If a publicly available generative AI platform can be used without creating an account, do not create one.
  • Check whether your department or agency requires you to obtain approval or register your use of the platform before you use it.
  • Do not distribute or click on any links provided or generated by public AI platforms or bots. These links could lead to phishing sites or malware downloads. Only click on links from trusted sources.
  • You should treat with care any files generated by public AI platforms or bots, as these have the potential to contain malicious code, such as macros in Microsoft Office files. Such files must be treated as potentially malicious, and must not be opened or distributed to other people, until they have been vetted and proven to be non-malicious.
  • Ensure you align with Australian Government best-practice AI principles and guidelines, including:
    a. Australia’s AI Ethics Principles
    b. Australian Government Architecture – Artificial Intelligence guidelines
  • You should report to your Chief Information Security Officer and/or Chief Information Officer any instances where you are not able to fully apply this guidance.

Use Cases

  • AI USE CASE 1

Nick needs to develop a project plan and is wondering if he can use ChatGPT to create a baseline project plan that he can then improve upon.

What should Nick do?

Nick can ask ChatGPT for a project plan template. However, he must refrain from entering any details of the project, such as the project name, agency, names of systems or software, high-level requirements or the staff members involved. These details could reveal sensitive information about the project to ChatGPT.

  • AI USE CASE 2

Stephanie is creating an urgent report to present later in the day and realises she is short of time to make a PowerPoint presentation. She is planning to upload the data for the slides to a public AI platform to help create a quick presentation.

What should Stephanie do?

Stephanie must not upload or input any sensitive or classified information into public AI platforms. She could input non-sensitive information, such as information available on a government website or in a publicly released agency report. She could also ask the AI for generic slide templates to help prepare her presentation.

  • AI USE CASE 3

Roque is writing technical requirements for a tender that needs to go out urgently. Roque wants to type his requirements into Bard AI to confirm some of the technical specifications around monitor resolutions.

What should Roque do?

Roque can use Bard AI to find information about technical aspects; however, he should take care not to input any details of his agency’s specific requirements or any organisational information. He should also take care not to mention the word ‘tender’ or any market-sensitive information that, combined with his official email address, could indicate his agency is preparing a tender. The information received should also be thoroughly validated by a human for appropriateness and accuracy before being used.
