In July 2023, the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR) released initial interim guidance on government use of publicly available generative artificial intelligence (AI) platforms.
See the current guidance for the latest version. For previous versions of this guidance, see the change log table below.
Introduction for agencies
Generative AI presents new and innovative opportunities for government, and a growing number of tools and models are now readily available.
However, because these tools are evolving and being adopted rapidly, there is an urgent need for guidance to help agencies consider and assess the risks involved in their use.
Whole-of-government and agency-specific policies provide guidance on the appropriate, safe and effective use of technology tools, including AI. However, given the ease of access to and use of publicly available generative AI tools such as ChatGPT, Microsoft Copilot and Google Bard, this interim guidance is recommended for government agencies to use as the basis for providing advice to their staff. It is also recommended as the basis for advice in relation to the enterprise version of Microsoft Copilot (previously referred to as Bing Chat Enterprise), where agencies have enabled it for staff use.
In addition to the interim guidance for staff, we recommend agencies:
- consider implementing an enrolment mechanism to register and approve staff user accounts to access public generative AI platforms. This could include appropriate approval processes through relevant senior accountable officers, such as the Chief Information Security Officer (CISO) or Chief Information Officer (CIO).
- implement an internal feedback and reporting mechanism to identify any challenges staff face in following the guidance.
- seek to move to commercial arrangements for generative AI solutions as soon as possible.
In exploring commercial arrangements for generative AI solutions, agencies should consider whether service providers and their AI services have undergone a security assessment by an Infosec Registered Assessor Program (IRAP) assessor to determine their security posture and the security risks associated with their use. This assurance may better enable agencies to leverage the benefits of generative AI solutions while meeting their data security and privacy requirements. Otherwise, agencies should apply the guidance under 'privacy protection and security' as the default advice for the level of classification or sensitivity of information that may be entered into generative AI solutions.
Agencies may also consider developing or using in-house AI models that do not rely on sending data to an external AI service provider.
While the default position of this guidance remains a recommendation that staff use work email addresses rather than personal email addresses where it is necessary to log in to public generative AI tools, it is up to each agency to decide how to apply the guidance appropriately. Using work email addresses gives staff a clearer separation between work and personal use of these tools. A range of obligations apply to actions undertaken by Australian Public Service (APS) staff in connection with their employment, including those outlined under the APS Values, Employment Principles and Code of Conduct.
As the Government's understanding of the legislative, administrative and broader risks of this technology evolves, the DTA, in collaboration with DISR, will investigate the development of whole-of-government policies and frameworks relating to the responsible use of AI in government.
In the meantime, we encourage agencies to build on this interim guidance and implement agency-level policies. The interim guidance will be updated as needed to provide the latest advice on how to approach generative AI.
We also invite agencies to provide feedback on the value, applicability or relevance of this guidance by emailing email@example.com.