Government’s “interim” generative AI guide emphasises low-risk use cases

The government has published cautious “interim” guidance on how generative AI might be utilised by the public sector, but says it does not constitute a binding position on the technology.


The guidance was prepared by the Digital Transformation Agency (DTA) and the Department of Industry, Science and Resources (DISR), “through broad consultation with Commonwealth agencies”.

It builds on an earlier set of more generic AI guidance, described as a beta, which had allowed for some limited generative AI experimentation.

While there is no formal whole-of-government policy or position, individual agencies have made their own calls about what they will allow.

Home Affairs, for example, has blocked open access to ChatGPT, and makes exceptions for experiments on a case-by-case basis.

DTA chief executive Chris Fechner said yesterday that the interim guidance is intended to augment – not override – individual agency positions on the technology.

“The purpose of this interim guidance on generative AI is intended purely to guide staff within APS agencies,” he said in a blog post. 

“At this stage, it does not replace the generative AI policies developed by individual agencies, rather it is intended to supplement these policies while further work is done to develop a whole-of-government position.”

The interim guidance was unveiled earlier this week but was not made broadly public until now.

It comprises two sections: an introduction, followed by more detailed guidance containing four principles for generative AI use and some brief hypothetical use cases.

At a high level, the guidance calls for staff access to generative AI platforms to be formally enrolled, with layers of approval; for any exceptions to be logged; and for agencies to “seek to move to commercial arrangement for generative AI solutions as soon as it is possible to do so.”

Underlying that, the guidance is to use the technology only “where the risk of negative impact is low”, to disclose its role in any activity, and to question its outputs.

Largely, the allowable use case for generative AI is as a search engine: for example, to find a generic template – such as for a report or project plan – but not to customise it in any way within the generative AI tool.

