New guidance for the development and deployment of Artificial Intelligence replaces the existing Australian Government voluntary standard and calls into question the status of proposed mandatory guardrails.
The Guidance for AI Adoption
At the end of October, the Department of Industry, Science and Resources and the National AI Centre published new Guidance for AI Adoption ("GfAA"). While the government had made a general commitment to continue to review and monitor the suitability of the existing Voluntary AI Safety Standard (the "VAISS"), the publication of the GfAA as the superseding version only 12 months later is somewhat surprising and raises questions about the future of the previously foreshadowed mandatory guardrails for AI deployment.
The VAISS was released in September 2024 and was the primary (albeit non-binding) source of best practice controls and governance arrangements for the deployment of AI systems in Australia. The new guidance is framed as a response to rapid shifts in technology and the governance landscape over the past 12 months, as well as industry feedback. The GfAA condenses the ten VAISS guardrails into six essential practices and is pitched at both AI deployers and developers.
What has changed?
Where the VAISS was broader and principles-based, the GfAA is more prescriptive and places greater emphasis on whole-of-lifecycle development, deployment and ongoing assessment of AI systems. Australia's Eight Artificial Intelligence (AI) Ethics Principles continue to underpin the GfAA and remain a relevant resource informing Australia's public policy approach to the safe, secure and reliable deployment of AI systems.
| VAISS: ten guardrails | GfAA: six essential practices |
| --- | --- |
| 1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance | 1. Decide who is accountable – Establish end-to-end accountability and robust AI governance |
| 2. Establish and implement a risk-management process to identify and mitigate risks | 2. Understand impacts and plan accordingly – Ensure stakeholder rights and fair treatment |
| 3. Protect AI systems, and implement data governance measures to manage data quality and provenance | 3. Measure and manage risks – Implement AI specific risk management |
| 4. Test AI models and systems to evaluate model performance and monitor the system once deployed | 4. Share essential information – Ensure appropriate transparency and explainability |
| 5. Enable human control or intervention in an AI system to achieve meaningful human oversight | 5. Test and monitor – Ensure quality, reliability and protection through evaluation and monitoring of AI systems |
| 6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content | 6. Maintain human control – Integrate meaningful human oversight |
| 7. Establish processes for people impacted by AI systems to challenge use or outcomes | |
| 8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks | |
| 9. Keep and maintain records to allow third parties to assess compliance with guardrails | |
| 10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness | |
Two versions of the GfAA have been released:
- A "Foundations" version for organizations at the start of their AI adoption journey
- the "Implementation Practices" version, which is more extensive and includes a number of sub-provisions to each of the six essential practices, pitched as suitable to governance professionals and technical experts.
Interestingly, the two versions of the GfAA do not distinguish between small and large businesses; rather, they are targeted at different levels of AI fluency, regardless of organisation size. That said, the more detailed Implementation Practices version is likely to provide more useful instruction for larger organisations, including those that have already begun shaping internal AI policies having regard to the VAISS guardrails and the Ethics Principles.
Helpfully, the GfAA has been published alongside a "crosswalk" identifying corresponding or similar provisions between the two standards, which may be of use to organisations that have structured their AI governance protocols by reference to the VAISS.
Further guidance can also be gleaned from the AI policies and standard contractual provisions published for the Australian Public Service ("APS"). Given that the APS is a significant workforce and a major purchaser and user of technologies and services in the local market, its approach under the newly published AI Plan and the Digital Transformation Agency's AI Model Clauses is likely to influence AI adoption and contracting practices in Australia. Notably, the AI Plan requires contractors to disclose to government agency customers where AI is used in the delivery of services, and to contract on terms that ensure responsibility for the use of AI rests with the supplier.
The fate of the mandatory guardrails
Around the time the VAISS was published, the government also put forward a proposals paper for introducing mandatory guardrails for AI in high-risk settings.
The proposals paper looked to: (a) define high-risk AI; (b) put forward mandatory guardrails for that category of AI systems; and (c) consider options for how best to implement any such guardrails. The mandatory guardrails put forward were identical to the VAISS guardrails, except for the tenth, which instead required deployers of AI to undertake a conformity assessment to demonstrate and certify compliance with the guardrails.
Given that the GfAA supersedes the VAISS, the fate of the mandatory guardrails is unclear, but so far no mandatory equivalent of the GfAA has been foreshadowed. This development sits alongside a view taken in some quarters (including by the Productivity Commission in its interim report published earlier this year) that mandatory guardrails, and AI technology-specific legislation in general, would have a chilling effect on innovation and the economy.
Where to from here?
The GfAA is available now for all organisations to access and adopt, noting that it is non-binding, and the National AI Centre will phase in further complementary tools and resources over the coming 12 months.
For organisations that have already aligned their AI governance policies with the VAISS, the Department of Industry, Science and Resources has confirmed that the Implementation Practices version of the GfAA integrates and builds on the principles of the VAISS. Such organisations can therefore continue to rely on existing policies and practices developed having regard to the VAISS, and may in time choose to review and revise those policies and practices having regard to the GfAA and the further guidance developed by the Department and the National AI Centre as part of their ongoing AI work programme.
Heading into 2026, it remains unlikely that Australia will introduce technology-specific legislation regulating the development and deployment of AI. For now, organisations engaging in these activities must instead comply with the largely technology-neutral laws already in place, with an eye to the non-binding guidance available.
More information about current applicable laws and regulations can be found in AI Watch, White & Case's global regulatory tracker.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2025 White & Case LLP