Canberra to develop guardrails for high-risk AI

Technology
19 January 2024

Consultation paper response outlines a targeted regulatory approach to ensure low-risk use continues unimpeded.

The government will develop mandatory guardrails for the use of AI in high-risk settings such as surgery or self-driving cars but will refrain from the one-size-fits-all regulation adopted in Europe, it says.

Ahead of possible legal changes, it proposed three voluntary measures for “immediate action”: working on a set of safety standards, developing mechanisms for labelling AI-generated material and establishing an expert group to thrash out mandatory rules for AI use.

Minister for Industry and Science Ed Husic said the government wanted “safe and responsible thinking baked in early” as AI was designed, developed and deployed.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” he said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI. The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.”

The proposals come in an interim response to the consultation paper Safe and Responsible AI in Australia and outline an approach that contrasts with the EU’s single regulatory law, aiming to ensure low-risk AI use “continues to flourish largely unimpeded”.

Indicators of high-risk activities included “systemic, irreversible or perpetual” impacts, the paper said, such as using AI-enabled robots for surgery or the use of AI in self-driving cars. However, a comprehensive definition of the term was still in development.

The interim response drew on 510 online submissions as well as industry roundtables and a virtual town hall event.

“Almost all submissions called for the government to act on preventing, mitigating and responding to the harms of AI”, it said.

The paper said the National AI Centre would “create a single source for Australian businesses seeking to develop, adopt or adapt AI”.

Other measures included developing labelling mechanisms for AI-generated materials and establishing a temporary expert advisory group to develop mandatory “guardrails”.

The paper said submissions recognised that “voluntary commitments from companies to improve the safety of systems capable of causing harm were insufficient” but views differed on the most appropriate form of regulation.

Potential mandatory “guardrails” could involve testing AI products before release; transparency about model design and the data underpinning AI applications; and training for developers and deployers of AI systems, it said.

It also considered possible forms of certification and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.

RMIT research fellow Nataliya Ilyushina criticised the government’s delayed response to the paper, noting that the consultation process closed six months ago.

“Australia’s unacceptable delay in developing AI regulation represents both a missed chance for its domestic market and a lapse in establishing a reputation as an AI-friendly economy with a robust legal, institutional and technological infrastructure globally,” she said.

Ms Ilyushina emphasised the importance of striking the right balance between regulating AI’s risks and preserving its benefits, especially for small businesses.

“The adoption of AI is affordable and accessible, which is particularly essential for the growth of small businesses – the cornerstone of the Australian economy. Employing AI to augment human jobs has demonstrated a capacity to enhance productivity, providing a direct solution to Australia's challenges of stagnant productivity growth, the cost-of-living crisis and labour shortages,” she said.

“While businesses prefer voluntary codes and frameworks, other stakeholders – especially those working on risks related to cybersecurity, misinformation, fairness and biases – seek more stringent regulations.”

AI writer Tracey Spicer said she was disappointed by the government’s “weak” regulatory response.

“Australia had a tremendous opportunity to be a world leader in this area. Instead, it’s all about a soft, voluntary approach. Big Tech has won, like Big Tobacco in the past,” she wrote on X.

In addition to developing AI regulation, the government has also committed $75.7 million to AI initiatives in the 2023-24 federal budget, including creating SME support centres, expanding the National AI Centre and funding AI graduate programs.

About the author

Christine Chen

Christine Chen is a graduate journalist at Accountants Daily and Accounting Times, the leading sources of news, insight, and educational content for professionals in the accounting sector. Previously, Christine has written for City Hub, the South Sydney Herald and Honi Soit. She has also produced online content for LegalVision and completed internships at EY and Deloitte. Christine has a commerce degree from the University of Western Australia and is studying a Juris Doctor degree at the University of Sydney.
