CA ANZ submission raises concerns with proposed AI guardrails
The professional body says the lack of clarity surrounding the definitions of ‘deployer’ and ‘end user’ could have implications for accountants.
Chartered Accountants ANZ (CA ANZ) has highlighted a number of concerns with the government's proposal for mandatory AI guardrails in high-risk settings, saying the proposed principles and guidelines lack clarity in their current form.
In its submission, CA ANZ said it is concerned about how AI systems will be designated as high-risk under the proposed guardrails.
"While most practitioners offering services to consumers will use systems built on or incorporating AI, we do not consider that their use cases for AI would meet the principles to designate such systems as high risk," the submission said.
"However, in discussions with members and stakeholders, we noted that there was not a consistent interpretation of this critical point, particularly given the question in the proposals paper asking whether all general-purpose AI systems should be considered high risk."
CA ANZ said that where accountants use an AI system to analyse data and provide advice based on the results, it is unclear whether that system would be designated a high-risk AI system.
"For example, strategies to improve cash flow or strategies to grow a business in line with industry benchmarks. Potentially, the larger the practice the more likely it is to deploy AI to analyse data to provide more fulsome information to their clients so they can make an informed decision," it said.
"As we understand the principles, the key will be to assess the severity and extent of potential impacts of a decision based on the information generated by an AI system."
CA ANZ said further guidance is needed to clarify the elements to be considered in assessing the risk of an AI system, such as illustrative use cases of both high-risk and low-risk AI systems.
"Guidance could also reflect the evidence that will be expected of deployers of AI systems to prove to a regulator, or regulators, how they have assessed their AI system against the proposed principles and concluded the risk is low or high," it said.
CA ANZ said further clarification is also needed on when an end user of an AI system might be categorised as a deployer, as this will affect the application of the guardrails.
The professional body noted in the submission it has reached out to members and stakeholders for feedback on what may be considered high-risk in the field of accounting.
"Practice management software will use AI in various ways," CA ANZ said.
"From our perspective, the provider of the software would be considered a 'deployer' and our members that use this software 'end users.'
"To others, using such software to provide a service, say taxation services, means the accountant would be considered a deployer."
CA ANZ said issues such as these highlight the need for "clarity, enforceability, and a balanced approach to mandate AI guardrails for high-risk AI systems".
"We recommended that implementing mandatory guardrails be considered at least 12 months after the release of the AI voluntary standard," it said.
This would allow the government to consider feedback on the impact of the voluntary standard, identify who has and has not implemented it and why, assess the progress of AI, and determine whether AI regulation in other jurisdictions has improved protections for consumers.