The Department of Industry, Innovation and Science has released a set of “aspirational” and entirely voluntary ethics principles for Australian businesses that use artificial intelligence.
The principles are designed to encourage organisations to think about the impact of AI, according to the department.
“They are aspirational and intended to complement — not substitute — existing AI related regulations,” it states.
“If your AI use doesn’t involve or affect human beings, you may not need to consider all of the principles.”
But the federal government’s former digital transformation chief Paul Shetler has poked holes in the principles, arguing that their “glittering generalities” leave the guidelines ripe for politicisation.
“A lot of the words they’re using are completely undefined and need to be interrogated. It seems to be full of jargon and buzzwords, and I hate that kind of stuff because it’s not clear — and if you’re dealing with bureaucrats that’s basically an open door for them to drive through,” he told InnovationAus.com last week.
“It seems like a waste and it’s hard to take seriously. It’s a PR move and I don’t think it’ll have an impact on businesses. No one is going to say that what they’re doing is bad for society or for humans. They need to be specific about what they want people to do.”
The eight principles are:
Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society, and the environment.
Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities, or groups.
Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
NAB, Commonwealth Bank, Telstra, Microsoft, and Flamingo AI will trial the principles to test whether they deliver any “practical benefits”, according to the Minister for Industry, Science and Technology, Karen Andrews.
Andrews released a discussion paper put together by CSIRO’s Data61 in April seeking feedback on a set of draft AI ethics principles. The department received more than 130 written submissions, held consultations in Sydney, Melbourne, Brisbane, and Canberra, and developed the revised set of principles with a group of “AI experts”.
The federal government has allocated almost $30 million to “support the responsible development of AI”.