AHRC calls for establishment of AI safety commissioner

By Shannon Jenkins

May 30, 2021

(Image: Adobe/Gorodenkoff)

The Australian Human Rights Commission has developed a roadmap to ensure the public and private sectors safeguard human rights when designing, developing and using new technologies.

In a new 240-page report, the AHRC has made 38 recommendations to ensure human rights are upheld in Australia’s laws, policies, funding and education in relation to new technologies, including artificial intelligence.

One recommendation calls for the creation of an AI safety commissioner, which Australian human rights commissioner Edward Santow said could improve trust in government.

“Australians should be told when AI is used in decisions that affect them,” he said.

“The best way to rebuild public trust in the use of AI by government and corporations is by ensuring transparency, accountability and independent oversight, and a new AI safety commissioner could play a valuable role in this process.”

An AI safety commissioner could, according to the report, support regulators, policymakers, government and business in applying laws and other standards to AI-informed decision-making.

“The use of AI, especially in momentous decision-making, brings real human rights risks. For this reason, the primary focus of the AI safety commissioner should be to promote and protect human rights, with a special focus on groups at greatest risk of harm,” the report said.

The commission has also recommended that the Department of the Prime Minister and Cabinet ‘set Australia’s national strategy for new and emerging technologies’ and ‘promote responsible innovation’ through its upcoming Digital Australia Strategy.

“A clear national strategy and good leadership will give Australia a competitive advantage and technology that Australians can trust,” Santow said.

Other report recommendations made to the federal government relate to:

  • legal accountability for government and private sector use of AI
  • AI-informed decision-making
  • equality and non-discrimination
  • biometric surveillance, facial recognition and privacy
  • functional accessibility
  • broadcasting and audio-visual services
  • availability of new technology
  • design, education and capacity building.

The AHRC’s report is the result of three years of consultation with the tech industry, governments, civil society and communities across Australia. Santow noted that, during consultation, Australians consistently said new technology must be fair and accountable.

“That’s why we are recommending a moratorium on some high-risk uses of facial recognition technology, and on the use of ‘black box’ or opaque AI in decision making by corporations and by government,” he said.

“We’re also recommending measures to ensure that no one is left behind as Australia continues its digital transformation — especially people with disability. We need to ensure that new technology facilitates the inclusive society Australians want to live in, and that innovation is consistent with our values.”
