New Defence report explores ethical use of AI

By Shannon Jenkins

Thursday February 18, 2021

The Department of Defence has released a technical report aimed at assisting the development of ethical artificial intelligence systems for Defence.

The report, published on Tuesday, noted that while AI technology has the potential to increase Defence capability and reduce risk in military operations, significant steps must be taken to ensure that the introduction of the technology does not result in “adverse outcomes”.

“Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms,” it said.

To address this, Plan Jericho, the Defence Science and Technology Group, and the Trusted Autonomous Systems Defence Cooperative Research Centre ran a workshop in Canberra in 2019, with the aim of developing a pragmatic ethical methodology for AI projects in Defence.

The workshop was attended by more than 100 representatives from Defence, other government agencies, industry, academia, international organisations and the media. The discussions and theories that came out of it have been outlined in the new report.


Read more: Defence experts discuss ethical AI, while NSW wants to ensure AI in government places customers at the centre


Attendees identified five facets of ethical AI in Defence:

  1. Responsibility — who is responsible for AI?
  2. Governance — how is AI controlled?
  3. Trust — how can AI be trusted?
  4. Law — how can AI be used lawfully?
  5. Traceability — how are the actions of AI recorded?

They also put forward 20 topics to be explored when considering AI, such as transparency, safety, accountability, human factors, supply chain, and misuse and risks.

Chief defence scientist Professor Tanya Monro said AI can offer benefits such as removing humans from high-threat environments and improving Australia’s advantage by providing deeper and faster situational awareness, but that there are also risks.

“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” she said in a statement on Tuesday.

The paper outlined three tools to assist Defence and industry in developing AI systems for Defence:

  1. an AI Checklist for the development of ethical AI systems;
  2. an Ethical AI Risk Matrix to describe identified risks and proposed treatments; and
  3. for larger programs, a data item descriptor under which contractors develop a formal Legal, Ethical and Assurance Program Plan, to be included in project documentation where an AI program’s ethical risk assessment is above a certain threshold.


Read more: Former digital government boss criticises new ethical AI guidelines


Air Vice-Marshal Cath Roberts, head of air force capability, said AI and human-machine teaming would play a “pivotal” role for air and space power in the future.

Ethical and legal issues must be resolved at the same pace that the technology is developed, she said.

“This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control,” she said.

“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way.”

The release of the findings comes a week after the CSIRO published a study showing that AI can be used to influence human decision-making by exploiting vulnerabilities in an individual’s habits and patterns.

CSIRO Data61 director Dr Jon Whittle said the study was further proof that AI technologies have “tremendous potential” to provide societal benefit, but also ethical risks.

“Like any technology, AI could be used for good or bad, so proper governance is critical to ensure that AI and machine learning are implemented in a responsible manner. Organisations need to ensure they are educated on what these technologies can and cannot do and be aware of potential risks as well as rewards,” he said.


Read more: AI can now learn to manipulate human behaviour

