The Department of Defence has released a technical report aimed at assisting in the development of ethical artificial intelligence systems for Defence.
The report, published on Tuesday, noted that while AI technology has the potential to increase Defence capability and reduce risk in military operations, significant steps must be taken to ensure that its introduction does not result in “adverse outcomes”.
“Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms,” it said.
To address this, Plan Jericho, the Defence Science and Technology Group, and the Trusted Autonomous Defence Cooperative Research Centre ran a workshop in Canberra in 2019, with the aim of developing a pragmatic ethical methodology for AI projects in Defence.
The workshop was attended by more than 100 representatives from Defence, other government agencies, industry, academia, international organisations and the media, and the discussions and ideas that emerged from it have been outlined in the new report.
Attendees identified five facets of ethical AI in Defence:
- Responsibility — who is responsible for AI?
- Governance — how is AI controlled?
- Trust — how can AI be trusted?
- Law — how can AI be used lawfully?
- Traceability — how are the actions of AI recorded?
They also put forward 20 topics to be explored when considering AI, such as transparency, safety, accountability, human factors, supply chain, and misuse and risks.
Chief defence scientist Professor Tanya Monro said AI can offer benefits such as removing humans from high-threat environments and improving Australia’s advantage by providing deeper and faster situational awareness, but that it also carries risks.
“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” she said in a statement on Tuesday.
The paper outlined three tools to assist Defence and industry in developing AI systems for Defence:
- an AI Checklist for the development of ethical AI systems;
- an Ethical AI Risk Matrix to describe identified risks and proposed treatments; and
- for larger programs, a data item descriptor requiring contractors to develop a formal Legal, Ethical and Assurance Program Plan, to be included in project documentation for AI programs where the ethical risk assessment exceeds a certain threshold.
Air Vice-Marshal Cath Roberts, head of Air Force capability, said AI and human-machine teaming would play a “pivotal” role for air and space power in the future.
Ethical and legal issues must be resolved at the same pace that the technology is developed, she said.
“This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control,” she said.
“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way.”
The findings were released a week after the CSIRO published a study showing that AI can be used to influence human decision-making by exploiting vulnerabilities in an individual’s habits and patterns.
CSIRO Data61 director Dr Jon Whittle said the study was further proof that AI technologies have “tremendous potential” to provide societal benefit, but also carry ethical risks.
“Like any technology, AI could be used for good or bad, so proper governance is critical to ensure that AI and machine learning are implemented in a responsible manner. Organisations need to ensure they are educated on what these technologies can and cannot do and be aware of potential risks as well as rewards,” he said.