Agencies across the Australian Public Service must become data-driven so that employees can understand the power of the data and achieve better outcomes, according to Dr Simon Barry, deputy director at CSIRO’s Data61.
Barry appeared on a panel of experts and government leaders at an online event on Tuesday, with discussions centring on the future opportunities and challenges of artificial intelligence in government operations and service delivery.
Hosted by the Institute of Public Administration Australia and the Australian Council of Learned Academies, the event saw Australian chief scientist Dr Alan Finkel deliver his final speech on AI ahead of his departure from the role later this year.
Finkel noted that there is a spectrum of rules in human societies, moving from accepted societal norms at one end, through to severe punishments for the worst crimes at the other. A similar spectrum, he argued, is needed for AI.
While some applications of AI would be covered by existing regulatory regimes, “there is a worrying void” at the societal norms end of the spectrum, Finkel said.
“It’s amazing how much information we’re willing to give up in exchange for a bit of convenience. Information that can and will be used to target us with advertisements and coercions. We have little protection,” he said.
“End user agreements exist to protect the vendor, not the consumer. None of us has the knowledge or the time to individually vet the AI products we use. When was the last time you read the terms and conditions before clicking ‘Accept’?
“In our human intelligence society, there are consequences for falling short of standards. We need the same for AI developers. A way for consumers to recognise and reward ethical conduct.”
The simplest way to do this would be through an AI trust mark, allowing consumers and governments to identify and reward ethical AI. However, this system would only work for low-risk consumer technologies, not for AI applications that have a direct impact on human freedom and safety.
Finkel referred to Department of Home Affairs secretary Michael Pezzullo’s “golden rule” for human rights and AI:
“No robot or artificial intelligence system should ever take away someone’s right, privilege or entitlement in a way that can’t ultimately be linked back to an accountable human decision-maker.”
He argued the rule was “the mark of a public sector fit to be an ethical custodian”.
For society to build AI into a better way of living, Finkel said, good legislation, guidelines and ethical behaviour were needed.
Professor Genevieve Bell, director of the Australian National University’s 3A Institute, agreed, but added that an “AI-ready society” was also needed.
Building that society would require six things:
- A balanced regulatory framework, which would consider issues such as ethics, and the relationship between foreign companies, local data, and “the complex of technologies and networks that make it very hard to know where those borders precisely sit”.
- An engaged society, referring to issues such as trust, the role of AI inside civil and civic discourse, how people tell stories and whether they are fact or fiction, and deep fakes.
- A learning focus on being an AI-ready society, to prepare children, adults and communities. Bell noted that a lot of focus is given to delivering STEM education in schools, despite the fact that “most of Australia is no longer in primary school or high school”.
- Thinking differently about the rules of the economy, and challenging the notion that the most successful use of AI would be to deliver productivity and efficiency gains.
- Considering the future of AI in the creative sector. Bell noted AI is often talked about in relation to the workplace, and the future of work. She encouraged people to instead consider the role of AI in the arts, sport, entertainment, or in producing “the spectacle”.
- A focus on sustainability and changing the way technology is being delivered, as storing and circulating the world’s data uses an immense amount of energy.
Barry said data quality has become a “very significant issue” for the public sector, and that workers making mistakes by not understanding data was “a much bigger danger” than the small number of people who choose to be unethical.
He noted that embedding efficient data capture is important, because it is impossible to build good AI if data capture is complex. For that reason, he said, the public service must become data-driven, so that efficient capture is driven culturally rather than imposed.
“People have got to see the value of it, you know, for their mission, for their purpose,” he said.
“I think all levels of agencies need to become data-driven, at the top level, to express the purpose of an agency — both in terms of the data and what you will see — and then through all the people who actually understand the power of the data, and how it can actually free them up to actually get better outcomes.”
Barry noted people will often approach data with the aim of getting something from it, but the real value comes from being able to find the questions. He encouraged more training so that employees better understand what technology can and cannot do.
“If we frame the right questions, and we can frame what we’re trying to understand, then I think it makes it much more powerful,” he said.
“Then it’s about building the solutions to the questions, whereas a lot of the discussion at the moment is ‘I have so much data, it must be valuable, we must be able to get something from it.’ So it is actually, it’s a journey for an agency around that.”