So much of government is about decision-making, but are humans still up to standard when a machine can spit out decisions faster and using vastly more data sources? Does that mean policy and program jobs will become as rare as manufacturing jobs?
AI systems and humans are very complementary, says Professor Francesca Rossi. Exploiting this complementarity is key to getting the most out of AI. Humans do well at intuition, creativity, value judgements and asking the right questions, but they are pretty awful at statistical reasoning and are vulnerable to unconscious bias. Computers, by contrast, are very good at learning from much larger quantities of data and can help humans answer questions in a less biased way — alerting them if they see biased behaviour.
Rossi says many government internal procedures and operations can be greatly improved by the use of AI techniques — but that doesn’t mean AI will kill the jobs of existing workers. Most AI research is devoted to building systems that support humans, rather than replace them.
On leave from Italy’s University of Padova to work with IBM, Rossi talks with the Centre for Public Impact about her work and research program. Find out how she thinks humans and AI can best work together, and why she can’t “think of a single task that will not be transformed” by AI.
AI will err and learn from failure too
“If you want machines to be flexible enough to cope with the uncertainty of real life, you have to allow for data-driven machine learning approaches. If you use other approaches, at least in perception tasks, being able to understand what’s in an image, what’s in a video, what’s in a text, we’ve seen that those other approaches are less effective.
“If you expect AI to always give you the correct answer, like you expect of a calculator … if it’s not correct you say I cannot use this calculator, but in this case it’s not the same. You don’t want them making more errors than humans, but people should know that there are some limitations. Of course, one could think there are some decisions that, no matter what the error is, you do not want to delegate to machines. Those decisions could have a lot of ethical issues involved, moral issues, and societies may decide not to give those decisions to machines.”
System decisions must be explainable
“In Europe there is this new regulation, the GDPR [General Data Protection Regulation] that is going to be in effect very soon, 2018 all over Europe, that requires many things about data privacy. It requires that anytime you have a system that makes a decision that impacts a person’s life, it has to be explainable. So if you’re not granted a loan, then the system should be able to explain why. Explanation capability is very important to build the right interaction, the right level of trust between the humans and the systems.”
AI processes can help transparency
“Governments are about improving the social good of society. AI is instrumental in reaching that goal in a transparent and accountable way. Whether it’s delivering healthcare or improving the operation and accountability of internal procedures that people have to relate to, or [how it] treats the data of the citizens … the fact that you have to spell out the goal you want to reach, that can make it more auditable and more transparent.”