Aerial threat: rewards come with the AI revolution, but risks follow


The changing parameters of opportunity and risk from the emerging AI revolution run much deeper than might be generally supposed, say Professor Anthony Elliott and Julie Hare.

As the great wave of digital technology breaks across the world, artificial intelligence creeps increasingly into the very fabric of our lives. From personal virtual assistants and chatbots to self-driving vehicles and tele-robotics, AI is now threaded into large tracts of everyday life. It is reshaping society and the economy.

Klaus Schwab, founder of the World Economic Forum, has said that today’s AI revolution is “unlike anything humankind has experienced before”. AI is not so much an advancement of technology as the metamorphosis of all technology.

This is what makes it so revolutionary. Politics changes dramatically as a consequence of AI. Not only must governments confront head-on the fallout from the mass replacement of traditional jobs by AI, algorithms and automation; they must also ensure that all citizens are adaptable and digitally literate. AI will be fundamental to almost all areas of policy development.

In many respects, what we see today is a new agenda. The United Nations has predicted that the number of people aged 60 or over will double between 2015 and 2050 to nearly 2.1 billion, accounting for 20 per cent of the world’s population. This will coincide with falling birth rates in many countries and could result in a “demographic time bomb”. Falling tax revenues and increasing welfare payments will significantly challenge governments across the world.

The consequences of our increasingly automated global world involve a shattering of political orthodoxies. Some, such as Boston Consulting Group partners Vincent Chin and Christopher Malone, have argued that such times call for bold new thinking – such as a Universal Basic Income (UBI). Trials of a UBI are underway in Finland, Canada, Brazil, the Netherlands and parts of Africa to see if the fallout from widespread labour force disruption and job losses can be better managed.

Profound questions

AI and other disruptive technologies are also raising profound questions about the legal and regulatory frameworks governing the digital revolution. Factors that must be taken into consideration include their impacts on the labour market, social inequality and potential physical harm. This in turn raises the question of whether policies should be interventionist, or whether new developments should be allowed to flourish in a state of what is called “permissionless innovation”.

“The point is this: there is no easy way of identifying in advance how new technologies based on autonomous systems will play out.”

Obviously, policy attempts aimed at coping with emerging and vastly complex technological systems often produce unintended consequences, revealing or generating other issues or problems. In turn, other solutions or synergies emerge in response to such interdependencies between adaptive systems.

One need only look at the recent debate over drones, or unmanned aerial vehicles (UAVs), which highlights these dilemmas. In the wake of the London Gatwick airport drone incursion, which caused the delay of flights for several days, the UK Parliament extended exclusion zones around airports and gave police new powers to deal with the illegal use of drones. However, no sooner had these new laws been passed in early 2019 than a drone sighting at Heathrow airport led to the immediate closure of its terminals. One step forwards, one sideways.

The changing parameters of opportunity and risk run much deeper than might be generally supposed. AI, robotics and information technology create new forms of opportunity and risk which previous generations have not had to cope with. Technological risks cannot be left only to experts to resolve, partly because experts routinely disagree, both about levels of risk and about policy responses. Making judgements about the gains and losses of AI technologies must be a shared responsibility.

Consider, once again, drones. Developed originally for military purposes, drones have opened up stunning new global opportunities, many of which were previously unimaginable. Hungry? Get your takeaway meal delivered by drone. Bored? Disney World in Florida deploys Shooting Star drones to perform a regular light show – a super high-tech version of last century’s fireworks displays.

The opportunities afforded by drones cannot be overestimated. Squads of small military drones can co-ordinate their movements to identify targets. Drones can be used to survey building and mining sites; they can survey and help protect endangered animals and ecosystems; and they are transforming agricultural practices, including planting large tracts of land with seeds from 300 metres.

In Rwanda, drones are now in use to deliver blood, vaccines and other urgent supplies to remote and inaccessible areas.

In South Africa, Peru, Guyana, Papua New Guinea and the Dominican Republic, drones are being used for health deliveries and other humanitarian emergencies. In the Democratic Republic of the Congo, the UN has deployed drones as part of its peacekeeping program.

But there are also huge risks stemming from drones and their associated technologies.

Stunning risks

The US, for example, has deployed unmanned drones to attack militants in Pakistan and Afghanistan but, according to some reports, has wrongly targeted numerous innocent civilians.

Drones have been used to smuggle drugs and to stalk individuals; one crash-landed at the White House, while another was used to intimidate German Chancellor Angela Merkel. Drones are not always piloted by competent people, and they are being used to reinvent spying and espionage – which is either good or bad, depending on which side you are on.

The point is this: there is no easy way of identifying in advance how new technologies based on autonomous systems will play out. There are certainly some stunning opportunities, with the potential to drastically reduce poverty, disease and war. But the risks, too, are enormous, as can be clearly discerned from the development of autonomous weapons systems. Moreover, the assessment of risk here must involve not only direct but also indirect threats. An example is that of insurgent groups tapping into communication satellites and aerial drone camera feeds in order to hack into military intelligence.

The debate over AI very much hinges upon the assessment of risk, where the calculation is often murky and sometimes impossible to make.

“There should be reporting to parliament and to the public about the risks of AI technologies, backed with informed analysis.”

What is missing in much of the policy debate is precisely considered risk assessment. As recent governmental inquiries such as the UK Parliament’s Select Committee on Artificial Intelligence have recommended, there should be reporting to parliament and to the public about the risks of AI technologies, backed with informed analysis.

Some material pertaining to defence and national security will, of course, need to be kept confidential; but since so much rests on the new parameters of risk, the reckoning involved should be scrutinised on an ongoing basis.


Professor Anthony Elliott is Dean of External Engagement and Executive Director of the Hawke EU Jean Monnet Centre of Excellence at the University of South Australia.  Julie Hare is Honorary Fellow of the Graduate School of Education at the University of Melbourne.

Professor Elliott’s new book, The Culture of AI: Everyday Life and the Digital Revolution, is published by Routledge.
