Computer-generated humans and disinformation campaigns could soon take over political debate, experts warn.
This year has already seen bots spread misleading claims online about Australia’s bushfire disaster. A disinformation campaign on social media argues that the fires were largely started by arsonists. Social network analysis expert Dr Timothy Graham examined 1,340 tweets under the #arsonemergency hashtag on Twitter and found a “suspiciously high number of bot-like and troll-like accounts”, with many suspicious accounts also using the #australiafire and #bushfireaustralia hashtags.
“The conspiracy theories going around (including arson as the main cause of the fires) reflect an increased distrust in scientific expertise, scepticism of the media, and rejection of liberal democratic authority,” he told the Guardian. “These are all major factors in the global fight against disinformation, and based on my preliminary analysis it appears that Australia has for better or worse entered that battlefield, at least for now.”
Last year, researchers at Oxford University found evidence of organised political disinformation campaigns in 70 countries over the previous two years, with at least one political party or government entity in each country engaging in social media manipulation. The researchers conclude that governments and political parties have used social media to spread propaganda, “pollute the digital information ecosystem, and suppress freedom of speech and freedom of the press”. Perhaps the most notable such campaign was the one run by a Russian propaganda group to influence the result of the 2016 US election.
But bots can also be used to influence government decisions.
In 2017 the US Federal Communications Commission opened a public comment period on its plan to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that of the 22 million comments the agency received, many were submitted under fake identities, and more than a million were generated from the same template with only a few words changed. Schneier argues that the growing prevalence of computer-generated personas could “starve” people of democracy, stating:
“Soon, AI-driven personas will be able to write personalised letters to newspapers and elected officials, submit individual comments to public rule-making processes, and intelligently debate political issues on social media. They will be able to comment on social-media posts, news sites, and elsewhere, creating persistent personas that seem real even to someone scrutinising them. They will be able to pose as individuals on social media and send personalised texts. They will be replicated in the millions and engage on the issues around the clock, sending billions of messages, long and short. Putting all this together, they’ll be able to drown out any actual debate on the internet. Not just on social media, but everywhere there’s commentary.”
He suggests that better authentication methods be developed and standardised, and hopes our ability to identify artificial personas keeps pace with our ability to disguise them.
“In the end, any solutions have to be nontechnical. We have to recognise the limitations of online political conversation, and again prioritise face-to-face interactions … This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.”