Artificial intelligence and profiling at the Commonwealth Games


Advances in biometric, policing and crowd management technology, combined with heightened global security concerns, will put GC2018 into uncharted territory. Can future events of this scale be safe and secure as well as fair, accessible and compliant with privacy protections?

The Gold Coast Commonwealth Games will be here before we know it, with plans well underway to make them as exciting and smoothly run as possible. Safety and security will play a paramount role, with considerations heightened following recent terrorist attacks here and overseas.

Over 11 days next April, Queensland will host 6600 athletes and team officials from 70 nations and territories, marshal 15,000 volunteers and take centre stage as GC2018 is broadcast to a cumulative international audience of 1.5 billion. As the organisers admit, the event is unprecedented in the city’s history, likely to leave an enduring memory for generations of future tourists, hopefully for all the right reasons.

According to Assistant Commissioner Peter Crawford of the Commonwealth Games Group, the Australian Federal Police are considering using new developments in facial recognition technology during the games, alongside other security measures. News Corporation has previously reported that this technology would be used to scan crowds on public transport “to identify potential terror suspects before they can get close to any sporting or public venue”.

However, while details of this and other measures are scarce (for obvious security reasons), the Assistant Commissioner noted that “This technology is constantly evolving. No decisions have been made regarding how and where this technology will be deployed.” The idea was so nascent that, at the time the original claims were published, relevant agencies and other government stakeholders in Queensland had yet to be approached over the issue.

Balancing privacy and security

Philip Green

Queensland’s Privacy Commissioner, Philip Green, notes that facial recognition technology is an important security feature, but warns that automated identification processes could open the door to profiling and lead to a large number of false positives. “There is a need to maintain a proportionate response in view of the assessed risks at the time. I am meeting with Queensland Anti-Discrimination Commissioner, Kevin Cocks, to discuss possible discriminatory impacts and errors in using the technology which have arisen in the US,” says Commissioner Green, referring to the FBI’s controversial FACE program, which was the subject of a damning report by the United States Government Accountability Office last year.
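To see why false positives worry regulators, some back-of-the-envelope arithmetic helps. Every figure in the sketch below is an assumption chosen purely for illustration, not a published GC2018 or FBI statistic, but the base-rate effect it demonstrates holds for any rare-target screening system.

```python
# Illustrative numbers only: every figure below is an assumption for the
# sake of the arithmetic, not a published GC2018 or FBI statistic.
crowd_size = 600_000          # assumed faces scanned over the games
persons_of_interest = 20      # assumed genuine watchlist members in the crowd
true_positive_rate = 0.95     # assumed chance a real match is flagged
false_positive_rate = 0.001   # assumed 0.1% of innocent faces wrongly flagged

true_alerts = persons_of_interest * true_positive_rate
false_alerts = (crowd_size - persons_of_interest) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Expected genuine alerts: {true_alerts:.0f}")   # ~19
print(f"Expected false alerts:   {false_alerts:.0f}")  # ~600
print(f"Alerts that are correct: {precision:.1%}")     # ~3.1%
```

Under these assumed figures, roughly 97 per cent of the people flagged would be innocent, simply because genuine targets are so rare in a crowd that size.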

Queensland Police, Transport and the Department of the Premier and Cabinet have told the Commissioner that they will consult with his office if the Commonwealth does end up approaching them about the use of AI, and they appear to share many of his concerns.

The use of automated decision-making is a hot topic in the privacy space at the moment. Earlier this year Commissioner Green spoke at the Computers, Privacy and Data Protection Conference in Brussels, where patterns and profiling featured as key issues for exploration. It throws up a number of uncharted privacy considerations: how should one assess a decision made by a machine, or even understand how complex decision-making technology works in the first place?

And back at home, Australia’s privacy authorities are also grappling with the use of AI. The Office of the Australian Information Commissioner hosted state and territory privacy counterparts in Sydney earlier this month, where they discussed their approach to facial recognition and planned out their strategies. But achieving uniformity around privacy is difficult. As Commissioner Green states, there is a “patchwork of frameworks protecting privacy … some jurisdictions have greater privacy protections than others.”

Tapping into the Attorney-General’s face matching services

In addition to international sources, the facial recognition initiative would tap into the two Face Matching Services being established by the Attorney-General’s Department. The Face Verification Service (FVS), launched in November last year, provides secure one-to-one image matching between agency systems to confirm a claimed identity, while the Face Identification Service (FIS) will provide one-to-many matching to establish the identities of unknown individuals.

“Access to the Face Identification Service will be limited to law enforcement and security agencies.”

The FVS’ one-to-one image matching is more targeted than the FIS, providing a simple ‘yes/no’ response. Currently, visa and passport images held by the Department of Immigration and Border Protection can be shared with DFAT and the AFP. However, ongoing negotiations to expand the pool of images to driver licence photos held by state roads agencies are proving a sticking point. Tasmania and the Northern Territory have participated in a pilot, but some eastern states are reluctant about the prospect of their residents’ biometric information being linked up in this way when privacy standards are lower in other jurisdictions (like WA and SA, which do not have their own dedicated, independent privacy authorities).
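The difference between the two services is easiest to see in code. The sketch below is purely conceptual: the embed() function, the 0.6 threshold and the gallery layout are illustrative assumptions, not the Attorney-General’s Department’s actual interfaces.

```python
# Conceptual sketch only: embed(), the 0.6 threshold and the gallery layout
# are illustrative assumptions, not the actual FVS/FIS interfaces.
from typing import Dict, Optional

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model: returns a unit-length vector."""
    vec = image.astype(float).ravel()
    return vec / (np.linalg.norm(vec) or 1.0)

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.6) -> bool:
    """FVS-style one-to-one check: does the probe match one claimed identity?
    Returns a simple yes/no and reveals nothing about anyone else."""
    return float(embed(probe) @ embed(claimed)) >= threshold

def identify(probe: np.ndarray, gallery: Dict[str, np.ndarray],
             threshold: float = 0.6) -> Optional[str]:
    """FIS-style one-to-many search: compares the probe against every
    enrolled image and returns the best match above the threshold, if any."""
    p = embed(probe)
    scores = {name: float(p @ embed(img)) for name, img in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The design point is that verification answers a narrow question about one nominated person, while identification trawls everyone enrolled, which is why access to the latter is far more tightly restricted.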

Complicating matters, while information-sharing is currently done via a hub-style router (which does not conduct matching or store any personal information itself), there is still discussion that the Commonwealth may develop a database of images uploaded daily. Centralising this information could increase privacy concerns and security risks. Commissioner Green notes, for example, that using the FIS would likely put ‘personal information’ into play (which has various ramifications under state and federal privacy principles). As such, ‘the Commonwealth, state and territory jurisdictions have indicated they would do separate privacy impact analysis of the various components, which should clarify this matter further.’
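One way to picture the current arrangement is as a stateless broker. The sketch below assumes hypothetical agency endpoints and types; the real interoperability hub is not publicly documented at this level of detail.

```python
# Minimal sketch of the hub model, assuming hypothetical agency endpoints;
# the real interoperability hub is not publicly documented at this level.
from dataclasses import dataclass
from typing import Callable, Dict

MatchService = Callable[[bytes], bool]  # an agency-side matcher: image -> yes/no

@dataclass(frozen=True)
class MatchRequest:
    requesting_agency: str
    target_agency: str
    probe_image: bytes  # passes through the hub but is never persisted there

class Hub:
    """Routes requests between agencies; performs no matching, stores no images."""

    def __init__(self, endpoints: Dict[str, MatchService]):
        self.endpoints = endpoints  # routing metadata only

    def route(self, request: MatchRequest) -> bool:
        # Matching happens at the agency that holds the images; the hub
        # retains no copy of the probe and no biometric record of its own.
        return self.endpoints[request.target_agency](request.probe_image)
```

A centralised, daily-updated image database would invert this design, persisting biometric data at the centre, and that is precisely the shift that raises the additional privacy and security stakes.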

However, access to the Face Matching Services will be highly controlled. As a spokesperson for the Attorney-General’s Department told The Mandarin, “access to the FIS will be limited to law enforcement and security agencies investigating serious offences, or specialist fraud prevention areas in the agencies issuing passports, immigration and citizenship documents.”

The Gold Coast attracts crowds, but nothing on the scale of the Commonwealth Games.

Where this technology is heading in future

As the Attorney-General’s Department spokesperson noted, the impetus for establishing the new biometric infrastructure is to combat identity crime and prevent identity theft. The use of these identity services for wide-ranging counter-terrorism measures, while sensible, does imply an expansion of the original project, which would need to be worked through by relevant bodies.

Turning to the more distant future, you can imagine a number of other use cases for such technology. For example, it could remove the need to establish identity in person, expanding the range of government services, such as Centrelink, that can be delivered purely online.

And like the Document Verification Service, which the Attorney-General’s Department states is ‘complementary’ to the Face Matching Services, it could in time be opened up to private sector players like telcos and identity verification service providers.

Proven technology will underpin the games’ network

Not all the security technology at the games will be experimental. The communications network needed to keep crowds moving and sporting venues seamlessly connected will be based on TETRA. The voice and dispatch radio standard doesn’t have the bandwidth for the full video streaming features that LTE offers, but it is a proven technology that emergency services depend on every day, and it has the advantage of avoiding any controversy.

Motorola Solutions was named as the critical communications provider last week, and its Zeon Digital network will deliver expansive coverage for officials using more than 6000 radios across the state. The network already provides year-round mission-critical communications for 15,000 emergency service workers in the state, keeps essential public utility and transportation services running efficiently, and has served the occasional Hollywood blockbuster filmed locally. It was also used at the 2014 G20 Leaders Summit to help protect the world’s most powerful people.

There was one unrelated privacy breach at that summit, but it can’t be blamed on advanced networks or AI. Simple human error in the Department of Immigration and Border Protection meant the leaders’ personal contact information was sent to a third party:

“The cause of the breach was human error. [Redacted] failed to check that the autofill function in Microsoft Outlook had entered the correct person’s details into the email ‘To’ field. This led to the email being sent to the wrong person.”

But maybe AI could solve that risk too.
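Or even without AI: a simple rule-based outbound guard could catch this class of error before sensitive material leaves the building. The sketch below is purely illustrative; the allowlist, addresses and helper name are assumptions, not any department’s actual mail configuration.

```python
# Hypothetical outbound-mail guard: the allowlist and addresses below are
# assumptions for illustration, not the department's actual configuration.
ALLOWED_DOMAINS = {"dfat.gov.au", "pmc.gov.au"}  # assumed trusted recipients

def unrecognised_recipients(to_addresses: list[str]) -> list[str]:
    """Return recipients outside the allowlist so a human can confirm them
    before a sensitive email is sent."""
    return [addr for addr in to_addresses
            if addr.rsplit("@", 1)[-1].lower() not in ALLOWED_DOMAINS]

held = unrecognised_recipients(["delegate@example.org", "desk@pmc.gov.au"])
if held:
    print(f"Hold for review, unrecognised recipients: {held}")
```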