The automation, data linkage and digital governance problems facing government agencies


Unaccountable algorithms and out-of-control AI are popular topics of discussion. The more immediate issue, however, is how government agencies will set appropriate boundaries for the sharing and release of the public’s personal data.

Government agencies’ handling of sensitive data is rightly the subject of significant concern, fuelled most recently by the switch to opt-out for My Health Record and the initial settings for secondary use of My Health Record data, and by Robodebt automated mail-outs and uses and abuses of Centrelink data. These events raised questions about the conditions under which government data sets should be permitted to be joined at all, the rules for controlled data linkage environments (such as government data matching and analytics centres), who decides what criteria apply to the data that is released, and which algorithms should be deployed into algorithmic decision-making environments.

Legislating the authority to join the data sets of multiple government agencies overcomes agencies’ statutory barriers to such sharing. However, it does not address the concerns of citizens and data custodians about ‘digital social licence’ and fairness, or the legitimate expectations of affected individuals about automated processes that substantially affect their rights and how government agencies elect to deal with them.

A digital social licence, or sometimes ‘digital trust’, variously describes:

  • acting fairly and responsibly, as well as in compliance with relevant laws and constraints on administrative decision-making
  • acting “ethically”
  • maintaining the trust of relevant stakeholders, including data custodians who contribute data, affected individuals, and other citizens concerned with balancing fair treatment of particular individuals against broader social benefits.

In an age of rapid change and unpredictable erosion of trust in institutions, social licence through beneficence and clear social purpose is often no longer good enough, even for relatively high-trust organisations such as some government agencies. Social media has given voice to a cacophony of competing views articulated in the expectation of inclusion in the debate, even where those contributions are neither fully informed nor moderately expressed. The views of different constituencies may need to be sought out, empowered to be expressed, and taken into account.

Beware the disenfranchisement of social minorities

Social licence is not a political mandate. Often today, it is the minorities who are excluded, or who perceive themselves as excluded, who loudly express views as to what is acceptable or not, and thereby define digital social licence and levels of societal trust in automated processes. Indeed, majority support for particular automated processes may increase diverse minorities’ sense of disenfranchisement and therefore alienation. And sometimes, minorities are right to feel disenfranchised. Algorithmic decision-making can work to their disadvantage, either by design (excluding or discriminating against them) or through inadvertence or poor application (e.g. use of bad algorithms that are the output of poor statistical methods).
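
To make the ‘poor statistical methods’ point concrete, here is a minimal, hypothetical sketch in Python, with invented numbers: a risk threshold tuned only on the majority population quietly produces a much higher false-positive rate for an under-represented group whose scores the model handles less well.

import random

random.seed(0)

# Hypothetical risk scores for people who should NOT be flagged.
# Assumption (invented for illustration): the model is less well
# calibrated for the under-represented group, so its innocent members
# receive noisier, higher scores on average.
majority_scores = [random.gauss(0.30, 0.10) for _ in range(10_000)]
minority_scores = [random.gauss(0.45, 0.15) for _ in range(500)]

# Threshold tuned so that roughly 5% of *majority* innocents are flagged.
threshold = sorted(majority_scores)[int(0.95 * len(majority_scores))]

def false_positive_rate(scores, cutoff):
    """Share of innocent cases wrongly flagged at this cutoff."""
    return sum(score > cutoff for score in scores) / len(scores)

print(f"majority false-positive rate: {false_positive_rate(majority_scores, threshold):.1%}")
print(f"minority false-positive rate: {false_positive_rate(minority_scores, threshold):.1%}")
# The same ostensibly neutral rule flags innocent minority-group members
# far more often -- not by design, but through poor statistical method.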

Legislated authority confers some social licence. But the fact that something can lawfully be done does not confer authority that the thing should be done. Similarly, the potential for external oversight and independent review of exercises of broadly conferred powers may confer legitimacy, but only limited social licence. Social licence for broadly conferred powers requires demonstrably reliable controls, safeguards, gateways and escalation points to ensure reasonableness, proportionality and justified necessity in applications of that power.

“…the fact that something can lawfully be done does not confer authority that the thing should be done.”

Rightly or wrongly, data privacy laws regulating disclosure, sharing and uses of linked digital data sets are today regarded as an important control against excessive intrusions upon the rights of individuals. Where legislated authority to link data sets and use linked digital data overrides the protections of data privacy laws, demonstration of social licence becomes more important. Hence, concerns about (to take but a few recent examples) the consumer data right, national biometric matching capabilities, and decryption powers cannot simply be addressed through privacy impact assessment.

Taking humans out of any part of data flows also creates challenges for government agencies in demonstrating digital social licence. Often social licence arises (rightly or wrongly) from perceptions of human control and intermediation in data flows, and in the joining and analysis of centralised data. Concerns about unreliable data (false positives or negatives, insufficient or incomplete data) are often substantially ameliorated by the perception that the involvement of fair-minded, independent, properly trained and properly instructed government decision-makers will be an effective control and safeguard against error or overly intrusive use of powers and discretions. Automated processes, by the very fact of automation, can reduce perceived social licence for data sharing and data linkage, and data custodians’ trust in decision-making by the central information point. This outcome may arise even if the automated processes are demonstrably more reliable than human-intermediated processes. Particular care must therefore be taken to ensure both that automated processes are at least as reliable as the human-intermediated processes they replace, and that social licence is nurtured (and not undermined) by the substitution.
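
The false-positive concern can be made concrete with some back-of-the-envelope arithmetic. The numbers below are invented for illustration, but they show why a matching process that is ‘99% accurate’ can still be wrong about roughly half the people it flags when genuine discrepancies are rare:

# All numbers invented for illustration.
population = 1_000_000        # records subjected to automated matching
base_rate = 0.01              # 1% genuinely warrant compliance action
sensitivity = 0.99            # 99% of genuine discrepancies are caught
false_positive_rate = 0.01    # 1% of compliant records wrongly flagged

genuine = population * base_rate
compliant = population - genuine

true_positives = genuine * sensitivity
false_positives = compliant * false_positive_rate
flagged = true_positives + false_positives

print(f"records flagged:      {flagged:,.0f}")
print(f"wrongly flagged:      {false_positives:,.0f}")
print(f"share of flags wrong: {false_positives / flagged:.0%}")
# Roughly half of all automated flags are errors despite '99% accuracy',
# which is why automation at scale needs demonstrably stronger safeguards
# than the same check applied case-by-case by a human decision-maker.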

The nuances of earning a digital social licence

Nurturing a digital social licence thus requires careful impact assessment of proposed automated processes, to ensure they are fair, ethical, accountable and sufficiently transparent. This assessment requires more than ethics-washing of automation proposals against high-sounding principles and vague assurances of human oversight: it requires careful use of tools and methodologies to make sure that the reality of process implementation embeds these principles. And in the government context, assessing digital social licence requires remapping legislated authorities against the effect on social licence of substituting particular automated processes for human-intermediated controls and safeguards.

Privacy impact assessments as conducted today by privacy professionals cannot simply be expanded to include evaluation of digital social licence. Privacy impact can usually be assessed objectively by a privacy expert using now relatively mature methodologies and structured, broad enquiry and evaluation. However good the privacy expert, she or he cannot capture the concerns of disparate and diverse stakeholders.

Additionally, evaluating social licence for data-driven automated applications requires subjective balancing of competing perspectives, assessed and weighed using still-developing methodologies that are not yet the subject of accepted industry frameworks or standards (although there is a substantial body of experience to draw on from other processes of review for social beneficence).

Data-governance frameworks and processes must also be assessed against human-centric design principles and be forward looking. They must (see the sketch after this list):

  • address laws and regulations as they stand,
  • address currently anticipated data sources and uses, as well as potential new data sources and uses, and
  • anticipate, so far as is possible, changes in public policy and in citizens’ expectations about fair handling of the data available to businesses dealing with them.
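
As a sketch of how these three checks might be operationalised, a data-governance register could record an assessment against each proposed data use. The structure and field names below are hypothetical, not any agency’s actual schema:

from dataclasses import dataclass, field

@dataclass
class DataUseAssessment:
    data_source: str           # e.g. an agency payments data set
    proposed_use: str
    lawful_basis: str          # laws and regulations as they stand
    anticipated_new_uses: list = field(default_factory=list)  # potential future sources and uses
    next_review_due: str = ""  # revisit as policy and citizen expectations shift

register = [
    DataUseAssessment(
        data_source="payments_dataset",
        proposed_use="compliance data matching",
        lawful_basis="relevant enabling provision",
        anticipated_new_uses=["predictive analytics pilot"],
        next_review_due="annual review",
    ),
]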

Citizens will continue to raise the privacy bar that government must meet

The future path of citizen expectations will likely see the bar progressively raised for how government agencies handle sensitive data. It is time to think well beyond the blinkers of privacy impact assessment. Impact assessment of digital social licence builds on the now well-accepted and well-understood processes of privacy impact assessment, but then addresses the distinct governance and process challenges of ensuring that digital automation is implemented only with controls, safeguards, gateways and escalation points that demonstrably and reliably ensure processes and outcomes are fair, ethical, accountable and sufficiently transparent.
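
What might such ‘controls, safeguards, gateways and escalation points’ look like in an automated pipeline? The following is a minimal, hypothetical sketch with invented thresholds and field names: automated actioning is permitted only where model confidence is high and the impact on the individual is low; everything else is escalated to a human decision-maker, and every gateway decision is logged for later review.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    action: str                 # e.g. "issue_notice" (hypothetical)
    model_confidence: float     # 0.0 to 1.0
    impact: str                 # "low" or "high" effect on the person's rights
    audit_log: list = field(default_factory=list)

def gateway(decision: Decision) -> str:
    """Action an automated decision only if explicit safeguards are met;
    escalate everything else to a human decision-maker."""
    if decision.impact == "high" or decision.model_confidence < 0.95:
        outcome = "escalate_to_human_review"
    else:
        outcome = "action_automatically"
    # Accountability: every gateway decision is logged and reviewable.
    decision.audit_log.append(
        (datetime.now(timezone.utc).isoformat(), outcome, decision.model_confidence)
    )
    return outcome

print(gateway(Decision("record-001", "issue_notice", 0.99, "high")))
# -> escalate_to_human_review: high-impact decisions always reach a human.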

There are many ways to fail, and the tools and methodologies we need to deploy are immature. We need to be humble and careful in translating traditional thinking about citizen trust in government agencies into the design and implementation of digital data initiatives.
