Trust infrastructure for digital governance and the 21st century



THE PIA REVIEW: PIA ANDREWS. This is a Public Sector Pia Review — a series on better public sectors.

Technology is increasingly embedded in the processes and decision-making of government. As this happens, we’re discovering a challenging paradox: technology offers great opportunities for better service delivery, better policy, better governance, and more informed decision-making, but it also brings greater risks of reduced accountability and auditability, of entrenching biased or inequitable outcomes at scale, and of making it harder for citizens to see how decisions were made or to appeal them.

It is clear a careful rethink is needed. As part of any digital transformation or digital government strategy, we need to consider how we ensure visibility and traceability of technology-enabled decisions and the authority (and legality) behind them. We also need to rethink our approach to security and risk. Otherwise, we risk not only creating an unaccountable black-box approach to public governance, but plummeting public trust in our public institutions, with implications for social justice and economic stability.

To renew and maintain public trust, the biggest shift needed is from ‘getting trust’ to ‘being trustworthy’. Trust isn’t given, it’s earned. Too many efforts in this space start from the premise that ‘once people understand the benefits they’ll give us social licence to do whatever is needed’. It would be far more helpful to ask people what it would take for them to trust us. I ask people that question all the time, and the answers are often surprising. These are all things to discuss authentically, early, regularly, and openly, with citizen engagement driving policy development.

Many thanks to Tim De Sousa, Mark Mckenzie, and Alison Kershaw for peer reviewing this post.

Public sectors are increasingly prolific users of big data, AI, and more, but we also need to be especially responsive to customer use of AIs that interact (or will interact) with public sectors as well as hostile uses of technology. While we scramble to keep pace and leverage new opportunities, we must also hold ourselves to the highest possible standard of accountability, integrity, and transparency so that the communities we serve can trust us.

This Mensa Canada article put it most concisely, and it is even more true and critical for public sectors than for companies:

Trust is the currency of business. Companies which provide context, ensure transparency, and maintain auditability for their AI systems and algorithms will prosper. These companies will create intelligible AI, and in turn gain their customers’ trust.

I have found the differing responses from people on this topic to be fascinating, both inside and outside the public sector. Many seem to fall into either the camp of full believers or full skeptics: “technology is great and if you say otherwise you are a luddite!”, or “technology is only making things worse and you are removing humanity from human services!”, both of which start from a position that is hard to engage with constructively. I would like to suggest it is important that we all take a balanced approach so we (and the people we serve) can benefit from the opportunities while we also actively mitigate the very real risks and issues.

Personally, I am both excited and concerned. I’m excited about the new opportunities in government, like test-driven regulation, policy difference engines, AI-supported service delivery, genuine consensus-driven governance, and many other optimistic possibilities that all leverage technologies and the internet. I’m concerned about the use of big data to automate normative outcomes (which easily entrenches biased assumptions, like the issues with AI hiring or Face2Gene), autogenerated content that targets children for profit, proactive delivery of services that people don’t want automated, social credit systems, and a variety of other uses of technology that can hurt people without recourse. Public sectors can’t stop all the terrible misuses of technology, but we can at least ensure our own systems and applied use of tech are ethical and aligned to the values of the people and communities we serve. I believe it is the responsibility of every public servant to do their best to ensure the best possible public good outcomes and to ensure gaps don’t emerge that could create unethical, unaccountable, or inequitable outcomes.

“It is the responsibility of every public servant to do their best to ensure the best possible public good outcomes and to ensure gaps don’t emerge that could create unethical, unaccountable, or inequitable outcomes.”

I’m also concerned that so many people designing new programs of service delivery or regulation start with user-centred design that assumes only human users, and miss the need to also design for machine ‘end users’. Designing with machines as users in mind would be a handy trick to ensure good human outcomes by enabling positive machine usage (like personal AI helpers) and better mitigate against bad machine usage (like phoenix AIs). I do recommend people check out the emergent work of the 3A Institute, which has a range of leading researchers and organisations around Australia involved, and is actively exploring this space.

We are in a rapidly changing world, so why wouldn’t we actively, holistically, and continuously consider what changes we need to make to operate most effectively and appropriately? The simple answer, usually, is that no one has time to plan outside their immediate day-to-day pressures. So step one in creating fit-for-purpose trust infrastructure (or any other strategic, policy, or technical futures planning) is to free up a little time. See my article on enabling innovative teams in public sectors if this is your first barrier.

This article presents ideas for the sort of ‘trust infrastructure’ we need to improve and maintain public trust. It talks about explainability, oversight and digital governance, how to enable auditing and appealability, and what you could be considering right now to contribute to public trust infrastructure, even in small ways. This article doesn’t talk about behaviours, culture, practices, processes, politics, or the barriers to oversight and accountability of outsourced systems. If you are interested in any of these, and some ideas for how to build trust in government more broadly, please see the Pia Review on Dissecting the Recent Recommendations for Renewing Trust in Government.

Where do you start when trying to design trust infrastructure?

When you start talking about trust infrastructure, it can get complicated quite quickly, as people have varying definitions of accountability, transparency, traceability, and so on. But I would suggest your efforts can be focused and shaped by three really simple, user-centred questions. These would help you build systems that are trustworthy, and therefore capable of being trusted.

  1. How would you audit the process and decisions?
  2. How would an end user appeal a decision?
  3. What does the public need for you to be considered trustworthy?

Almost anything we do in government needs a solid, demonstrable answer to all of these questions, and the questions take a user-centred view (where auditors and the people affected by or consuming the service are the end users). Why not actually map the user journey for the first two questions? Doing so would likely reveal the need for real-time and perpetual decision capture, traceability of the authority (like legislation or policy) behind a decision, and discoverability and communication of decisions to end users. If you understand and design an optimum user experience for auditing and appealing the decisions or outcomes of your work, then you have likely designed something that is quite trustworthy. On the third question, one that is unique and critical for public sectors to be effective, why not ask people what would make you trustworthy, rather than just asking for (or demanding) trust? A little user-centred design around how to be seen as trustworthy by the people and communities that need and rely on us every day would go a long way, noting that the answer will likely differ across agencies and public sector functions.

For people working in AI or data analytics who respond that explainability is all too hard, that their work is only a contributing factor to a decision and not the decision itself, and that explainability is therefore only the responsibility of the ‘business owner’, I would encourage you to consider explainability your job too. If anyone is substantially making a decision based on something you have produced, then you are responsible, at least in part, for the outcomes of those decisions. If not formally, then at least morally. I’ll discuss the challenges around explainability with regard to AI further below, as there are genuine challenges, but it has been interesting to find that agencies with skills in traditional processes around evidence (such as intelligence agencies) tend to naturally use technology in ways that maintain traceability and explainability, so that everything stands up in court.

Explainability

Capturing and assuring the explainability of a decision or action taken by the public service is critical for the ability to audit, appeal, and maintain the integrity of our public institutions. It is also critical for ensuring the actions and decisions are lawful, permitted, and correctly executed. As such, it is important to ensure and regularly test the end-to-end explainability and capture of that information for the work we do in the public sector, especially where it relates to anything that directly impacts people — like service delivery, taxation, justice, regulation, or penalties.

In fact, the public sector has ALWAYS been required to provide explainability in administrative decision-making. Administrative law principles require that decision-makers only make decisions that are within their power, only take into account relevant evidence, and provide their decision together with reasons for the decision. The public sector is uniquely experienced and obligated in this respect. So, any and all technology-enabled decision-making systems should be compliant with administrative law principles.

It was only in late 2018 that we had a landmark court case in Australia (Joe Pintarich v Deputy Commissioner of Taxation) which ruled that an automated piece of correspondence was not considered a ‘decision’ because there was no mental process accompanying it. This raises a huge question about the legitimacy of all machine-generated decisions, as the dissenting judge stated in substantial detail, and it should be a major driver for agencies to invest in and mandate the capture of explainability in any significant decision-making, so that the relevant and traceable authority is recorded and those decisions can’t easily be overturned by this precedent.

The “rules as code” work that is taking off around the world provides another piece of the puzzle. If the legislative, regulatory, or policy authorities for your service or decision-making were all available as authoritative code from a persistent source (like api.legislation.gov.au), then you could capture the authority for decisions in real time. Imagine simply and immutably capturing ‘based on x, y, and z legislation, the policy rules of a, b, and c, and this data, this decision was made’. Capturing the legal basis of decisions requires that legal basis to be available to, and persistently referenceable by, machines.
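To make that concrete, here is a minimal sketch of what capturing a decision together with its authorities might look like. The record structure, field names, and the placeholder legislative reference are illustrative assumptions, not an existing government schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(decision, authorities, inputs):
    """Capture a decision together with the authorities it relied on.

    `authorities` would be persistent references to machine-consumable rules
    (for example, a section of legislation published at a stable URI), and
    `inputs` the data the rules were applied to.
    """
    record = {
        "decision": decision,
        "authorities": authorities,   # stable references to legislation/policy
        "inputs": inputs,             # the data the decision was based on
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes later tampering detectable once the record is
    # written to an append-only store.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Hypothetical usage: the legislative URI fragment below is a placeholder,
# not a real endpoint path.
example = record_decision(
    decision="rebate granted",
    authorities=[
        "https://api.legislation.gov.au/.../section-7",  # placeholder reference
        "agency-policy:rebate-eligibility:v3",
    ],
    inputs={"income_test": "passed", "residency_test": "passed"},
)
print(example["record_hash"])
```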

Explainability also requires visibility and discoverability. Currently, people have very little visibility of the decisions made by government agencies with or about them, and this leads to the onus often being on citizens or businesses to prove why a government department did something. This becomes a burden for people who are already time-poor, and particularly when people are vulnerable and already under significant other burden. Of course, just making decisions publicly available doesn’t mean everyone has the skills, digital literacy and capacity to use the information, but it is a good start!

If decisions made about or with citizens regarding service delivery were captured in real time, in a form that citizens could access, then there would be greater transparency and empowerment for citizens receiving services. For instance, ‘x received this rebate/service/entitlement on this date based on this authority/rules’. This citizen’s ledger could also ensure greater accountability and auditing of government service delivery.

Storing the outcome of a decision or validation of claim on a persistent ledger for citizens to access details about their interactions with government could improve visibility, trust, auditing, and appealability of decisions. Such a ledger doesn’t currently exist anywhere so far as I know, and obviously there are risks in such an approach, but it could also provide extremely beneficial information for finding patterns of unusual use, like a birth certificate being invoked in multiple states on the same day, which would indicate potential identity theft. If such a function were co-designed with citizens, I believe we could ensure the right balance of privacy and accountability.
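As a rough sketch of how such a ledger might work, the snippet below chains each entry to the previous one so that history cannot be silently rewritten, and includes a simple check for the kind of same-day, multi-state credential reuse described above. The class, field names, and example data are all illustrative assumptions:

```python
import hashlib
import json
from collections import defaultdict

class CitizenLedger:
    """Append-only record of a citizen's interactions with government.

    Each entry embeds the hash of the previous entry, so any later attempt
    to rewrite history breaks the chain and becomes detectable.
    """

    def __init__(self):
        self.entries = []

    def append(self, event):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        entry = {"event": event, "prev_hash": prev_hash}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

def flag_same_day_reuse(entries, credential="birth_certificate"):
    """Flag a credential invoked in more than one state on the same day."""
    states_by_date = defaultdict(set)
    for entry in entries:
        event = entry["event"]
        if event.get("credential") == credential:
            states_by_date[event["date"]].add(event["state"])
    return {d: s for d, s in states_by_date.items() if len(s) > 1}

ledger = CitizenLedger()
ledger.append({"credential": "birth_certificate", "date": "2019-10-14", "state": "NSW"})
ledger.append({"credential": "birth_certificate", "date": "2019-10-14", "state": "QLD"})
print(flag_same_day_reuse(ledger.entries))  # {'2019-10-14': {'NSW', 'QLD'}}
```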

Do you really need to share that data?

Most governments are trying to create better services and proof of identity through digital initiatives, starting with the assumption that sharing data is necessary for better services. Meanwhile, trust in public institutions is rapidly dropping, as is the social licence for sharing sensitive government data. Perhaps we need to explore innovative ways to create fundamentally better, more secure, and more trustworthy modern services without requiring bulk data sharing — for instance, we could use verifiable claims to assure certain conditions of eligibility rather than copying and pasting personal data around the system.

Public institutions are uniquely responsible for a lot of information. This includes information about people, businesses, public services, and the economy, from high-integrity identity attributes like birth or marriage certificates, to eligibility rules and public service registers, to regulatory requirements and macro economics. Government departments also have a significant amount of administrative data from which information can be derived or inferred. But copying and pasting this data around the sector, even when permitted, creates duplication of effort, increased costs of processing and security, inconsistencies, and additional risk.

A lot of digital initiatives are limited in impact by trying to automate and streamline existing business processes, rather than solving problems in modern and more scalable ways. Modular and federated approaches to digital architecture can enable data to stay at the source and be better leveraged across the system, reducing many of the issues above, while simply verifying a claim where possible (‘does the person meet the age/means test requirement’) dramatically reduces the need to share, process, and store sensitive data.

For customers of a government service, or of non-government services that require validation against a trusted government information source (such as a birth certificate), we could dramatically improve the experience by building verifiable claims for common service delivery needs. Imagine applying for a service and being asked “do you give us permission to check that you meet the means test and other eligibility criteria for this service?”, with the results then visible to you for validation and for your records into the future. No paperwork (paper or digital), no copying and pasting, no processing, and above all, a more dignified experience. If done properly, the service provider doesn’t even need the personal information (like the age or address of a person when validating they are over 18 to sell them liquor).
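A minimal sketch of the verifiable-claims idea, assuming a hypothetical registry held at the source by the issuing agency. The verifier only ever learns a yes/no answer, never the underlying personal data; the function and field names are illustrative, not a real government API:

```python
from datetime import date

# Hypothetical registry held at the source by the issuing agency; in practice
# the data never leaves that agency, only the answer does.
REGISTRY = {"person-123": {"date_of_birth": date(1990, 5, 4)}}

def verify_claim(person_id, claim, consent_given):
    """Answer an eligibility question without releasing the underlying data.

    The verifier (e.g. a liquor retailer) only ever learns True or False,
    never the date of birth or address itself.
    """
    if not consent_given:
        raise PermissionError("Citizen consent is required to verify a claim")
    record = REGISTRY[person_id]
    if claim == "over_18":
        today, dob = date.today(), record["date_of_birth"]
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return age >= 18
    raise ValueError(f"Unknown claim type: {claim}")

print(verify_claim("person-123", "over_18", consent_given=True))  # True
```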

Oversight and accountability

My strongest recommendation for oversight and accountability is public testability. You should have your rules, test cases, eligibility engines, algorithms, or programmatic interfaces to AI/APIs available for people to test the outcomes against expectations and the rules. This helps people feel greater confidence in the decisions made on the back of those systems. This was also a big part of our Better Rules (drafting better legislation/regulation) and Rules as Code (human- and machine-consumable rules) work in New Zealand and New South Wales, where my teams developed prototype ‘eligibility engines’ that provided both the technical utility of rules for service delivery and the traceability and visibility of the complex web of legislation, regulation, and policy involved, which could then be publicly tested.
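As an illustration of public testability, here is a toy eligibility rule published together with its test cases, so that anyone can run the cases (or add their own scenario) and compare the outcomes with their reading of the policy. The rule and thresholds are invented for the example and are not the actual New Zealand or NSW eligibility engines:

```python
def eligible_for_rebate(income, age, is_resident):
    """Illustrative rule: resident, aged 65 or over, income under $50,000."""
    return is_resident and age >= 65 and income < 50_000

# Published test cases: anyone can run these, or add their own scenario,
# and compare the outcome with their reading of the policy.
PUBLIC_TEST_CASES = [
    ({"income": 30_000, "age": 70, "is_resident": True}, True),
    ({"income": 60_000, "age": 70, "is_resident": True}, False),
    ({"income": 30_000, "age": 60, "is_resident": True}, False),
    ({"income": 30_000, "age": 70, "is_resident": False}, False),
]

for inputs, expected in PUBLIC_TEST_CASES:
    assert eligible_for_rebate(**inputs) == expected, (inputs, expected)
print("All published test cases pass")
```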

It has been interesting to see some community organisations doing the work to make Hansard and other foundations of the Australian democratic system testable. Open Australia is a great contributor in this regard.

What oversight is there around human and automated decisions, and in particular the use of AI and assuring processes align with genuinely permitted rules as laid out in legislation or law?

There are a few ways we could improve the trustworthiness of government systems. There are of course governance mechanisms that would help, like independent oversight through citizen committees and third parties that are not bound to the controls, agenda, or influence of government. But, again, for this article I’ll focus on digital governance approaches.

The most interesting government-led work happening in the world, that I know of, is in New Zealand and Canada. In New Zealand, the national statistics agency StatsNZ has created a holistic approach to algorithmic transparency and accountability that includes an Algorithm Charter committing government agencies to improving transparency and accountability in their use of algorithms over the next five years. This is a response to the recommendations of the Algorithm assessment report in 2018. When you consider this is the same country that is also creating a Digital Bill of Rights, you can see a pattern of trying to ensure good human outcomes are prioritised in the work of the public sector.

The Canadian Government created an Algorithmic Impact Assessment tool, which provides a useful framework for categorising and applying proportionate governance, accountabilities, limitations, and oversight on the use of algorithms. The New South Wales and federal governments of Australia are developing ethical AI frameworks, which will be a good start domestically, and I know there is a lot of work and consideration into explainability in public sector usage of AI happening all around the world, but this is a quickly evolving space and I don’t know of anyone who has it fully under control yet.

But frameworks alone will not solve this problem. Indeed, sometimes they form part of the problem. When people are focused on compliance with a policy, framework, or governance process, they are focused on assuring great inputs to a system, when what really matters most is the output. You need oversight and accountability of both inputs and outputs. Your intent or compliance almost doesn’t matter if you create enormous harm. So, again, measuring and monitoring the impact of policy, services, legislative change, and regulation is critical.

Certainly maintaining a register of medium-to-high-risk algorithms and AI usage across governments might help with oversight and governance, but it might also be useful to build certain minimum standards of explainability, auditing, traceability, and oversight into the various digital design standards around the world.

As a novel idea, if you were to monitor all public sector programs for impact on quality of life, regardless of what tools or machines were used, you’d also have a chance of identifying and mitigating where programs, systems, AI or anything else was having a negative impact on humans and society, without limiting the scope of intervention to a particular technology, channel or assumed risk.

Security and digital integrity

If you can’t secure your systems and data, then you lose trust. We’ve seen many examples of this in recent years and the overarching lesson is that there is no real excuse for critical national digital infrastructure to be compromised. You can’t blame the vendor, or the internet, or current processes or even the people or machines that are cracking into your systems. The security, integrity, and highly proactive monitoring and realtime mitigation of threats to your systems are 100% your responsibility. So, what are some tips for security approaches that support public trust?

Firstly, it is useful to expand upon the traditional locked-gate philosophy, where end users are categorised and granted access according to levels of trust and held accountable according to the terms of use, and embrace real-time pattern recognition and response systems that continuously monitor for and respond to atypical patterns of usage. I am always surprised by how easily people apply a tick-box mentality to security and are uncomfortable thinking critically beyond the compliance requirements. I remember a particular case years ago when the department security folk tried to penalise me for not applying a patch to a service my team was running, even though the patch was for the Windows operating system and we were running Linux. It took days to get it formally agreed and documented that we weren’t non-compliant with security requirements. And yet we made a significant effort to ensure we were monitoring users, usage, system changes, and data integrity, which wasn’t of much interest to anyone, but we did it because we wanted people to trust our data.
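To illustrate the pattern-recognition idea, here is a deliberately crude sketch that flags users whose request volume jumps well above their own baseline. Real monitoring would look at many more signals (access patterns, data volumes, time of day); the threshold and field names are assumptions for the example:

```python
from statistics import mean, pstdev

def find_atypical_usage(request_counts_by_hour, threshold_sigma=3.0):
    """Flag users whose latest hourly request volume jumps well above baseline.

    `request_counts_by_hour` maps a user or system account to a list of
    hourly request counts, with the most recent hour last.
    """
    flagged = {}
    for user, counts in request_counts_by_hour.items():
        if len(counts) < 2:
            continue
        baseline = mean(counts[:-1])
        spread = pstdev(counts[:-1]) or 1.0   # avoid a zero threshold
        latest = counts[-1]
        if latest > baseline + threshold_sigma * spread:
            flagged[user] = {"baseline": round(baseline, 1), "latest": latest}
    return flagged

usage = {
    "analyst-7": [12, 15, 11, 14, 13, 240],        # sudden spike worth reviewing
    "service-api": [1000, 990, 1010, 1005, 998],   # busy but consistent
}
print(find_atypical_usage(usage))  # {'analyst-7': {'baseline': 13.0, 'latest': 240}}
```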

One of the most important enablers of modernising your approach to security is adopting agile, test-driven, and modular approaches to your security infrastructure, which allows you to rapidly prototype, properly address genuine risk, and then scale what works. I was very impressed by the internal security compliance work done by the Australian Government Department of Agriculture, as presented at a recent international security conference by Mark Mckenzie, so if you want validation that agile methods can drive great security solutions and outcomes, they are a great case study. Mark wrote a great primer on why security through obscurity simply doesn’t work (back when we were at the DTA).

Secondly, regularly war game your security approach. Actively try to understand your own vulnerabilities and engage with external and genuinely independent experts, researchers, and civic activists who can help you identify these vulnerabilities for better public outcomes. When you involve a range of internal folk, including senior managers, it doubles as a useful education exercise, because it will quickly reveal not just technical issues but also any gaps in process, communications, and areas of responsibility. You should engage external people too, though, or the exercise can miss things. I know some public servants are nervous about engaging with genuinely independent folk (as opposed to just a contractor or vendor), but I’ve always been impressed by the work of applied researchers like Vanessa Teague, Chris Culnane, and Ben Rubinstein, each of whom I would trust to bring high-integrity testing to the table and who would be trusted by others if they were to give something a clean bill of health.

Thirdly, a simple but powerful tool for improving digital security is to treat machines as ‘users’ from the start. If your security framework or digital design standard required policy makers, regulators, and service designers to consider machines as ‘end users’, you would get two areas of security improvement. It would help to:

  • Plan effective and proportionate security approaches to enable appropriate machine-to-machine usage, such as business systems of regulated entities or personal AI helpers.
  • Identify and plan for the security approach to also mitigate likely or potential inappropriate machine-to-machine usage, like Distributed Denial of Service attacks (which are BAU for most government services), criminal or nefarious usage of the system or data, or software that reverse engineers personal information through brute-force attacks.

Basically, if you assume machines will interact with your systems, whether there are APIs or not, you can design appropriate and proportionate security approaches from the start rather than applying a compliance approach at the end.
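As a small sketch of what treating machines as end users might mean in practice, the snippet below applies proportionate rate limits by client class: registered business systems get generous limits, anonymous machine traffic gets tighter ones, and anything over its limit is refused. The classes, limits, and names are illustrative assumptions, not a prescribed design:

```python
import time
from collections import defaultdict, deque

# Illustrative policy: known, registered machine clients get generous limits,
# anonymous machine traffic gets tighter ones, humans sit in between.
REQUESTS_PER_MINUTE = {
    "registered_machine": 1000,
    "human_browser": 120,
    "anonymous_machine": 30,
}

class ProportionateLimiter:
    """Sketch of per-client-class rate limiting over a sliding window."""

    def __init__(self, limits=REQUESTS_PER_MINUTE, window_seconds=60):
        self.limits = limits
        self.window = window_seconds
        self.history = defaultdict(deque)

    def allow(self, client_id, client_class):
        now = time.monotonic()
        recent = self.history[client_id]
        while recent and now - recent[0] > self.window:
            recent.popleft()                      # drop requests outside the window
        if len(recent) >= self.limits[client_class]:
            return False                          # over this class's limit
        recent.append(now)
        return True

limiter = ProportionateLimiter()
print(limiter.allow("business-system-42", "registered_machine"))  # True
```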

Finally, publish your security approach for public access. Obviously not the level of detail that would create an opportunity for bad actors to compromise your system, but share your broad approach to help citizens, businesses and your clients/users/customers to have confidence in the digital integrity of your systems. If you want a good example, I’m particularly impressed by the security and data integrity approach taken by the NSW Data Analytics Centre, which includes a detailed and public security statement and outline of their data governance.

Ensuring appealability

How could a citizen appeal a machine-generated decision, let alone a human-generated one? Mapping and meeting citizens’ needs for this very important “user journey” would likely lay the foundations of trust infrastructure for citizens. It would necessarily require a way for citizens to access decisions about them, the explanation and authority behind those decisions, and an appeals process that is simple and equitable to access and respects the time and dignity of the citizen.

Interestingly, just the day before publishing this article, a parliamentary Advisory Report into the Identity-matching Services Bill 2019 and the Australian Passports Amendment (Identity-matching Services) Bill 2019 recommended both bills be strengthened to provide protections for Australian citizens. Many thanks to Leanne O’Donnell for highlighting it on Twitter at such a convenient moment. The Report includes Recommendation 3:

“The Committee recommends that the Australian Passports Amendment (Identity-matching Services) Bill 2019 be amended to ensure that automated decision making can only be used for decisions that produce favourable or neutral outcomes for the subject, and that such decisions would not negatively affect a person’s legal rights or obligations, and would not generate a reason to seek review.” [emphasis mine]

This recommendation seems to subtly acknowledge that decisions that have a negative impact on people require greater due process than those of a positive or neutral impact, but I would suggest that all decisions that impact a person, whether they are a citizen or not, need to be explainable, immutably recorded, accessible, and appealable, because ‘positive’ or ‘neutral’ are somewhat in the eye of the beholder.

I hope this article has provided some food for thought, and I look forward to the discussions moving forward.
