THE PIA REVIEW: PIA ANDREWS
This is a Public Sector Pia Review — a series on better public sectors.
People often reflect on change as a thing of the past, which can miss the change happening right now. I have heard many people talk confidently in recent times about the disruption to service delivery created by the smartphone and yet, at the time, many in the public sector resisted the change, resulting in late delivery of mobile-ready services compared with other sectors.
The next wave of dramatic service-channel change is happening right now, and again, people are largely seeing it as a future issue rather than a present one. Too often I see great thinking and design work in public sectors that results in the same old website or app solutions. This article discusses the impact and opportunities of personal AI helpers (Siri, Alexa, etc.), outlines some ideas to consider when creating or designing new public services, and briefly covers some changes to infrastructure required right now if we want public services to be responsive to change and changing user needs.
Personal AI helpers
What are personal AI helpers? Well, you see them every day in myriad forms, from obvious examples like people asking questions of Siri, Alexa, or Google Assistant on their devices, to subtler cases like personalised search, or an email from a travel agent being transformed automatically into a calendar entry. Personal AI helpers are rapidly moving from gimmicky usefulness into a genuine means for ordinary people to navigate and manage the many aspects of their lives. Technologists often talk about emerging tech as if it were already normal, but my favourite example of this was an elderly taxi driver who regaled me with stories of how Siri was helping him and his wife manage their calendar, shopping lists, connections with relatives overseas, and translations, and was even teaching him to cook! He mentioned in passing that he was surprised but delighted when Siri helpfully suggested something that had come up in an earlier conversation with his wife — a story that sounded borderline horrifying to me as a technologist (from a privacy and security perspective), but to him, Siri continually listening in only made his life easier. More on that later.
Why is this relevant to public sector service delivery? Obviously, we all need to be responsive to changing user needs, but the key reason this particular change is so pivotal for public sectors is that it turns the idea of how we deliver services entirely on its head. Agencies are used to the idea that they build a service and the user comes to it through a channel the agency controls (a website, app, helpdesk, storefront, etc.). There has certainly been some work to integrate services across agencies, but it is still an inside-out view of the world through an end-to-end domain of control. Personal AI helpers are entirely outside the control of agencies, and represent a necessary shift: public sectors need to make aspects of themselves “consumable” or “mashable” by third parties. To be fair, this change has been long in the making, particularly given that no agency or jurisdiction will take on responsibility for service delivery outside its scope of accountability, and yet people’s needs necessarily traverse agencies, jurisdictions, and sectors. So “user-centred design” usually becomes limited to building your service around the needs of your users. Even life-journey-centric services, which are certainly much better than agency-centric services, assume that channel control stays with government, which is not the case when people are using personal AI helpers to help them navigate their lives.
Desired modes of service delivery
I want to briefly talk about the different modes of service delivery, because too often I hear things like “why don’t we just proactively deliver all our services to citizens” or “just get them to log in and we’ll take care of it”. User research is often too tunnel-visioned to provide a meta-view of what people expect from government. We did some research in the New Zealand government to explore how people wanted to engage with government across a range of transaction types, through the lens of a life journey. It provided some fascinating insights that, I think, are critical to reflect on when you are looking at how to design future public sector services. Otherwise, we end up assuming everyone wants a transaction when they don’t, or that informing the public is sufficient, or that automating something is appropriate.
The New Zealand Service Innovation Lab conducted open research into user needs across a life journey, taking into account the possibilities of multi-entity service delivery while setting aside the limitations of the current landscape. This work helped us realise that people like to have different types of interactions with government to meet different needs at different points of the journey. Three clear modes of service delivery, as desired by end users, emerged from the research.
We started this experiment expecting to test a future-state hypothesis of government service delivery, and ended up discovering that multi-modal service delivery is useful for dealing with the different ways people want to deal with government at different times, directly or through intermediaries. We tested these ideas formally with users.
When trying to actually interact with government, many users talked about wanting some assistance, both because it is complicated and they don’t want to get it wrong, and because having someone who understands your context can dramatically speed up the process. We found people didn’t mind too much whether it was a person or a machine, but they really wanted two things: 1) a more conversational service when transacting with government, and 2) full visibility of the interactions that happen between agencies on their behalf, giving them the ability to correct the record (where required) without delay.
Users often record the names of people, the details, and dates/times of discussions specifically to have a record for their own purposes. So the third element of this guidance/conversational mode of service delivery was the idea that a transcript of all such interactions would be readily available to you, along with the option to interact in a timely way to keep processes that matter to you on track in real-time. The notion of taking a ‘conversational’ approach to service delivery might provide some interesting exploration in designing government transactional services.
Secondly, there is always a lot of interest within governments in delivering services proactively. There is also a lot of mythology around just how far users want the government to go with this, and what is even possible. Our research found that users were quite keen on proactive service delivery, such as being notified that they were entitled to something or even automatically receiving something from government, but with some interesting limitations and conditions. Appetite for proactive delivery varied with everything from topic sensitivity to end-user vulnerability. We also found some interesting cultural behaviours in New Zealand: people did not necessarily want a financial service even if they were eligible (“someone might need it more than me”), so they wanted notification of eligibility but wanted to maintain agency over the choice to take it up or not.
Government data is necessarily retrospective, however, so it can be dangerous and intrusive to try to predict, for instance, when people are moving country, or planning a child, or preparing for bereavement. We were aware of the danger of taking proactive delivery too far.
Our users told us that something like being informed that their superannuation payments were changing due to the death of a spouse was actually useful, but that some cultures would certainly find it confronting. This led to the insight that proactive delivery should be developed in collaboration with users across cultures, and that people should be able to opt in to the proactive services they want.
There were two factors that clearly need consideration in taking a proactive approach to government service delivery, as informed by our user research in New Zealand.
- The complexity of the service — Customers clearly had a preference for low- and medium-complexity services to be delivered proactively or automatically; however, where the criteria for the service provision became more complex or needed to take into account more of a user’s context or nuance, automatic provisioning wasn’t as preferred, due to customers doubting the accuracy of government data. Whilst we can expose that data to customers as per the scenarios we modelled, it might be worth highlighting that there was a somewhat inverse relationship between the complexity of the decision and the desire for automated provisioning.
- The nature of the decision — To a certain extent, the research reflected that people were happy for automated provisioning to be in place primarily where it drives value for the person rather than where it drives value for the government. For example, people were less comfortable with the idea of automating the withdrawal of services or automation of decisions that could result in people getting ‘less’ of something or being penalised for something. So as a rule of thumb, it seemed to be the case that people said: “Automating stuff is great if it means I get more of stuff and maintain control, but when it comes to me getting less stuff or getting punished, I’d be less keen”. Clearly, we can’t look to implement a ‘two-sided’ system that differentiates; however, it provides a good indication of where messaging might need to be clear.
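The two factors above can be read as a simple gate on when automatic provisioning is even a candidate. The sketch below is purely illustrative: the `Service` fields and the `should_auto_provision` rule are my hypothetical encoding of the research findings, not any real government system.

```python
from dataclasses import dataclass

@dataclass
class Service:
    """Hypothetical service descriptor; field names are illustrative only."""
    name: str
    complexity: str          # "low" | "medium" | "high"
    increases_benefit: bool  # True if the user gets more of something
    user_opted_in: bool      # explicit consent for proactive delivery

def should_auto_provision(service: Service) -> bool:
    """Gate automatic provisioning on the two research findings:
    prefer low/medium complexity, and never automate reductions or penalties."""
    if not service.user_opted_in:
        return False                    # agency stays with the user
    if service.complexity == "high":
        return False                    # complex criteria need a conversation
    return service.increases_benefit    # only automate where it helps the user

# A simple eligibility-based payment is a candidate for automation...
rebate = Service("energy rebate", "low", True, True)
# ...but withdrawing or clawing back a payment is not, however simple the rule.
clawback = Service("overpayment recovery", "low", False, True)
```

In practice the "nature of the decision" point is less about hard-coding a two-sided system and more about where human review and clear messaging are non-negotiable, but the gate makes the asymmetry explicit.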
Help me plan
Thirdly, sometimes you just want some guidance or some information to help plan your life. We had some people say that they didn’t want to ask a question in a way that identified them in case their lives were made harder by interacting with the government. There are many times in your life when you just need to plan: to move, to consider having a child, to get married, etc. Sometimes people don’t know all the services, requirements, and implications of a life event, which can make them uneasy and also make planning difficult. For these types of interactions, we named this mode of delivery ‘Help me plan’.
The key here is that people want to be able to provide minimal information about themselves and their circumstances, in an anonymous, unauthenticated context, so they can find the services they need (including the ones they don’t already know about). We made a potential future-state mockup purposefully generic, so that a user’s needs could be met through diverse functionality, with content, business rules, and other capabilities provided by both government and non-government service providers: government providing the baseline generic service, and the private and community sectors specialising in particular user needs or market segments. Of course, having this kind of information publicly available in a machine-consumable form would provide a great means for personal AI helpers to guide people towards relevant government services and programs.
There are some examples of such ‘Help me plan’ services already — like SmartStart (for services when expecting a baby) and NZReady (for moving to NZ). These services do help people get informed and take the next steps, but the step beyond that for such an approach might be in:
- dynamic information and services based on user-provided context;
- blending/integrating life events (our research showed a number of people experiencing multiple life events at once, which affects their needs and planning); and
- supporting people to then progress with transactions where possible or desirable.
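The essence of the ‘Help me plan’ mode can be sketched in a few lines: the user supplies minimal, unauthenticated context (just the life events they are in, possibly several at once) and gets back every relevant service, including ones they didn’t know to ask about. The register format and entries here are hypothetical, assembled only to illustrate the lookup.

```python
# Hypothetical open services register keyed by life-event tags.
# (Entries are illustrative; a real register would be far richer.)
SERVICE_REGISTER = [
    {"name": "SmartStart", "life_events": {"having a baby"}},
    {"name": "NZReady", "life_events": {"moving to NZ"}},
    {"name": "Parental leave info", "life_events": {"having a baby"}},
]

def plan(life_events: set[str]) -> list[str]:
    """Return every service touching any of the user's (possibly blended)
    life events. No identity, login, or authentication is required."""
    return [s["name"] for s in SERVICE_REGISTER
            if s["life_events"] & life_events]
```

Because the query is anonymous, nothing about asking makes the user's life harder, which was exactly the concern our research participants raised.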
Potential dependencies for personal AI helpers as a genuine service channel
We can see how personal AI helpers are being used now, and there is a growing literature on their usage, benefits, risks, and challenges to personal and national agency and sovereignty. I do recommend tuning into this work, but also committing some of your time to considering two key things. First, how should you change your current assumptions and approach to cater to personal AIs?
It is certainly harder to be agile and responsive to change in our public sectors, with four-year, set-in-stone business cases, siloed lines of responsibility and infrastructure, and legacy systems that are hard to integrate. But if we adopt a more modular architectural approach to our services, providing reusable components of service delivery that are openly available (such as service registers, eligibility engines, structured content, even transactional APIs), and if we assume a constantly changing end-user channel, then we have a better chance of being technically capable of responding to user and channel change for a more dynamic and cohesive service experience. The same services catalogue could feed into many services and channels, and if that catalogue is publicly available, intermediaries like personal AI helpers or third-party service providers could provide better pathways to relevant, just-in-time government services. This is the basis of ‘Government as a platform’, which has been one of the foundations of my work for 10 years: the approach makes components of government open so intermediaries or service providers can build new services, products, and analysis on the shoulders of existing government components. The breadth of providers enabled by Government as a Platform can better serve the requirements of increasingly diverse and complex community needs.
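The "one catalogue, many channels" idea can be made concrete with a small sketch. Everything here is assumed for illustration: the catalogue schema, the API path, and the two consumers. The point is that a government website and a third-party personal AI helper consume the same published, structured data rather than one scraping the other's HTML.

```python
import json

# Hypothetical machine-consumable services catalogue, as an agency might
# publish it. Schema, service name, and API path are all illustrative.
CATALOGUE_JSON = json.dumps([
    {"service": "Best Start payment", "tags": ["having a baby"],
     "transact_api": "/api/v0/best-start/apply"},
])

def render_for_website(catalogue: str) -> list[str]:
    """One channel: a government website renders human-readable listings."""
    return [f"{s['service']}: apply at {s['transact_api']}"
            for s in json.loads(catalogue)]

def render_for_ai_helper(catalogue: str) -> list[dict]:
    """Another channel: an intermediary (e.g. a personal AI helper)
    consumes the same structured records to guide its own user."""
    return json.loads(catalogue)
```

Adding a service to the catalogue updates every channel at once, which is what makes the modular approach responsive to channel change.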
Second, what is the digital infrastructure required for maintaining trust in government service delivery, both trust from the community and internal trust? How do you audit, monitor, assure, and appeal decisions made when you don’t control the end-to-end experience? Please see the article on improving public trust in government for more ideas in this space; I’ll write explicitly about “trust infrastructure for the 21st century” another day.
I strongly recommend that public sectors mandate decision explainability and ledgers that capture, in real time, the decisions made (by humans or machines) along with the authority under which they were made (legislation, policy, etc.), especially where the ‘end user’ is an AI that may be operating on behalf of a person, organisation, or criminal network. Citizens should have access to any decision-making about them, for fairness, accountability, and appealability. We could also require ‘user research’ to include machine ‘users’ of systems, which would both improve appropriate machine usability (like personal AIs) and naturally design against nefarious machine usage (like Cambridge Analytica or criminal AIs). This means designing for MX (Machine Experience), not just UX (User Experience) or DX (Developer Experience), because a machine, like a personal AI helper, is not going to read your terms and conditions or sign an NDA before reusing your content to provide information to its end user. Indeed, it may be clever enough to tick your boxes and emulate a human, while hardly being deterred by legal, criminal, or financial disincentives. So to ensure personal AI helpers can provide good services based on a reality that includes the relevant services from your agency or jurisdiction, you need to make publicly open and machine-consumable what you can, whilst more effectively limiting access where required (such as for high-trust transactions).
Security today is not just about locking the metaphorical gates. It is about monitoring usage, ensuring the right outcomes are being realised, and being able to detect and respond when change is required. The various “digital service standards” around the world therefore need to also require mandatory real-time monitoring of service usage patterns aligned to policy and human quality of life measures to ensure things are trending in the right direction, to understand typical and atypical patterns, and to be able to identify and respond to changes as they happen, rather than months or years after the fact (which is what happens when you leave impact assessments to research projects or inquiries). This also means bringing policy folk into implementation, as partners in ensuring the policy intent is being met on an ongoing basis.
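Real-time monitoring of usage patterns can start very simply: keep a rolling window of recent usage and flag observations that deviate sharply from it, so policy partners can investigate as the change happens. The window size and threshold below are illustrative; this is a toy detector, not a production monitoring pipeline.

```python
from collections import deque
from statistics import mean, stdev

class UsageMonitor:
    """Minimal sketch of real-time usage-pattern monitoring: flag a usage
    count that deviates sharply from the recent rolling window.
    Window size and threshold are illustrative assumptions."""

    def __init__(self, window: int = 7, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent usage counts
        self.threshold = threshold           # z-score cut-off for "atypical"

    def observe(self, count: float) -> bool:
        """Record one observation; return True if it looks atypical."""
        atypical = False
        if len(self.history) >= 3:  # need a few points before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(count - mu) / sigma > self.threshold:
                atypical = True
        self.history.append(count)
        return atypical
```

Steady daily usage passes quietly; a sudden spike (or collapse) in applications for a service is surfaced immediately, rather than months later in an impact assessment.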
Finally, I want to briefly encourage everyone who reads this to consider how to leverage the internal intelligence about service delivery within your own organisation. Personally, I am fascinated by emerging patterns and trends, and how we collectively respond to them. Often individuals don’t have the time or space to see and respond to broader change as it emerges, particularly when that change affects their day-to-day work, but someone, somewhere, always knows about it, because it is affecting them directly, and we don’t empower or leverage that intelligence well enough in public sectors. The habits and assumed sanctity of inherited processes can get significantly in the way of our collective responsiveness, particularly in hierarchical organisations like government, where change is most keenly felt by the people closest to the front line of implementation, be it service delivery, policy reform, regulation, or anything else. Hierarchical structures are capable of highly effective feedback loops for important information, as any military person can tell you, but in most public sectors I’ve worked in there is certainly a need to enable and empower more bottom-up intelligence, rather than relying on top-down or outside intelligence. This is where embedding servant leadership in the senior executive is critical, as is recognising the genuine and legitimate value of internal intelligence. Internal feedback loops are the most important way to keep your organisation responsive, resilient, and effective in the face of change.
To my mind, ‘good services’ in the public sector sense need to be measured by two key criteria: their intended policy impact and their holistic impact on society. Service delivery is commonly measured by ‘customer satisfaction’, completion rates, cost to serve, and the like. But if the purpose of a new rebate or service is to keep people comfortably in their homes for longer, to reduce pressure on the health and aged care systems, and you don’t measure that, then even a great service can have a counterproductive or negative impact on people and communities. We all need to identify the intended policy outcomes and then monitor relevant indicators as we make or change public services, so we can tell whether our efforts have made the expected impact. My strong recommendation is to also design services, and how we measure them, according to quality-of-life indicators (like the NZ Living Standards Framework or the NSW Human Services Outcomes Framework), not just user needs, so we can understand the holistic impact of services. Additionally, if we always take a values-led approach, we have a better chance of designing, building, and running government services that reflect the changing values of our communities.
- This section has some content republished from the New Zealand Digital Government Blog under the Creative Commons By Attribution Licence. Please see that blog for video content, links to research and more.