
Moving Towards Responsible Government Use of AI in New Zealand

Alistair Knott
Dept of Computer Science, University of Otago

New Zealanders like to think of their country as an innovator in public policy. New Zealand was the first country to give women the vote in 1893, the first Western country to go nuclear-free in 1987, and among the first countries to focus its budget on ‘wellbeing’ in 2019. New Zealand also has form as an innovator in public technology, pioneering the use of electronic medical records and EFTPOS systems. As New Zealand is a small, forward-looking, tech-savvy country, it feels natural for its government to look for initiatives in the area of AI regulation. New Zealand is well known internationally for its role in initiating the Christchurch Call, which commits countries and tech companies to regulating online extremist content, but the New Zealand Government has also been making moves to regulate its own use of AI. This story isn’t so well known, so I will tell some parts of it here.

Why focus on regulating government use of AI?       

Regulating AI technologies is a very hard task. To do it properly, governments need to gain expertise in new technologies and regulatory mechanisms. For many reasons, it makes sense for governments to begin by looking at their own use of AI methods, within their own departments and public bodies. Firstly, decisions by government bodies can have particularly large impacts on citizens’ lives. Government decisions can lead to people being imprisoned, or deported, or separated from their families, or allocated or denied long-term benefits of various kinds. If AI systems are involved in making such decisions, the government has a particular duty to make sure they work right. Secondly, the government has access to the systems it uses. AI systems used in commerce are normally the private property of companies, so the government has very limited ability to inspect what they do. But governments have unrestricted access to the systems they use in their own departments. Thirdly, regulation of government AI systems can be guided by a very clear overarching principle: the principle of open government, which dictates that citizens have a right to know how their government works. There is no comparable right for citizens to know how private companies operate. Finally, it’s far easier for a government to enforce rules about its own AI systems than rules about commercial AI systems. Regulations on companies often conflict with their commercial self-interest, leading them to seek ways to avoid rules; there is no comparable conflict if a government decides to ‘put its own house in order’. In addition, many of the commercial companies in line for AI regulation are multinational giants: regulating Facebook or Google is a matter for long and complex international negotiations. By contrast, a government has full control over any changes it wishes to make to its own use of AI systems. 

What is interesting is that the AI technologies used by governments are largely the same as the AI technologies used in commercial companies. And many of the concerns about these technologies are the same, whether they are applied in government or commercial spheres. Because regulation of government AI is easier than regulation of commercial AI, for the reasons just discussed, government policymakers can cut their teeth on regulation of ‘their own’ AI systems, in preparation for the greater complexities of regulating commercial and international AI systems.

A research project on regulating government AI in New Zealand

I was recently involved in a research project at the University of Otago, looking at how the New Zealand Government should regulate its own use of AI methods. We approached this task with the ‘teeth-cutting’ mentality just described. We wanted to define regulatory strategies which could be readily used within the New Zealand Government – but which would also serve as a foundation for the more complex discussions that are now happening about AI regulation in commercial contexts. In our report, we defined a particular category of AI model, called a predictive system, and we identified four key concerns that relate to use of models of this kind.   

A predictive system is a computer program that learns to map a set of inputs onto a set of outputs, through exposure to a set of training examples. Predictive systems can be used for a myriad of purposes. In machine vision, a system might take an image of an object (a set of pixels) as input, and produce the category of the object (‘dog’, ‘chair’, ‘cup’) as output. In criminal justice, a predictive system might take as input a set of facts about a defendant before the court, and deliver as output an estimation of the defendant’s risk of reoffending (‘high’, ‘medium’, ‘low’). Importantly, this definition of predictive systems encompasses many of the cutting-edge AI systems used in government (and industry), but also many of the older predictive tools that have been in use in government (and industry) for decades. Equally importantly, these systems are all evaluated in the same basic way. After a system is trained on a set of training examples, we can give it a set of test examples it didn’t see during training, and see if it produces the right outputs for these examples.
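
To make this train-and-test evaluation concrete, here is a minimal sketch in Python using the open-source scikit-learn library. The data and features are randomly generated purely for illustration and do not correspond to any real government system; the point is simply that performance is measured on examples the system never saw during training.

```python
# Minimal sketch of a predictive system: learn an input-output mapping from
# training examples, then measure accuracy on held-out test examples.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # 1000 cases, 5 input features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a made-up binary outcome to predict

# Hold out 20% of the examples as a test set the model never trains on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)               # 'training': learn from examples
predictions = model.predict(X_test)       # apply the model to unseen examples
print(f"Test-set accuracy: {accuracy_score(y_test, predictions):.2%}")
```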

Predictive systems can be extremely useful in an organisation: they can automate repetitive decisions, and bring large amounts of information to bear on each decision, in ways that are hard for humans to replicate. But there are several concerns to consider. Our report highlights four.  

  • A key concern is performance. Baldly stated, how well does a given system work? Is it 100% accurate, or only 60% accurate? If a government department uses a predictive system, the public has a right to know its performance. Performance scores are normally available to the public on request – but they are often hard to get.
  • A related concern is bias. The data used to train a predictive system can incorporate many types of bias. Sometimes bias comes from the fact that training data is not gathered equally from all groups in society. At other times, it reflects existing biases in society. In New Zealand, for instance, a system that predicts likelihood of reoffending might ‘learn’ from its training examples that Māori people are more likely to reoffend – a fact which has its origin in longstanding human biases in reporting and responding to crime. Bias towards a given group can be readily shown by reporting the system’s performance on that group, as well as its average performance (a simple check of this kind is sketched after this list). For instance, we might find that a system tends to overestimate the reoffending risk of Māori, and underestimate the risk for non-Māori.
  • A third concern is transparency. Some predictive systems are so complex that it is very hard to see how they reached a given decision. A citizen who is the subject of a government decision often has the right to an explanation; if a predictive system was involved in making this decision, this arguably requires the system’s decision to be explainable too.
  • A final concern is about human use of predictive systems. Predictive systems often function as ‘aids’ to a human decision-maker who is ultimately responsible for decision-making. But if a system performs reasonably well, it’s very easy for its human user to fall into ‘auto-pilot’ mode, and rely more on the system than is warranted.
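
The bias check mentioned above can be as simple as reporting a system's performance separately for each group alongside its overall performance. The sketch below illustrates this in Python; the group labels, predictions and outcomes are synthetic, and the gap shown is only an example of the kind of disparity one would look for.

```python
# Sketch of a simple per-group bias check: report performance for each group
# alongside overall performance. All values here are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["group_a", "group_b"], size=n)   # group membership
actual = rng.integers(0, 2, size=n)                  # true outcome
predicted = rng.integers(0, 2, size=n)               # system's prediction

print(f"Overall accuracy: {accuracy_score(actual, predicted):.2%}")
for g in ["group_a", "group_b"]:
    mask = group == g
    acc = accuracy_score(actual[mask], predicted[mask])
    rate = predicted[mask].mean()
    # A large gap between groups, or systematic over-prediction of risk for
    # one group, is a signal that the system may be biased.
    print(f"{g}: accuracy {acc:.2%}, predicted high-risk rate {rate:.2%}")
```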

Our report recommended that the New Zealand Government establish an independent regulatory agency to oversee its use of predictive systems. The agency would maintain a register of predictive systems used in government departments, and produce an annual report indicating the performance of each system, in a readily understandable format. The report would also indicate the performance of each system for various at-risk groups, to check for bias. The independent agency would also disseminate best-practice guidelines about system transparency, and about how to keep human operators ‘in the loop’ in decision processes.
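
As a rough illustration of what one entry in such a register might record, here is a hypothetical sketch in Python. The field names and values are invented; our report did not prescribe a specific schema.

```python
# Hypothetical sketch of one entry in a government-wide register of
# predictive systems. Field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    system_name: str
    department: str
    purpose: str
    overall_accuracy: float                                  # headline performance score
    per_group_accuracy: dict = field(default_factory=dict)   # bias check, by group
    human_in_loop: bool = True                                # is a human decision-maker involved?
    explanation_available: bool = False                       # can individual decisions be explained?

entry = RegisterEntry(
    system_name="Reoffending risk estimator (illustrative)",
    department="Example Department",
    purpose="Estimate risk of reoffending to support decision-making",
    overall_accuracy=0.78,
    per_group_accuracy={"group_a": 0.81, "group_b": 0.72},
)
print(entry)
```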

New Zealand’s Algorithm Charter

In response to our report, and to other studies, the New Zealand Government was impressively true to form: after just over a year, it released an Algorithm Charter governing the use of ‘algorithms’ by government agencies. The Charter refers to our definition of predictive systems, though its remit is somewhat broader. It addresses our recommendations about transparency and human use, and it includes some general suggestions about bias. A majority of government agencies are now signatories to the Charter. Along with the Canadian Government’s Directive on Automated Decision-Making, the Algorithm Charter is one of the very first AI-related standards intended to apply to all branches of government, and we consider it a very good model for other countries to consider. However, the Charter doesn’t establish an independent regulatory agency: departments are expected to monitor their own compliance with the Charter, and it lacks binding force. We feel there are still good reasons for a separate agency, in particular in its role as publisher of a government-wide register of predictive systems, and of standardised, regular reports about the performance of these systems.

Towards regulation of commercial AI

We believe that if governments can identify regulatory strategies for their own uses of AI, this prepares them well to tackle the question of how to regulate AI in commercial contexts. Policymakers will have a clearer picture of the algorithms that are possible regulatory targets, and of the concerns that arise with these algorithms. Naturally, the ways to address these concerns will be very different for commercial AI, for the reasons already given. But perhaps a common currency of concepts and tools will have emerged.
