
Government Use of Artificial Intelligence in New Zealand

The AI Forum’s 2018 report, Artificial Intelligence: Shaping a Future New Zealand, suggested that adoption of AI in the New Zealand Government is currently ‘disconnected and sparsely deployed.’ In the wake of this report, the Ministers for Government Digital Services and Statistics commissioned a report into the use of algorithms (with a focus on machine learning and AI) in NZ Government agencies. The resulting Algorithms Assessment Report was released later in 2018.

The AI and Law in New Zealand project, funded by the NZ Law Foundation and running at the University of Otago, has just added to the discussion with a report on Government Use of Artificial Intelligence in New Zealand. The five researchers on the project come from different disciplines: law, artificial intelligence, and philosophy. In this post, we’ll summarise the content of the report.

While the AI Forum and Government reports focus on AI techniques defined quite broadly, the Law Foundation report focusses on a particular class of algorithms, which it calls predictive systems. A predictive system is an algorithm that has been trained to predict some output variable, given a set of known input variables. Training happens by exposing the system to a set of training examples, in which the input and output variables are all known. For instance, we can train a system that predicts the likelihood that a bail applicant will reoffend, given some set of facts about the applicant. The training process would take historical data about applicants, in which both the relevant input facts and whether reoffending actually happened are known.
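As a concrete illustration, here is a minimal sketch of that training process in Python, using scikit-learn’s logistic regression (one of the venerable regression tools discussed below). Every column name and value is invented for illustration; none of it comes from any real government dataset.

```python
# A toy predictive system: logistic regression trained on historical
# examples where both the input variables and the outcome are known.
# All column names and values here are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "age":            [19, 34, 27, 45, 22, 31, 52, 24],
    "prior_offences": [3,  0,  1,  0,  4,  2,  0,  1],
    "reoffended":     [1,  0,  0,  0,  1,  1,  0,  0],  # the output variable
})

inputs = ["age", "prior_offences"]
model = LogisticRegression().fit(history[inputs], history["reoffended"])

# Given input facts about a new applicant, the trained system predicts
# the likelihood of the output variable: here, reoffending.
applicant = pd.DataFrame({"age": [25], "prior_offences": [2]})
print(model.predict_proba(applicant)[0, 1])
```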

The class of predictive algorithms is interesting for two reasons. Firstly, it includes not only the cutting-edge AI algorithms that are disrupting industry and government (such as ‘deep networks’), but also the standard statistical modelling tools that have long been used in government (in particular, regression). This framing emphasises the continuity between these technologies, rather than their differences.

Secondly, a set of ethical and legal issues arises for all predictive systems used in government, regardless of their novelty. The report identifies four questions that could be asked of any predictive system.

  • How good are the system’s predictions? This can be gauged by running the system on a set of test examples, which the system wasn’t trained on, but whose outputs have been independently verified. This evaluation of a predictive system’s quality can be done in exactly the same way whether it’s a newfangled deep network or a venerable regression model (the sketch after this list shows the basic procedure).
  • Does the system show any bias towards or against any particular group? If a system is trained on biased data, it is likely to show bias in its performance; one simple check, also sketched below, is to compare the same quality metric across groups.
  • Can the system offer intelligible explanations about the predictions it makes? Some systems can do this quite readily; others are opaque, and could only offer explanations if augmented with an ‘explanation system’. (The development of explanation systems is a rapidly growing area of AI in its own right.)   
  • Do caseworkers using the system to help make decisions retain control of the decision-making process?  If the system works reasonably well, it is quite easy for caseworkers to fall into ‘autopilot mode’, and miss mistakes they would have noticed if working unaided.
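To make the first two questions concrete, here is a minimal sketch, again using scikit-learn and entirely synthetic data (the report prescribes no particular toolkit or dataset): it measures accuracy on held-out test examples the model never saw during training, then breaks the same metric down by a hypothetical group label.

```python
# Sketch of the first two checks: predictive quality on unseen test
# data, and the same metric compared across groups. The dataset,
# including the 'group' label, is entirely synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
data = pd.DataFrame({
    "feature_a": rng.normal(size=n),
    "feature_b": rng.normal(size=n),
    "group":     rng.choice(["A", "B"], size=n),
})
# A synthetic outcome, loosely driven by one of the features.
data["outcome"] = (data["feature_a"] + rng.normal(scale=0.5, size=n) > 0).astype(int)

features = ["feature_a", "feature_b"]
train, test = train_test_split(data, test_size=0.3, random_state=0)
model = LogisticRegression().fit(train[features], train["outcome"])

# Question 1: how good are the predictions on held-out test data?
test = test.assign(prediction=model.predict(test[features]))
print("overall accuracy:", accuracy_score(test["outcome"], test["prediction"]))

# Question 2: does the same metric differ between groups?
for name, rows in test.groupby("group"):
    print(f"group {name} accuracy:", accuracy_score(rows["outcome"], rows["prediction"]))
```

On a real system, the held-out outcomes would need to be independently verified, and a gap between the group accuracies would be a prompt for scrutiny rather than a verdict; as the report notes, there are no easy answers to the bias question.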

The Law Foundation report suggests it is useful to ask these questions systematically of the predictive systems used in government. It suggests that a regulatory body could be established to do this. The body would publish a register of the predictive systems used in government, and the variables they use—with discretion in cases where this knowledge would enable ‘gaming’ of the system. It would also oversee regular evaluations of each system on unseen data, and investigate the questions of control, explanations, and bias. (There are no easy answers to the bias question, but the report argues the issue of data bias should be a matter of public debate, rather than something resolved within government departments.)

The oversight of government predictive algorithms is important as a matter of public accountability – and it is also readily achievable, given that it relates to the government’s own operations. How much of this oversight should also apply to the plethora of predictive algorithms currently being used in commerce is another matter, one that the Law Foundation report does not address.    

Written by Alistair Knott, who is a member of the AI Forum’s Working Group on Adapting to AI Effects on Law, Ethics and Society. Any opinions expressed are the author’s and not necessarily those of the AI Forum NZ. Learn more about our working groups and work programme.  

The AI Forum brings together New Zealand’s artificial intelligence community, working together to harness the power of AI technologies to enable a prosperous, inclusive and thriving future New Zealand.