
Reaction to the Christchurch Call from the AI Forum’s Ethics, Law and Society working group


At the Christchurch Call summit, Prime Minister Jacinda Ardern achieved a milestone consensus between countries and tech companies: ways must be found to curb the dissemination of terrorist and violent extremist content on the Internet. Many of the methods required involve AI technologies. The Prime Minister’s initiative has placed New Zealand centre stage in the current discussions about AI regulation and practice. As the AI Forum’s Working Group on Ethics, Law and Society, we applaud her initiative and offer some thoughts on how to structure this discussion.

There are three basic ways content is disseminated on the Internet. All three are possible targets of regulation (or of measures adopted voluntarily by self-regulating tech firms) in an implementation of the Christchurch Call agreement.

  • One type of dissemination is when a user uploads content: for instance, by posting or live streaming.
  • Another type of dissemination is Internet searches: for instance, on Google.
  • A final type of dissemination is social media feeds: for instance, each Facebook user receives a personalised stream of items in their feed, with Facebook deciding which items it contains.

In each case, implementing the Christchurch Call would involve automated flagging, or perhaps filtering, of disseminated content: filtering at the point of upload, filtering the results of searches, or filtering the social media feeds presented to users. Some of this filtering would have to be automatic, given the volume of material to be processed. In each case, identifying items to filter would involve checking them against a database of known extremist items, and also running an AI classification system to look for new extremist items. Our focus is on the AI classification system, and how it might work in a content-flagging or filtering role.
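To make this concrete, here is a minimal sketch, in Python, of how such a two-stage check might be structured, assuming a fingerprint database of known items and a classifier that returns a score between 0 and 1. The database contents, the `extremism_score` stub and the threshold value are all hypothetical, not a description of any deployed system.

```python
import hashlib

# Hypothetical database of fingerprints of known extremist items.
# Deployed systems typically use perceptual hashes that tolerate
# re-encoding; exact SHA-256 matching is used here only to keep
# the sketch self-contained.
KNOWN_EXTREMIST_HASHES: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Stage 1: compute a fingerprint for lookup in the known-items database."""
    return hashlib.sha256(content).hexdigest()

def extremism_score(content: bytes) -> float:
    """Stage 2: a trained classifier returning a score in [0, 1].
    Stubbed here; a real system would call an ML model."""
    return 0.0  # placeholder

def should_flag(content: bytes, threshold: float = 0.9) -> bool:
    # A database match means the item is already known: flag immediately.
    if fingerprint(content) in KNOWN_EXTREMIST_HASHES:
        return True
    # Otherwise, fall back to the classifier for new items.
    return extremism_score(content) >= threshold
```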

Any form of content filtering raises ethical, legal, technical and economic questions.

  • The ethical and legal questions concern freedom of speech and censorship. What sorts of item should be filtered? Where do we draw the line?
  • The technical questions relate to how to build classifiers that reliably classify items according to the chosen legal or ethical criterion. Invariably, classifiers will make mistakes: allowing items which should be blocked, and vice versa. Companies have to tune their classifiers to err on the side of accepting or of blocking. Companies may also train a system to classify and flag potentially violative content for human review and decision (a possible decision scheme is sketched after this list).
  • The economic questions centre on the cost for Internet companies of running flagging and filtering systems.
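As a rough illustration of the tuning trade-off in the middle bullet, the sketch below maps a classifier score onto three outcomes: automatic blocking above a high threshold, human review in a middle band, and acceptance below it. Both threshold values are invented for illustration; real operating points would be chosen from measured false-positive and false-negative rates.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    REVIEW = "review"  # route to a human moderator
    BLOCK = "block"

# Hypothetical operating points. Raising BLOCK_AT makes the system err
# on the side of accepting; lowering it errs on the side of blocking.
BLOCK_AT = 0.95
REVIEW_AT = 0.70

def decide(score: float) -> Decision:
    """Map a classifier score in [0, 1] to a moderation decision."""
    if score >= BLOCK_AT:
        return Decision.BLOCK
    if score >= REVIEW_AT:
        return Decision.REVIEW
    return Decision.ALLOW
```

The width of the review band is itself an economic choice: the more items routed to humans, the higher the moderation cost, which connects the technical question directly to the economic one.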

It’s useful to note that these questions apply differently to the different Internet dissemination methods. For instance:

  • On the economic questions: it’s particularly expensive to run a classifier to filter user uploads (especially live streams) because this filtering has to be done in real time. It’s cheaper to run a classifier to filter social media feeds, because this doesn’t have to be done in real time.
  • On the ethical/legal questions: withholding items from social media feeds is arguably a milder form of censorship than blocking user uploads, or blocking responses to search queries. Users don’t ask for the items that appear in their feeds: they are automatically selected by the company. So filtering here is just a matter of refraining from performing an unasked-for action.
  • On the technical questions: it’s easier to justify the classifier erring on the side of caution (blocking) when filtering social media feeds than when filtering content uploads or searches, because (as already noted) this kind of filtering is a milder form of censorship. 

We look forward to an interesting global discussion around the role of AI algorithms in controlling the flow of content on the Internet. We would particularly like to encourage signatories of the Christchurch Call and interested members of the tech community to think about methods for filtering social media feeds, for the reasons just given. We think that small changes to feed recommendation algorithms could potentially have large effects – not only in curbing the transmission of extremist material, but also in reducing the ‘filter bubbles’ that can channel users towards extremist political positions.
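One way to read “small changes to feed recommendation algorithms” is as soft demotion rather than hard blocking: items with a non-trivial classifier score are pushed down the ranking instead of being removed outright. The sketch below assumes a hypothetical relevance function and an illustrative penalty weight.

```python
from typing import Callable, Iterable, List, TypeVar

Item = TypeVar("Item")

def ranked_feed(
    items: Iterable[Item],
    relevance: Callable[[Item], float],
    extremism_score: Callable[[Item], float],
    penalty: float = 5.0,  # illustrative weight, not a tuned value
) -> List[Item]:
    """Order a feed by relevance, demoting each item in proportion to
    its classifier score rather than blocking it outright."""
    def adjusted(item: Item) -> float:
        return relevance(item) - penalty * extremism_score(item)
    return sorted(items, key=adjusted, reverse=True)
```

A design choice worth noting: demotion degrades gracefully when the classifier is wrong, since a misclassified item is merely ranked lower rather than censored.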

Written by Alistair Knott and Sean Welsh, both members of the AI Forum’s Working Group on Adapting to AI Effects on Law, Ethics and Society. Any opinions expressed are the authors’ and not necessarily those of the AI Forum NZ. Learn more about our working groups and work programme.

 

The AI Forum brings together New Zealand’s artificial intelligence community, working together to harness the power of AI technologies to enable a prosperous, inclusive and thriving future New Zealand.