The deployment of artificial intelligence (AI) systems by public sector organisations is contentious, and for good reason: in the UK, the Met’s use of Automatic Facial Recognition (AFR); in Australia, the robodebt scandal, where a class action is gathering momentum; in the US, courts using AI to inform parole and sentencing decisions. While much of the media focus is on bias, and rightly so, questions are also being asked about the extent to which we can expect public services to be open, accountable, and objective.

This week the Committee on Standards in Public Life published its report into Artificial Intelligence and Public Standards, and I was fortunate enough to attend the launch. The Chair of the Committee, Lord Evans, and Independent Member, Dame Pearce, were interviewed by Professor Nick Jennings on the report’s key conclusions and recommendations, followed by an audience Q&A.

This work comes at a critical time, as AI is increasingly being built into public service delivery. The key question for the Committee has been: how does the traditional articulation of our ethical standards via the Nolan Principles relate to AI?

 

The Nolan Principles and AI

A core undertaking for everyone holding public office in the UK is to abide by the Seven Principles of Public Life. Also known as the ‘Nolan Principles’, these apply to everyone working in the development and delivery of public services at local and national levels.

The principles are Selflessness, Integrity, Objectivity, Accountability, Openness, Honesty, and Leadership.

While the Committee’s view is that the Nolan Principles do stand the test of time in this era of AI, there are three areas that require particular attention, namely openness, accountability, and objectivity.

 

AI and Public Standards – The recommendations

Here I’ll touch on some of the discussion points. The report itself is well worth a read for more detail.

Openness

Lord Evans noted that one of the challenges of undertaking this work was identifying where AI is being used in the delivery of public services. In the Committee’s view, this is very much a failure on the part of government. I know from my own research that identifying where AI is being used by the public sector to support decision-making is difficult, and much of what I do learn comes by word of mouth rather than from other sources. Freedom of Information requests are not the answer; instead, the focus should be on proactive disclosure by public bodies.

The Committee is not recommending a new regulator, which is the right approach. What will be important is for organisations to think more clearly about the impact of AI on what services they deliver and how they deliver them. What there should be instead is a body, such as the Centre for Data Ethics and Innovation (CDEI), that can advise the various regulators.

The Committee also flagged a real need for clarity on the ethical principles that should guide the use of AI in the public sector. This is not about establishing yet another ethical framework, but rather about clarifying for public bodies which principles should be at the fore.

They also noted that there is a real need for the public to understand the principles that govern the use of AI in the public sector. These should be identified, endorsed, and promoted. While a lot of work is being done to understand public perceptions of AI, more needs to be done to engage with the public, including on how AI is currently being used.

 

Accountability

Accountability has to be at the heart of public services. It needs to be clear, at both national and local levels, who is responsible for the decisions being made. This means the right documentation, clarity on who is responsible at each stage, oversight and monitoring, and clarity on the legal basis for redress and appeal.

 

Objectivity

Much has been written about bias in the application of AI. What we do know is that if we’re not careful, bias may not only be built into systems but may well be replicated and amplified. In the context of public services, this is deeply concerning. It matters for how decisions are made about our children, about social welfare, and about how the judiciary operates. And it goes back very much to the issue of accountability.

 

Final thoughts

It was the view of the Committee that many of the challenges flagged above can be addressed during the procurement process, and the public sector clearly has huge purchasing power. It is incumbent on organisations to have a clear understanding of the services they are procuring, and of any resultant risks and issues. Monitoring and evaluation also have a critical role to play: are we monitoring and evaluating for performance, or for standards?

Linked to this is the need for AI Impact Assessments, which the Committee recommends should be mandatory and published. Again, this is something I very strongly agree with.

AI has the potential to positively transform public services in the UK, but there are real risks that need to be addressed. And this goes beyond the ‘information challenge’, or ‘educating’ the public. Openness, accountability, and objectivity are critical to understanding how decisions are made and how services are delivered. We need a system we can trust.

I very much welcome this report.

 

Get in touch

If you have a question or if you’re interested in working with me, or would just like a chat, drop me a message via my contact page.