A Magna Carta for the Age of AI?

In April 2018, the House of Lords Select Committee on Artificial Intelligence published its report, AI in the UK: Ready, willing and able? Last night I was fortunate to hear Lord Clement-Jones, Committee Chair, talk about the report and where the public debate has got to.

Since the report’s publication, ‘it’s been like a non-stop circus’. There’s been a huge level of interest in the UK and abroad. This isn’t surprising and sits alongside the Prime Minister’s speech at Davos, AI being a key feature of the Government’s Industrial Strategy, and timely reports from NESTA, the Royal Society, the RSA and many others.

We are at a stage in the development of AI where there is a real opportunity to shape its functions and purposes for the benefit of all. And there’s a real opportunity for government to develop policy and an approach that builds public trust.

UPDATE: The government response to the Committee’s Report came out after this talk and can be found here.

The Approach

There’s a lot of hype and hysteria around AI. Pick up any newspaper and the scenarios swing from curing cancer to being killed by autonomous robots. The Committee’s approach is realism: to recognise that AI exists in the here and now, and that we can’t wait any longer to start thinking through its implications.

Sitting under this is a recognition that we’re pretty rubbish at predicting the future. Could we have imagined, for instance, how the invention of the car would lead to a fundamental restructuring of how we organise our lives, how we live and how we work? And AI is already pervasive: you only have to pick up your smartphone.

The Committee set out to address five key questions:

  • How does AI affect people in their everyday lives, and how is this likely to change?
  • What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
  • What are the possible risks and implications of artificial intelligence? How can these be avoided?
  • How should the public be engaged with in a responsible manner about AI?
  • What are the ethical issues presented by the development and use of artificial intelligence?

In addressing these, they took written and oral evidence, and made a number of visits to researchers and companies. The outcome was a series of recommendations that can be grouped around the following themes:

  • Leadership
  • Inclusion and Diversity
  • Equipping people for the future
  • Control over data

Underpinning all of this are discussions about the development of an ethical framework for AI.

Is the law fit for purpose?

A clear message from Lord Clement-Jones is that ethics is central to any discussion of AI, and that this is a moment of unique opportunity to shape it positively for the public’s benefit. What remains unclear in this emergent field is the extent to which existing legal frameworks are fit for purpose and, if not, whether new regulatory mechanisms will need to be put in place.

The UK has a tradition of bringing ethics into the heart of policy making. Look, for instance, at the Human Fertilisation and Embryology Authority, which is charged with regulating relevant research institutes and fertility clinics. One thing that is clear is that we must prepare for the potential misuse of AI.

There is no simple fix nor a single solution.

Building public trust

The UK is clearly positioning itself to be a world leader in the development of AI. For this to happen, a key requirement is building and maintaining public trust, and that will only come through action, not simply words. At present, fewer people are concerned about AI than are not. However, I suspect, as does the Committee, that this will change significantly as AI applications become more prevalent in our daily lives.

A real challenge for now is that we don’t know where AI will take us.

While there’s clearly much to be done around leadership, appropriate data management and application, and equipping people for the future, a theme that Lord Clement-Jones and attendees kept coming back to is inclusion and diversity. It’s a theme that keeps cropping up in my own discussions, at events I attend, and in the literature, and for very good reasons.

Two of the issues flagged in relation to inclusion and diversity were transparency and explainability, and diversity in training and recruitment. We’ve seen examples in the US where AI algorithms have resulted in racial bias in sentencing and parole decisions. We know there is an issue with diversity in training and recruitment, and this too has real-world impacts. It was encouraging to hear that, across all of Lord Clement-Jones’s discussions, this is the issue that has gone down best, with many people responding positively and recognising that it must be addressed.

A Magna Carta in the Age of AI?

At present, in discussions on AI, there are more questions than answers:

  • Will there be a document constituting a fundamental guarantee of rights and privileges with regard to AI?
  • What rights will we have going forward?
  • What control will we have over our data and how it is used?
  • How can we ensure that AI algorithms are transparent and explainable?

What is clear is that the ethics and implications of AI cannot be ignored. And why? I think Stephen Hawking said it best:

“AI is likely to be either the best or worst thing to happen to humanity.”

Work with me

Get in touch to see how Keyah Consulting can help your organisation.