Imagining AI futures gives us a space to consider our place in the world and ask the tough philosophical questions. What is human nature? What is the nature of the social contract? How do we relate to each other and the world around us? How do we shape technology, and how does it in turn shape us and the world?

This is why I jumped at the chance to hear four speakers discussing their imagined AI futures. The event was hosted by Studentsfor.AI – an organisation connecting AI students, enthusiasts, and researchers in the UK.

The speakers were:

  • Helene Guillaume – Founder and CEO of WILD
  • Robbie Stamp – CEO of Bioss International and executive producer of the popular sci-fi film The Hitchhiker’s Guide to the Galaxy
  • Ash Aberneithy – Founder and CEO of analytic.ai
  • Rachel Coldicutt – CEO of Doteveryone

As I listened to the speakers, a number of themes emerged. Please note that all errors of paraphrasing and interpretation are my own.

 

On learning from history in imagining AI futures

Thinking about the future requires an understanding of the past. It’s through reading and understanding history, and the patterns that emerge from it, that we gain insight into our constructive and destructive natures. Robbie is passionate about this topic.

Building on this, Robbie argues for the importance of the critical thinkers of the world: the reflectors, the artists, the authors. On this point, I wholeheartedly agree. The hard questions must be asked and the challenges raised.

This is only becoming more necessary with the rise of fake news and of algorithms shaping our social media feeds, which make it possible to live in a groupthink bubble, seeing only the views and opinions that reflect our own. That would be a frightening world of absolute rightness and wrongness, one that could lead to deeply dark consequences and, potentially, the end of democracy. Is this a future we want?

 

On Algorithms

Something that keeps cropping up in many discussions is transparency and explainability. Will we live in a world where AI is a black box, where decisions are made without human insight into how they were reached? This raises all sorts of concerns for the near and not-so-near future – from insurance and home loans to getting jobs to which schools and universities we’re able to attend.

While analytics can certainly be put to good use, and is in itself relatively benign, there is a real possibility of significant negative consequences. These concerns are not new. In bioethics, there has been much debate about the use of predictive genetic testing, with real fears expressed about how that information could be used by insurers, employers, and society. Rachel noted that work is underway in the US using genetic data to predict educational outcomes. An interesting one to ponder.

 

Work

A number of the speakers reflected on the potential impact of AI on the nature of work. For Helene, these are legitimate concerns, but automation also offers the chance to free our time from routine tasks. Here, AI presents a huge opportunity to replace low-value work with the activities we love.

And it’s also about the type of work.

Ash focused on both workforce issues and financial services. He expects huge disruption in financial services going forward, and he sees regulation of this emerging landscape as both positive and necessary. Data security will be a significant issue. On the workforce side, while there will be disruption, there will also be opportunities, with the demand for data scientists only increasing. In addition to universities building up graduate numbers, it will be increasingly necessary to bring older people back into the workforce to fill those gaps, and this is at present an untapped market.

And the risks? Financial services companies are under huge pressure to deliver, and there is real potential for the misuse of customer data. This will be difficult to regulate.

 

Human v Machine

Are we at risk of losing connectedness, or will it be reconstituted? In the love stakes, will machines replace people? For Helene, a dystopian future is one in which we lose our identity and fail to cherish the imperfections and messiness of being human. For Rachel, it is one in which machines lead and we follow.

The thread running through all of this is the importance of accountability, which was raised by every speaker. Will we be subservient to machines and decision-making algorithms?

We can never lose sight of human accountability. AI cannot feel pain or shame. The complexity that makes us human cannot be replicated.

 

And I’ll leave you with a poem that Robbie read.

 

All Watched Over by Machines of Loving Grace, by Richard Brautigan

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

 

Get in touch

If you have a question, are interested in working with me, or would just like a chat, drop me a message via my contact page.