A look beneath the hood of AI
As two major exhibitions about artificial intelligence open at King’s College London, Professor Michael Luck, Director of the Institute for Artificial Intelligence, and Siddharth Khajuria, Director of Science Gallery London, discuss what visitors can expect…
Siddharth Khajuria (SK): So, Professor Michael Luck, let’s dive straight in! How do you define artificial intelligence – particularly in light of the role of the King’s Institute for Artificial Intelligence, and how its work maps across so much of the university?
Professor Michael Luck (ML): It's difficult to define artificial intelligence. For me, artificial intelligence is a behaviour that, when exhibited by a person, we would say was ‘intelligent’. And if a machine can do it, then it's AI. The difficulty, I think, is that people think it's a single technology. It's not. It's a great big bag of many different tools, techniques, and technologies that all come together in different ways to deal with different kinds of problems, with different applications. The issue we are seeing now is that there's been a lot of progress in a very particular set of applications that have some very significant implications for society - and it's all happening quickly.
SK: And to what extent do you think our attention is being drawn in certain directions? Is there potentially a cost to where we're not putting our attention? When thinking about that broader definition of artificial intelligence, I wonder where we should be looking collectively, that we're currently not?
ML: One of the challenges, I think, is that the breadth of AI is so huge that we don't really capture everything within it. What we're trying to do in the Bush House Arcade exhibition, ‘Bringing the Human to the Artificial’, is represent the breadth of the applications of AI, and try to understand what it can do for us.
But also – crucially – we are highlighting where people must be involved in both the development and the use of AI, so that we can help to make it better. There are real issues that we are just starting to think about regarding ethics. For example, questions around bias and regulation. The UK Government, the European Union and others are looking at how we can take steps to ensure that AI doesn't get out of control. But it's hard to get regulation right. Researchers at King’s are thinking about the questions that we need to pay attention to if we want to use AI responsibly. These are crucial issues to tackle.
SK: Yes, we're thinking about exactly that: just how significant the ethical ramifications of new technologies on society can be. One of the things that I'm nervous about is the way in which developments like AI can be posited as ‘transformative’, ‘revolutionary’… as if they will create a landscape we haven't seen before. Whereas I think the lens that I'm keen for us to look through is that these technologies are magnifying glasses or accelerators. There are existing structures, trends or distributions of power or privilege in society, and these new technologies accelerate and amplify those.
What I'm keen for us to do with ‘AI: Who’s Looking After Me?’ at Science Gallery London is break apart the monolith of AI. Within AI, there are myriad different rooms, conversations, containers and communities that we need to sit with. When you start talking about the specifics of healthcare, love lives, decision making in court systems, our pets, I think suddenly we'll find conversations that are easier to grasp, rather than AI continuing to be this sleek, faraway concept. I think healthy societies have engaged and informed citizens, and I’m wary that when it comes to AI we are wildly disengaged as publics, as legislators, as students. We are ill-equipped to have an engaged discourse as a society about the impacts AI is going to have on all of us. One of the things I'm hopeful for with the exhibition is that we are trying to look past the hype and the newspaper headlines. AI is already being woven into decisions that affect us, and the more we understand about them, the more we'll be able to engage as citizens.
ML: I agree. I think educating the public is important because most people don’t currently understand AI adequately. This new force is coming into our lives, and we need to have a say in it. ‘Bringing the Human to the Artificial’ is addressing this by looking at how we interact with these technologies and what they can do for us, but also by debating what kinds of constraints we should put in place, ensuring there is a diversity of perspectives looking at the technology that we develop. We also require perspectives from different groups, including from minority groups. We need to anticipate some of the problems that we have seen in the past, but also those that we have not yet encountered as we develop more sophisticated and more powerful solutions. We need to be considering questions around who does not have a voice in some of these conversations right now.
There's an awful lot of hype around existential risk - big questions about consciousness, sentience and the future of humanity. But we have some real challenges right now, even before we consider those things. There are some amazing benefits that AI technologies can give us in our daily lives, but we need to make sure that we're addressing some of the current challenges around bias, ethics, deployment and governance before we even consider some of those more extreme considerations of existential risk. I think those are some of the things that you're also paying some attention to at Science Gallery London? What can we expect?
SK: I think if you step into the gallery, you will encounter a questioning, playful, curious, critical exploration of the ways in which AI is already woven into so many aspects of our lives. Something that excites me most about the gallery is that our projects emerge from collaboration across difference. So, throughout this exhibition you encounter projects that are born out of the perspectives of a range of people – be that young heart patients, cats, medical engineers, computer scientists, artists. And I think the product of that is an exhibition that has a series of entry points that I think resonate quite broadly, asking big questions like ‘would I trust an AI with looking after my pets?’ ‘How are decisions about my healthcare getting made?’ ‘Will I love differently as a result of the impact of AI on dating applications?’
And there's a civic act for King’s in this. We are creating ways into the bodies of knowledge that the university both harbours and generates, and showing that they are richer, stronger, more representative if they're co-produced in dialogue with perspectives that historically might not have been that widely represented through a research lifecycle. But to reflect that back at you, I'm excited to come over to the Arcade – what can we expect when we come to ‘Bringing the Human to the Artificial’?
ML: Hopefully we'll show you the many different applications and uses of different technologies from across the breadth of the university – from product marketing, to medicine, to security. But also, we're asking questions like ‘to what extent can we think about AI as doing valuable things for us?’, or indeed whether we should be concerned or fearful of some of the things that AI will bring. And finally, there is an artistic, cultural perspective on top of that, to help manifest some of these things in ways that are meaningful, informative and stimulating.
SK: I'm going to shift the topic a little bit here. A long time back, I studied History at university, and one of the things you’re always trained to think about is provenance. Where does the piece of information come from? How might that have informed it? What's missing in this set of source material? What's overrepresented?
That's missing with the types of AI application represented in the news media currently, where we're almost encouraged to find joy in the magic. ‘Oh my God. I can't believe it can do this!’ Whereas, in fact, what it's doing – by design – is hiding its provenance: the component datasets, statistical models and pattern recognition that piece together what it produces. Something that I feel we're both trying to do is explore what it means to not have access to the provenance of those decisions.
ML: That's exactly right. The question of explanation is a critical one, increasingly so in the academic study of artificial intelligence, where people are trying to work on techniques that reveal what's going on. And in fact, we're asking those very precise questions in the exhibition. For example, the use of AI in court proceedings and decisions. Or, to give another example: if you're going to offer a loan to someone based on what an AI system tells you, you need to be able to explain it. It's not just an ethical question. It's increasingly a legal question, because of the regulation that's coming in. The AI techniques that we have are problematic in many respects, but they're increasingly pervasive. The pace and scale of development is just phenomenal. It's crucial that we do have some of these checks and balances around requiring explanation.
SK: What would you say, to someone who perhaps feels alienated by questions of AI, or finds the arts a bit intimidating, to get them along to the exhibition in the Bush House Arcade?
ML: Look beneath the hood and find out why AI is relevant to you. The exhibition aims to show you what's working well, what needs your involvement, and what the key questions and challenges are for society. And at Science Gallery London?
SK: ‘AI: Who’s Looking After Me?’ will be fun. It will be playful. It will plant questions in your mind and make you a little bit more curious about what's going on in the world. It'll be an invitation to grapple with a bunch of topics that are shaping your life already. And I think it'll do it in a way that will energize you.