Brain Sciences


The future of artificial intelligence in a post-truth world

8 August 2024

Professor Lasana Harris from the UCL Division of Psychology and Language Sciences is well placed to speculate about what the future of artificial intelligence (AI) holds.

Lasana Harris

Professor Harris describes his research as having become increasingly relevant in recent months in light of the UK and US elections. From anthropomorphism and dehumanisation, to AI and deep fakes, the subjects in which he specialises are hot topics of public debate.

His real interest is in understanding how our minds create our reality. This led him to forge an academic career as an expert in psychology and behavioural science.

“I think what's interesting about behavioural science and psychology is that we get to study things that no one can directly see,” he says. Behaviour, he explains, is really just an “output” of the mind.

Much of Professor Harris’s work involves studying how we perceive other people and our ability to imagine what others are thinking. “It turns out that we don't just reserve that skill for other people though. We also do it to things that aren’t human,” he says. This is where his interest in anthropomorphism comes in.

Professor Harris looks at how we perceive non-human objects, like AI, as human beings.

He is particularly intrigued by the moral dimension of anthropomorphism and AI. An example he gives is a self-driving car. If it kills a person, who do we hold accountable for that behaviour? Is it the manufacturer? The person who owns the car? The car itself? “You get these really interesting ethical questions that arise around agency,” he says. “Even though we can view AI in human terms, if the humanoid robot you were watching gets run over, you're probably not going to shed a tear.”

One of the key reasons we instinctively separate humans from AI, Professor Harris explains, is that when we engage with other people our brain waves and physiology spontaneously synchronise. Even our pupils dilate to match those of other people.

This doesn’t happen with AI, at least not yet. Things are changing though. Professor Harris uses brain imaging to look at the most recently evolved parts of the human brain, known as the neocortex. With the use of technology, he says, we could lose some of the processing in these regions. “Our hyper-social nature means that our brains are built to interact with other people. But the way we interact now doesn’t fully tap into this mechanism. For those of us who interact with others via devices frequently, it's a huge issue. There may come a point when we won't be able to tell humans and AI apart,” he warns.

Professor Harris is also worried about the impact of the erosion of the boundaries between real and digital life on society. “I think we live in a very interesting moment because it seems like we're in this movement where reality gets called into question constantly as part of political debate.” In this post-truth world, people’s reputations are shaped by the groups they're aligned with, not necessarily what they've done.

There is hope though. While the media may be increasingly aligned with political parties, Professor Harris doesn’t believe that AI-generated deep fakes are likely to sway elections. “I think people are much more savvy users of the internet than we give them credit for. Every time we get information through the internet, we know there's a lot of junk out there, so we’re not just naively consuming stuff – we evaluate. Is this real? Is it fake?”

Professor Harris’s main concerns are the lack of government regulation and the danger of AI’s inbuilt biases. He points out that currently AI companies have free rein to do whatever they like, and we lack control over how our data is used. If AI is essentially a big correlation machine, then what we feed it determines what it produces. “AI has an idea of what something is, not because it experienced it in the world, but because we told it that's what the thing is,” he says. But not everyone is part of this new AI future. As Professor Harris explains, if you live in a remote village and don’t create online content, then your experiences are not influencing AI’s algorithms. Discrimination then becomes more entrenched.

It may sound like science fiction, but Professor Harris believes that in the future AI will gain similar perceptual mechanisms to those of humans. His question then is what AI culture might look like: “As humans, we use culture to help interpret the world. Will AI culture be a replica of ours? Or something entirely new?”

However, his main concern is with the present: “We’re living in a period where reality is becoming subjective,” he points out. “Ultimately we should focus on the issues facing us right now such as the climate crisis, political polarisation, and entrenched inequalities rather than worrying too much about technologies of the future.”
