Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex, Great Britain. His research concerns the nature of consciousness and the phenomenon of perception. He has published more than 100 scientific papers and book chapters and is the editor-in-chief of the journal Neuroscience of Consciousness. He is a regular contributor to New Scientist, The Guardian and the BBC, and writes the NeuroBanter blog. In June 2023, Anil Seth was the distinguished guest of the Mindscapes conference in Bucharest (www.mindscapes.ro), where he took part in the launch of the Romanian translation of his bestseller Being You: A New Science of Consciousness (Re-Design Editions, 2023).
It was in Bucharest that we had this conversation, in a little room in the Athenaeum building, while someone nearby rehearsed percussion, playing an obsessive rhythm, as if to remind us of the fundamentals of life.
I don’t think we have a thing called “self”, independent of our bodies and our worlds. The experiences we have of “self” trace real things happening in our bodies in relation to the world, but they do not directly reveal the existence of a “real self”.
How does Anil Seth know that he is Anil Seth?
In one sense, it is obvious. There is a continuity in the flow of sensations, emotions, and thoughts that make up both my experience of “self” and my experience of the world. And part of this experience of “self” is a narrative of personal identity, with memories of the past and plans for the future. This is part of the basis of “Anil”. But in another sense, the question is misleading. I don’t think we have a thing called “self”, independent of our bodies and our worlds. I believe that what we call the self – any self – is a collection of perceptual experiences, and therefore what it means to be a particular self is always changing. There is no unalterable essence of “Anil” to be known. As Heraclitus said, you cannot step into the same river twice, because it is not the same river and you are not the same person. We are always changing, but we rarely perceive these changes. The experience of self is a kind of perceptual construct, as is, for example, the experience of colour. Colours do not exist independently of the mind experiencing them. There is no single “correct” way to perceive a colour, yet it is extremely useful for us to experience colour the way we do. Likewise, the experiences we have of “self” trace real things happening in our bodies in relation to the world, but they do not directly reveal the existence of a “real self”.
We humans are the ones who project meaning and mind onto the AI’s utterances
If Anil Seth were an artificial intelligence, how would he have responded to the question: How does Anil know that he is Anil?
Hopefully less convincingly than I do! Our current large language models are very good at imitating language. They do this by anticipating what comes next in any given sentence. If such a model is trained on billions of sentences, the result can be astonishingly effective. So if you ask a language-based AI model a question about consciousness, it will give you as an answer a sort of average of what humans plausibly say in such circumstances. But of course, if an AI says it has consciousness or a self, that doesn’t automatically mean it does – despite what some naive Google engineers might think. We humans are the ones who project meaning and mind onto the AI’s utterances.
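As a minimal sketch of that next-word mechanism, here is a toy bigram model in Python; the three-sentence corpus and everything in it are illustrative stand-ins for the billions of sentences and the neural network of a real LLM, not anything Seth describes:

```python
# Toy next-token prediction: real LLMs learn these statistics over billions
# of sentences with neural networks; simple bigram counts stand in here.
import random
from collections import defaultdict

corpus = ("i think therefore i am . "
          "i think consciousness is a construct . "
          "i am a model of the world . ").split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" continuation: an average of what the corpus says,
# with no understanding behind it.
word, output = "i", ["i"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The point survives the scale-up: whether the statistics come from three sentences or from the whole internet, the model emits what is statistically plausible, and it is the reader who supplies the mind behind the words.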
Modern neuroscience makes a distinction between the conscious and the unconscious in all sorts of ways – notably by contrasting conscious perception with unconscious perception – but there is no sense in which The Unconscious has its own “mind”.
What would a contemporary neuroscientist say about what Freud called the “unconscious”?
It depends on the specialist! Some of my colleagues in neuroscience, like Mark Solms, are making valiant efforts to rehabilitate Freud, but I am not convinced. Freudian concepts carry too much baggage, in my opinion, to be really useful in modern neuroscience. “The Unconscious” is a good example. For Freud, the unconscious usually refers to a mind with its own beliefs and desires, which are repressed by consciousness. But there is very little evidence for this. Modern neuroscience, by contrast, distinguishes between the conscious and the unconscious in all sorts of ways – notably by contrasting conscious perception with unconscious perception – but there is no sense in which the unconscious has its own mind. Rather, the point is that consciousness is implemented by processes that are themselves unconscious, and that some things can be done without consciousness – though there are not many such things, and they are not done well.
We don’t see things as they are, we see them as we are.
If perception is a construct, then do we all experience the world in the same way?
Almost certainly not! And we can never know what it is really like in another person’s mind. Unlike external differences – body shape, skin colour and so on – the inner “perceptual diversity” between us remains hidden from view. Revealing this perceptual diversity is what my colleagues and I are pursuing in a project launched last year called The Perception Census: an ambitious attempt to map the hidden landscape of how each of us experiences a unique world (www.perceptioncensus.dreamachine.world). The Perception Census differs from the usual science–art crossovers, in which science analyses art and art interprets scientific data – what we propose is a truly interdisciplinary project. We did not intend it as a therapeutic experience, but people with depression and anxiety started telling us that it has a real transformative power, which is why we are thinking about using it as therapy in the future. It is much cheaper and more accessible than psychedelic drugs.
The bottom line is that everyone is in the same environment, but each of us experiences very different things. For me it was an excellent opportunity to study the differences between people, in perception and in general. For a long time my research in neuroscience has aimed to show that perception is a construct – an idea that goes far back in history, to Helmholtz, Feuerbach, Kant and so on. I have talked about the brain always making its best prediction about what the world is – a 180-degree reversal of the common intuition that we simply read the world exactly as it is. The consequence is that each of us has a brain that is slightly different from our neighbour’s, so although we seem to see the world as it is, in fact we each see it in our own way. We don’t see things as they are, we see them as we are. Sometimes this is very obvious – that is why we talk about neurodiversity: autism, ADHD and so on. But there is a trap here too, because it is assumed that if you are not part of the “neurodiverse” groups, i.e. you don’t have autism or ADHD, then you are “neurotypical” and see things as they are. In reality we are dealing with a Gaussian distribution of neurodiversity, and I was mostly interested in the middle of the Gaussian bell curve. The external world exists, as evolutionary theory demonstrates: if we had not obeyed its laws, we would have disappeared as a species long ago. But our perceptions of it are not identical. With my friend Fiona Macpherson from Glasgow, a philosopher, I started talking about “perceptual diversity” rather than “neurodiversity” – out of a need for a label that would not lead people to think of disease, but of differences that exist naturally between us all. In psychology there are many studies of human differences, but they tend to focus on one or two specific topics, such as imagery or visual sensitivity. What we are looking at is multiple aspects of perception in one experiment – we want to see whether there are patterns, a sort of “perceptual personality”.
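To make the Gaussian point concrete, here is a small numeric sketch; the trait, the sample size and the 2-standard-deviation cutoff are assumptions chosen purely for illustration, not data from the Census:

```python
# Purely illustrative numbers: a single perceptual trait per person, drawn
# from a Gaussian, has no natural "typical vs. atypical" boundary.
import numpy as np

rng = np.random.default_rng(42)
trait = rng.normal(size=100_000)  # e.g. vividness of mental imagery

cutoff = 2.0  # a conventional "beyond 2 standard deviations" labelling rule
outside = np.mean(np.abs(trait) > cutoff)
print(f"labelled 'atypical' by a {cutoff}-SD cutoff: {outside:.1%}")  # ~4.6%

# The ~95% inside the cutoff are not identical to one another: everyone
# occupies a different point on the same continuous curve.
middle = trait[np.abs(trait) <= cutoff]
print("people inside the cutoff:", middle.size)
print("distinct trait values among them:", np.unique(middle).size)
```

Wherever the cutoff is drawn, it separates nothing qualitatively: the people inside it differ from one another just as continuously as the people outside it.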
The Perception Census presents a set of interactive illusions covering vision, time perception, auditory perception, imagery, emotion and body perception. The scientific goal is to draw a fuller picture of perceptual diversity. But we also have a social purpose: to spread the idea that what we call perception is a construct, and that the way I see may not be identical to the way you see. This matters, because it can instil in us more humility about our participation in the world – a basis for more empathy and communication between people.
Personally, I think it is possible that a conscious AI would first have to be a living AI, and we are not there yet. I think it is a very bad idea to build a conscious AI, whether deliberately or inadvertently. We should avoid building things that are truly conscious. It is a kind of chicken-and-egg problem: in order to build such a thing, we first need to know more about consciousness. The critical question is: what does it mean for a human to believe that a thing or a machine has consciousness? We tend to be anthropomorphic, that is, to project our own mental states onto things – onto artificial language systems, for example. This is harmful in itself; it will brutalise our conscience and turn us into sociopaths.
How could we ever be sure that an AI we’re talking to has already gained consciousness?
That is a really difficult question. We do not fully understand how consciousness arises even in biological systems, so we are on shaky ground whether we attribute it to machines or deny it to them. The best we can do is ask what the best theories of consciousness say – and each of them says something different. We should also recognise our own anthropomorphic biases, and so avoid projecting consciousness onto AI on the basis of a superficial similarity to us. Personally, I think it is possible that a conscious AI would first have to be a living AI, and we are not there yet. But I could be wrong about that, so it pays to be cautious. I think it is a very bad idea to build a conscious AI, whether deliberately or inadvertently. We need to avoid building things that are truly conscious. It is a kind of chicken-and-egg problem: in order to build such a thing, we first need to know more about consciousness.
We are now building systems that give us the impression that they are conscious, even though we know they are not. The critical question is: what does it mean for a human to believe that a thing or a machine has consciousness? We tend to be anthropomorphic, that is, to project our own mental states onto things – onto artificial language systems, for example. This process is harmful in itself; it will brutalise our conscience and turn us into sociopaths. This is exactly what happens with linguistic AI models that speak in a human style. We already treat them badly enough – I myself feel that I am not polite enough in dealing with them. The sci-fi series Westworld highlights the way we brutalise ourselves, the brutal way we interact with robots that give a strong impression of being human, even if they are not.
We need to build a new technological literacy here. The more we succumb to the tendency to anthropomorphise, the more we ascribe human capacities to artificial systems, including consciousness, and the more we see them as colleagues rather than as tools that extend our minds. We can, however, introduce regulations that require the companies building such things to incorporate elements – of design, for example – that make AI systems more resistant to our anthropomorphising tendencies. This must be implemented as soon as possible. Similar steps were taken when molecular biology and genetics reached the level of cloning – so we do not clone humans now, although the technology would allow it. In the case of computers, however, it will be harder to impose regulations, because computers are so widespread and so numerous.
Why are people afraid of AI?
In many cultures there has always been a fear of producing living, sentient creatures that have power over us – from the Jewish myth of the Golem through Frankenstein to the Terminator. However, there is no real reason to believe that AI will try to enslave or destroy us.
But there are much more prosaic reasons to fear AI. There is the difficult problem of value alignment: how do we ensure that we instil values in artificial intelligences that guarantee they will act in our best interests as a species – or as a planet? This is known to be hard to do, and if we get it wrong, the effects could be disastrous. And then there are dangers that are already among us: misinformation, confusion, the erosion of trust in institutions and in democracy, bias, social and economic disruption, coercion and so on. We need to act now to mitigate these challenges – and none of this has anything to do with the question of machine consciousness.
The idea that the phenomenon of consciousness depends on language is very wrong. To truly understand consciousness, language may not be enough; I think we will need other forms of expressing ideas, such as mathematics.
Can we forget about language when discussing the question of consciousness?
I think the idea that the phenomenon of consciousness depends on language is very wrong. It probably stems from a kind of human exceptionalism, in which we associate something we value (consciousness) with something we believe to be distinctively human (language). But consciousness is largely about perceptual experiences of the world and the body; about emotion, mood and action. It is true that language allows us to talk about our conscious experiences, that it is a kind of experience in itself, and that in some cases it can shape other kinds of perceptual experience (this is the Sapir–Whorf hypothesis), but language itself is not necessary for someone to be conscious. Of course, we need language to express our ideas, to communicate the content of what we experience, and ultimately to build knowledge and understanding. But while necessary for that, language may not be sufficient. To really understand consciousness, I think we will need other forms of expressing ideas, such as mathematics.
A very frequent and very wrong assumption is that consciousness is a function of intelligence. But intelligence is about functional behaviour: doing exactly the right thing at the right time. Consciousness, however, refers to how we experience things. If we make artificial intelligences more and more intelligent, this does not mean that they will also become conscious.
Does a machine have perception? How does a machine perceive?
That is a good question! How do machine-vision systems work? They are based on neural networks, so the principles of operation should be similar to ours. But there are also differences. These systems process information in a feed-forward way: a signal comes in and is read off – a very good analogue of how unconscious perception works in humans. But there is ample evidence that conscious perception needs more: recurrent activity, operating both bottom-up and top-down, making predictions based on what is already in the perceptual system. There are artificial neural architectures that work this way too – so-called Helmholtz machines – but the connection is not as strong. And even if we build a machine whose perception makes predictions, that still does not make it a conscious machine; it just means it has a better computational model of how perception works in us. It is fair to say that a machine can perceive – it interprets sensory signals – but that in itself is not consciousness, just the generation of meaningful output from information that is in itself meaningless. Artificial perception certainly exists, but artificial consciousness is something else entirely. Here there is a lot of uncertainty.
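As a minimal sketch of the contrast drawn here, the following Python toy sets a single bottom-up pass against a loop that makes top-down predictions and corrects them with prediction errors, in the spirit of predictive coding; the weights, dimensions and update rule are illustrative assumptions, not any real architecture:

```python
# Purely illustrative contrast between the two modes described above.
import numpy as np

rng = np.random.default_rng(0)

# Feed-forward "perception": signal in, answer out, no expectations.
W = rng.normal(size=(3, 4))  # one fixed layer of weights (illustrative)

def feed_forward(signal):
    """One bottom-up pass: an analogue of the unconscious route above."""
    return int(np.argmax(W @ signal))  # read the input, emit a label, done

# Predictive loop: a top-down guess corrected by bottom-up errors.
G = rng.normal(size=(4, 3))  # generative weights: guess -> predicted signal

def predictive_loop(signal, steps=50, lr=0.05):
    """Refine an internal estimate until its top-down prediction
    matches the incoming signal (a toy predictive-coding update)."""
    estimate = np.zeros(3)              # the system's current best guess
    for _ in range(steps):
        prediction = G @ estimate       # top-down: what the guess implies
        error = signal - prediction     # bottom-up: mismatch with the world
        estimate += lr * (G.T @ error)  # shrink the prediction error
    return estimate

signal = rng.normal(size=4)
print("feed-forward label:", feed_forward(signal))
print("settled estimate:  ", predictive_loop(signal))
```

Note that the loop, like the feed-forward pass, only fits numbers to numbers; as the answer above says, predictive machinery alone does not amount to consciousness.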
A very frequent and very wrong assumption is that consciousness is a function of intelligence. Psychology, however, defines intelligence very differently from consciousness: intelligence is about functional behaviour, doing exactly the right thing at the right time, whereas consciousness refers to how we experience things. We humans think the two go together, but that is just human exceptionalism; it has nothing to do with the nature of things. So if we make artificial intelligences more and more intelligent, that does not mean they will also become conscious. But if we were deliberately aiming to build a conscious machine, we would have to settle on one of the many definitions of consciousness. For my part, I always emphasise the role of controlled hallucination: a brain architecture focused on prediction, with regulation and control coming from the body. Seen this way, consciousness, mind and life appear deeply interwoven. For a machine to be conscious, it would have to be alive – we probably won’t be able to implement consciousness on a silicon-based computer. Indeed, why should consciousness be independent of our organic nature? The brain does not radically separate the organic circuits that keep it alive from the organic circuits that produce predictions and calculations.