AZEEM AZHAR: Welcome to the Exponential View podcast. I'm your host, Azeem Azhar. Now every week I speak to the people who are shaping our future. So far in this series we've had experts in everything from fusion and quantum computing to cryptocurrency and the future of the car. Now this week's episode is a little different. Bear with us. It is just as mind-expanding. My guest today is Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex. He is a friend of mine and the author of a recent book, Being You: A New Science of Consciousness. In it, Anil posits that what we think of as reality is a series of controlled hallucinations. We construct our version of the world according to our preconceptions and best guesses. Both the science and the philosophy of consciousness are fascinating, and the field has fascinated me for more than 30 years. Recent developments hint at a range of real-world applications that could change the way we live. From clinical uses to applications in virtual reality and artificial intelligence, the science of consciousness touches on so many exciting areas, and no one is better placed to explain why than today's guest. Anil Seth, welcome to Exponential View.
ANIL SETH: Hi Azeem, it's really great to be here. I'm glad we're able to talk now.
AZEEM AZHAR: And I am glad that you have summoned up the energy to be here. And I think this is going to be the first time where I feel I'll be able to keep pace with you, only because you are slightly under the weather, but I'm feeling perfectly fine. So, thank you for giving me that slight handicap advantage.
ANIL SETH: Let's see how that goes.
AZEEM AZHAR: Well, we met several years ago when there was a social media meme that went crazy. It was about a dress: whether it was black and blue, or white and gold. And we were both asked to go on television to talk about it. I had to talk about why Kim Kardashian was tweeting it. And you got to talk about why we perceive things the way we do. Which is really the heart of your work and your professional career over the past decades. Now, consciousness has occupied thinkers for millennia. We think about Descartes, or Thomas Nagel's paper, What Is It Like to Be a Bat? And in the 90s, there was a lot of work, a lot of emphasis on new ideas, perhaps relying on new instrumentation, like MRIs and other kinds of experiments that we could use to understand what consciousness is. Take us through your view and how you got there.
ANIL SETH: My approach to this question actually touches on a couple of the things you mentioned. First thing is you've got to start with a definition. What do we mean by consciousness? There are all sorts of definitions out there. But I mean something very specific, very biological, very personal. It is any kind of subjective experience. And this is what the philosopher Tom Nagel said, of course. He said, for a conscious organism, there is something it is like to be that organism. It feels like something to be me. It feels like something to be you, right? But it doesn't necessarily feel like anything to be a table or a chair or an iPhone. There's what David Chalmers called the hard problem. You have this world made of physical stuff, made of material, atoms or quarks or whatever it might be. And somehow out of this world of physical interactions, the magic of consciousness emerges or arises. And it's called a hard problem because it seems almost impossible to solve, as if no explanation in terms of physical goings-on could ever explain why it feels like anything to be a physical system. But we are existence proofs that it does. So instead of addressing that hard problem head on, my approach, and it's not only my approach, it builds on a history of similar approaches, is to accept that consciousness exists, and instead of trying to explain how it arises, like magic, out of mere mechanism, to break it up into its different parts and explain the properties of those different parts. And in that way, the idea, or the hope, is that this hard problem of consciousness, instead of being solved outright, will be dissolved, in much the same way that we've come to understand life, not through identifying the spark of life, but through explaining its properties as part of this overall big concept of what it is to be a living system.
AZEEM AZHAR: The hard problem that Chalmers talks about, I guess, back in the mid-nineties, perhaps when you were an undergraduate, is a really, really tricky one. But even the easy problems of consciousness, how the mechanisms function, were pretty difficult. But your approach is neither tackling the easy problems nor the hard problem; you call it the real problem. Why do you say it's the real problem?
ANIL SETH: Well, partly to wind up David Chalmers. I mean, he's been a fantastic influence on the field, of course, but dividing the game up between the hard problem and the easy problems, I think, forces people to ignore consciousness entirely. If you focus on the easy problems, you're studying all the things that brains are capable of that you can think about without needing to think about consciousness. These are challenging problems, but they're not conceptually difficult in the same way that the hard problem is. And so if you divide it this way, you're either sweeping consciousness under the carpet, or you are facing this apparently unsolvable mystery. So I call it the real problem, simply to emphasize that yes, we have conscious experiences, and importantly, consciousness is not one single big, scary mystery. It can be addressed from different angles. We can think about what's happening when you lose consciousness under anesthesia or in sleep. We can think about perception. Why did some people see a gold and white dress, and why do other people see a blue and black dress? And then for me, the most interesting aspect is that we can think about the self. Now, the self is not a sort of essence of you that sits somewhere inside the skull, doing or perceiving. The self is a kind of perceptual experience, too, and it has many properties: the experience of being a body, the experience of free will. All these things are aspects of selfhood. And I think we'll make a lot more progress by addressing these aspects of consciousness somewhat separately. We can take the approach of trying to explain what makes them distinctive and get a lot further in understanding why our conscious experiences are the way they are. And as we do that, what's happening, certainly for me, is that this hard problem seems to lose its luster of mystery a bit. We're doing what science always does, which is we're able to explain, predict and control the properties of a system.
And there's no reason we can't do that when it comes to consciousness. That's the real problem of consciousness.
AZEEM AZHAR: One of the things that we could do, I mean, this comes back from our own experience, it comes back from the Nagel paper, is that we can recognize that there is this quality of being a thing, and of having that sense of self and this sense that we have of consciousness. But let's take a step back. If we know that consciousness is there, why do we have it?
ANIL SETH: I don't think there needs to be any single reason why consciousness is part of the universe. We don't know why it exists, either. But for all creatures that are conscious, I think there's a good hint about function when we think about what we can call the phenomenology of consciousness, what our experiences are actually like. And if you think about your conscious experience at any particular time, it brings together a vast amount of information about the world, in a way that's not reflecting the world as it is, but is reflecting the world in a way that's useful to guide your behavior. You experience all of these things in a unified scene, together with the experience of being a self, together with the experience of emotion, things feel good or bad, and with the opportunities that you have to act in that world. So there's this incredibly useful unified format for conscious experiences that provides a very efficient way for the organism to guide its decision making, its behavior, in ways that are best suited, basically, to keeping the organism alive over time. And actually that's how I ground my whole set of ideas about consciousness. They're fundamentally rooted in this basic biological imperative to stay alive.
AZEEM AZHAR: So that is evolution all the way down. And we have evolved this capability, because it helps us make sense of all of our experiences, all the stimulation that we get in the external world and put it into ourselves so that we can experience that in ways that allow us to survive and allow us to potentially thrive, take the right kind of actions, that sort of thing.
ANIL SETH: That's right. But also it's worth emphasizing that the self is not the recipient of all these experiences. The self is part of that experience. It's all part of the same thing. And this is one of the more difficult intuitions to wrap one's head around. And I think when thinking about consciousness, I always use this heuristic. I always remind myself that how things seem is not necessarily how they are. So it seems as though we are perceiving the world as it really is. That colors, like the color of the dress, exist objectively out there in the world. Now, stuff does exist out there in the world, but the way we experience it, especially for something like color, depends on the mind and the brain too. And it seems as though the self is the thing that's receiving all these perceptions, but that again is not how things are. The self is also a kind of perception. And the fact that it's all integrated into a unified conscious experience, where we experience the self in relation to the world, that I think points to the function of consciousness: that it's useful to guide the behavior of the organism.
AZEEM AZHAR: This key idea, you have this sentence: the purpose of perception is to guide action and behavior to promote the organism's prospect of survival. We perceive the world not as it is, but as it is useful for us. So this is the rationale for why consciousness exists. And you then connect it to the notion of it being a controlled hallucination, capturing the idea that in a way consciousness is directing what we, and I hesitate to use the word choice, but what we choose to access from the real physical world, this mechanism of controlled hallucination.
ANIL SETH: It's a bit of a tricky term to think about perceptual experience with, because there's a lot of baggage to things like hallucination. The reason I use controlled hallucination to describe perceptual experience is to emphasize that all of our experiences are generated from within. We don't just receive the world through the transparent windows of the senses. What we perceive is the brain making a best guess, an inference, about the causes of its sensory signals. And the sensory signals that come into the eyes and the ears and all the senses, they're not just read out by the self inside the brain. No. The sensory signals are there to calibrate these perceptual predictions, to update these perceptual predictions. Again, according to criteria of utility, not necessarily according to criteria of accuracy. So the control is just as important as the hallucination here. I'm not saying that our perceptions are all arbitrary or that the mind makes up reality. No. Experiences are always constructed, but they're tied in very, very important and, as we've just said, evolutionarily sculpted ways, so that the way we experience the world is in general useful for the organism. So what we might think of as hallucination colloquially, when, like, I see something, I just have a visual experience that nobody else does, and there's nothing in the world that relates to it, you can think of that as an uncontrolled perception, when this process of brain-based best guessing becomes untethered from causes in the world.
AZEEM AZHAR: There are a few words that you used in your last answer: you talked about inference and prediction and utility. And these are all words that we might use when we're talking about artificial intelligence. So thank you for putting those words out there, because when we talk about AI with you later in this discussion, we will come back to some of them. But let's go back to this notion of consciousness having this purpose. It helps organisms' prospects for survival; there is this notion of a kind of controlled hallucination, given all of these signals that are coming toward us. Now, for this to be a scientific theory, we have to be able to test it. We have to be able to run experiments on aspects of these assertions. So once you've made an assertion like that, what are the kinds of experiments that you can run now to demonstrate parts of this theory?
ANIL SETH: This is a really good question, because, of course, theories need to be testable in order to have any traction and to have a future. The idea of the brain as a prediction machine does have a long history. And you can take that idea and you can generate a lot of testable hypotheses about it. For instance, a whole range of work, some of it from my own lab, some from other labs, asks how our perceptual experience changes based on the expectations that our brain explicitly or implicitly has. If this controlled hallucination view is right, then perceptual content should be determined not by the sensory signals, but by the brain's top-down predictions. So we can test this in the lab in what we would call psychophysical experiments, where we carefully control the stimuli people are exposed to.
AZEEM AZHAR: You sort of prime someone in advance, right? With one cue, they might interpret their experience one way, and if you've primed them a different way, they'll interpret it in a different way.
ANIL SETH: Right. This is a very blunt, very simplistic way to get at this. You can, for instance, tell people that seeing a face is more likely than seeing a house. And then you give them a situation which experimentally we've set up so that there's an ambiguous image. And they're more likely to see what they expect than what they don't expect, or they'll see what they expect more accurately and more quickly than what they don't expect to see. So that's a very simple kind of prediction that you make. It by no means validates or proves this whole theory. We need to do brain imaging studies as well. And these are beginning to happen in our lab, and in other labs across the world too, where we find that indeed we can read out what people are perceiving by looking at these top-down flows of information in the brain. Certainly in vision.
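The priming effect described here can be sketched in simple Bayesian terms. In this toy illustration, which is invented for the example (the numbers and the two-hypothesis face/house setup are not taken from any actual experiment), the same weakly ambiguous sensory evidence tips toward "face" or "house" depending on the prior expectation a cue has installed:

```python
# Toy sketch of perception as "best guessing": a prior expectation,
# combined with ambiguous evidence, determines the percept.

def posterior(prior_face, like_face, like_house):
    """P(face | evidence) via Bayes' rule for two hypotheses."""
    p_face = prior_face * like_face
    p_house = (1 - prior_face) * like_house
    return p_face / (p_face + p_house)

# An ambiguous image: the sensory evidence barely favours either reading.
like_face, like_house = 0.52, 0.48

# Hypothetical priming strengths: cued to expect a face vs. a house.
print(posterior(0.8, like_face, like_house))  # > 0.5: perceived as a face
print(posterior(0.2, like_face, like_house))  # < 0.5: perceived as a house
```

The same evidence yields opposite percepts; only the top-down prior differs, which is the flavour of prediction the psychophysical experiments test.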
AZEEM AZHAR: Are these the experiments that we've seen recently, where you put someone into an MRI machine that's looking at their brain and you get them to think about a dog, and then you are able to look at the output and have a system that predicts that they were looking at a dog and recreate what they are thinking about? Is it that sort of thing that we're talking about here?
ANIL SETH: It's based on the same sort of idea. So that's this emerging technology of brain reading, right? Can you decode what someone is looking at or thinking simply by basically chucking a load of brain imaging data into a machine learning classification algorithm? And you can. And there's a lot of debate in the field about whether this is telling us anything about the brain, or whether it's just telling us that machine learning classification algorithms are quite good. But you can do this in a way that's more constrained by the anatomy of the brain. For instance, you show people an image and a quadrant of it might be missing, but it turns out a machine learning algorithm can still decode the content of the image from brain imaging data from the part of the visual cortex where there was no stimulation, and indeed from a layer of that visual cortex that receives top-down input. And so the fact that you can do that is telling you there's information in this top-down signaling that at least partly determines, or is relevant to, the content of what someone is experiencing. So experiments that build on this kind of approach are helping us disentangle not just which regions are implicated in perception. I mean, neuroimaging has this history and starting point of focusing on: is this region a hotspot? Does it light up? Does this region light up? And I think these days we are moving beyond that, to think about networks and mechanisms and processes, rather than just this area or that area.
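The decoding idea can be sketched with a minimal classifier. This is a hypothetical, synthetic-data illustration, not a real neuroimaging pipeline: each "trial" is a small noisy response pattern, and a nearest-centroid rule recovers which stimulus produced it, the same flavour of analysis as the machine-learning decoding described above.

```python
# Toy "brain reading": classify which stimulus produced a noisy
# response pattern, using nearest-centroid matching. All data are
# synthetic; the patterns stand in for voxel responses.
import random

random.seed(0)
PATTERNS = {"face": [1.0, 0.2, 0.8], "house": [0.1, 0.9, 0.3]}  # idealised patterns

def simulate_trial(label, noise=0.2):
    """A noisy measurement of the idealised response to a stimulus."""
    return [v + random.gauss(0, noise) for v in PATTERNS[label]]

def decode(trial):
    """Pick the stimulus whose idealised pattern is closest to the trial."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(trial, PATTERNS[label]))
    return min(PATTERNS, key=dist)

trials = [("face", simulate_trial("face")) for _ in range(50)] + \
         [("house", simulate_trial("house")) for _ in range(50)]
accuracy = sum(decode(t) == label for label, t in trials) / len(trials)
print(accuracy)  # well above chance (0.5) on this synthetic data
```

Above-chance decoding accuracy is exactly the signature these experiments look for; the real analyses differ in scale and in being constrained to particular cortical regions and layers.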
AZEEM AZHAR: There is a relationship between scientific theory and the tools that we have to run an experiment. And sometimes the two get somewhat out of sync. I think one of my favorite examples is when Einstein came up with the general theory of relativity in 1916, and he had these ideas of gravitational waves. It took us a century, until the LIGO device was available, to actually experimentally prove that theory. When you look at the progress in your field and the types of experiments that have happened, certainly over the last 20 years, do you think that you've got the science of consciousness on a path that is more in sync with the tools that we have to do the tests, or is this going to end up being a little bit like general relativity, where we have to sort of rely on it and then wait a hundred years before we can prove it?
ANIL SETH: Now, neuroscience, and especially the neuroscience of consciousness, faces three specific challenges. One is brain imaging. We don't yet have a single brain imaging technology that is able to record with high time resolution, with high spatial resolution, that is, from many, many different small parts of the brain at once, and with coverage. We can get any two out of three, maybe, or one out of three, but we can't visualize the activity of a brain in the detail that we would ideally have. That's one challenge. So developing new technologies that can manage that, I think, is not necessarily going to be critical, but would certainly be helpful. The second challenge is specific to consciousness. And that is that the data by which we test theories of consciousness are of a different kind. They're subjective data. It's not the sort of data that we can get from LIGO or the James Webb telescope and all agree about. It's subjective data. Now, some people say this means you can't do a science of consciousness at all, because you are dealing with data that is intrinsically private and subjective. I don't think that's quite true. I think it just adds a layer of difficulty. There's a whole tradition in philosophy called phenomenology, which is about how to describe, how to report, what's actually happening in the space of conscious experience. And there are methods now in psychology and in psychophysics where we can try to remove various biases in how people report what they experienced. So it adds complication, but it's not a deal breaker. The third thing, and this is something that's actually going on now, is that there's a movement to come up with experiments that disambiguate between competing theories of consciousness. Over the last 10 or 15 years in consciousness science, there have been a number of different theories proposed and refined: this idea of the prediction machine, for one.
But then there are other ideas too: that consciousness is to do with integrated information in the brain, or that it's to do with the broadcasting of information around the brain. And the challenge is to come up with experiments that distinguish between these theories, rather than just trying to be aligned with any particular one. And these experiments are now beginning to happen, which I think is very promising for the field.
AZEEM AZHAR: I then start to think about what the real-world applications of all of this might be and what it might be telling us in practice. I think of roughly three areas. I think about what's happening within medicine, within neurological and psychological conditions. I think about what's happening within artificial intelligence and the sort of work that's happening there. And also what's happening in the field of virtual reality, because I can see that virtual reality presents us with a whole set of sensory experiences that we may want to have sort of controlled hallucinations around. So I'd love to explore those three areas, perhaps starting with that first one, which is thinking about medical applications. I mean, what are we learning about psychiatric conditions or psychological conditions or neurological ones that is being illuminated by this kind of work?
ANIL SETH: If you take an example from neurology, people who suffer severe brain trauma often go into a coma, where they unambiguously lose consciousness, and then they may recover partially to something called the persistent vegetative state. And this is a state, when you diagnose it from the outside as a neurologist, in which the patients go through sleep-wake cycles, but there really doesn't seem to be anyone at home. There's no voluntary action. There's no response to commands or questions. It seems like no consciousness is there. And people are often treated that way. That becomes a diagnosis of sort of wakefulness without awareness. But what the science of consciousness is allowing clinicians to do now is to not just rely on external signs of consciousness, but to look inside the brain. And there's a great example of this. It's now about 10 years old, but it's a way of measuring the complexity of brain activity by basically disturbing the brain with a very strong, very brief electromagnetic pulse, and then listening to the echo, listening to how this pulse bounces around the circuits of the brain. And this measure turns out to be quite a good approximate measure of how conscious somebody is, and has been validated under anesthesia and in sleep and so on.
AZEEM AZHAR: So it's like a consciousness meter.
ANIL SETH: It's like the start of a consciousness meter. And I wouldn't want to make that analogy too tight, because I don't think consciousness does lie along a single dimension, but I think in these clinical cases it can be usefully approximated that way. And indeed it is being used in certain clinics now. If you take this measure, this consciousness meter measure, call it the perturbational complexity index; it was developed by Marcello Massimini and Giulio Tononi and colleagues. That gives quite a good indication of whether somebody is in fact conscious, even though they can't express it outwardly, or will recover at least some conscious awareness.
ANIL SETH: Because if you track the trajectory of patients over time, you'll find people that score high on this perturbational complexity index tend to be the ones that do better over time. And this is a direct clinical application of focusing on the brain basis of consciousness. And accompanying that, there are of course many applications in psychiatry too, because the primary symptom of most psychiatric conditions is a disturbance in experience. The world seems different. People have actual hallucinations. People experience their body in different ways. People have delusional beliefs. And so now there's this whole field of computational psychiatry, which is trying to understand the mechanisms that give rise to the symptoms that appear at the level of conscious experience. Because once we understand the mechanisms, we can start to think about really targeted interventions and bring psychiatry up into the 21st century, where it should be for medicine these days.
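At the core of the perturbational complexity index is a compressibility idea: the brain's response to the pulse is reduced to a binary pattern, and the less compressible that pattern is, the higher the index. The real method involves TMS-EEG recording, source modelling and careful normalization; the sketch below, a simplified stand-in rather than the clinical pipeline, just shows the underlying Lempel-Ziv complexity measure on toy binary strings:

```python
def lz_complexity(s):
    """Number of phrases in a simple Lempel-Ziv (1976) parsing of a
    binary string: low for regular signals, high for diverse ones."""
    i, count, n = 0, 0, len(s)
    while i < n:
        k = 1
        # Grow the current phrase while it already appears in the
        # preceding text (overlap with the phrase's own start allowed).
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        count += 1
        i += k
    return count

# A flat, repetitive "response" compresses well; a diverse one does not.
print(lz_complexity("0000000000000000"))  # 2
print(lz_complexity("0101010101010101"))  # 3
print(lz_complexity("0110100110010111"))  # higher: a more complex pattern
```

Intuitively, an anesthetized brain produces stereotyped, compressible echoes (a low score), while a conscious brain produces differentiated, widespread ones (a high score).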
AZEEM AZHAR: Is consciousness to be found in a single place in the brain, or is it emergent? I mean, do we know what the minimal physiological requirements for consciousness are?
ANIL SETH: Certainly consciousness is not generated in any single area. There's no seat of the soul, whether it's the pineal gland that Descartes identified or anywhere else. Consciousness emerges in some way from activity patterns that span multiple areas of the brain. But do we know the minimal neural correlate for conscious experience in a human brain? The answer is still no, but there are some who argue that a very basic form of consciousness can emerge just from the brain stem, that it doesn't require any cortex at all. That's sort of one extreme, and I don't think there's strong evidence for that. Then there's a very lively debate in the field at the moment about whether consciousness depends more on the front of the brain or on the back of the brain. Different theories might predict different involvement of the frontal parts of the brain. Some theories say that it's absolutely essential. Other theories say it's not. And so by designing experiments that can test the contribution of frontal parts of the brain, we can begin to distinguish between different theories too.
AZEEM AZHAR: Now I'm interested in the interaction between consciousness and machines as well. I go back to one of the ways in which you describe consciousness. You say, the purpose of perception is to guide action and behavior to promote an organism's prospect of survival. It reminds me of the definition of intelligence that is often used in the artificial intelligence field, within computer science, where people say an agent is said to be intelligent if it can perceive its environment and act rationally to achieve its goals. So there seems to be a parallel between these different disciplines, between the definition that you use for consciousness and the definition that some artificial intelligence researchers use for intelligence. They're not really the same thing at all, but I'm curious about those parallels.
ANIL SETH: Right. There are parallels, but I think there are also important distinctions, just in the specifics of the definitions that you have. There's a lot of work being done by the word rational in that definition of intelligence from the AI community. But consciousness should not be defined that way. Consciousness, back to our very beginning, is any kind of subjective experience whatsoever. Instead of just being sad when something bad happens, we can be disappointed. We can experience regret. We can even regret things we haven't even done: anticipatory regret. But to conflate consciousness and intelligence, I think, is to underestimate what consciousness really is about. And making this distinction, I think, has a lot of consequences. For one thing, it means that consciousness is not likely to just emerge as AI systems become smarter and smarter, which they are doing. And there's a common assumption that there's this threshold, and it might be the threshold that people talk about as being general AI, when an AI acquires the functional abilities characteristic of a human: oh, well, that's when consciousness happens, that's when the light comes on for that AI system. And I just don't see any particular reason, apart from our human tendency to see ourselves at the center of everything and on top of every pyramid, to think that's going to be true. I think we can have AI systems that do smart things that need not be conscious in order to do them.
AZEEM AZHAR: You call this idea pernicious anthropocentrism, the idea that we have to be at the center of all of this. But when we think about what happens with engineered machines, as opposed to biological organisms, why are we saying this particular set of qualities that we call consciousness is present within biological living organisms, but can't be present in engineered, built ones?
ANIL SETH: I think there's just this big open question about whether consciousness depends on being made out of a particular kind of stuff. We are made out of carbon and neurons and wetware. Computers are made out of silicon, mostly, at least most modern-day computers. Now, some people would say that it really doesn't matter what a system is made out of. It just matters what it does, how it transforms inputs into outputs. This may be true. It may be that consciousness is the sort of thing that if you simulate it, you instantiate it. Playing chess is like this. If you have a computer that plays chess, it actually plays chess. But then there are other things in the world for which functionalism is not true, and for which the substrate, what it's made out of, actually matters. Think about a really detailed simulation of the weather. Now, this can be as detailed as you like, but it never actually gets wet or windy inside that simulation. Rain is not substrate independent. So there's an open question here: is consciousness dependent on our biology? It's very hard to come up with a convincing reason why it must be, but it's equally hard to come up with a knock-down argument that it has to be independent of that substrate. And that's why I'm agnostic. But I do tend a little bit more towards the biological naturalism position. And that's primarily because when we think about a living creature and we talk about the substrate, like what is the wetware that the mindware is running on? Well, in a computer, you've generally got quite a sharp distinction you can make between the hardware and the software. But in a living organism, there's no sharp distinction between mindware and wetware. And if you can't draw a line between these, then it almost becomes an unanswerable question whether it's independent of the substrate or not. Added to that, the only examples of things that we know are conscious are biological systems. So that should be a kind of default starting point until proven otherwise.
AZEEM AZHAR: If we did get to a stage where, because you haven't ruled this out, a computer became conscious, how could we know it was, if it chose not to tell us?
ANIL SETH: This is a big problem. And bear in mind that being conscious just doesn't necessarily bring with it the ability to report. The system might not even be able to. Again, brain-damaged patients can't report things, even though they are conscious. I think the real danger in this area of artificial consciousness is that even though we don't know what it would take to build a conscious machine, we don't know what it wouldn't take. We don't know enough to rule it out. So it might in fact even happen by accident. And then indeed, how would we know? The only way to answer that question is to just discover more about the nature of consciousness in those examples that we know have it; that will allow us to make more informed judgements. I actually think a more short-term danger is that we will develop systems that give the strong appearance of being conscious, even if we have no good reason to believe that they actually are. I mean, we're almost already there, right? We have combinations of things like language generation algorithms, like GPT-3, or GPT-4 shortly, and deepfakes, which can animate virtual human expressions very, very convincingly. You couple these things together, and apart from the actual physical instantiation stuff, we're already in a kind of pseudo-Westworld environment where we're interacting with agents.
AZEEM AZHAR: And you've also identified this challenge through some of your experiments on the idea of priming: that you can take something ambiguous and you can prime me, and I might hear the description of a lovely meal, and someone else might hear the description of a political position. And so there's perhaps a vulnerability in the consciousness system towards things that also look and walk and talk as if they're conscious.
ANIL SETH: Absolutely. And I think this is something we need to keep very much front of mind as AI develops. We have a lot of cognitive vulnerabilities, and our cognitive vulnerabilities are already being exploited by social media algorithms and the like. AI systems that give the appearance of being conscious will be able to exploit these vulnerabilities even more. So there's a project I'm working on with some colleagues in Canada, Yoshua Bengio and Blake Richards and others, where what we're trying to do is figure out how implementing some of the functions associated with consciousness can actually enhance AI, overcome some of its bottlenecks, like its ability to generalize quickly to novel situations, to choose the data that it learns from, all these sorts of things, which we can do, and which are closely associated with consciousness in us. Without having the goal of actually building a conscious machine, we want to adopt some of the functional benefits, but also do so in a way that can help mitigate some of these dangers. For instance, an AI system that is actually able to recognize its own biases and correct for them might be a very useful change in where AI is currently going.
AZEEM AZHAR: So, there's another technology theme that people are getting really excited about in 2022, which is the idea of the metaverse. And I guess the metaverse is the 2020s version of virtual reality: creating environments that will be increasingly sensorially rich and immersive. To what extent would those appear to be real experiences to organisms that exhibit consciousness?
ANIL SETH: I have quite a problem with the overall objective of something like the metaverse. And it's a very basic problem, which is that I think in the society in which we live at the moment, we should be doing everything we can to reconnect ourselves with the world as it is, and with nature as it is, rather than trying to escape into some commercially driven virtual universe, however glittering it might be. But I also think there are important lessons here, or an important role that understanding consciousness has to play. When we experience a visual scene, we're engaging with it all the time. We don't just passively experience a scene and sit there like a brain in a jar. We're interacting with it all the time, and we need to understand how these interactions shape our experience. Now, these are the sorts of experiments for which VR is very useful. And of course the flip side of that is that when we understand the role of interactions in shaping experiences, we can design VR environments to be more engaging, to be less frustrating, to perhaps be more useful, to the extent that they can be. And of course there are many very valuable applications as well. I just want to tell you about one experiment that we've been doing in the lab for a while, which I think is super interesting in this domain. It really relates to what you said about whether VR will get to the point that it's indistinguishable from real experience, setting aside whether we actually want to get there or not. It's an interesting question, right? And so, one of our experiments, led by Keisuke Suzuki and Alberto Mariola, is developing something we call substitutional reality. This is the idea: instead of using computer-generated graphics, we use, in this case, real-world video of, let's say, my lab. And we replay that real-world video through a head-mounted display so that as people look around, they can see the part of the room that they would see anyway. And in fact, that's what we do.
We invite them in, they wear a headset, and it has a camera on the front. And so to begin with, they are indeed experiencing their environment through the camera, projected into the headset, but then we can flip the feed and run the pre-recorded video instead. And if you do it in the right way, people don't notice. So here's a situation, I think really the first situation, where people are fully convinced that what they're experiencing is real, in a way that you never get in standard VR or in a cinema, however good the movie is. People really have the conviction that what they're experiencing is real, and yet it isn't. And this is a platform we can use to figure out, okay, now what happens if we mess with this movie in various ways? What happens to the person's perception when their high-level prediction of what's going on is that this is indeed the real world? And that's a set of experiments that we're working on right now.
AZEEM AZHAR: But that speaks to the potency, or the potential potency, of that set of technologies: that it could really deliver real experiences, right? Experiences that, based on the idea of the controlled hallucination, the organism, the human, is conscious of and believes they are experiencing, and may make decisions based on those experiences.
ANIL SETH: Yeah, potentially. I mean, at the moment, this is obviously only possible in a very restricted circumstance. People have to come and sit in exactly the same place we recorded the footage from, and so on. But these are technological constraints. There's no in-principle objection to extending that kind of technology. And there's another benefit of doing this, and this gets back to the first set of applications. There is a range of psychiatric conditions which are generally characterized not by people having positive hallucinations, like seeing things that other people don't, or hearing things, but rather by reality seeming drained of its quality of realness, so that their perceptions start to feel unreal. Their self can start to feel as if it's not really there. These kinds of conditions, which we might call dissociative conditions, are very, very tricky to deal with because they don't present with these obvious positive symptoms. And so this general line of research asks: what does it take for our brains to endow our perceptions with the quality of being real? Understanding that, I think, will refract back onto some of these applications in psychiatry as well, where that quality of being real is attenuated or even abolished.
AZEEM AZHAR: I mean, I'm curious about where this might go. Science helps us get to settled understandings. It helped us get to a settled understanding of the relationship between the earth and the sun. It took Darwin to come along, and then many years of argument and the discovery of DNA, until we got a settled understanding of how new species come to be and how they develop. When do you think science will come to a settled understanding of what consciousness is?
ANIL SETH: Oh, I hate that question so much. But it's an important question to ask. One of the strange things that I often hear when people talk about consciousness science and philosophy is that we still know nothing about how the brain generates consciousness, or about how consciousness happens; that it's still this complete mystery. But if I think back to what people were saying and thinking 20, 30 years ago, when I was just getting going, there's been a massive increase in understanding, not only of the brain networks that are involved, but also of the kinds of questions that people ask. To throw in something very controversial right at the end: there's this question about free will. Do we have it? Do we not have it? Does it matter? Yes, it matters, because it influences all sorts of things, like jury processes in law, when we hold people responsible, and so on. But the questions are starting to change. It's no longer a question of whether or not we have free will, but more a question of why experiences of voluntary actions feel the way they do. How are they constructed, and what role do they play in guiding our behavior? They've become more sophisticated questions. And I think that is going to be part of the evolution of consciousness science, just as much as finding new answers. The questions will start to change, and, just as happened in the science of life, we'll go beyond looking for the spark of life, the élan vital, and we'll come up with a richer picture of what consciousness actually is and what the right sorts of questions are to be asking about it. So the process of settling, I think, is going to be quite slow. I don't think it's going to be a mystery that's solved in any one eureka moment. But the progress really is heartening. And I think the last thing I'd say about it is that it's very useful even to gain a partial understanding of consciousness. That's useful for developing applications in technology and society and medicine.
And fundamentally, it's useful for us. Because, besides all these applications, most of us, at some point in our lives, certainly when we were kids, have asked ourselves these questions. Who am I? What does it mean to be me? Why am I me and not you? What happens after I die? Understanding how experiences of the self and the world are constructed can help each of us understand our relationship with the rest of the world, with each other, and with nature much, much better, at a deeper level. And I think that's sufficient reward, and that reward is just going to keep on coming as we progress our understanding of the biology of consciousness.
AZEEM AZHAR: I know you cover many of these ideas in your new book, Being You, which is doing very well and is a great read. And of course, there's so much more to come. Thank you so much for your time today.
ANIL SETH: Thank you, Azeem. It's a real pleasure. Thanks for having me on. I've really enjoyed the conversation.
AZEEM AZHAR: Well, thanks for listening to this podcast. If you want to learn more about the cutting edge of AI, enjoy a previous discussion I had with Nathan Benaich and Ian Hogarth, authors of the annual State of AI Report. And if you want to know more about how the science of consciousness and philosophy of mind interacts with virtual reality, watch this space: we've got a great guest coming on to discuss what the metaverse might mean for us through the lens of consciousness. To become a premium subscriber of my weekly newsletter, go to http://www.exponentialview.co/listener. You'll find a 20% discount there. And stay in touch: follow me on Twitter. I'm @azeem, A-Z-E-E-M. This podcast was produced by Mischa Frankl-Duval, Fred Casella, and Marija Gavrilov. Bojan Sabioncello is the sound editor.