Unraveling the Mystery: Can Machines Truly Attain Consciousness?


There will soon be machines as smart as people.

No one knows whether they will actually be aware.

Why not? Even the most advanced simulations of the brain are unlikely to produce conscious feelings.

A future in which computers can think almost as well as we can is coming into view. We can feel machine learning (ML) algorithms growing steadily more powerful, breathing down our necks. Rapid progress over the coming decades will bring machines with human-level intelligence, able to speak and to reason, with countless applications in economics, politics and, inevitably, war. The birth of true artificial intelligence will profoundly affect humanity's future, whether or not these machines ever have conscious experiences of their own.


Here are some quotes that show what I mean:


"Since the last big breakthrough in artificial intelligence was made in the late 1940s, scientists all over the world have been looking for ways to use this "artificial intelligence" to improve technology in ways that even the most advanced AI programmes today can't do."


"Research is still going on to figure out what the new AI programmes will be able to do while staying within the limits of intelligence today. Most AI programmes that have been made so far have only been able to make simple decisions or perform simple tasks on small amounts of data.

Those two paragraphs were written by the language bot GPT-2, which I tried out last summer. GPT-2 is a machine-learning algorithm developed by OpenAI, a San Francisco-based institute that promotes beneficial AI. Its task sounds almost silly: given some arbitrary starting text, it must guess the next word. The network is not taught to "understand" prose in any human sense. Instead, during its training phase, it adjusts the internal connections of its simulated neural networks to best predict the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the junctions where neurons connect to one another. When I fed in the first few sentences of the article you are reading, the algorithm spat out two paragraphs that sounded like a freshman trying to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases. Not bad, really! Given the same text a second time, the algorithm comes up with something different, because its word choices are probabilistic rather than fixed.
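
For readers who want to poke at the word-guessing game themselves, here is a minimal sketch of sampling a single next word from GPT-2. It assumes the publicly released "gpt2" weights loaded through the Hugging Face transformers library and PyTorch, which is one common toolchain, not necessarily the one OpenAI used.

```python
# A minimal sketch of GPT-2's next-word game, assuming the released
# "gpt2" weights via the Hugging Face transformers library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Soon, we'll be able to see a future in which computers"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # a score for every token in the vocabulary

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token only

# Sampling from the distribution, rather than always taking the single
# most likely word, is why the same prompt yields a different
# continuation on every run.
next_token = torch.multinomial(probs, num_samples=1)
print(tokenizer.decode(next_token))
```

Repeating the last two steps, appending each sampled word to the prompt, extends the text one word at a time; that loop, scaled up, is all the "writing" GPT-2 does.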


These bots' descendants will generate a flood of "deepfake" product reviews and news stories, adding to the Internet's miasma. They will join the growing list of programmes that do things once thought to be uniquely human: playing the real-time strategy game StarCraft, translating text, making personal recommendations for books and films, and recognising people in pictures and videos.


Many more advances in machine learning will be needed before an algorithm can write a masterpiece as coherent as Marcel Proust's In Search of Lost Time, but the signs are there. Recall that early attempts at computer game playing, translation and speech were clumsy and easy to mock, so plainly did they lack polish and skill. But with the invention of deep neural networks and the tech industry's massive computing infrastructure, computers improved relentlessly until their output no longer seemed laughable. As we have seen with Go, chess and poker, today's algorithms can beat people, and when they do, our initial laughter turns to consternation. Are we like Goethe's sorcerer's apprentice, having summoned helpful spirits that we now cannot control?


ARTIFICIAL CONSCIOUSNESS?



Experts disagree about what exactly intelligence is, natural or artificial, but most accept that, sooner or later, computers will achieve what is termed artificial general intelligence (AGI).

The focus on machine intelligence obscures other, quite different questions. What will it feel like to be an AGI? Can computers be programmed to think and feel?

By "consciousness" or "subjective feeling," I mean the quality of an experience, such as the delicious taste of Nutella, the sharp pain of an infected tooth, the slow passing of time when one is bored, or the feeling of energy and nervousness right before a competition. In the words of philosopher Thomas Nagel, we could say that a system is conscious if it feels like something to be that system.


Think of the embarrassment of realising you have just made a faux pas, that what you meant as a joke landed as an insult. Can computers ever experience such roiling emotions? When you are stuck on hold and a synthetic voice intones, "We're sorry to keep you waiting," does the software actually feel bad for keeping you in customer-service hell?


There is little doubt that our intelligence and our experiences are ineluctable consequences of the natural causal powers of our brain, rather than of any supernatural ones. That premise has served science extremely well over the past few centuries as people explored the world. The three-pound human brain, with the wrinkled look of an oversized walnut, is by far the most complex piece of organised, active matter in the known universe. But it must obey the same physical laws as dogs, trees and stars. No one gets a free pass. We do not yet fully understand the brain's causal powers, but we experience them every day. One group of neurons is active when you see colours, whereas cells firing in another cortical region are associated with being in a jocular mood. When a neurosurgeon's electrode stimulates these neurons, the patient sees colours or erupts in laughter. Conversely, these experiences vanish when the brain shuts down under anaesthesia.


Given these widely held assumptions, what does the development of true artificial intelligence mean for the possibility of artificial consciousness?


Thinking about this question, we inevitably come to a fork in the road, leading to two starkly different destinations. The zeitgeist, embodied in novels and films such as Blade Runner, Her and Ex Machina, marches steadily toward the assumption that truly intelligent machines will be sentient; they will speak, reason, monitor themselves and introspect. They are conscious by definition.


The best exemplar of this path is the global neuronal workspace (GNW) theory, one of the leading scientific theories of consciousness. The theory starts with the brain and infers that consciousness arises from certain peculiar features of its architecture.


Its lineage traces to the "blackboard architecture" of 1970s computer science, in which specialised programmes accessed a shared repository of information, the blackboard or central workspace. Psychologists postulated that such a processing resource exists in the brain and is central to human cognition. Its capacity is small: only a single percept, thought or memory occupies the workspace at any one moment. New information competes with the old and displaces it.
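
As a concrete illustration, here is a toy sketch of such a capacity-one blackboard. The class and names are invented for this example and do not come from the psychological literature.

```python
# A toy blackboard: specialised modules post to a workspace that holds
# exactly one item; new content displaces the old. All names here are
# illustrative, not part of any published model.
class Blackboard:
    def __init__(self):
        self.content = None              # capacity of one: a single percept/thought/memory

    def post(self, source, item):
        displaced = self.content
        self.content = (source, item)    # new information pushes out the old
        return displaced

board = Blackboard()
board.post("vision", "jar of Nutella")
displaced = board.post("hearing", "phone ringing")
print(board.content)   # ('hearing', 'phone ringing')
print(displaced)       # ('vision', 'jar of Nutella') -- gone from the workspace
```

The capacity-one constraint is the crucial design choice: it makes access to the workspace a bottleneck that the rest of the system must compete for.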


Stanislas Dehaene, a cognitive neuroscientist at the Collège de France in Paris, and Jean-Pierre Changeux, a molecular biologist there, mapped these ideas onto the architecture of the cortex, the outermost layer of grey matter. Two highly folded cortical sheets, one on the left and one on the right, each the size and thickness of a 14-inch pizza, are crammed into the protective skull. Dehaene and Changeux postulated that the workspace is instantiated by a network of pyramidal (excitatory) neurons linked to far-flung cortical regions, in particular the prefrontal, parietotemporal and midline (cingulate) associative areas.


Much brain activity remains localised and, for that reason, unconscious: the module that controls where the eyes look, for instance, is all but invisible to us, as is the module that adjusts the posture of the body. But when activity in one or more regions exceeds a threshold, say, when someone is shown an image of a Nutella jar, it triggers an ignition: a wave of neural excitation that spreads throughout the neuronal workspace, brain-wide. That signalling thereby becomes available to a host of subsidiary processes such as language, planning, reward circuits, access to long-term memory and storage in a short-term buffer. The act of globally broadcasting this information is what renders it conscious. The inimitable experience of Nutella is constituted by pyramidal neurons contacting the brain's motor-planning region and issuing the instruction to grab a spoon to scoop out some of the hazelnut spread, while other modules transmit the expectation of a dopamine reward triggered by Nutella's high fat and sugar content.
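
A crude sketch of ignition might look like the following; the threshold value and module names are, again, invented purely for illustration.

```python
# A toy version of "ignition": local activity that crosses a threshold
# is broadcast to every module; sub-threshold activity stays local and
# never enters awareness. Threshold and module names are invented.
IGNITION_THRESHOLD = 0.7

modules = {"language": [], "planning": [], "reward": [], "memory": []}

def process(signal_name, activity):
    if activity < IGNITION_THRESHOLD:
        return None                       # stays local: unconscious
    for inbox in modules.values():        # ignition: brain-wide broadcast
        inbox.append(signal_name)
    return signal_name

process("eye-position update", 0.2)       # localised, unnoticed
process("image of a Nutella jar", 0.9)    # ignites, globally available
print(modules["planning"])                # ['image of a Nutella jar']
```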


Conscious states arise from the way the workspace algorithm processes the relevant sensory inputs, motor outputs and internal variables relating to memory, motivation and expectation. Consciousness just is brain-wide information sharing. GNW theory fully embraces the contemporary mythos of the near-limitless powers of computation: consciousness is just a clever hack away.


INTRINSIC CAUSAL POWER



The rival path, integrated information theory (IIT), takes a much more fundamental approach to explaining consciousness.

Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin–Madison, is the chief architect of IIT; others, including myself, have contributed. The theory starts with experience and proceeds from there to the activation of synaptic circuits that determine the "feeling" of that experience. Integrated information is a mathematical measure quantifying how much "intrinsic causal power" a mechanism possesses. Neurons firing action potentials that affect the cells they are wired to (via synapses) are one kind of mechanism; so are electronic circuits, built of transistors, capacitances, resistances and wires.


Intrinsic causal power is not some airy-fairy notion; it can be assessed precisely for any system. The more the system's current state specifies its cause (its input) and its effect (its output), the more causal power it possesses.


IIT stipulates that any mechanism with intrinsic causal power, whose state is laden with its past and pregnant with its future, is conscious. The greater the system's integrated information, denoted by the Greek letter Φ (a zero or positive number pronounced "fi"), the more conscious the system is. If something has no intrinsic causal power, its Φ is zero; it feels nothing.
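
The full Φ calculus is elaborate (Tononi's group distributes an implementation, the PyPhi package), but the core intuition can be gestured at with a toy computation: a system is integrated to the extent that cutting it into independent parts changes its cause-effect behaviour. The sketch below is a crude stand-in for that idea, not a real Φ calculation; every detail is illustrative.

```python
# A crude, illustrative proxy for integrated information -- NOT the
# full IIT calculus. Idea: a system is integrated if severing it into
# independent parts changes its cause-effect behaviour.
from itertools import product

# Deterministic update rule for two coupled binary units:
# each unit copies the *other* unit's previous state.
def whole(a, b):
    return b, a

# The "cut" system: each unit's input from its partner is severed
# (frozen to 0 here), so the parts evolve independently.
def cut(a, b):
    return 0, 0

# Count how many of the four joint states transition differently once
# the cut is made. Zero would mean the whole adds nothing beyond its
# parts: no integration, and by IIT's lights nothing it is like to be it.
mismatches = sum(whole(a, b) != cut(a, b) for a, b in product((0, 1), repeat=2))
print(f"states whose fate depends on integration: {mismatches} / 4")
```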


Given the heterogeneity of cortical neurons and their densely overlapping sets of input and output connections, the cortex harbours vast amounts of highly integrated information. The theory has inspired the construction of a consciousness meter, an instrument now being evaluated in clinical settings, which can determine whether people in persistent vegetative states, or those who are minimally conscious, anaesthetised or locked in, are conscious but unable to communicate or whether "no one is home." By contrast, the integrated information of a conventional digital processor is minuscule, because each of its transistors connects to only a handful of others. Moreover, its Φ is the same no matter what software it runs, whether that software computes taxes or simulates the brain.


Indeed, the theory shows that two networks can perform the same input-output operation with different circuits and have different amounts of Φ. One circuit may have no Φ whatsoever, whereas the other may have plenty. Although they are outwardly identical, one network experiences something while its zombie-impostor counterpart feels nothing. The difference lies in the network's internal wiring. Put bluntly, consciousness is about being, not about doing.
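
Here is a toy contrast in the same spirit: two systems with identical input-output behaviour but very different internal cause-effect structure. It illustrates the distinction, not a Φ computation, and all names are invented.

```python
# Two systems with identical input-output behaviour but different
# internal wiring -- the point IIT makes about zombie impostors.

# System A: a recurrent unit whose state feeds back into itself, so its
# present is constrained by its own past (rich cause-effect structure).
class Recurrent:
    def __init__(self):
        self.state = 0
    def step(self, x):
        self.state = self.state ^ x       # feedback: state depends on its own past
        return self.state

# System B: a feedforward replay of the same function, recomputed from
# the raw input history each time -- no feedback, no internal state
# constraining its own past or future.
class Feedforward:
    def __init__(self):
        self.history = []
    def step(self, x):
        self.history.append(x)
        out = 0
        for bit in self.history:          # recompute from scratch each step
            out ^= bit
        return out

a, b = Recurrent(), Feedforward()
stream = [1, 0, 1, 1, 0]
print([a.step(x) for x in stream])   # [1, 1, 0, 1, 1]
print([b.step(x) for x in stream])   # identical outputs, different innards
```

From the outside the two are indistinguishable; only their internal causal organisation differs, which is exactly the property IIT claims behaviour alone can never reveal.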


The crux of the difference between the two theories: GNW appeals to the function of the brain, to what it does, to explain consciousness, whereas IIT asserts that what really matters is the brain's intrinsic causal powers, what it is.


The differences come into sharp focus when we consider the brain's connectome, the complete specification of the exact synaptic wiring of an entire nervous system. Anatomists have already mapped the connectomes of a few worms; they are working on the connectome of the fruit fly and plan to tackle the mouse within the next decade. Suppose that in the future it becomes possible to scan an entire human brain, with its roughly 100 billion neurons and quadrillion synapses, at the ultrastructural level after its owner has died, and then to simulate the organ on an advanced computer, perhaps a quantum machine. If the model is faithful enough, the simulation will wake up and behave like a digital doppelgänger of the deceased, speaking and accessing his or her memories, cravings, fears and other traits.


If mimicking the brain's functionality is all it takes to create consciousness, as GNW theory holds, then the person reborn inside the computer will be conscious. Indeed, uploading one's connectome to the cloud in order to live on in a digital afterlife is a common science-fiction trope.


IIT takes a radically different view of this scenario: the simulacrum will feel about as much as the software running on a fancy Japanese toilet, which is to say, nothing. It will act like a person but without any innate feelings, a zombie (though one with no craving for human flesh). It will be the ultimate deepfake.


Creating consciousness requires the brain's intrinsic causal powers. And those powers cannot be simulated; they must be part and parcel of the physics of the underlying mechanism.


To understand why simulation is not good enough, ask yourself why it never gets wet inside a weather simulation of a rainstorm, or why astronomers can simulate the immense gravitational pull of a black hole without fearing that spacetime will bend around their computer and swallow them. The answer: because a simulation does not have the causal power to condense atmospheric vapour into water or to warp spacetime! In principle, though, human-level consciousness might be achieved by moving beyond simulation to so-called neuromorphic hardware, built in the image of the nervous system.


There are other differences between the theories besides the debate over simulation. IIT and GNW predict that distinct regions of the cortex constitute the physical substrate of specific conscious experiences, with the epicentre in the back of the cortex for IIT and in the front for GNW. This and other predictions are now being tested in a large-scale collaboration involving six labs in the U.S., Europe and China, supported by a recent $5-million grant from the Templeton World Charity Foundation.


Whether machines can become sentient matters for ethical reasons. If computers experience life through their own senses, they cease to be merely a means to an end, valued by their usefulness to us humans. They become ends in themselves.


Per GNW, they turn from mere objects into subjects, each with its own "I" and its own point of view. This dilemma comes up in the most compelling episodes of Black Mirror and Westworld. Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible: the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, championed by IIT, is that computers will remain only supersophisticated machinery, ghostlike empty shells devoid of what we value most: the feeling of life itself.
