
Will human consciousness ever be transferable?

The question is not whether consciousness can be created or duplicated. Can it be transferred to a new body or a machine (robot, cyborg, computer, avatar)?

Can you transfer your own consciousness and memory and leave your biological body without creating two selves?
Bradley Voytek
Bradley Voytek, Ph.D. neuroscience, UCSD Asst. Professor Cognitive Science
The questioner is referring to the Blue Brain Project; Henry Markram gave a TED talk about it.

It is very important not to conflate simulating a human brain's worth of interconnected neurons in a computer with transferring a biological brain to a digital format.

We are still learning fundamental details about neuroanatomy and neuronal communication. For example, about 15 years ago it was discovered that nitric oxide (NO) works as a neurotransmitter. NO diffuses across, through, and past local synapses and can have long-distance effects. This is very hard to model.

Even more recently, it was found that up to 5% of glial cells (traditionally thought of as non-information-carrying structural cells) actually communicate via action potentials. There are about 10-20 times more glial cells in the brain than there are neurons. If this 5% figure holds, the arithmetic says there might be twice as many information-conveying cells in the brain as was previously thought.

That's a huge error!
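The arithmetic behind that doubling can be sketched in a few lines. This is a back-of-envelope check, assuming the commonly cited estimate of roughly 86 billion neurons and taking the upper end (20:1) of the glia ratio quoted above; the figures are illustrative, not from the answer itself:

```python
# Back-of-envelope check of the "twice as many signaling cells" claim.
# Figures are rough, commonly cited estimates.
neurons = 86e9               # approximate neuron count in a human brain
glia_ratio = 20              # upper end of the 10-20x range quoted above
glia = neurons * glia_ratio
spiking_glia = 0.05 * glia   # 5% of glia communicating via action potentials

total_signaling = neurons + spiking_glia
print(total_signaling / neurons)  # roughly 2: the count of signaling cells doubles
```

At the lower end of the ratio (10:1), the same calculation gives a factor of 1.5, still a 50% undercount.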

So sure, we can simulate a simple neuron connected to thousands or millions of other neurons via synapses in a pseudo-biologically plausible manner, but I believe it's naive to think that is the brain.

I think the neurosciences right now are where physics was in the early 1900s. A bunch of people thought Newtonian mechanics could explain everything. Turns out, the physical universe is much more complicated than that. Now we have quantum mechanics, relativity, the multiverse, dark energy, etc.

Neuroscience knows a lot about the biology and structure of the neuron, but not about how the brain works.

EDIT
Now that I've thought about this more, I'm even more confused by the idea that simulating a bunch of neuron-like elements digitally is anything like the brain. That's like arguing that soldering together a couple million transistors in a "computer-like" fashion would give you a working MacBook Pro.
Paul King
Paul King, Computational Neuroscientist, fmr Redwood Center for Theoretical Neuroscience

This is not theoretically impossible, so it seems likely to happen eventually; however, the way it happens may be different from what people expect.

The idea of consciousness being transferred into computers is typically portrayed in one of two ways:

  1. In sci-fi movies, conscious identity is "extracted" (somehow) from the brain and "injected" into a computer that has a capacity for conscious emulation.
  2. In philosophical thought experiments and in the singularity community it is imagined that eventually brain scanning will be so detailed that the entire brain down to every neuron, synapse, and receptor can be scanned and simulated brute-force in a giant computer.

It seems unlikely that either of these will ever happen. The sci-fi approach relies too much on magic, and the simulation approach requires enormous sophistication, scanning access, and computational capacities. If it's possible, it seems at least 100 years away.

But there is another way.

Current models of consciousness suggest that consciousness, and neural processing in the brain generally, is a decentralized adaptive process. If true, consciousness could be transferred to a computer incrementally via adaptation.

As an analogy, consider the marketing/PR team working at a Fortune 1000 corporation. The marketing team puts out a coherent message -- the voice of the company -- but this is created by a team of people who collaborate and synthesize their thinking into a consistent framework. New people join the marketing department all the time; they learn the ropes and take over from people who leave. Every few years, everyone working there is new, but the voice of the company (its identity, messages, and memory) remains intact.

Now suppose robots were rotated into the marketing department. Over time, the marketing identity of the company would be fully run by robots.

In the case of the brain, with neural prosthesis and brain-machine interfaces, it is possible that a clever computer could become quite integrated into the brain in support of memory, enhanced perception, and even enhanced thought. Such a machine might adapt and support neural activity to such an extent that it becomes better at it than the brain. Eventually it doesn't really need its biological half, and could operate just fine without it, carrying the identity of the former biological brain owner forward.
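A toy sketch can make the incremental-transfer idea concrete. This is purely illustrative (the "contributions" and the learning step are hypothetical stand-ins, not a model of any real prosthesis): each newcomer learns the outgoing member's role before it leaves, so the ensemble's collective output never changes even though every member is eventually replaced:

```python
import random

def voice(contributions):
    """The ensemble's collective output: the 'voice of the company'."""
    return sum(contributions.values())

# Ten biological units, each with an idiosyncratic contribution.
random.seed(0)
contributions = {f"bio-{i}": random.uniform(0.5, 1.5) for i in range(10)}
original_voice = voice(contributions)

# Replace members one at a time. Each machine replacement observes and
# copies the departing unit's contribution before taking over.
for i in range(10):
    learned = contributions.pop(f"bio-{i}")
    contributions[f"machine-{i}"] = learned
    # The collective output is preserved at every intermediate step.
    assert abs(voice(contributions) - original_voice) < 1e-9

print(sorted(contributions))  # every member is now a machine; the voice survived
```

The point of the sketch is that identity here is a property of the ensemble's behavior, not of any individual member, which is exactly the premise of the marketing-department analogy.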

Stephen Larson
Stephen Larson, Ph.D in Computational Neuroscience from UC San Diego
It is technically possible. Here's why:

Your consciousness is the result of biological activity in your brain. The cells in your brain do a lot of complex things, and one of those things is work together to produce your complete experience of the world.

Consciousness is like the "software" that is running on the "hardware" of your brain. Your brain's hardware is a physical system that is built on the laws of physics and chemistry and molecular biology. Biologists and neurobiologists publish thousands of papers in academic journals each year where they tease apart additional pieces of the puzzle. So far, they find everything in there follows physical laws. So far, they don't have to resort to explanations of quantum effects or spooky non-physical behaviors to draw conclusions of how things work. It's just physics and chemistry and molecular biology.

On the other hand, we have computers. Computers are really good at allowing us to reproduce (with approximations) the rules behind many different aspects of reality. Boeing designed, pre-assembled and tested the 777 completely inside a computer system prior to building the first mock-up. If you've seen any movies with serious computer graphics recently, you know how powerful computers can be at reproducing visual reality. Since there is a lot of money to be made making airplanes and movies, these are the most obvious uses of computational simulation technology today. But if the economics shifted towards making brain simulations lucrative, we could see very similar techniques used to simulate the brain.

Bradley Voytek is correct: it's more complicated than just simulating neurons with our current understanding of neurons. Modern neuron simulations still leave too much out, and "simulating a bunch of neuron-like elements digitally" probably isn't going to cut it. However, there's no reason to believe that more sophisticated simulations, which incorporate a lot more biological reality, couldn't do what brains do. There are already simulators that replicate atomic behavior, chemical behavior, molecular behavior, and protein behavior. By knitting these together in the way that nature has built our brains out of cells, and built cells out of proteins, we could build a brain simulator.
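To see how much a typical "neuron-like element" leaves out, here is a minimal leaky integrate-and-fire neuron, a standard textbook simplification (the parameters are illustrative, and this is not the model any particular project uses). It captures threshold-and-spike behavior and nothing else: no ion channels, no neuromodulators, no NO diffusion, no glia.

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, duration=200.0):
    """Return spike times (ms) for a constant input current (arbitrary units)."""
    v = v_rest
    spikes = []
    for step in range(int(duration / dt)):
        # Membrane potential leaks toward rest and is pushed up by the input.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset                # instantaneous reset after the spike
    return spikes

# Stronger drive, more spikes; sub-threshold drive, no spikes at all.
print(len(simulate_lif(10.0)), len(simulate_lif(20.0)), len(simulate_lif(30.0)))
```

Everything the surrounding answers worry about (chemical washes, diffuse signaling, glial activity) lives outside these few lines, which is the gap the "knitting together" of lower-level simulators would have to close.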

If by "transferable" you mean, "could I upload my brain to a computer and have it do what my brain does", that also seems technically possible. Today we have a technology called Functional Magnetic Resonance Imaging (fMRI) that lets us see activity in brains. It may be technically possible to build better fMRI machines that allow us to see the activity of individual cells, or maybe even the activity of individual proteins within cells with the help of magnetic dyes that haven't been invented yet. Today we have dyes to mark individual proteins to see them under the microscope in living cells – a Nobel Prize was handed out for this a couple of years ago. So it seems technically possible to do this with something that can be seen using fMRI too. If we can scan your brain and see what all the proteins in it are doing, and we can combine that data to "tune the knobs" of a human brain simulator, it is conceivable that we could put your consciousness into a computer.

I'm not saying that doing this will be easy or happen any time soon. It would be very expensive (billions, maybe trillions of dollars), require learning a lot more about the biology of the brain, require a lot of really fast processors, and take a long time. It may take a long time until enough money has collectively been invested in biological / computational science crossovers. But there's nothing I see that suggests it isn't technically possible.

I have some ideas for how to get started though if anyone is interested ... :)
Brad Templeton
Brad Templeton, EFF Director, Robocar blogger
Why not?

This is not a flip answer, but rather the path to the answer. We don't yet understand what consciousness is or how the brain and mind work, though we know a number of things. But what's crucial is that there isn't much evidence that we could not do it, even if we obviously can't demonstrate that we can do it.

Many people are "materialists," meaning they follow a philosophy that there is nothing supernatural about mind and consciousness: they are products of matter and energy like everything else in the Universe. Materialists further believe that mind and consciousness are evolved properties of our biological systems.

There can be much debate over whether mind is strictly information processing and computation, whether it involves quantum effects (computational or otherwise), or what other special properties it might have. But if you accept the mind as part of the physical, material universe, it's a very bold claim to suggest that wet bags of protein are the only way to create something that does what a brain does, namely host mind and consciousness. I think it's fair to say that there is little evidence for that proposition and many strong arguments against it.

So, can you create a conscious thinking mind? Well, any fertile man and woman can do that with things they have around the house. We don't quite know how we do it, but that we can do it is beyond question, unless you believe that it only happens because a supernatural being sticks in a "soul" while we aren't looking.

If we can do it in this one way, the one that evolved, how can it be argued that we will never figure out another way to do it? Perhaps another way that is mysterious to us (like copying patterns from an existing brain), or perhaps a way that we understand better. Nobody can tell you the method, but those who claim it's impossible that we could work out a way to do it have a pretty strong burden of demonstration to meet.
Mitch Ratcliffe
Mitch Ratcliffe, Cyborg spine, human mind.
We may, with massively increased computing power and the introduction of a very broad range of sensory I/O capabilities, someday be able to model the brain of an embodied mind, but we will accomplish only a model of a "consciousness," not a fully realized conscious being. Transferring a human consciousness into that "body" will produce something not-human and, I suspect, more like a recording or parody of a conscious being than a "person."
 
Of course, at that point, we will have to argue whether the model is a superset or subset of conscious activity. My reading of Chalmers, Damasio, Searle and many others leads me to believe that a computational model of consciousness will never be comparable to human consciousness, except on a capacity basis, such as the capacity to play chess. Many capacities do not add up to subjective being.

ADDING to the response: I responded to Justin Golden below, but the comment doesn't seem to be visible, so here it is:

Speaking as a bionic person, I think you are making a very poor argument. I have artificial discs in my cervical spine that were impossible to build even a decade ago -- and that won't be on the market for another five years. Physical systems are immensely difficult to manufacture, and there is no universally recognized definition of what a consciousness is to rival physical laws on which artificial biological systems must be built.

Ray Kurzweil may be famous, but that doesn't make him right, nor do exponential advances in technology assure success in this venture, because the complexity of consciousness, separate from the effort of building a mechanical environment that could host it, may be a barrier to transfer. Because of the multi-dimensional nature of consciousness, as a sum and a product of many inputs and storage/data-transfer modalities, it may be the case that no computational system could be built to store a particular mind's consciousness without extensive individual modification of the machine's architecture.

In other words, consciousness may be so complex that it can't run on a computational system that isn't a unique replica of the mind in which it emerged. That would make the economics of consciousness transference almost unimaginably expensive. Since there would be no test environment, as consciousness could not be stored in anything other than a compatible consciousness machine, the potential for errors that would corrupt a transferred consciousness is astronomically high, as well, since there would be no backup. Even if it could be built, the failure of the machine to meet the needs of a specific consciousness is likely to prevent this from ever being feasible.

It is also not clear that a static system could accommodate consciousness. Since, as Damasio argues, much of the activity of the brain and mind is a mapping of the body, which also contributes to the metaphorical physicality of language, as George Lakoff documented in several excellent studies, a great deal of computational capacity may need to be dedicated to fooling the consciousness into thinking it has a body in order for its humanity to remain. One might constantly need to upgrade or re-engineer a consciousness machine, which would contribute to its prohibitive expense.

One may simply find that a human consciousness, finding itself implanted in a non-human body, with a different mapping entirely, would become vegetative, deeply psychotic or just simply break from the stress of being disembodied. Even if you disagree with Thomas Nagel's argument about the specific nature of consciousness in different bodies in his "What is it like to be a bat?" can you seriously argue that a bat consciousness would be functional in a squirrel's body, or that, implanted in a human body, the bat's consciousness could be enlarged to make use of the new capacities it found in its new home?

For those of you arguing that consciousness is simply a mechanical-physical process, a modern version of the most simplified monism, consider that even the simplest software does not transfer reliably from one computational "body" to another: differences in OS at the grossest level, hardware configuration, and conflicts with processes not present in the previous system (capturing resources and never releasing them) all introduce fragility.

Finally, given that the chances of dying from this transference would be so high, it is far more likely that we will focus on prolonging human life than that human consciousness transference will become a viable path, except perhaps for someone who was already very ill.

And, just for the heck of it, adding my response to Mark Harrison's more compelling argument:

I think you rely on a fallacy in arguing from the continuity of consciousness in a body slowly being replaced by mechanical prosthetics. Let's call it the Steve Austin Fallacy or, better, the Lee Majors Fallacy, since everyone knows his bionic character was full of human compassion despite his machine parts.

You assume there is no impact from the gradual replacement of the organs of the brain, relying on the familiar argument that a robotic body is body-like. Gradual replacement of parts of the brain, whenever it might take place, is not the same as transferring a consciousness to a machine environment. Optical processing may be replicable by a machine, but it may be like having an artificial leg, which may perform beautifully but provides its user few of the physical experiences delivered by a leg of flesh.

Granted, a partially brained person, a hydranencephalic child who has no higher brain components than the brain stem, can demonstrate mindful behaviors, such as reflexes and laughter. But that does not mean that if you inserted a computational replacement of the cerebral cortex, thalamus, and other structures, the child would suddenly have a whole consciousness. If you transferred an existing consciousness into that part-hydranencephalic, part-computer brain, would that new consciousness be complete? Might it not, given the existing patterns of the hydranencephalic mind, find its lower brain functions to be incompatible with a "normal" brain within a human body? To the same degree that you dismiss Roger Penrose's quantum-consciousness hypothesis for lack of evidence (some evidence for which may at least have been demonstrated logically by Ludvik Bass in The Mind of Wigner's Friend, although I believe his argument is flawed as well), you must dismiss for lack of evidence your argument that replacing brain components one piece at a time would not destroy the consciousness, because it hasn't been done.

Scientists may be able to interface silicon and neuron, they may be able to manipulate a single neuron with a laser, but there's no evidence that these neurons produce the same experience that a neuron in a complex brain structure does. It may fire, but that firing, absent the wash of chemicals the brain relies on to help modulate signaling, may not be like a brain's neuronal firing.

Given the complete lack of rules of consciousness comparable to the laws of physics, which are themselves incomplete, it isn't possible to dismiss skeptics who question whether consciousness is transferable without concurrently dismissing the argument that it is inevitable.

And while I agree with you that there is the chance it could happen someday, the practical challenges of transferring consciousness, even if not physically prohibitive, may be economically impossible.