“Beam me up, Meta!” — uploading brains in the Metaverse.

Liviu Coconu
11 min read · Apr 12, 2022


“Upload” brain scanner

Mind uploading has always been a recurring theme in pop culture and SciFi movies; Black Mirror comes to mind. In a recent (and funny) instance, Greg Daniels’ “Upload” series, future tech scans your brain just before you die, and — tadaam! — here’s a digital version of yourself, uploaded into a simulated reality (Metaverse, as we call it these days). Scientists in that world are also working on cloning your body and downloading the brain scan into it, but that doesn’t quite work yet. Spoiler alert: your head explodes. Literally.

Obviously, the tech they use in the movies is “indistinguishable from magic”, as Arthur C. Clarke aptly put it. Scanning a body into a realistic 3D model is doable (see Epic’s digital humans tool), but can a brain scan/clone really ever work? It’s this rather tough question that I’ll examine here.

First of all, the core question here is: what, exactly, are we supposed to scan as “your brain” (*)? What makes you you? The most honest answer is that we don’t know precisely, and this has been the subject of millennia of philosophy. But we can examine a range of options and educated guesses.

0. To start at all, we’d have to accept the physicalist premise: being able to scan a person’s mind assumes that your mind (the entirety of what you know, feel and remember, plus the subjective sense of self) is entirely determined by physical phenomena happening in the brain: whatever makes you you is something that happens within your skull (*) obeying the laws of physics in our Universe. This non-trivial assumption is in obvious conflict with metaphysical concepts such as the “soul”, postulated as a not-of-this-world entity (although I’ve met religious people willing to admit a notion of a soul that is still physical, but somehow hidden to us mortals, yadda, yadda).

The reason why most people accept this premise is relatively simple: we don’t know of anything else in our Universe that exists independently of any physical substrate. The very notion of “existence” makes little sense for something that cannot be probed by any physical means. Even information, which is not in itself physical, can only exist and be processed if it is encoded in a physical substrate: you can only read this article encoded as ink on paper, computer memory, pixels on a screen, the neural network of your brain, etc.

That being said, there are philosophers who reject the physicalist premise. They shall not ever be scanned or cloned. Kidding aside, I might come back to arguments against physicalism in another article. For now, let me conclude with the obvious observation that a non-physically encoded mind is something we can never hope to scan with a physical device.

So, then, what can/should we scan?

1. The sledgehammer approach: scan everything at subatomic level.

Since any information is encoded physically, one can scan it by scanning its physical support in full detail. This is akin to scanning a book by encoding the position of each and every atom in each and every fiber of the paper it is made of, and of each ink molecule on it. In the book’s case, this is obviously overkill, because we know the relevant information is encoded as letters, so it’s enough to scan the letters or a picture thereof. But in the case of a brain, it is no longer a safe or trivial assumption that the physical substrate is irrelevant. Why? Because we don’t know exactly how the brain encodes information (see point 4). So the safest way would be to just scan everything. But this raises technical difficulties that seem to put it forever out of reach.

Namely, at the subatomic level, matter is a quantum system, and in Quantum Mechanics measuring (scanning) stuff inevitably changes it. If you try to clone a quantum system while preserving all its encoded states, the QM no-cloning theorem says that you’ll end up destroying the original. And what’s the point in getting a clone anyway, if what you want is to decode the information so that it can be transferred into a simulated reality in a computer?
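To make the “measuring changes the system” point concrete, here is a toy single-qubit sketch in NumPy. The 30/70 superposition is purely illustrative, not a model of anything in a brain:

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit in superposition: amplitudes for |0> and |1>.
state = np.array([np.sqrt(0.3), np.sqrt(0.7)])

# "Scanning" = measuring: outcome probabilities come from the amplitudes...
p_before = np.abs(state) ** 2            # [0.3, 0.7]

# ...but measurement collapses the state onto the observed outcome.
outcome = rng.choice([0, 1], p=p_before)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0

# Any further measurement now sees only the collapsed state;
# the original 30/70 superposition is gone for good.
p_after = np.abs(collapsed) ** 2
print(p_before, p_after)
```

One measurement yields one classical bit and destroys the amplitudes that encoded the rest, which is exactly why a quantum-level “scan” cannot read the state out.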

Scanning at the lowest level thus seems utterly useless, even if we could somehow make it technically possible (we can’t, due to the sheer number of particles in a human brain, which would likely require the scan to take more than the age of the Universe even with the best scanning equipment imaginable). Not a very promising approach.

2. The naive approach: Neural Network scan

So what if we go up in the hierarchy and “just” (we’re talking about 80+ billion neurons…) scan all neurons with their connections? It’s certainly many orders of magnitude “easier” than scanning at the subatomic level (but that only says something about how hard the latter is). But let’s assume, for the sake of the argument, that future scanners will be able to scan your entire neural structure. Would that be enough or useful?

Most likely not. The “content” of your brain (whatever it encodes) is not only about neurons, but also their state (electrochemical signals propagating through the network). Just scanning the neurons will give you the equivalent of a “dead” computer: the circuits are there, but the software that was running is long gone. Not very useful.

3. The naive “instant scan” approach: Neural Network + state scan

So what about scanning the neural structure plus its state at a given time? That should give us what we want, no?

Well, sort of. One could, in theory, take such a scan and simulate the system on a different substrate (silicon). With current technology, that simulation is far beyond the capabilities of today’s supercomputers. It’s not “just” the 80 billion neurons. It’s also the whole electrochemical state which, in fact, pushes us, in terms of information complexity, towards the orders of magnitude of the subatomic scan.
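For a sense of scale, a crude estimate of just the connectivity part. All three numbers are order-of-magnitude assumptions, and the chemistry is ignored entirely:

```python
# Rough assumptions throughout:
neurons = 8.6e10            # ~86 billion neurons
synapses_per_neuron = 1e4   # order-of-magnitude synapse count
update_rate_hz = 1e3        # assumed effective state-update rate

# One operation per synapse per update step:
ops_per_second = neurons * synapses_per_neuron * update_rate_hz
print(f"{ops_per_second:.1e}")  # ~8.6e+17 operations per second
```

Even this hopelessly simplified point-neuron model lands at roughly exascale; a faithful electrochemical simulation would be many orders of magnitude costlier still.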

But the actual difficulty is elsewhere: the scanning process itself. Scanning physically encoded information is a physical process that takes time. There is no such thing as “instant scanning” of 80 billion neurons and their changing electrochemical processes and state. That means that, by the time scanning ends, the scanned system will have changed — probably to a large extent. Who can guarantee that the scan is consistent? Processes in the brain tend to be fast, a moving target, so unless we can somehow scan it all in seconds, our scan will probably be garbage. It’s like trying to scan a TV screen pixel by pixel while a movie is playing: if the scan takes an hour, the result is neither a frame of the movie, nor the movie itself, but garbage. Stopping the brain processes for the duration of the scan will most likely kill the subject and bring us back to option 2. Not a very promising route.
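The TV-screen analogy can be simulated directly. In this toy sketch, one “pixel” is read per time step while the movie keeps advancing underneath:

```python
import numpy as np

N = 8  # a toy screen with 8 pixels

def frame(t):
    # A toy "movie": pixel values shift by one each time step.
    return (np.arange(N) + t) % 256

# A slow scan reads one pixel per time step while the movie keeps playing.
scan = np.array([frame(t)[t] for t in range(N)])

# The resulting "snapshot" matches no single frame of the movie.
matches_some_frame = any(np.array_equal(scan, frame(t)) for t in range(N))
print(scan, matches_some_frame)  # [ 0  2  4  6  8 10 12 14] False
```

Every scanned pixel came from a different frame, so the snapshot is internally inconsistent: it is not a state the screen was ever in.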

4. The halfway realistic approach: decoded Neural Network model/content scan

The only way to scan that seems at least faintly more realistic is to figure out how exactly the brain encodes the mind — an immensely difficult task by any standard, but at least in principle possible or, actually, not impossible in the sense of being forbidden by the laws of physics in our Universe(**).

Hopefully — and I must stress there is no guarantee of this — knowing the encoding could make both the scanning process and the subsequent simulation orders of magnitude faster/more efficient. The hidden assumption here is that the emergent model (mind) is significantly less complex than its encoding in the physical substrate (brain), or that it can be reduced to a significantly less complex model. This is not a trivial assumption and, while neuroscience has made enormous progress, we’re still very far from understanding in enough detail the way the brain encodes the mind to confidently say this is a workable approach.

Also, knowing the encoding could help handle fast-changing processes in the brain. If we know a movie is running on a screen but we can only scan a few pixels of each frame, we can replay it several times until we scan it all. In a person’s brain, we might have to stimulate the same neural path several times until we get all the information. The downside is that the scanned subject would have to live and feel everything again: incidentally, our brains also use a sort of re-living of memories in order to “store” them.

5. The delta > 0 approach: lossy scan

So far, we have aimed at a very ambitious goal: a “lossless” scan with 100% accuracy. After all, you wouldn’t want the virtual you to be just 99.99%, 99% or — good heavens! — 90% of the real you, would you?

First of all, scanning a physical system as complex and dynamically changing as a human brain with 100% accuracy within a reasonable time is likely an unrealistic goal: scanning anything physical is always a trade-off between accuracy, on one hand, and time and storage space on the other hand. The good news is that it also seems to be unnecessary for reasons I will try to outline.

One reason is that your knowledge of what makes you you, and of what you know about yourself, is also not accurate in detail — “know thyself” is wishful thinking. Let’s make a thought experiment: before you went to sleep yesterday, you had some specific thoughts, emotions and memories. Did you wake up with the same set of thoughts and emotions? The answer is probably no. You do remember, partially, what it felt like to be you yesterday, but you don’t remember the details, nor do you remember exactly what thoughts and emotions you had. Chances are, you have partially forgotten some of the memories you had the other night. But you don’t feel less like you. Then again, both you and others sometimes perceive you as “not quite being yourself”. The key point here is: your self is also “lossy”. A 100% accurate account of what’s in your brain does not seem mandatory, because there is no “ground truth” to compare with.

Secondly, if your sense of identity is the concern, we have perhaps even less reason to worry. From investigating patients with mental illnesses like schizophrenia and personality disorders, we know that it is possible to have multiple, strong senses of identity even within the same brain. So a lossy scan of you would probably not register small, or even large, differences: as long as the scan is consistent, albeit not entirely accurate, you won’t know it’s not really you. I’m not suggesting that anything will do, just that the brain, while computational in nature, is not a lossless storage.

The hope here is that a “lossy” scan would be a much less demanding version of the decoding approach we discussed at point 4 above, one day allowing us to reduce the amount of information to be scanned, and the scan time, to a practically acceptable level — hopefully by orders of magnitude. Again, a bold assumption.

One way this could work is analogous to training artificial Deep Neural Networks to a specific accuracy: one would train a large DNN (or maybe a modified GAN pair in which the brain is the generator?) to mimic your brain within a desired accuracy. This is, again, not impossible (**), but we’d have to solve many technical hurdles, including figuring out what the training data should be. Some baby steps in Brain-Computer Interfaces (BCI) have been taken by Neuralink, but there’s still a long way to go. A DNN would not be an explicit model of the mind that would allow us to point out which thought is stored where, or to easily edit it, but that is also not necessary: an implicit model capturing the essence of “you” would be acceptable.
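As a toy sketch of such a “lossy scan”: the “brain” below is a stand-in black-box function we can only query, and the “student” is a deliberately simpler model fitted to its input/output behaviour. Everything here is illustrative; no claim is made about real BCI data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "brain": a fixed black box we can only query (pure assumption).
W_true = rng.normal(size=(4, 16))
def brain(x):
    return np.tanh(W_true @ x)

# "Lossy scan": probe the black box and record its responses.
X = rng.normal(size=(16, 2000))   # probe inputs -- the "training data"
Y = brain(X)                      # observed responses

# Student model: a linear least-squares fit -- deliberately too simple
# to capture the tanh nonlinearity, hence lossy by construction.
W_student = Y @ np.linalg.pinv(X)
mse = np.mean((Y - W_student @ X) ** 2)
print(mse > 0)  # True: the copy is approximate, never exact
```

The student reproduces the black box only up to some error, which is exactly the trade the lossy approach makes: a tractable model instead of a perfect one.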

You would still want, however, a decently accurate scan. The question of what accuracy is acceptable leads to a range of different price tags, payment plans and inequities (see the “Upload” series), as well as to serious ethical questions. The identity question in particular raises huge concerns, both ethical and legal: how can you tell a clone from the original? This is a broad topic in itself that I’ll leave for another article. Here, I will only state that uploading persons into simulated realities will have to be regulated in order to ensure that there is a single person claiming each identity — and, of course, that the notion of a “person” is extended to a digital self living in a simulated reality.

So, in conclusion, my personal view is that uploading your self into the Metaverse one day has nothing to do with [spoiler alert] chopping your head off in a scanner. Rather, the most promising approach seems to be a “lossy” scan based on learning the mind model encoded in the brain. Given the enormous technical hurdles ahead, I wouldn’t hold my breath, bet my money or freeze my brain on it for the next 100 years.

(*) By “brain” I mean not only what’s in your skull, but the entirety of our neural system, including nerves and the body’s “sensors”, because they are all part of our neural model.

(**) I’ve borrowed this from David Deutsch’s great book “The Beginning of Infinity”. There, Deutsch paraphrases Sherlock Holmes’ famous “after you’ve eliminated the impossible, whatever is left must be the truth” to say: “something that is not forbidden by the laws of physics must be possible”.

However, as odd as it might sound, “possible” and “impossible” are ambiguous terms in the theory of computation governing our Universe (as a physical theory). That’s because there is more than one level of “impossible”, and the distinction is important:

1. Computability: is it possible to compute a model of the brain? A mathematical function is computable if we can, well, compute its value for a given input. A physical system like the human brain is made of particles obeying the laws of physics, which are relatively simple laws (differential equations). Hence, the evolution of any physical system — including the human brain — can be, in principle, computed from the initial state and the evolution laws. In the sense of computability, cracking the code for the human mind is not impossible.
2. Computational complexity: is it possible to compute a model of the brain efficiently (as in: it should not take aeons)? Note the in principle above. Something that is computable is not necessarily efficiently computable (a distinction at the heart of computational complexity theory). Simulating models of even moderately large systems quickly becomes an intractable problem even for today’s supercomputers, and deciphering systems the size of the human brain might be impossible in the sense of computational complexity: the amount of computation necessary hopelessly surpasses the available resources. Quantum Computing might help one day, but that’s still open to debate.
3. Computational reducibility: is it possible to reduce the overly complex encoded model of the brain to a simpler, but computationally equivalent one? The only realistic chance to overcome the kind of impossible described in 2. is if the answer here is yes and the reduction is significant (read: orders of magnitude). We touched on this at point 4 above when talking about encoding, and the hope that the encoded model is computationally reducible to something less complex. Since both computationally irreducible and computationally reducible systems exist, the computational reducibility of the human brain is a hypothesis begging a big question: how much can it be reduced? Here, a “lossy” model as described at point 5. seems the most promising approach.
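A trivial illustration of computational reducibility: simulating a system step by step versus replacing it with a computationally equivalent closed form. The example is deliberately simple; nobody knows whether the brain admits any analogous shortcut:

```python
def simulate(n):
    # "Run the system": explicit step-by-step evolution, O(n) work.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def reduced(n):
    # Reduced model: closed form, O(1) work, identical answer.
    return n * (n + 1) // 2

print(simulate(10**6) == reduced(10**6))  # True
```

When such a reduction exists, the cost collapses by orders of magnitude; when the system is computationally irreducible, the only way to know its future state is to run every step.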
