To seriously pursue the question “What’s outside the simulation?” (Elon Musk’s question for a super-smart AI), we first have to be clear about what a simulation is and which special features characterize one.

It makes sense to compare the term simulation with its relative, duplication, in order to draw a sharper picture of both terms.

### Duplication vs. Simulation

The philosopher John Searle attached great importance to this point by arguing that a simulation is not a duplication, and that a machine cannot duplicate human thought but, at best, simulate it. On the point that simulation and duplication are two entirely different things, I fully agree with him.

Suppose we have two objects in front of us: an Audi A4 (neither my favorite car nor one I drive) and a second object that someone claims is a “duplicate” or a “model” of the Audi A4. What exactly does that mean? What is a model of the A4? It means exactly what a ten-year-old who is interested in model cars understands by it: there is a direct correspondence between the external stimuli, the internal states, and the behavior of the A4 on the one hand, and the inputs, internal states, and outputs of the model on the other. The correspondence does not have to be one hundred percent; some external stimuli, states, and behaviors of the real A4 may be absent from the model.

If, for example, you go to Ingolstadt and look at a model of the A4 in the wind tunnel, you will see that the seats, the navigation system, and all the other equipment details that make up many of the internal states of the “real” Audi A4 are missing – for the simple reason that they are irrelevant to the purpose of the model, namely testing the aerodynamic properties of the real car. Nevertheless, the stimuli, states, and behaviors of the model correspond directly to a subset of the inputs, states, and behaviors of the real car. Such a correspondence establishes a model relationship between the real A4 and the object in the wind tunnel. Note that the model is simpler than the object it replicates in that it has fewer states. This property is characteristic of models: models are always simpler than their originals.
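The model relationship described above can be sketched in a few lines of code. This is my own toy illustration, not anything from the text: the car's attributes and the choice of "aerodynamically relevant" keys are hypothetical stand-ins for the real A4's many internal states.

```python
# Toy illustration: a "model" keeps only a subset of the original's
# states -- here, just the attributes relevant to wind-tunnel testing.

# The "real" car: many internal states (hypothetical values).
real_a4 = {
    "body_shape": "sedan",
    "drag_coefficient": 0.27,
    "seats": "leather",
    "navigation": "installed",
    "engine": "2.0 TFSI",
}

# The wind-tunnel model corresponds to a *subset* of those states.
def make_wind_tunnel_model(car, relevant=("body_shape", "drag_coefficient")):
    return {key: car[key] for key in relevant}

model = make_wind_tunnel_model(real_a4)

# The model is simpler than its original: it has fewer states.
print(len(model) < len(real_a4))  # True
```

The point the sketch makes is structural: the seats and navigation simply have no counterpart in the model, yet every state the model does have maps directly onto a state of the real car.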

### What about a Simulation?

Let’s take a printer of brand X, whose operating instructions assure me that it can imitate, i.e., “simulate,” another type of printer, e.g., an HP LaserJet Plus. What does it mean to say that my X machine can simulate another machine?

It means that the inputs and states of the HP machine can be encoded into the states of my machine, and that those states of my machine can then be decoded into the outputs that a real HP printer would produce. What is important is that my machine has to be more complicated than the HP in a certain sense if such a dictionary of encoding and decoding is to exist. To be more precise: to encode the inputs and the states of the HP into the states of my “simulator,” my machine must have more states than the HP printer, if we regard both devices as abstract machines. Therefore, the simulator (my printer) must be more complicated than the simulated object (the HP printer). In general, a simulation is always more complicated than the system it simulates.

These short, perhaps even casual explanations of models and simulations can be translated into exact mathematical terms – provided, of course, that there are criteria, verifiable in principle, by which we can distinguish a program that models human thought processes from one that merely simulates them. In this context, it is striking that a simulation of the brain necessarily requires a system that has more states than the brain itself. This fact justifiably casts much doubt on whether the brain as a whole can ever be simulated.

The brain, with its approximately 100 billion neurons, has at least 2 to the power of 10 to the power of 11 possible states – a number that deserves the highest respect, because it exceeds even the number of protons in the known universe (about 10 to the power of 79) by a factor of roughly two to the power of 100 billion. This number is so large that it is difficult to express in words, let alone to imagine. We can therefore safely assume that there will be no simulation of the human brain in the medium or long term (the EU-funded Human Brain Project, which pursues a similar objective, notwithstanding).

Brain models are an entirely different matter, and it is a good thing that “strong AI, human” needs models and not simulations. All in all, I have the impression that the thinking-machine debate is a battle among philosophers, not computer scientists and programmers.

My feeling tells me that within the next ten to fifteen years, we will have a genuine thinking machine in our homes. My “hopes” rest mainly on the fact that information processing will develop new concepts in connection with new hardware, such as quantum computers (to name just one of the upcoming innovations in information processing). Can it then be called “strong AI, human”? That is another interesting question that will have to be answered in due course. According to what criteria and standards? Philosophers, psychologists, anthropologists, and others will have to determine this in due course…

However, for my part, I can conclude this brief excursion with a statement that is unambiguous and definitive: Whatever the outcome of the matter of “strong AI, human,” the result will radically change our self-image and our view of our position in the cosmic order.

### Finally, Nick Bostrom’s Trilemma: “The Simulation Argument”

In 2003, the philosopher Nick Bostrom proposed a trilemma that he called “the simulation argument”. Despite the name, the argument does not directly claim that we live in a simulation; instead, it argues that one of three unlikely-seeming propositions is almost certainly true:

1. “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
2. “The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero”, or
3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”

(More info: Nick Bostrom, “Are You Living in a Computer Simulation?”)