the brain. It is a central monitoring, observing, controlling entity. There are at least three reasons why the social circuitry requires such widespread connectivity.
First, to reconstruct John’s mind state, I must rely on a variety of sensory cues. His facial expressions, his body language, the sound of his voice—information from a range of sources must flow into the machinery for social perception such that I can read the cues and reconstruct the happenings inside his head.
Second, to reconstruct John’s mind state, I must reconstruct the world as he experiences it. I must compute, “John is aware of A, John is aware of B,” and so on. A and B might be objects on the ground, they might be odors or sounds, they might be abstract ideas. To successfully construct a model of John’s mind, my social machinery must have access to information on the whole world of objects and ideas.
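To make that requirement concrete, here is a toy sketch in Python. It is purely illustrative, not a claim about neural implementation, and every name in it (WorldModel, MindModel, attribute_awareness) is invented. The point it captures is that attributing awareness of an item to John only works if the item already exists in my own model of the world, which is why the social machinery needs such broad access.

```python
# A toy sketch with invented names; purely illustrative, not a neural model.

from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Everything my brain represents: objects, odors, sounds, abstract ideas."""
    items: set = field(default_factory=set)

@dataclass
class MindModel:
    """My reconstruction of what some other agent is aware of."""
    agent: str
    aware_of: set = field(default_factory=set)

def attribute_awareness(model: MindModel, world: WorldModel, item: str) -> None:
    """Record 'agent is aware of item'. The attribution can only draw on
    items my own world model contains, hence the need for broad access."""
    if item in world.items:
        model.aware_of.add(item)

world = WorldModel(items={"stick", "odor of coffee", "idea of fairness"})
john = MindModel(agent="John")
attribute_awareness(john, world, "stick")
print(john.aware_of)  # {'stick'}
```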
Third, part of understanding another person is the process of mirroring, a process I will describe in greater detail in Chapter 8. If John is happy, not only does my social machinery infer the emotional state in the abstract, but it contacts and activates my own happiness mechanism, perhaps to provide a richer, a truer model of John’s mind state. If John is about to throw a baseball, not only does the social machinery in my brain reconstruct the action, recognizing it for what it is, but that machinery contacts and activates my own arm control system, prompting me to imagine throwing the ball—again perhaps to provide a richer, truer model of John’s actions. The social machinery, therefore, not only takes in information from far-flung sources, but also sends out orders, controlling and manipulating the circuits and subroutines of the brain.
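The mirroring step can be sketched in the same toy style, again with invented names and no pretense of biological accuracy. Perceiving a state in John does two things at once: it produces the abstract inference, and it drives the corresponding mechanism in my own brain.

```python
# A toy sketch of mirroring, with invented names; nothing here models
# real neural circuitry.

class MirroringBrain:
    def __init__(self) -> None:
        self.my_happiness = 0.0    # stand-in for my own happiness mechanism
        self.my_motor_plan = None  # stand-in for my own arm-control system

    def perceive_emotion(self, agent: str, emotion: str) -> str:
        """Infer an emotional state and mirror it in my own machinery."""
        if emotion == "happy":
            self.my_happiness += 0.1  # activate my own happiness circuit
        return f"{agent} feels {emotion}"

    def perceive_action(self, agent: str, action: str) -> str:
        """Recognize an action and covertly simulate it myself."""
        self.my_motor_plan = action  # imagine performing the act
        return f"{agent} is about to {action}"

brain = MirroringBrain()
print(brain.perceive_emotion("John", "happy"), brain.my_happiness)
print(brain.perceive_action("John", "throw a baseball"), brain.my_motor_plan)
```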
Think how much more complicated, in a recursive, loop-the-loop way, the system becomes when the process of social perception is turned inward. I construct a model of my own mind. Only information that flows to my social circuitry can be incorporated into the self model, and therefore only that information can be a part of my conscious self report. Ask me if I am consciously aware of the stick on the ground, and the social circuitry searches my self model in order to answer the question. Ask me if I am aware of being aware of the stick and, well, yes, that information is now present too. Ask me if I am aware of being aware of being aware of. . . . I don’t know how many iterations are possible, but there is some naturally built-in recursion to the process.
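A toy version of that recursion, once more with invented names, might look like the sketch below. The trick that makes the iteration possible is that each answer the system produces is itself written back into the self model, giving the next question something new to report on.

```python
# A toy recursion over a self model; invented names, illustration only.

def self_report(self_model: set, item: str, depth: int = 1) -> str:
    """Answer 'am I aware of (being aware of)... <item>?' to a given depth."""
    if item not in self_model:
        return f"no report of {item}"
    claim = f"aware of {item}"
    for _ in range(depth - 1):
        self_model.add(claim)              # the old answer becomes new content
        claim = f"aware of being {claim}"  # ...which the next answer reports on
    return "I am " + claim

self_model = {"the stick on the ground"}
print(self_report(self_model, "the stick on the ground", depth=3))
# -> I am aware of being aware of being aware of the stick on the ground
```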
When my social machinery constructs hypotheses about my own mind state, it uses mirroring to consult other brain circuits, imposing its inferences on the rest of the brain, altering the very thing it is perceiving. Suppose my self model, my self image, my self understanding computed by my social circuitry, includes the hypothesis that I am happy right now; to enhance that hypothesis, to enrich the details, it contacts and activates my happiness mechanism. This is the same process of mirroring that the mechanism uses to model happiness in anyone’s mind, whether my own or someone else’s. In this case, if I wasn’t actually happy—if the hypothesis was wrong—the mirroring process might actually make me happy as a side effect; if I was already happy, perhaps I become more so. My self model and my self become intertwined in complicated ways. Perceiving my own mind changes the thing being perceived.
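The feedback loop can be sketched in a few lines, again as a pure invention for illustration: each cycle of inward-directed mirroring nudges my actual happiness toward the self model's hypothesis, so the act of modeling the state changes the state.

```python
# A toy feedback loop, invented for illustration: inward-directed mirroring
# pulls the actual state toward the self model's hypothesis about it.

def self_perception_step(actual: float, hypothesis: float) -> float:
    """One mirroring cycle: move the actual state toward the hypothesized one."""
    return actual + 0.5 * (hypothesis - actual)

actual = 0.0       # suppose I am not actually happy
hypothesis = 1.0   # but my self model hypothesizes that I am
for _ in range(4):
    actual = self_perception_step(actual, hypothesis)
    print(round(actual, 3))  # 0.5, 0.75, 0.875, 0.938
```

After a few iterations the actual state converges on the hypothesized one: in this cartoon, the model has manufactured the very happiness it posited.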
To update the Turing test: how will we know when a computer has achieved consciousness? When it has algorithms to model the contents of another person’s mind. When those algorithms are so complete that the model contains a reconstruction of the world as seen by the other person—of the contents of the other person’s awareness. When the algorithms can be used to create a model of the computer itself.
The same analysis can apply to any information-processing system. A mosquito has a brain, albeit a small one. Is a mosquito conscious?