phenomenal consciousness and access consciousness.
I do not mean to put too much emphasis on a simple, rigid dichotomy between phenomenal consciousness and access consciousness. The reality is certainly more complex; in all likelihood, consciousness, like visual processing, cannot be accurately carved into two neat categories. Perhaps a continuum of processing exists, from a more raw informational model, through many degrees of abstraction, to the highest levels of cognitive operation, such as formulating an explicit verbal report.
As an example of the mixing of levels in consciousness, suppose again that you are looking at a green apple. Your cognitive machinery can decide and report that you are aware of green. But you can also be aware of the deciding and aware of the reporting. The awareness feature can be applied to many of the intervening steps. It is not limited to the input end. The distinction between phenomenal consciousness on the one hand and the abstract knowledge that you are conscious on the other hand becomes rather fuzzy, since you can apply the one to the other. You can have phenomenal consciousness of your access consciousness. The neat divisions break down, and the dichotomy ceases to have much meaning.
If the dichotomy is iffy, then why bring up the distinction between phenomenal consciousness and access consciousness at all? My point here is that, by suggesting that consciousness is information, I am not limiting the theory to cognition, abstraction, higher-order thought, simple verbalizable propositions, or access consciousness. The hypothesized attention schema, like a sensory representation, is a rich and constantly updated information set that can itself be accessed by cognitive processes. Because the attention schema is proposed to be similar to a sensory representation—because the brain uses the attention schema as a model for a physically real entity, just as it uses a sensory representation as a model for a physically real entity—the brain should have no basis for assigning the attention schema any less reality. In this theory, just as the brain takes the visual descriptions in visual circuitry as real things in a real world, so too the brain takes awareness, its representation of attention, as a real essence inside the body.
The attention schema theory could be said to lie halfway between two common views. In his groundbreaking 1991 book,5 Dennett explored a cognitive approach to consciousness, suggesting that the concept of qualia, of inner, private experiences, is incoherent and that we therefore cannot truly have them. Others, such as Searle,6 suggested that the inner, subjective state exists by definition and is immune to attempts to explain it away. The present view lies somewhere in between; or perhaps, in the present view, the distinction between Dennett and Searle becomes moot. In the attention schema theory, the brain contains a representation, a rich informational description. The thing depicted in such nuance is experienceness. Is it real? Is it not? Does it matter? If it is depicted, then doesn't it have a type of simulated reality?
If awareness is Item IIb, the impossible, private, lovely thing that is depicted by the information of Item IIa, then whether awareness exists or not becomes philosophically murky. It is described by the brain, not produced by the brain. It exists only as a simulation. Yet because the item described is so gauzy, so ethereal, so much like a simulation, one must ask: if it is simulated, and if the thing it is supposed to be is itself something like a simulation, then does it not, in that sense, actually exist?
My point here is that it is possible to be mechanistic and philosophical at the same time. The attention schema theory provides a specific, mechanistic account of awareness: of its brain basis, of what it is made of, and of how we are able to report it. The theory depends on nothing beyond the nuts and bolts of neurons transmitting and computing information. Yet we still have