Can a Computer Count? What the Mechanism of Consciousness Reveals about Forms of Intelligence

(Summary of Contact 2000 Presentation)

William J. Clancey
March 8, 2000

Consciousness can be understood as a mechanism for physically and temporally coordinating the process of categorization. This process, called conceptual coordination, can be articulated by examining in detail a variety of phenomena, including grammatical sentence comprehension, dreaming, autism, slips, animal cognition, and human genius. The essential idea is that relationships between categories are not stored descriptively (in programs, rules, tables, or maps); rather, categories are physically and temporally related by how they have been activated in the past, in what is called a "process memory." Human knowledge is not stored in a long-term memory, in the manner of traditional computer systems, in which structures are copied into registers for comparison and manipulation. Instead, categories are activated "in place," and there are strict physical limitations on how they may be incorporated in ongoing constructions.
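
To make this contrast concrete, here is a minimal toy sketch in Python. All of the names (Category, ProcessMemory, activate) are invented for illustration and are not drawn from the presentation; the point is only the architectural difference between copying a stored relation out of a table and re-activating categories in place:

    # Hypothetical sketch; all names are invented for illustration.

    # Traditional "stored description": relations live in a table and are
    # copied out into working variables for comparison and manipulation.
    stored_relations = {("dog", "animal"): "is-a"}
    relation = stored_relations[("dog", "animal")]  # copied into a "register"

    # "Process memory": a category is a unit with an activation state, and
    # a relation exists only as a record of how units were activated together.
    class Category:
        def __init__(self, name):
            self.name = name
            self.active = False

    class ProcessMemory:
        def __init__(self):
            self.history = []  # traces of past co-activations

        def activate(self, *categories):
            # Categories are activated "in place"; nothing is copied out.
            for c in categories:
                c.active = True
            # The only "relation" is the activation event itself.
            self.history.append(tuple(c.name for c in categories))

    dog, animal = Category("dog"), Category("animal")
    memory = ProcessMemory()
    memory.activate(dog, animal)  # related by having been active together
    print(memory.history)         # [('dog', 'animal')]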

There are two primary forms of conceptual coordination: simultaneous (categories co-activate) and sequential (categories activate serially). The most primitive form of categorization, found in animals and insects, is perceptual categorization; simultaneous activation of perceptual categorizations is the most basic and common form of conceptualization, allowing a form of "primary consciousness" exemplified by facial and scene recognition (e.g., chess board "chunking"). Analysis of primate and human cognition reveals that higher-order consciousness requires at least the ability to categorize sequences of activation (i.e., procedural learning), by which habits become units that are composed and flexibly replayed. Analysis of language comprehension shows the importance of being able to hold categories active, apart from the current sequence of behavior, so they may be correlated and physically composed (e.g., as in center-embedded noun phrases). This coordination ability is missing in autism, as well as during the primary consciousness of dreaming.
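
As a rough illustration, the following Python fragment sketches the two forms as data structures. The examples (a chess "chunk," a reach-grasp-lift habit, a held noun phrase) and all names are invented here, not specified in the presentation:

    # Hypothetical sketch of the two forms of conceptual coordination.

    # Simultaneous: perceptual categorizations that are active at the same
    # time are themselves categorized as a unit (e.g., chess "chunking").
    scene = frozenset({"king", "rook", "pawn-chain"})  # co-active categories
    chunks = {scene: "castled-position"}               # higher-order unit

    # Sequential: a series of activations becomes a unit that can be
    # flexibly replayed (procedural learning).
    sequence = ("reach", "grasp", "lift")              # serial activations
    habits = {sequence: "pick-up"}                     # the sequence as a unit

    # Holding a category active apart from the current sequence, so it can
    # later be composed (cf. center-embedded noun phrases):
    held = "the rat"                  # held active while another phrase runs
    intervening = ("the cat", "chased")
    composed = (held, *intervening)   # later physically composed
    print(chunks[scene], habits[sequence], composed)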

Higher-order categorization develops a semantic coordination capability, often called intentionality, by which animals categorize (form the idea) that other agents have beliefs, and (at a higher level yet) that beliefs are attributions (i.e., they may be wrong). Three-way relations may be formed, corresponding to referential categorization ("that agent's belief is about this thing/situation") and then symbolic categorization ("this artifact/thing represents/means/implies this agent's belief/fact about..."). Thus a conceptual system is developed in which categorizing operates on its own products: Categories become symbols, and inference becomes a primary means of coordinating behavior. In this formulation, symbols are not just a collection of individually existing "tokens," but are physically and temporally related by serial activation and co-activation, and perceptually grounded in the agent's interactive behaviors. In this way, a conceptual system is literally a *physical* symbol *system*.
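
The layering of these relations can be suggested with a small sketch. The dataclass names below (Belief, Reference, Symbol) are hypothetical scaffolding, chosen only to show how each level of categorization takes the previous level's product as its input:

    # Hypothetical sketch of three-way referential/symbolic relations.
    from dataclasses import dataclass

    @dataclass
    class Belief:          # "that agent has a belief"
        agent: str
        content: str

    @dataclass
    class Reference:       # referential: a belief is *about* something
        belief: Belief
        referent: str

    @dataclass
    class Symbol:          # symbolic: an artifact stands for a belief about...
        artifact: str
        stands_for: Reference

    # Categorizing operates on its own products: each level takes the
    # previous categorization as input.
    b = Belief(agent="ape A", content="food is hidden")
    r = Reference(belief=b, referent="the log pile")
    s = Symbol(artifact="pointing gesture", stands_for=r)
    print(s)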

On the basis of this architectural theory, one can develop specifications for replicating and improving the human conceptual coordination mechanism, as well as heuristics for recognizing extraterrestrial intelligence. Regarding SETI, one might look for very long phrases, modality blending (e.g., tasting shapes), noise that is actually music, and descriptions that articulate relationships that humans express kinesthetically as gestures and facial expressions.

From the perspective of artificial intelligence research, a theory of consciousness explains how varieties of intelligence are possible (as different ways of coordinating behavior), how to leverage human intellectual capabilities through AI tools ("cognitive prostheses"), and how to go beyond human capability. A means-ends engineering perspective is essential, in which the differences between animal/human cognition and computational systems are continuously examined, and these differences are used to drive AI research. In recent years, such an approach has produced the "situated robots" that navigate without reading maps and the models of neuronal group selection that perceive without describing and remember without indexing labels. These advances in robotics and neuroscience are the beginnings of a process memory architecture that will become the foundation of a successful computational theory of intelligence.

In short, consciousness is not a mystical phenomenon or a topic to be shunned, but is instead the key for understanding how human intelligence is possible at all, how it is distinguished from other forms of intelligence on this planet, and indeed, how it is distinguished from current computer systems. In some important ways, even simple programming languages exceed the capability of human consciousness. But in other ways, involving the flexibility of "run-time" learning and the relating of perceptual-motor modalities, no computer system replicates the mechanism of human conceptual coordination.

I conclude with a story that shows the power of this perspective.

--------

AI Old Timer: "What is consciousness good for? Name one thing."

Bill Clancey: In the animal kingdom, only animals with a form of secondary consciousness can count.

Old Timer: Well, any computer can count.

BC: Okay, can I borrow that laptop? (Takes a nearby PC and places it at the front of the room.) Now, let's see if this computer can count the number of chairs in the front row.

---------

This story illustrates how, in modeling intelligent behavior (e.g., "counting"), we often abstract away its inherently interactive nature. Thinking is not something that can be bottled in a disembodied logical machine; rather, thinking is a conceptual means for coordinating agent behaviors (in a real or virtual world). Without a behavioral context, there is no means of defining intelligence (as the counting example reveals). Thus, robotics is not an application of AI, but embodies the essential problems that AI must address (pun intended).
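
The point can be put in code. In the hypothetical sketch below, the arithmetic is trivial; what the disembodied program cannot supply is perceive_next, the interactive step of looking at the world (the function name and stand-in data are invented for illustration):

    # Hypothetical sketch: counting as interaction, not mere arithmetic.

    def count(perceive_next):
        """Count by repeatedly looking: increment only when perception
        yields another object."""
        n = 0
        while perceive_next():  # each increment requires an interaction
            n += 1
        return n

    # A laptop can run the loop, but without sensors it cannot provide
    # this function; here we fake the perceptual stream:
    looking_at_chairs = iter([True, True, True, False])
    print(count(lambda: next(looking_at_chairs)))  # 3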


References

Situated Cognition: On Human Knowledge and Computer Representations. Cambridge University Press, 1997.

"Abstraction without Description." International Journal of Educational Research 27(1), 1997.

Conceptual Coordination: How the Mind Orders Experience in Time. Lawrence Erlbaum Associates, 1999.

"Studying the varieties of consciousness: Stories about zombies or the life about us?" Journal of the Learning Sciences 8(3-4), 1999.

"You are conscious when you dream." Why: Questions in Science, in press.

"Conceptual coordination bridges information processing and neurophysiology." Behavioral and Brain Sciences commentary (Sleep and Dreaming special issue, in preparation).

