Don't Confuse Geologists with Their Tools

By William J. Clancey

February 2003

"The MER rovers are robotic field geologists" tout the press releases describing the machines sent to Mars in mid-2003.

But Steven Squyres, the Principal Investigator of the Mars Exploration Rover (MER) project, explains, "Field Geology is an iterative process of scientific hypothesis formulation and testing, performed in a field setting. It is normally done by an experienced geologist...."

Squyres's statement is, of course, correct. Today's rovers are incapable of forming scientific hypotheses and testing them. So clearly, the MER rovers are not geologists, let alone "robotic geologists." The claim in the press release is like saying "My Cuisinart is a robotic chef" or "My Cadillac limousine is a chauffeur." As John Dewey said over a century ago, such statements confuse the carpenter with his tools.

The MER rovers are teleoperated, robotic field geology tools. If you wish to be poetic, you can call them "assistants." But since they can do nothing autonomously except follow a path, the assistance is purely that of being a perceptual-motor extension of the human body. We don't call eyeglasses "assistants" either; nor is an electron microscope an assistant. Or, to take the extreme, nobody calls the Hubble Space Telescope "a robotic astronomer."

The problem of anthropomorphizing technology has been rampant in the field of Computer Science called Artificial Intelligence, and I have argued that it is the chief reason for the field's stagnation: the terminology describing what has been accomplished is identical to the description of the most distant objective. So instead of model-based diagnostic aids, we have "expert system physicians." The European research community saw this inflation and hyperbole and suggested in the mid-1980s that we call our AI programs "systems for experts," which is exactly right: they are tools. Indeed, the problem of using fantasy names for AI programs was clearly identified by Drew McDermott much earlier, in the 1970s, when he mocked the common tendency to call programs by wishful names like "Understand." Little has changed in 25 years.

Human-centered computing (HCC) is one approach for cleaning up how we create and talk about tools. HCC is founded on clear thinking about the differences between people and our technology.

If we start instead with an inflated view of machines, we get a diminished view of people, and the design process focuses on mitigating failures (because we're now chasing our tails on specifications and capabilities).

HCC research aims to be scientific by treating the difference between people and machines with integrity. We choose our terminology carefully. We do not use the terms knowledge, intelligence, collaboration, and the like loosely. We do not describe what we have done in terms of the vision we have come nowhere near accomplishing. We call programs "model-based" rather than "knowledge-based"--because they contain models, not knowledge. We call programs (at most) assistants, rather than "experts" or "collaborators." We call teleoperated machines (at most) robotic tools, not geologists.

In most of AI research the distinctions can be subtle, not because the differences between people and machines are subtle, but because scientists and engineers have been seduced by their own terminology into woolly thinking. Many people in the AI research community were thus at first confused when I said in 1985 that the rules in the medical "expert system" MYCIN were not knowledge, but a model of knowledge--the map is not the territory. This was confusing not because the distinction is slight, but because the metaphors of "knowledge = rules" and "memory = storage" had fully consumed our thinking. The problem grew to paralyze advances in most of AI by the late 1980s--as words in a network became identified with concepts, and model-based inference became identified with reasoning. In all cases, the nature of human thought and capability became disguised by how we described our technology and representational methods. (See "Situated Cognition: On Human Knowledge and Computer Representations," Cambridge University Press, 1997.)

The nature of today's rovers is less confusing because they do not do model-based inference to posit hypotheses and strategize about behavior. When we do deploy such a system--which many researchers hope will occur in the next generation of rovers--we will need to clarify the limits of a "hypothesis formulation and testing" system that has no ability to conceive ideas. Without an ability to conceive, there is no way to dynamically blend and adapt alternative value-based perspectives for making judgments. Thus, without human intervention, "autonomous" systems will for the foreseeable future be relatively rigid in planning and carrying out plans. This is a fundamental limitation of model-based approaches, and we have nothing on the horizon to replace them.

I believe that the primary responsibility of all scientists is to ensure the integrity of our work. For cognitive and social scientists, this means first and foremost preserving clarity about what we know about people, and not allowing descriptions of technology to demean or obscure the reality of how people think, behave, and live. Without this clarity, our requirements analyses, tools, and evaluations will be confused. A sharp, uncompromising understanding of the nature of people is essential if we are to design and fit new technologies that are appropriate and successful for mission operations.

Copyright © 2003-2004 William J. Clancey. All Rights Reserved.

