I'm a MetaScientist: New Kinds of Tools for the New Kind of Science

By William J. Clancey


Gathered in the expedition's mess tent at Haughton Crater in the Canadian High Arctic, we were planning a simulated overnight excursion in our "pressurized rover," a converted Humvee. Referring to goals for the exercise, our group leader mentioned "science goals" and "Bill Clancey's goals." I had to smile, for the distinction was not unfamiliar. Formal presentations of NASA's Haughton-Mars Project's research since 1998 have broken our work into "Science" and "Human Exploration." Perhaps the distinction stems from how NASA describes missions in terms of "science" (applying instruments or gathering samples) and "operations" (everything that supports science, such as the familiar flight controllers huddled over telemetry displays). Some missions, like Mars Pathfinder (1997), are even designated as technology tests, not science missions. And to emphasize the assumed purpose and product of space exploration, different mission designs are measured by "how much science" is produced-as if knowledge can be weighed.

Because science during an HMP expedition-the science that justifies Mars missions-is defined as geology, biology, physics, and astronomy, what I do-the study of people-is not called science. The intent is not to say that psychology, sociology, and anthropology are not sciences, but their focus is not "the science of a Mars mission." Ironically, even if a central purpose of an activity, such as our Humvee excursion, is to study people, it is still not elevated to being included in "the science of the mission."

One might simply set this distinction aside if it didn't have a historical basis entirely apart from NASA and space exploration.

For example, for several years starting in the 1980s the proceedings of the annual conference on Artificial Intelligence were broken into two parts: Science and Engineering. (Always capitalizing artificial intelligence in AI publications is additional evidence of insecurity.) The "science" part included all of the theoretical, usually formal, mathematical work; the "engineering" part included all programs that actually did something, such as medical diagnosis or teaching. The few AI researchers who studied people usually also built programs (e.g., the process of "capturing" expertise, ironically called "knowledge engineering"), so these papers were put in the engineering section. Of course, having a Science paper was a mark of status, membership in the elite of the field.

Good arguments can be made that computer science is not a science, consistent with the only semi-humorous idea that any discipline that puts the word "science" in its name can't actually be a science. Aside from studies of the practice of programming (which properly fall within cognitive science), no specific, naturally occurring phenomena are the subject of a computer scientist's inquiry. Simon's designation Sciences of the Artificial only makes the point clearer: Computer science is a craft (following Knuth's The Art of Computer Programming). Computer science is more like architecture than mechanical engineering (the theoretical basis for building construction) or materials science (the theoretical basis for mechanical engineering).

The scrambling, holier-than-thou effort to elevate AI to a science has often been generalized as "physics envy." Indeed, proposals were made by AI leaders during the 1980s to secure funding for mega-projects on the scale of linear accelerators. In part, such efforts were like a child's reach for his father's affirmation-that the student son had become a scientific man. If only Congress would give AI $1 billion, then our work would be legitimate.

Within this historical context, NASA's very practical distinction between science and operations inherits the scientific community's norms. (The distinction degenerated into "payload" vs. "operations" on the Space Shuttle, to promote a privatization policy.) These norms require experimentation and quantifiable hypotheses, expressed as results in graphical form. My use of time-lapse analysis, for example, studying how people work together in a simulated Mars habitat, is implicitly directed at assuaging these expectations. I produce graphs and reveal surprising statistical relations (e.g., the average time in the work tent at HMP-1998 was under a minute, showing it was really a storage tent). I certainly feel more secure at scientific conferences when I include charts in my PowerPoint presentations.
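The kind of statistic mentioned above can be sketched in a few lines. This is a toy illustration with invented sample data (the observation tuples and field names are hypothetical, not the actual HMP-1998 dataset): from timestamped location observations, compute the mean dwell time per location.

```python
from collections import defaultdict

# Hypothetical time-lapse observations: (person, location, entry_minute,
# exit_minute). A one-minute visit to the work tent suggests people are
# fetching things, not working there.
observations = [
    ("A", "work_tent", 10, 11),
    ("A", "mess_tent", 30, 75),
    ("B", "work_tent", 12, 12),
    ("B", "work_tent", 40, 41),
]

# Group visit durations by location.
durations = defaultdict(list)
for person, place, t_in, t_out in observations:
    durations[place].append(t_out - t_in)

for place, ds in sorted(durations.items()):
    print(f"{place}: mean dwell {sum(ds) / len(ds):.1f} min over {len(ds)} visits")
```

The point of the exercise is not the arithmetic but that a simple aggregate over behavioral data can reclassify a space (a "work tent" that is really a storage tent).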

A related well-known distinction, which might be falling into disuse, is between "hard" science and "soft" science. Here the definition of science emphasizes models that predict system behavior, Newton's laws of physics being the paradigmatic example. (My usual rejoinder here is that social psychology is harder because it is impossible to predict individual human behavior.) With the use of distributed-systems models (e.g., cellular automata, multi-agent systems), emergent organizations are now modeled, making statistical prediction possible. Indeed, systems sciences are transformed into hard sciences by formalizing a new mathematical framework for modeling processes as components, relations, and behaviors.

These ideas are presented in Waldrop's Complexity (1992) and Wolfram's A New Kind of Science (2002, note the title), as well as Buchanan's Nexus (2002):

Will a network science emerge that helps us understand a variety of complex organizational systems by describing the puzzles of human behavior and connections in mathematical terms? So argues Buchanan, former editor of Nature and New Scientist. Buchanan, who holds a Ph.D. in physics, delivers a good introduction to theoretical physics and the "small worlds" theory of networks. He sees biology, computer science, physics, and sociology as intimately connected. Buchanan illustrates social and physical networks with examples ranging from the infamous "six degrees of separation" theories, to the spread of the AIDS virus, to the mapping of the nervous system of the nematode worm. Are the similarities among these networks merely a coincidence or the result of some underlying physics? Only further research will tell, but in the meantime this book is a good primer to basic network concepts and contains references to key journal articles and studies for further reading. The subject will be of particular interest to mathematicians, physicists, and computer scientists and of general interest to those in most other disciplines. (Amazon.com)

Christopher Alexander's work has been building on the same idea of emergent order, but including human behavior, activities, and emotions in the mix. For example, a review of his new book The Phenomenon of Life says that emergent order is:

...suggestive of nothing less than a new scientific world view. The essence of that view is this: the universe is not made of 'things,' but of patterns, of complex, interactive geometries. Furthermore, this way of understanding the world can unlock marvelous secrets of nature, and perhaps even make possible a renaissance of human-scale design and technology. (Amazon.com)

From the perspective of complexity theory, which I associate especially with Prigogine's Order out of Chaos (1984), the hard sciences of physics, geology, and astronomy must adapt. The study of life, including biology and, more specifically, people (sociology, architecture, economics), has demanded new kinds of models (now computational, i.e., computer programs) and new forms of evidence and predictions (statistical). Focus shifts from systems in isolation to the dynamics of cells, bodies, persons, and organizations interacting over time, and their emergent order.

In conclusion, one might ignore the "science" vs. "operations" distinction because it only reflects the purpose of the mission. In the case of NASA's HMP, the distinction between "science" and "human exploration" is apt because we seek to promote both.

Indeed, my simulation of a Mars crew in a "multi-agent" model is a prime example of the new kind of science, grounded in distributed computation and oriented towards "human-scale design and technology" (a.k.a. human-centered computing). Our invention of an activity-based modeling system shows that computer science is not a science, but a new kind of mathematics, and that's the basis on which we will model and understand cognition, communication, and social behavior.
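What "activity-based multi-agent modeling" means can be sketched in miniature. The following is a toy, not the actual modeling system; every name (agents, activities, durations, locations) is invented for illustration. The idea it shows: each agent independently selects activities in context over simulated time, and the crew's aggregate pattern of behavior emerges from those individual choices rather than from any global script.

```python
import random

random.seed(42)  # reproducible toy run

# Hypothetical activities: name -> (duration in minutes, location)
ACTIVITIES = {
    "eat":       (30, "mess_tent"),
    "fieldwork": (90, "crater"),
    "log_data":  (20, "work_tent"),
    "rest":      (60, "habitat"),
}

class CrewMember:
    def __init__(self, name):
        self.name = name
        self.location = "habitat"
        self.busy_until = 0
        self.log = []  # (start_time, activity) pairs

    def choose(self, t):
        """Pick a next activity. A real model would weigh norms, goals,
        and context; random choice stands in for that here."""
        activity = random.choice(list(ACTIVITIES))
        duration, place = ACTIVITIES[activity]
        self.location = place          # move to where the activity occurs
        self.busy_until = t + duration
        self.log.append((t, activity))

def simulate(crew, minutes):
    """Advance simulated time; idle agents choose their next activity."""
    for t in range(minutes):
        for member in crew:
            if t >= member.busy_until:
                member.choose(t)
    return crew

crew = simulate([CrewMember(n) for n in ("A", "B", "C")], 8 * 60)
for member in crew:
    print(member.name, [activity for _, activity in member.log])
```

Even in this caricature, the interesting output is not any single agent's schedule but the emergent distribution: how often the crew converges on the same location, how dwell times compare across tents, and so on.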

As I argued in the mid-1980s in reformulating expert systems as a "qualitative modeling methodology," what AI researchers discovered was a new way to model and simulate complex processes. The work is not done, because neural memory is not based on storage of packets or procedures, but rather their reactivation and recombination. We don't know how to build a process memory like the brain. However, the computational, process-oriented direction of today's research is more or less sound.

Those physics enviers back in the 1980s should have called the parts of the AI proceedings: Process Mathematics, Scientific Applications, and Qualitative Systems Engineering. We are meta-scientists, developing the modeling tools tomorrow's scientists will require.

July 22, 2003
Devon Island

Copyright © 2004 William J. Clancey. All Rights Reserved.

