The editors of this collection of interviews are two Germans who took advantage of a stint as research scholars at the Institute for Cognitive Studies in Berkeley to ask twenty well known figures in cognitive science to "speak their minds" about the state of their field. Their foreign accents gave them a certain license to appear naive, but their questions show them to have been well informed, making it possible for them to keep a fairly tight focus on a number of central issues. As a result, most of the interviewees have interesting things to say, and the whole affords a nice snapshot of the mood of cognitive scientists in the past few years.
The editors/authors rightly remark that the book might have been even more useful as a hypertext publication (it is not in fact, alas, available either on the World Wide Web or on disk), since one would like to satisfy different currents of curiosity: about what others in the book have to say about each particular problem, or about other aspects of a given person's views, or about the exact significance attributed to a certain term in this or that member discipline of the amalgam known as cognitive science.
I'll begin with a brief glossary for later reference. (A fuller glossary is also provided in the book as an appendix.)
GOFAI, or "Good Old-Fashioned Artificial Intelligence", is based on the idea that intelligent thought essentially consists in the rule-governed manipulation of symbolic tokens. The tokens can be physical objects, which allows for the possibility that machines should be intelligent, but they are organized into systems governed by syntactic rules and susceptible of semantic interpretation. GOFAI differs from its ancestor cybernetics, as Terry Winograd points out (289), in that the latter "did not have symbols, they had feedback loops." There are signs of its hegemony being challenged by the hot "new" way of doing Artificial Intelligence, connectionism.
CONNECTIONISM is now officially a decade old, if one counts the publication of David Rumelhart and James McClelland's two 1986 volumes of readings as the official date of that research program passing from back to front stage. It differs from GOFAI in that its models are not based on a prior analysis of the tasks to be performed, but on empirical exploration of how groups of interconnected units arranged in specific ways, and connected by pathways of variable strengths, might modify their output in response to any given input. Such systems appear to be capable of learning, and in particular do surprisingly well at pattern recognition, one of the tasks old-fashioned computers find hardest. Connectionism is sometimes still referred to as "parallel distributed processing", because the information its systems encode is not localized in any particular point in a computer's memory, but is represented by the totality of the simultaneous connection strengths between nodes.
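The core idea can be sketched in a few lines of Python (a toy illustration of my own, not anything from the book): a single unit whose output depends on variable connection strengths, or weights, which are nudged in response to errors until the unit has learned a pattern, here the logical AND of two inputs.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def step(x):
    """Threshold activation: the unit fires (1) or stays silent (0)."""
    return 1 if x > 0 else 0

def train(examples, lr=0.1, epochs=50):
    """Adjust connection strengths in response to errors (perceptron rule)."""
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]
    b = random.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for inputs, target in examples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            error = target - out
            # nudge each connection strength toward producing the target
            w = [wi + lr * error * xi for wi, xi in zip(w, inputs)]
            b += lr * error
    return w, b

# The pattern to be learned: logical AND of two binary inputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(examples)
for inputs, target in examples:
    print(inputs, step(sum(wi * xi for wi, xi in zip(w, inputs)) + b))
```

Note that nothing in the trained weights "says" anything about AND; the learned behaviour is distributed across the connection strengths, which is the point of the "parallel distributed processing" label.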
The TURING TEST was devised by Alan Turing as a convenient substitute for the impossible question "Can machines think?" (Turing 1950). Ask instead, Turing suggested, whether a machine could, when suitably rigged to converse with a human, convincingly simulate a human's conversation. Some workers in Artificial Intelligence have adopted this idea as a measure of success, but just as often it has been derided as irrelevant to interesting questions about the differences between people and machines. Most prominent among the deriders is John Searle, inventor of the now ubiquitous Chinese Room argument.
CHINESE ROOM ARGUMENT. Searle imagined a unilingual English speaker locked in a room with a large set of dictionaries of (strings of) Chinese symbols. Through an opening, slips are passed in, on which are printed strings of Chinese characters. The resident unilinguist looks up the strings in one of the dictionaries and finds a response string, which is then duly inscribed onto an outgoing slip. Provided the dictionaries have been competently compiled, Searle argued, the room would behave as if "it" were capable of conversing in Chinese. But neither it nor its resident knows a word of Chinese. This thought experiment is taken, by Searle and some others, to show that computers, being capable only of the sort of syntactic manipulation of meaningless symbols effected by the Chinese room's resident, could never in principle be said to think, let alone be conscious.
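The procedure Searle imagines can be caricatured in a few lines (the rule book below is a hypothetical stand-in for his dictionaries, with placeholder strings of my own): the resident produces a plausible-looking reply by pure string matching, understanding nothing.

```python
# A hypothetical fragment of the room's "dictionaries": incoming strings
# mapped to outgoing strings. ("你好吗？" roughly means "How are you?";
# the resident neither knows nor needs to know this.)
rule_book = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def resident(incoming_slip):
    """Look the string up; the match is purely syntactic.
    Unrecognized input gets a stock reply ("Please say that again.")."""
    return rule_book.get(incoming_slip, "请再说一遍。")

print(resident("你好吗？"))
```

The sketch also makes vivid why critics and defenders talk past one another: everything turns on whether scaling this table up to full conversational competence would change its character, not on the mechanics of the lookup itself.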
The BACKGROUND or COMMONSENSE PROBLEM, also sometimes seen as closely related to the FRAME PROBLEM, stems from the fact that most of the "knowledge" that is presupposed in our everyday life is tacit, and that both its extent and its nature make it unlikely or impossible (depending on who's talking) that it should ever be recorded in any usable database.
Most of the twenty men and women (men and woman, I should say: it is unfortunately not unrepresentative of the field that there are 19 men here, plus Patricia Churchland) have views about each of the problems raised by the issues just sketched. Rather than catalogue their responses, I'll encourage readers to turn to the book. In the meantime, I'll attempt to give here the flavour of some of the main debates.
Each interview/chapter is ornamented with pictures of its protagonist. Several of these pictures are blatantly taken from some previous decade -- perhaps to accentuate the enduringly precocious self-image of the field. By happy chance of alphabetical order, the first two present the uncannily handsome faces of Pat Churchland and Paul Churchland in their prime, like tutelary deities presiding in changeless youth over the perpetual revolution in cognitive science. For them and a few others in this collection, indeed, the twin turn to neurology and Artificial Intelligence in the study of mind remains ever new, the wave of the future. Others are less sanguine.
Both Churchlands are boldly modest about the proper role of philosophy in the collective enterprise of cognitive science: "Philosophers play a relatively minor role in any scientific endeavor that has gotten on its feet" (38). Pat Churchland describes her early resistance to the overblown claims of philosophical analysis and the sense of relief brought by a five-year plunge into the study of real neuroscience; Paul insists on philosophers' natural bent for intellectual poaching: "philosophers are supposed to learn the business of other disciplines" (39). From the point of view of philosophy as a whole, the Churchlands represent one radical pole in this conversation, a position starkly known as "eliminativism". (More on the opposite pole in a moment.) This view sets traditional philosophy on its head, seeing neither scientific value nor privileged ontological reality in the purely experiential aspect of mental activity which is the starting point of all philosophy of mind for Descartes, Empiricism, and Phenomenology. This implies a rejection not only of phenomenological approaches to the mind, but also of the "traditional" approaches to artificial intelligence, insofar as these are based on a conception of mental activity as problem solving modeled on the conscious application of rational methods.
It's striking to note that traditional GOFAI is here represented only by Herb Simon and Allen Newell (who died before he could revise his interview). Both of these, incidentally, see no threat in connectionism, viewing it as an adjunct dealing with the implementation of the symbol-manipulating operations posited by GOFAI. All the other advocates of artificial intelligence are more or less committed to more "neurologically" oriented approaches such as connectionism, or, like Winograd and Joseph Weizenbaum, have become disenchanted not so much with the technical prospects as with the social and ideological impact of computers in general. The case of Weizenbaum is especially interesting, since it was his success with ELIZA, his "canned" "parody" of a psychotherapy program, which turned him against the power of computers. He was "stunned" that "psychiatrists and other people took this parody seriously" (254), deciding, in effect, that the gullibility of humans made the Turing test too easy to pass and computers dangerous sources of delusions. As a result, Weizenbaum describes himself as looking "with a critical eye on the whole of the computer field", especially insofar as it lulls us into "binary thinking" (252). Among philosophers, however, there is at least one prominent defender of the GOFAI program, namely Daniel Dennett, though he too thinks of it as entirely hospitable to contributions from connectionism. The lines are increasingly blurred however, and there is little trace in this book of the latest version of the great Platonic "battle of gods and giants" waged in the previous decade between pure computational functionalists and dyed-in-the-neuron connectionists. Witness Hilary Putnam, for example: Putnam is probably entitled to take the credit for inventing the term and the concept of functionalism. Originally -- as what is now known as computational functionalism -- this was the notion that mental states were logical or computational states of the brain.
These were implemented, as it happened, in living tissue, but there was no logical reason why they might not be implemented on a computer. Putnam here rephrases the part of this view with which he still agrees as the view that "mental states are compositionally plastic". But he is no longer a computationalist, in that he now thinks that "our mental states are also computationally plastic. Just as two creatures can have different chemistry and physics and be in the same mental state, it must be considered that computers can have different programs and be in the same mental state." (179).
At the opposite pole from eliminative materialists are the out-and-out resisters. Connectionism has weakened their resolve, but they are still up and fighting. For three decades now, Hubert Dreyfus has attacked the project at its core. His objections are principled, rooted in a certain conception of the impossibility of programming commonsense; but while he insists on such facts as that "nowhere in the Encyclopaedia Britannica does it say that people move forward more easily than they move backward" (75), his conviction has been expressed more forcefully in predictions than in arguments. He has made persistent and regularly adjusted predictions of failure, which have fared no better than anyone else's prophecies for the future. ("Years ago," observes Robert Wilensky, joining in the game even as he derides it, "Hubert Dreyfus said that a computer would never beat him at chess. Then a computer beat him shortly afterward. He said that computers would never play master-level chess. Now computers do this routinely. Then he said that a computer would never be world champion of chess. It looks like even this is right on the horizon." (269).) Now Dreyfus offers a new prediction, "that symbolic information processing and cognitivism will disappear within ten years" (79). Like other resisters, such as Searle, however, he is more hopeful about connectionism.
Searle is the other great resister. In his interview here he claims to have been too soft on Artificial Intelligence in the past. For his Chinese Room argument was originally intended as a refutation of what he called "strong" AI: the proposal to make real artificial intelligence, in the sense in which artificial light is real light but artificial flowers are not real flowers. He now thinks that even some weaker claims, "such as the view that says that minds are partly constituted by unconscious computer programs", are wrong. His reason is that whatever takes place in the brain must be a natural process, and that computation is not a natural process because it is what it is only relative to the interpretation placed upon it by an observer. In that sense, then, there is literally nothing that can count as symbol manipulation of any kind that actually takes place in the brain (205). This still leaves standing what Searle originally designated as the "weak" program of Artificial Intelligence, which proposes merely to illuminate intelligence by means of computer models; but it weakens it further, since it implies that no computer model could ever literally apply to a brain process.
As Dennett points out here, however, this argument simply begs the question of whether something that exists only "relative to an observer" couldn't therefore be really and naturally in the mind/brain: "There is not any original, intrinsic intentionality. The intentionality that gets ascribed to complex intentional systems is all there is." (66). Moreover, Dennett insists, the Chinese Room Argument is "not an argument, it is a thought experiment." (64). And many who summon up this experiment simply don't share Searle's intuitions about it. Insofar as Searle provides arguments to accompany the thought experiment, (and he has provided several over the years, of which the one just mentioned is new here), Dennett claims to have shown elsewhere that they are all fallacious. To the latest, Dennett's reply seems to be that "you can approximate the performance of the ideal semantic engine... by a device that is `just syntax'." (65).
But this, in effect, seems to be none other than the very contentious assumption that the Turing test is a conclusive proof of semantic competence. No one else seems so sanguine, except possibly Lotfi Zadeh, one of the pioneers of "fuzzy logic." Zadeh, curiously enough, claims that the Turing test is useless, but then describes a test that "no machine will be able to pass", namely to summarize what someone has said (307) -- which is none other than a version of the Turing test as it is commonly understood. Most of the contributors describe the Turing test as "useful," including Rumelhart (198), Simon (241), and the Gestalt psychologist Stephen Palmer (169, 172), who adds the interesting twist that the test might be sufficient to establish intelligence but not consciousness. (But "the problem is," as Winograd asks, "useful for what"? (293).) Those who deride it do so usually because it is purely behaviouristic. This means, for Searle (208), that it remains always logically possible that something should pass it without real thought or consciousness, or, more interestingly, for Paul Churchland, that it tells us too little about how various tasks are performed (43). Jerry Fodor, whose characteristically perverse interview is entitled "The folly of simulation", expresses the view that the whole idea of simulation is misguided as a research strategy: "You don't try to build a device that would pass the Turing Test for God; that would be a crazy way of doing physics" (86). The Turing test, he claims, shows nothing since it consists merely in the absence of certain sorts of perceptible differences. Terrence Sejnowski has a rather different objection to the Turing Test, which is that it relies excessively on language: but "everyone knows children can be incredibly intelligent without being able to speak" (228).
John Haugeland, who invented the acronym GOFAI, now shares Dreyfus's scepticism about the commonsense problem, saying flatly that Douglas Lenat's project of putting commonsense into a huge database (Lenat 1990) "is hopeless". But while this sentiment is fairly widely shared, there isn't anything in this volume in the way of positive suggestions about how to deal with it. The closest we come is in Winograd's suggestive remark that "The connectionists ... say that there is something not on top of the symbol structure but below the symbol level, in the way things are built" (295). There is also an indication of an important trend in George Lakoff's interview, which doesn't actually mention the problem explicitly, when he stresses that "the terms on which you understand the world ... comes out of interaction with the world" (121). This interactive view, the view that the body has an essential role to play in the development of the mind, taps into a movement that is rapidly gaining ground. It fits in with the positive views of Dreyfus, Haugeland, and Gerald Edelman, whose work is praised by Hilary Putnam (185), all of whom see knowledge as biological, so that motion in and interaction with the world, being crucial to survival, become the causal determinants of all epistemological principles. Pat Churchland is close to them in spirit when she recalls her re-orientation under the influence of neurology: "I thought the juicy topics were memory, perception, consciousness; why should I be interested in how animals move? The lab taught me ... that, for evolutionary reasons, cognition is closely tied to motor control, since adaptive movement is critical in survival." (23).
Most participants, in fact, agree that GOFAI has not met the high hopes vested in it, and many see neuroscience, and the modeling of neural processes in neural networks, as promising the breakthrough required to bring Artificial Intelligence to more realistic achievements.
Traditional programming as practiced in "GOFAI" has two related features: one, it is "top down", relying on increasingly sophisticated analyses of the steps involved in the accomplishment of tasks, broken down stage by stage into more elementary tasks. This idea represented a crucial breakthrough from a philosophical point of view, since it affords a constructive reply to the objections of Ryle (1956) and Wittgenstein (1953) to mentalistic accounts of thought and language. Ryle and Wittgenstein frequently raised the specter of infinite regress of homunculi; but Dennett pointed out that computers provided a model of a "finite regress of homunculi. The homunculi were stupider and stupider and stupider, so finally you can ... break [the homunculus] down into parts ... replaceable by machines." (67). This ability to break down tasks was originally premised on a capacity thoroughly to understand the processes in question, as a stage preliminary to their successful modeling. Connectionism, by contrast, does not rely on any prior understanding of those capacities that it ends up modeling.
One stark resulting contrast is brought out by the following question: is it possible for a practitioner of Artificial Intelligence to understand perfectly how their system functions and yet to have no idea how it performs the tasks it was designed to perform? For a traditional practitioner, the answer is No (though any piece of software might well be -- and nowadays usually is -- too complicated for any one person to satisfy the first condition). But for a connectionist, the answer is a straightforward Yes. Since the machine has been programmed to learn, to understand how it works is to understand how it learns; it is not necessarily to understand what it has learned.
Other important distinctions follow from the basic difference between GOFAI and connectionism:
-- GOFAI is "brittle", and therefore very unlike real brains, because once a particular location of memory is destroyed the entire program that relies on that information may become inoperative. By contrast, neural nets are robust: since the information they encode is distributed, it will survive the destruction of any particular location.
-- GOFAI is top down, meaning that to program it requires one to analyze the targeted tasks at increasingly detailed levels; neural nets are bottom up, in the sense that what is understood is the mechanism of their basic units, but the way these units work together may be neither planned nor even understood.
-- Hence, while GOFAI relies on the prior understanding of tasks, connectionists rely on programming low level processes and getting them to produce, by a kind of "emergent" alchemy, the higher level functions that one ultimately intends to model. Hence its description as "bottom-up."
-- GOFAI takes language for granted and works with various kinds and levels of language; connectionists see themselves as committed to the explanation of linguistic capacity.
-- GOFAI prides itself on being “implementation neutral”, which means that a program, once designed, should be able to run more or less well on any machine; connectionists, on the contrary, are committed to implementation as the basic level that should produce and explain the other levels of analysis. One useful way to think about these levels, taken up by Robert Wilensky (277) and echoed by Hilary Putnam (179), is due to David Marr (1982): at the "computational" level, a certain task is defined (in typically though not necessarily intentional terms); the mathematical procedure for performing that task is defined at the "algorithmic" level. And finally the algorithms are performed by means of physical operations of one sort or another, which is the "implementational" level.
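Marr's distinction can be illustrated with a deliberately humble example of my own (sorting, not vision): a single computational-level specification, two different algorithms that satisfy it, and an implementational level left to whatever physically runs the procedure, here simply the Python interpreter.

```python
# Computational level: WHAT is computed, defined by an input-output relation.
def task_specification(xs, ys):
    """ys counts as a solution iff it is xs rearranged in ascending order."""
    return sorted(xs) == ys

# Algorithmic level: HOW it is computed. Two quite different procedures
# satisfy the same computational-level specification.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)   # slot each element into its place
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    a, b = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while a and b:         # merge two sorted halves
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b

# Implementational level: abstracted away entirely -- nothing above says
# whether the steps are carried out in silicon, neurons, or pencil and paper.
data = [3, 1, 2]
print(insertion_sort(data), merge_sort(data))
```

The quarrel between GOFAI and connectionism can then be put as a disagreement about which of these levels is explanatorily basic.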
One of the ways that attention to the nitty gritty of how things work on a material basis pays off is that limitations that look like merely technical constraints are seen to be theoretically crucial. (Sejnowski's interview is entitled: "The hardware really matters".) There is a tradition of experiments in psychology that rests on testing the "psychological reality" of different models by looking at the way that they unfold in real time. To take a classic example, we know we can't be doing arithmetic the way our computers do it because the computer is just too good at it: for a computer to pass the Turing test we would need to put in a routine requiring it to be slower and dumber. No computer, unless it has parallel processors, can come close to the "real time" solution of the problem of vision, for example: and that, as Fodor points out (93), is evidence that vision is achieved by processors working in parallel. But Fodor is also at pains to point out that von Neumann machines can run in parallel too. To add to the confusion, most connectionist machines don't actually exist: rather they are simulated on serial, von Neumann computers. So a few skeptics, notably Fodor, actually resist the rush to connectionism simply on the ground that it does not provide any real alternative. For connectionist networks, claims Fodor, enable nothing that a statistical approach to belief change would not yield (95). Hence "connectionist systems will just disappear the way expert systems have disappeared." (Fodor, 94).
Two sets of problems are mentioned in passing in this collection but are essentially neglected, reflecting not so much their lack of interest as their intractability. One is the problem of experienced qualities, or qualia, or consciousness as such; the other is the problem of the social dependency of mental states. Qualia form a notorious block for eliminative materialism. Thus Paul Churchland is careful to note (possibly rejecting his own earlier views) that "in the case of qualia, I am disinclined to be an eliminativist." Instead, he hopes, "we will be able to explain qualia very nicely" (42). Aaron Cicourel raises a formal analogy at a different level, stressing the parallel between socially distributed knowledge and distributed knowledge in a neural net, though what he actually mentions is only the concepts of "socially distributed cognition and socially distributed knowledge" (49), and "the mutual influence of socially distributed knowledge and the constraints imposed by the neural organization of knowledge" (50).
The editors/authors have provided concise but excellent bibliographies with each interview, and a well designed index. All in all, this book will provide a nice mix of voyeuristic titillation and genuine information to readers equipped with a nodding acquaintance with the field. Even those who come to it cold will be able to get pleasure from it, provided they first study the introduction and glossary.
Ronald de Sousa is Professor of Philosophy at the University of Toronto. His book, The Rationality of Emotion (MIT 1987) is forthcoming in German from Suhrkamp as Die Rationalität des Gefühls. His current research project is on various aspects of Human Individuality.