Go to Semiotic Review of Books Home Page
Go to SRB Highlights
Go to SRB Archives


SRB Archives

This article appeared in Volume 2 (2) of The Semiotic Review of Books.

The Semiotic Organ: Language and the Brain

by James H. Bradford

Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press, 1990. 273 p. ISBN 0-262-15035-2.

This book offers a broad survey of language and the sciences of language from a specifically psychological point of view. Each of the chapters summarizes a different perspective on the psychological processes of language with an emphasis on recent experimental work. The book is well written and provides a fascinating account of the scientific study of language which is suitable for the general reader. Read on one level, the book provides a good (if necessarily brief) description of the experimental facts and prevailing theories that bear on such intriguing questions as: How is syntactic information represented in the brain? How do we organize and access knowledge about words? How are complete sentences analyzed and perceived? But read on another level, the book offers fascinating insights, based on recent scientific discoveries, into what might be called "the innateness controversy" (the term is borrowed from an article by Fodor (Fodor 1981)).

The innateness controversy concerns some very deep issues on the nature of language. Specifically, is language an artifact arising from the collective human intelligence (in much the same way as television sets and highway systems)? Or is language, in some way, an inherited, species-specific trait, like the distinctive human bipedal gait (Lenneberg 1964)? Modern debate on the controversy began with Chomsky's famous review of B.F. Skinner's book Verbal Behavior (Chomsky 1959). Osherson and Lasnik's book takes the view that much of human language use and acquisition can be explained by specialized structures within the human brain, a kind of "language organ." As a consequence, language is seen as a trait rather than an artifact. The different human languages are different expressions of this trait, just as the human trait of walking is different in different terrains. The most interesting (and exciting, if true) claim is made in chapter 8. In the tradition of Chomsky's universal grammar (Baker 1984), the author claims that human language may be described as a "parametric model" in which different parameters are turned off or on for any specific language (Pinker 1990). For example, word order within phrases may or may not be rigidly fixed within a language, but for any given language it is either one or the other.

Osherson and Lasnik's book is structured to survey language issues in an orderly fashion, beginning with syntax and semantics, then a brief look at speech (with an interesting account of various aphasias), and then a look at various language processes as they occur in the brain. The book concludes with a chapter by Higginbotham on some of the philosophical issues raised by the psychological study of language.

Syntax & Semantics

Lasnik begins the book with a chapter on syntax and grammar, and this is followed by Larson's chapter on semantics. As might be expected, the two chapters are closely linked in both area and perspective. Another feature common to these and all other sections of the book is the extensive use of compelling examples to make specific points. This is obviously a consequence of strong editorial control, and Osherson and Lasnik should be congratulated for enforcing this standard. Most scientists and scholars know that "proof by example" is no proof at all, and as a result much of their writing is burdened by abstruse argument that is at once technically correct and impenetrable. Osherson and Lasnik have adhered to the principle of "explication by example" and their book is highly readable as a consequence.

The chapter on syntax makes the case for Chomsky's transformational grammar (Chomsky 1965). The author uses a number of ingenious illustrations to make some difficult points. For example, it is argued that when the (presumed) deep grammatical structures that underlie language are transformed to produce the surface structure that we see in everyday use, some of the transformations leave invisible traces that nevertheless have grammatical effect. Consider the following from chapter 1 (Lasnik 1990):

...colloquial English has a contraction process by which want and to become wanna when they are immediately adjacent:

(37)a. You want to solve this problem.

b. You wanna solve this problem.

...The relevant (and surprising) property of wanna contraction is that even if the student (in the following)...is displaced by Topicalization, contraction is still blocked for most speakers:

(39)a. The student you want to solve this problem.

b. * The student you wanna solve this problem.

Superficially nothing appears to intervene between want and to in (39a); hence there seems to be nothing to prevent contraction. But if we assume that Topicalization leaves a trace then in fact something does intervene: the trace of the student.

If this chapter has a weakness, it is that it may leave the naive reader with the impression that the understanding of the grammatical structures underlying human language is a solved problem, and that Chomsky's approach is the solution.

The issue of semantics has, of course, produced an enormous body of knowledge in philosophy, linguistics, mathematics and computer science. I was interested to see how Larson would handle it in his chapter. For the most part, the author restricts himself to a description of "Model Semantics," which is similar to the idea of "Semantic Markers," an approach developed by Katz and Fodor (Katz 1963) and much favoured by the Artificial Intelligence research community (Charniak 1976, Tennant 1981). Larson summarizes as follows (Larson 1990):

...truth-conditional theories take the view that meaning is fundamentally a relation between language and the world. Interpretation involves systematically correlating sentences with the world through the notion of truth. Model theory studies the relation between languages and worlds in a formal way.

The essence of Model Semantics is to characterize words such as nouns and verbs in terms of the sets of objects or actions that they describe. As these words and those from other grammatical categories combine grammatically, so are the associated sets operated on by a series of predefined set-operations. So for example, if "cars" is associated with C, the set of all cars, then "green cars" is the subset of C that satisfies the property green. These kinds of set operations are easy to do on a computer and this is one of the reasons that this approach to language semantics is favoured by those interested in computer understanding of natural languages. If there is any criticism of Larson's chapter, it is that the reader does not come away with a good impression of the breadth and complexity of the issues surrounding studies of semantics. However, this may have been traded in favour of brevity and clarity.
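The set-operation view described above can be sketched in a few lines of code. The tiny "world" and vocabulary below are invented for illustration; the point is only that noun and adjective denotations are sets, and that grammatical combination (as in "green cars") becomes set intersection.

```python
# A toy illustration of model-theoretic semantics: word meanings as sets,
# with grammatical combination realized as set operations.
# The "world" and vocabulary here are hypothetical.

# The model: a small universe of objects, each a (kind, colour) pair.
world = {
    ("car", "green"), ("car", "red"), ("car", "blue"),
    ("truck", "green"), ("truck", "red"),
}

def noun(kind):
    """A noun denotes the set of objects it describes."""
    return {obj for obj in world if obj[0] == kind}

def adjective(colour):
    """An adjective denotes the set of objects with the named property."""
    return {obj for obj in world if obj[1] == colour}

# Adjective-noun combination is set intersection: "green cars" is the
# subset of the set of cars satisfying the property green.
green_cars = adjective("green") & noun("car")
print(green_cars)  # {('car', 'green')}
```

The attraction for computational work is evident: once denotations are sets, composition reduces to intersection, union and complement, all of which are trivial machine operations.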


Chapters 3 and 4 deal with phonology and speech perception, respectively. They describe the mechanics by which humans interpret patterns of compression and rarefaction in the air as the coherent communication of language. Chapter 3 by Halle is a fairly complete review of phonetics and speech mechanisms. It provides the reader with a good description of how the components of our speech mechanisms produce the basic sounds of human speech. From the perspective of the wider issues addressed by this book (specifically, the aspects of language that are universal within the human race), section 3.3 on the psychological reality of phonetic features is the most interesting. Halle presents some fairly elaborate rules that describe how English speakers use suffixes to form the plurals of words and how the resulting plurals are pronounced (this is known technically as "feature composition" and refers to the sequence of allowable sounds, not the lexical aspects of pluralisation). He cites research that indicates these rules of phonetic feature composition extend across languages even when the physical structures of our speech production apparatus would not require them to do so.

Chapter 4 by Joanne Miller deals with speech perception and presents a number of fascinating studies. The chapter begins with a brief overview of why speech perception is a difficult and complex task. This chapter is essential reading for all those who have wondered why modern computers, with all their computational power, are still unable to carry out spoken dialogues with their users (see also Ainsworth 1988, Waterworth 1987, and Frauenfelder 1987). The chapter also gives a succinct description of the two major theories of how humans perceive speech: the motor theory (developed by Liberman et al. (Liberman 1985)), which contends that we recognize speech because we have internal knowledge about how it is produced, and the auditory theory (developed from many sources), which contends that the brain contains structures for perceiving speech that are largely independent of the structures for speech production.

From the point of view of semiotics, the most interesting consequence of these theories is that the motor theory predicts that speech perception is species-specific and innate (i.e. part of our biological heritage). The auditory theory predicts just the reverse. Miller describes ingenious experiments that address both of these points.

The first experiment examines the claim that speech perception may be species-specific. It should be emphasized that Miller is not talking about the understanding of speech but rather the perception of speech (the organic equivalent of digital signal analysis). In Miller's words (Miller 1990):

A way to test the claim that speech perception is species-specific becomes obvious: examine speech perception in nonhuman animals whose basic auditory systems are similar to those of humans, and see whether these animals process speech in the same way humans do.

As unlikely as it seems, the study Miller cites (Kuhl 1978) involved comparing humans to chinchillas (chinchillas were used because they and humans have very similar basic auditory sensitivity). The experiment compared the reactions of the two species to a specific speech feature, Voice Onset Time (VOT), which is used by humans to help discriminate between consonants. By varying the VOT of a computer-generated voice, the syllable "ba" can be transformed into "pa" (as perceived by humans). The ba/pa boundary occurs when VOT = 25 milliseconds. The chinchillas were trained to have different conditioned responses when they heard "ba" or "pa". When they were subsequently given the same stimuli as the human subjects, the chinchillas showed the "ba" conditioned response until VOT = 25 milliseconds, whereupon they switched to the "pa" response. This strongly suggests that the two species share a common (or at least very similar) neurological mechanism for detecting this particular speech feature. There is still much research that must be done on this question, but the early evidence seems to indicate that the auditory processing mechanisms that humans use to perceive speech are common to other species as well.

To explore the issue of whether speech perception is acquired or innate, Miller describes studies (Eimas 1971, Eimas 1987) concerning the speech perception capacities of infants (who presumably have not had sufficient time to acquire much language expertise).

The previous experiment with chinchillas clearly suggests that human infants will ultimately develop neurological mechanisms to support speech perception, but the question here is whether such mechanisms need to develop or are present from the beginning as a kind of genetic heritage (in other words, is the brain "pre-wired" for speech?). In these studies, the Voice Onset Time of synthetic speech was varied to produce a ba/pa distinction. The infants were very young (a group of 1-month-old babies and a group of 4-month-old babies) and were clearly pre-verbal. Determining what humans perceive at this age is very difficult; infants at the age of 1 month apparently cannot even be conditioned as well as chinchillas. The procedure used to evaluate what an infant hears is highly ingenious. It has been shown that when an infant is sucking on a pacifier, the rate of sucking increases when the infant is aroused by novel stimuli. Thus, as the VOT was gradually increased in small increments, an infant's sucking rate would remain relatively low until the infant perceived "something new". If infants cannot categorize consonant sounds as adults do, then every increment of VOT may be perceived as a new stimulus with a corresponding change in sucking rate. However, the experiments actually showed that sucking rates increased markedly only at the point where adults experience the ba/pa distinction. Although such studies are not absolutely conclusive, they do suggest that humans have at least some of the neurological mechanisms necessary for speech perception from the moment of birth.
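The logic of the sucking-rate paradigm can be sketched as a small simulation. The 25 ms boundary is from the text; the sucking rates and stimulus series below are invented numbers chosen only to show the pattern: if perception is categorical, novelty registers at the category boundary and nowhere else.

```python
# A sketch of the high-amplitude sucking paradigm described above: an
# infant's sucking rate rises only when a stimulus is perceived as novel.
# The boundary is from the text; rates and stimuli are hypothetical.

BOUNDARY_MS = 25

def category(vot_ms):
    """Categorical percept of a VOT value."""
    return "ba" if vot_ms < BOUNDARY_MS else "pa"

def sucking_rates(vot_series, base=20, spike=35):
    """Simulated sucking rate per stimulus: a spike marks perceived novelty,
    i.e. a change in the perceived category."""
    rates, last = [], None
    for vot in vot_series:
        cat = category(vot)
        rates.append(spike if last is not None and cat != last else base)
        last = cat
    return rates

# VOT rises in 5 ms steps; only the crossing at 25 ms registers as novel.
print(sucking_rates([10, 15, 20, 25, 30, 35]))  # [20, 20, 20, 35, 20, 20]
```

Had perception been continuous rather than categorical, every 5 ms increment would have produced a spike; the single spike at the boundary is what licenses the inference about pre-wired perception.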

Language Processes in The Brain

The next four chapters take a detailed look at how the brain processes language. Chapter 5 is specifically concerned with lexical processing, chapter 6 deals with sentence processing, chapter 7 with language in general, and chapter 8 with human acquisition of language.

Forster's chapter on lexical processing is primarily focussed on how the meaning, spelling and grammatical role of individual words are stored and recalled in the brain. Most of the experiments described in this chapter are concerned with the "access time" of the brain's lexical processor (i.e. how long does it take the brain to recognize specific words, and what factors affect the speed of recognition?). Forster points out that word frequency has long been known to affect recognition (we recognize the word "canary" much more rapidly than "apteryx" because we encounter the former more frequently). He also describes a phenomenon known as the semantic priming effect: if we consider pairs of words such as "doctor...nurse" and "banker...nurse", we will recognize "nurse" more quickly in the first pair than in the second. This suggests that the brain somehow accesses categories of words (doctor, nurse, hospital, scalpel, etc.) and these categories are searched early when we encounter subsequent words. Measuring these priming effects can present very subtle problems, as illustrated by the following from Forster (Forster 1990):

...if the spoken sentence is the walker poured the port into the glasses, then both WINE and SHIP show facilitation. However, if the probe is delayed for just a syllable or two, then only the contextually relevant meaning of port appears to be activated; that is WINE shows facilitation, but SHIP does not.
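The priming effect described above can be caricatured in a few lines. The categories, words and recognition times below are hypothetical; the sketch shows only the core idea that recognizing a word activates its category, making related words faster to recognize.

```python
# A toy model of the semantic priming effect: recognition is faster when
# the prime word shares a semantic category with the target word.
# Categories, words and timings are hypothetical.

CATEGORIES = {
    "medicine": {"doctor", "nurse", "hospital", "scalpel"},
    "finance": {"banker", "loan", "vault"},
}

BASE_MS, PRIMED_MS = 600, 450  # invented recognition times

def recognition_time(word, prime=None):
    """Return a simulated recognition time for `word` given a prior prime."""
    for members in CATEGORIES.values():
        if word in members and prime in members:
            return PRIMED_MS  # category already activated by the prime
    return BASE_MS

print(recognition_time("nurse", prime="doctor"))  # 450: same category
print(recognition_time("nurse", prime="banker"))  # 600: unrelated prime
```

Forster's port/WINE/SHIP example shows why real measurement is harder than this sketch: an ambiguous word briefly activates all of its categories, and the time at which one probes determines which activations are still visible.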

Garrett's chapter on sentence processing parallels Lasnik's chapter on syntax but with an emphasis on the neurological structures that support sentential analysis. Much of the chapter is a catalogue of known effects (resolution of various kinds of lexical ambiguities, determining constraints on verbs, determination of pronominal and anaphoric references, etc.). One of the main themes of the chapter concerns the modularity of human language perception mechanisms. Specifically, the modular theory claims that the brain contains specialized structures for such things as grammatical analysis, as contrasted with the belief that such competencies are essentially skills that use the brain's general purpose problem solving capacity. In the closing section of his chapter, Garrett argues convincingly that the language mechanisms of the brain are indeed modular.

Chapter 7, by Zurif, is concerned with measurable brain functions, specifically the "correlations between language processing and temporospatial patterns of activity in the brain" (Zurif 1990). The chapter starts with a revealing study of a pair of aphasias that have been known for nearly a century. In particular, patients with Broca's aphasia show an impaired ability to produce language but are relatively unimpaired in the comprehension of language. Patients with Wernicke's aphasia, on the other hand, show just the opposite set of symptoms. Wernicke theorized that the mechanisms underlying language comprehension are distinct from those underlying language production and are based on the body's sensory/motor distinction. Zurif describes a number of modern experiments, using sophisticated tools such as PET scans, that have largely disproved Wernicke's theory.

The chapter concludes with a brief summary of some promising new tools for exploring language issues (for example, event-related brain potentials, or ERPs). However, the author rightly points out that we are still in an early phase of this kind of research. In his words (Zurif 1990): "For the most part, however, this litmus has been applied to broad distinctions of very limited value to current linguistic theory."

Chapter 8 on Language Acquisition is perhaps the most provocative section of the book. In this chapter, Steven Pinker deals with the issue of how children acquire language from everyday experiences. On the face of it, this would seem to be a nearly impossible task, given the complexities of language, the idiosyncrasies of childhood exposure to language, and the many violations of the rules of language encountered in ordinary speech. Pinker's conclusions are startling (Pinker 1990):

A striking discovery of modern generative grammar is that natural languages all seem to be built on the same basic plan. Many differences among languages are not radical differences in basic structure but different settings of a few 'parameters' that allow languages to vary, or different choices of rule types from a fairly small inventory of possibilities.... On this view, the child only has to set these parameters on the basis of parental input, and the full richness of grammar will ensue when those parameterized rules interact with one another and with universal principles. The parameter setting view can help explain the universality and rapidity of language acquisition: when the child learns one fact about her language, she can deduce that other facts are also true of it without having to learn them one by one.
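The parameter-setting view Pinker describes can be sketched as a grammar built from a few switches. The single "head-direction" parameter used below is a simplified stand-in for the small inventory of parameters the quotation alludes to, and the derived word-order facts are illustrative only; the point is that fixing one parameter from limited input settles several surface facts at once.

```python
# A sketch of the "parameter setting" view of language acquisition quoted
# above: a grammar as a small set of switches. The parameter and the
# derived facts below are simplified stand-ins for illustration.

def grammar_facts(head_initial):
    """Derive several word-order facts from one head-direction parameter."""
    if head_initial:  # roughly English-like settings
        return {
            "verb_object_order": "verb-object",
            "adposition_order": "preposition-noun",
        }
    else:             # roughly Japanese-like settings
        return {
            "verb_object_order": "object-verb",
            "adposition_order": "noun-postposition",
        }

# Hearing a single verb-object sentence suffices to set the parameter;
# the facts about adpositions then follow without being learned separately.
print(grammar_facts(head_initial=True)["adposition_order"])
```

This is what makes the model attractive as an account of the rapidity of acquisition: the child's evidence need only discriminate among a few parameter settings, not among the unbounded space of possible grammars.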

Pinker has built his case well and this reader had little choice but to agree with his conclusions.


Although speculation about the relationship between human thought and human languages is as old as recorded history, this is a special time for language study. As computers and new technology for monitoring brain activity combine with recent advances in linguistics and the mathematics of language, we may finally be in a position to address the fundamental questions of language.

The book by Osherson and Lasnik offers tentative, thought-provoking, and controversial answers to these ancient questions.


References

Ainsworth, W.A. (1988) Speech Recognition by Machine. London: Peter Peregrinus Ltd.

Baker, G.P. and Hacker, P.M.S. (1984) Language Sense and Nonsense: A Critical Investigation into Modern Theories of Language. Oxford: Basil Blackwell Publisher Ltd.: 248.

Charniak, E. and Wilks, Y. (1976) Computational Semantics: An Introduction to Artificial Intelligence and Natural Language Comprehension. Oxford: North-Holland: 43-54.

Chomsky, N. (1959) "A Review of B. F. Skinner's Verbal Behavior" (New York: Appleton-Century-Crofts, Inc., 1957). In Language 35.1: 26-58.

---. (1965) Aspects of the Theory of Syntax. Cambridge Mass.: MIT Press.

Eimas, P.D., Siqueland, E.R., Jusczyk, P.W. and Vigorito, J. (1971) "Speech Perception in Infants." Science. 171:303-306.

Eimas, P.D., Miller, J.L. and Jusczyk, P.W. (1987) "On Infant Speech Perception and the Acquisition of Language." In Categorical Perception. Edited by S. Harnad. Cambridge U.K.: Cambridge University Press.

Fodor, J.A. (1981) Representations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge, Massachusetts: The MIT Press: 257.

Forster, K.l. (1990) "Lexical Processing." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 116.

Frauenfelder, U.H. and Tyler, L.K. (1987) Spoken Word Recognition. Cambridge Mass.: Bradford Books, MIT Press.

Katz, J. and Fodor, J.A. (1963) "The Structure of a Semantic Theory." Language 39:170-210.

Kuhl, P.K. and Miller, J.D. (1978) "Speech Perception by the Chinchilla: Identification Functions for Synthetic VOT Stimuli." Journal of the Acoustical Society of America. 63:905-917.

Larson, R.K. (1990) "Semantics." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 30-31.

Lasnik, H. (1990) "Syntax." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 16-17.

Lenneberg, E.H. (1964) "The Capacity for Language Acquisition." In The Structure of Language: Readings in the Philosophy of Language. Fodor, J.A. and Katz, J.J. (ed.). Englewood Cliffs, New Jersey: Prentice-Hall: 579-603.

Liberman, A.M. and Mattingly, I.G. (1985) "The Motor Theory of Speech Perception Revised." Cognition. 21:1-36.

Miller, J.L. (1990) "Speech Perception." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 81.

Pinker, S. (1990) "Language Acquisition." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 199, 131 and 230-231.

Tennant, H. (1981) Natural Language Processing. Princeton: Petrocelli Books: 101-138.

Waterworth, J.A. and Talbot, M. (1987) Speech and Language Based Interaction with Machines: towards the conversational computer. Toronto: John Wiley and Sons.

Zurif, E.B. (1990) "Language and the Brain." In Language: An Invitation to Cognitive Science. Edited by Daniel N. Osherson and Howard Lasnik. Cambridge Mass.: Bradford Books, MIT Press: 177 and 195.

James Bradford is an Associate Professor of Computer Science at Brock University. He is also cross-appointed to the Psychology Department at Brock and holds an adjunct appointment at the University of Guelph. His primary research interests are automatic speech recognition by computer and the human factors of user interface design. He is currently leading a project to determine the effectiveness of speech interfaces for access to online help as well as a project to study ways of using contextual knowledge to improve the reliability of speech interfaces in noisy environments.
