Your initial discussion thread is due on Day 3 (Thursday) and you have until Day 7 (Monday) to respond to your classmates. Your grade will reflect both the quality of your initial post and the depth of your responses. Refer to the Discussion Forum Grading Rubric under the Settings icon above for guidance on how your discussion will be evaluated.
The Brain and Language [WLO: 2] [CLO: 3]
In reading Chapter 8, in what specific places did you need to make bridging references? Include at least three examples and the section and paragraph number in your response. Did you need to pause to go back and reread the previous sentence or two at these points? Describe some of the micropropositions asserted in the text: Where were these located in the text? What are some of the macropropositions that convey the basic gist of the chapter, and where were these located in the text? Use the reading from Chapter 8 to thoroughly answer these questions.
Be sure to use your own Academic Voice and apply in-text citations appropriately throughout your post. The In-Text Citation Helper: A Guide to Making APA In-Text Citations tutorial can clarify further questions you might have on APA.
Guided Response: Respond to two of your classmates’ posts. Pose questions about their bridging references and suggest ways for further understanding the concepts of that section of the chapter.
For deeper engagement and learning, you are encouraged to provide responses to any comments or questions others have given to you. Remember, continuing to engage with peers and the instructor will further the conversation and provide you with opportunities to demonstrate your content expertise, critical thinking, and real-world experiences with this topic.
8.1 Defining Language
Language is a system of symbols that are used to communicate ideas among two or more individuals. Language uses both mental and external representations. An author communicates with a reader using the symbols of letters and words to get across ideas. In conversation, the speaker and listener exchange mental representations using spoken rather than written symbols. All languages share four rudimentary properties (Clark & Clark, 1977): Children can learn them, adults can speak and understand them readily, they capture the ideas that people normally communicate, and they enable communication among groups of people in a social and cultural context.
Language can be used to communicate factual information, but this is not its sole function (Aitchison, 1996). Venting emotions, telling jokes, and making social greetings are common uses of language that are not intended to articulate facts. Moreover, some factual knowledge is very difficult to convey with language. For example, try to describe to a friend what a spiral is without resorting to gestures or drawing a diagram. Or try to use words alone to teach someone how to tie a square knot. Such visual-spatial knowledge is not easily captured in words.
At the heart of language is the use of symbols to convey meaning. Human beings use words, or patterns of sound, to refer to objects, events, beliefs, desires, feelings, and intentions. The words carry meanings. If your friend says he is happy, then you interpret this to mean something about his emotional state. On the other hand, if your friend whistles a tune, his behavior may say something about his emotional state, but it is less meaningful. Your friend might whistle by habit or whistle when he is angry, sad, or happy. Unlike speech, whistling is not specialized to convey a clear meaning. Once human beings learn a word, they can retrieve its mental representation, hold it in working memory, and use it in thought. The word itself is represented separately from the object or event to which it refers.
The words used by human beings typically are arbitrary; they lack any connection between the symbols and the meanings they carry. Therefore, they differ across languages. Uno, ein, and one are arbitrary sounds referring to the same numerical concept. A single scratch in the dirt or mark on a clay tablet would be a nonarbitrary way of referring to the number one, and ten such marks would nonarbitrarily refer to the number ten. But the use of nonarbitrary symbols can be cumbersome. The invention of Arabic numerals for representing numerical quantities greatly simplified the task of representing, say, 432 jars of olive oil or wine.
By putting together strings of words in different orders, one can express a very large number of different meanings. Consider, for example, a six-word sentence. Suppose that one selected 1 of 10 possible words for the first word of the sentence, 1 of another set of 10 possible words for the second word, and so on. The number of unique sentences that could be generated following this procedure would equal 10⁶, or 1,000,000 sentences. Because one is not limited to only six-word sentences or to 10 possible choices, the number of unique sentences that one might utter is infinite.
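The counting argument above can be sketched in a few lines of code (Python here, purely illustrative; the vocabulary size and sentence length are the hypothetical values from the example):

```python
# Each of the 6 word positions can be filled by any of 10 candidate words,
# so the number of distinct sentences is 10 multiplied by itself 6 times.
choices_per_position = 10
sentence_length = 6

unique_sentences = choices_per_position ** sentence_length
print(unique_sentences)  # 1000000
```

Doubling the choices per slot to 20 words would already yield 20⁶ = 64,000,000 sentences, which illustrates why the space of possible utterances grows so explosively with vocabulary.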
Origins of Language
How did language begin? The answer is unknown and perhaps unknowable, but the question is too tantalizing to ignore. Linguists have reconstructed what they believe early languages were like up to about 10,000 years ago by studying the relationships among the written records of ancient languages dating back about 5,000 years (Aitchison, 1996). However, scholars believe that the origins of language lie much farther back in human evolutionary history. Casts made from the skulls of Homo habilis, our early ancestor from more than two million years ago, reveal what could have been Broca’s speech area (Tobias, 1987). However, the unusual shape of the human vocal tract, a necessary requirement for speech, emerged later (perhaps 150,000 to 200,000 years ago) in our own species, Homo sapiens sapiens (Corballis, 1989; Lieberman, 1984).
A long-debated idea is that language developed from gestures. However, it appears that language and gestures may well have evolved together (Aitchison, 1996). When our ancestors first began to communicate their thoughts to other individuals, they needed a way to refer to specific objects and to relate those objects. It is known that gestures are often synchronized in time with oral statements to convey meaning (Goldin-Meadow, McNeill, & Singleton, 1996). Spoken and gestural outputs are synchronized even in congenitally blind individuals who have never seen anyone gesturing. It could be that gestural and spoken output developed in tandem, with each specialized to communicate particular kinds of information.
Another idea is that language evolved as a consequence of the large brain of human beings (Gould & Lewontin, 1979). Language might be an example of taking an existing biological structure and adapting it for a new function. The problem with this view is that language is a very complex function. Adapting an existing structure to handle such a complex new function would seem to be unprecedented. As Aitchison (1996) noted, “A type of wading bird uses its wings as a sunshade: there is no evidence of any bird using what was originally a sunshade as wings” (p. 75).
A plausible alternative is that language and a large brain emerged more or less simultaneously (Deacon, 1997). As our hominid ancestors lived together in increasingly larger groups, the degree of social interaction increased. Deception may have become a more important tool for gaining an advantage in getting the food, water, and shelter needed for survival. Social forces may have selected for a slightly larger brain and may have also selected for means of communication at the same time. Thus, language and brain size may have fed off each other during the process of evolution, and increasingly sophisticated means of communication would have demanded increasingly complex brain structures.
Meaning, Structure, and Use
Perhaps the most obvious point about language is that the words and sentences we speak and comprehend are meaningful. Semantics is the study of meaning. A theory of semantics must explain how people mentally represent the meanings of words and sentences. The expression of one’s thoughts and their comprehension by listeners or readers obviously depend on these mental representations. As discussed in an earlier chapter, sentence meanings can be represented in the form of propositions—that is, abstract codes for the concepts and schemas referred to in a sentence. For example, the sentence “The professor praised the industrious student,” can be analyzed into two propositions. Each involves a list that starts with a relation followed by one or more arguments:
(praise, professor, student, past)
(industrious, student)
The sounds that we generate when saying a sentence must code meaning in a consistent manner so that listeners can understand our utterances. The coding begins with the phonemes or phonological segments that distinguish one meaningful word from another. For example, pill and kill convey different meanings because they differ in their initial phoneme. Each phoneme is produced by the vocal apparatus in a unique manner. The /p/ of praised and the /b/ of braised are pronounced nearly identically; they differ only in that the vocal cords vibrate for /b/ but not for /p/. This difference, called voicing, is also seen between /s/ and /z/. Say each aloud with your fingers on your Adam’s apple, and you can feel the vibration with /z/.
The phonemes of a language are the building blocks of meaningful units, which are known as morphemes. A morpheme is a minimal unit of speech used repeatedly in a language to code a specific meaning. A word such as pill is a morpheme, but so are prefixes and suffixes such as pre- and -es. Each morpheme signals a distinct meaning. For example, the word kill describes an action, and the suffix -ed tells us that an action took place in the past. So the word killed is composed of two morphemes, each of which conveys a specific meaning.
Figure 8.1 Meaningful units of language.
Taken together, all the morphemes in a language make up a mental lexicon, or the dictionary of long-term memory on which human beings rely when speaking and listening and when reading and writing. Each morpheme is a lexical entry in this dictionary in the mind. In semantics, one is particularly concerned with content words, which are the verbs and especially the nouns that refer to natural (e.g., chair) or formal (e.g., the legal definition of marriage) concepts. Function words, such as articles (e.g., the) and prepositions (e.g., by), often serve a grammatical rather than a semantic role. For example, the by in the sentence “Jill bought groceries by the week,” plays the grammatical role of starting a prepositional phrase. The entire phrase could easily be replaced by the adverb weekly without a change in meaning. The content words, on the other hand, all contribute to the unique meaning of the sentence.
A morpheme is a minimal unit of speech used repeatedly in a language to code a specific meaning. The word killed is composed of two morphemes: kill and -ed.
The sound of a language is influenced by its phonology, the sound segments that make up its words. Some phonological segments make a difference in meaning in a given language, and these are called phonemes. English makes use of 40 phonemes, some of which are used in other languages and some not. For example, the consonants l and r signal different morphemes or word meanings, such as the difference between look and rook or between lip and rip. However, in Japanese this phonemic distinction is not made, and the two sounds can easily be confused when native Japanese speakers learn English. Similarly, there are sounds in other languages that are not employed in English. In Spanish, the rolled r is articulated near the front of the mouth and is slightly different from the rolled r of French, which is articulated farther back. Neither of these methods of articulation is used in pronouncing an English r.
Languages also differ in the sequence of phonemes that are permitted. In English, one never sees the sequence pt at the beginning of a word, whereas in Greek the combination is common. This can be seen in English words borrowed from Greek, such as pteropod or pterosaur. The rules of phonology are learned implicitly through repeated exposure to a particular language. Although we are not consciously aware of these rules, we can easily decide whether a nonsense word is possible (e.g., patik) or impossible (e.g., ptkia) by making use of what we have learned unconsciously about English phonology.
The sound structure of a word is detached from the word’s semantics. In the left frontal lobe of the brain, two distinct regions of the neocortex specialize in processing the meanings of words versus the sounds of words. Sometimes during speech production, the sound of the word we want to say cannot be retrieved. There is a feeling of knowing what needs to be said but failing to find the right word. This frustrating experience is called a “tip of the tongue state.” It occurs more often as human beings age and creates the word-finding problem that older adults experience more commonly than young adults. But it can occur at any age and illustrates well the idea that semantic representations in the brain are stored and processed separately from their phonological representations.
Last, the sound structure of language also includes the variations in syllable stress, tone, and intonation that occur across words, phrases, and whole sentences. This aspect of sound structure is called prosody, and it highlights the meter or rhythm of everyday human speech. Just as with poetry and music, there is a song of sorts embedded in our speech. Unlike the phonology and semantics of a word, prosody is processed by the right hemisphere of the brain, just as music is. Male voices are typically pitched at a lower frequency than female voices, but in both sexes the voice modulates in predictable ways. For example, when asking a question, the pitch rises at the end of a sentence to signal the listener. In English, the tune or intonation of the sentence varies. In some languages, the song comes from the tone or pitch levels for syllables. Other languages, such as Mandarin Chinese, use both tone and intonation to create the distinctive song of the language.
Another landmark of language is its structure. The grammatical rules that specify how words and other morphemes are arranged to yield acceptable sentences are called syntax. Technically, syntax is only part of the study of grammar, which is the complete set of rules by which people speak and write correctly (including, for example, punctuation). Here, however, grammar is seen as an abstract set of syntactic rules that describe how the morphemes of a language are sequenced to generate an acceptable sentence. Syntactic rules ensure that speakers, listeners, writers, and readers are all playing the same structural game with language. Because the words used in language must follow one after another in a linear order, either in time (as occurs in speech) or in space (as occurs with text), some convention is needed to order the words and the parts of words (e.g., past-tense suffixes). In English, for instance, a declaration consists of a subject (S) followed by a verb (V) followed by an object (O). Some other languages, such as German, follow an S-O-V pattern instead.
The grammar of a language specifies the rules that enable one to generate all acceptable sentences; at the same time, these rules do not allow the generation of ungrammatical sentences. Nonsentences in the language fail to meet one or more of these rules. If you can speak and understand a language, then you have learned and can use its grammar, even if what you know is implicit and not available for conscious articulation. In learning a second language, students sometimes discover, at a conscious level of analysis, grammatical distinctions in their native tongue (e.g., the pluperfect tense).
An implicit knowledge of grammar provides one with linguistic intuitions (Chomsky, 1965). Being able to identify the parts of speech in a sentence (e.g., knowing what is the subject as opposed to, say, the verb) is one such intuition. Another is recognizing that two different sentence structures mean the same thing (e.g., “The student passed the exam,” and “The exam was passed by the student.”). Recognizing syntactic ambiguity, in which multiple structures are possible, is yet another linguistic intuition (e.g., “Visiting relatives can be a pain.”). A very basic intuition is recognizing whether a string of words is a grammatical sentence.
To illustrate further the concepts of semantics and syntax and the idea of linguistic intuitions, answer the questions in Learning Activity 8.1. The first assertion is an English sentence because it conveys meaning and is syntactically correct. The second assertion is not a sentence because it violates syntactic rules. All of the elements of meaning are there, but they are in the wrong order. The third assertion violates no syntactic rules, yet it fails to make any sense. Your mental representation of the noun ideas does not allow them to sleep in any fashion, let alone to dream about a psychologist.
Experiments on comprehension also reveal our sensitivity to grammar. A classic study by Garrett, Bever, and Fodor (1966) illustrates this point. The authors presented listeners with a series of sentences through earphones and superimposed on the tape-recording of each sentence the sound of a click. The click occurred at various locations relative to the boundaries among the phrases in a given sentence. The listeners’ task was to identify the location at which they heard the click. It turned out that the sound of the click tended to be heard between two syntactic phrases in a sentence even when its actual location occurred earlier or later. Two of the sentences used by Garrett et al. are shown in Figure 8.2. The final words in each sentence (“influence the company was given an award”) were tape-recorded once and inserted into the sentences. Thus, the listener heard precisely the same words and pauses, and the click occurred during the first syllable of the word company. Yet in Sentence A, the word influence ends a prepositional phrase, and the word the begins the main clause of the sentence. In Sentence B, the word company ends a subordinate clause, with was being the verb of the main clause. Listeners given Sentence A heard the click occur earlier than did those given Sentence B. Perception of the click “migrated” toward an important syntactic boundary used in comprehending the sentence. This boundary occurred earlier than the click in Sentence A and later than the click in Sentence B.
Learning Activity 8.1 Syntax and Semantics: A Demonstration
Which of these sentences is grammatical? Which is meaningful?
1. The psychologist slept fitfully, dreaming new ideas.
2. Fitfully the slept new, ideas dreaming psychologist.
3. The new ideas slept fitfully, dreaming a psychologist.
Syntax refers to the rules that specify how words and other morphemes are arranged to yield grammatically acceptable sentences.
The third landmark is the uses or functions of language in social intercourse. Human beings may speak or write with themselves as their only audience. More commonly, however, the utterances made and texts composed are embedded in a discourse community. Language is intended for and shaped by those who collectively listen, read, comprehend, interpret, and respond to its uses. For example, consider the difference in the following two utterances:
It is hot in this room.
Open the window!
The first sentence informs others about how one feels about the room temperature. The second sentence commands someone to let some cool air into the room. But note that in a specific setting, the first sentence might be used to achieve the goal of the second in a polite way. Instead of commanding directly, one can achieve the same effect by merely informing someone standing next to the window of how one feels.
Figure 8.2 The grammatical structure of a sentence influences the perception of a click embedded in the first syllable of the word company.
Pragmatics refers to the manner in which speakers communicate their intentions depending on the social context. A speech act is a sentence uttered to express a speaker’s intention in a way that the listener will recognize (Grice, 1975). There are distinct kinds of speech acts. We inform, command, question, warn, thank, dare, request, and so on. A direct speech act assumes a grammatical form tailored to a particular function. For example, “Open the window,” directly commands. An indirect speech act achieves a function by assuming the guise of another type of speech act. For example, one might question (“Can you open the window?”), warn (“If you don’t open the window, we’ll all pass out.”), threaten (“If you don’t open the window, I’ll shoot!”), declare (“The window really should be open.”), inform (“It really is hot in here.”), or even thank in a sarcastic tone (“Thanks a lot for opening the window!”).
Grice (1975) proposed that when two people begin a conversation, they in essence enter into an implicit contractual agreement called the cooperative principle. This means that the participants agree to say things that are appropriate to the conversation and to end the conversation at a mutually agreeable point. One way to understand this contractual agreement is to recall times when people have violated it. Have you ever heard someone say something that made no sense whatsoever in the context of the ongoing conversation, or seen someone walk off abruptly to end a conversation without warning? The cooperative principle dictates otherwise. We agree to speak audibly, to use languages that listeners understand, and to follow the rules of those languages.
For example, participants try to be informative by saying what others need to know in order to understand and by not providing unnecessary details, as shown in the following exchange (Clark & Clark, 1977):
Steven: Wilfred is meeting a woman for dinner tonight.
Susan: Does his wife know about it?
Steven: Of course she does. The woman he is meeting is his wife. (p. 122)
Steven misled Susan here by failing to provide enough information in his use of the term a woman. Susan inferred that the woman in question was someone other than Wilfred’s wife.
The pragmatic dimension of language stresses the essential point that language involves a dialogue (Clark, 1996). Speaking is a bilateral activity in which it is equally important to listen to what others say in response to our utterances. Not only must a speaker engage in self-monitoring to avoid an error or miscommunication, but the speaker must also monitor listeners for their understanding of what is said. Gestures as well as spoken words are part of the communicative dance of dialogue. As Clark and Krych (2004) observed about spontaneous dialogue, “People not only speak, but nod, smile, point, gaze at each other, and exhibit and place things. . . . At the dinner table, they may point at salt shakers, pass food, and exhibit empty plates. It is the vocal and gestural acts together that comprise their talk” (p. 62).
Partners in a dialogue attempt to ground their words and gestures as they proceed. They cooperate with each other to reach a mutual assurance of understanding, or at least understanding suitable for current purposes. In other words, conversation is a joint activity that shares much in common with less language-based joint activities, such as dancing. The partners must work with each other for the activity to succeed. As an illustration of such cooperation, Clark (1996) described an interaction he had with a clerk in a store. When Clark pointed to what he wanted, the clerk could select the item, ring it up on the cash register, and state simply to Clark, “Twelve seventy-seven,” to which Clark responded, “Let’s see, I’ve got two pennies” (p. 32).
While these statements make little sense outside of the context of this dialogue, they fully satisfied the needs of each partner in the conversation. To know what the store clerk meant by “twelve seventy-seven,” you simply would need to have been there and observed that she had just rung up Clark’s two items on the cash register. If you knew, as the store clerk did, that Clark had first handed her a $20 bill before offering the two pennies, then you would readily grasp why Clark informed her that he had two pennies. Thus, in dialogue, language is used in conjunction with gestures, and both are grounded in a joint activity that provides coherence to the conversation. Although the two fragmentary statements may be obscure—if not incoherent—by themselves, the context of the joint activity renders the language understandable.
Pragmatics addresses the various ways in which speakers communicate their intentions depending on the social context. Speech acts serve to inform, command, question, warn, and so on, but they may do so indirectly rather than directly.
8.2 Contrasts to Animal Communication
Human beings are not unique in using communication or the exchange of information between a sender and a receiver coded in the form of signals understood by both. Pioneering work on communication in the animal kingdom was conducted by von Frisch (1950). The waggle dance of the honeybee directs other members of the hive to distant sources of nectar. The precise nature of the dance communicates the direction and distance from the hive of a source discovered by the dancing bee. Since 1950, an extensive array of communication systems has been documented, including the antennae and head gestures of weaver ants, the alarm calls of vervet monkeys, and the complex signaling of dolphins and whales (Griffin, 1984).
One difference between animal communication and human language is that animals do not use symbols to represent objects. The dance of the honeybee, for example, conveys information about the environment when the bee returns to the hive after locating a source of nectar. The honeybee dance is not symbolic because it is tied directly to the situation. It is not a separate entity that the bee uses to communicate later, when not just returning from or preparing to go to a food source. For human beings, words are detached from their referents, and we use them to recall events from the past or to imagine events that have never even happened. Also, as noted earlier, human beings use purely arbitrary symbols that have no relation to the concept being communicated.
Another fundamental difference is that most animal communication does not involve a theory of mind (Seyfarth & Cheney, 2003). When a human being speaks, the listener learns things about the mind of the speaker, such as the speaker’s attitudes and dispositions to think or behave in particular ways (Pinker, 1994). Tests of animals have failed to show that they make any attributions about the mental state of others. A possible exception to this generalization is the chimpanzee, where the results of tests for theory of mind have been mixed and controversial (Tomasello, 2008). Because of theory of mind deficits in most, if not all, nonhuman species, the speaker or sender does not modify communication signals to tailor the message to the mental state of the listener. For instance, a vervet monkey’s leopard call is not vocalized in different ways depending on whether the listener believes a leopard might be in the area or is oblivious. Human beings, on the other hand, routinely make an attribution about the state of mind of the listener and frame their communication accordingly.
Attempts have been made to teach American Sign Language (ASL) and other specially designed languages to chimpanzees, orangutans, and gorillas. For example, Gardner and Gardner (1969, 1975) raised a chimpanzee named Washoe in an environment comparable to one suitable for a human baby. Those who raised Washoe “spoke” to the chimp using ASL. They trained Washoe to use sign language to ask for what she wanted, and Washoe learned more than 130 signs. When shown the picture of an object, she could make the appropriate sign. More important, Washoe occasionally improvised signs or combined them in novel ways. For example, on first seeing a swan, Washoe gave the sign for water and the sign for bird.
Terrace, Petitto, Sanders, and Bever (1979), however, doubted that what Washoe and other apes had learned was really language. In particular, they questioned whether the apes showed the generative capacity or productivity of human language. Productivity refers to the ability to create novel sentences that can be understood by other speakers of the language. Terrace and his colleagues raised Nim Chimsky, a young male chimpanzee. Like Washoe, Nim learned about 130 signs and could use these to request objects or actions that he wanted at the moment. However, Terrace concluded from careful review of videotapes that Nim’s signs were often repetitions of what his human caretaker had just signed. Terrace found little evidence that Nim could combine signs according to syntactic rules, meaning that Nim could not generate a simple sentence.
Other researchers question Terrace’s strongly negative conclusion regarding primates’ lack of capacity to learn human language. For example, Savage-Rumbaugh, McDonald, Sevcik, Hopkins, and Rupert (1986) reported excellent success in teaching chimpanzees to communicate using a set of shapes as symbols—called lexigrams—on a computer keyboard instead of ASL. A chimp named Kanzi learned the associations between an object in the world, the sound of the English words spoken by a caretaker, and the visual lexigram. Caretakers talked to him about daily routines, such as taking baths, games of tickle, trips to visit other primates, watching TV shows, and many other events. He learned the words and lexigrams for orange, peanut, banana, apple, bedroom, and chase first, because these most interested him. By the age of 6, Kanzi could identify 150 lexigram symbols when he heard the spoken words. He could also perform correctly 70% to 80% of the time in comprehending and responding to sentences such as “Put the rubber band on your ball,” or “Bite the stick.” Of interest, this level of performance was just as good as, if not slightly better than, that observed in a 2-year-old child who was given the same experiences with lexigrams and listening to her mother speak to her (Savage-Rumbaugh & Rumbaugh, 1993).
Even so, no one disagrees that young children with small vocabularies greatly exceed trained apes in their linguistic abilities. A 6-year-old child has a vocabulary on the order of 16,000 words (Carey, 1978). By adulthood, vocabulary is measured in the tens of thousands of words. Human beings can express more ideas than animals for another reason, in addition to vocabulary size. As seen earlier, grammar provides a means for producing novel expressions that have never been spoken before. For example, it is unlikely that you have ever heard someone say the sentence “My dog ordered caviar at the ball game, surprising even the shortstop.” Although some sentences are used repetitively to the point of annoyance (e.g., “Have a nice day.”), it is not at all difficult to generate utterly novel sentences.
Language uses symbols that refer to events displaced in time and space. The mental lexicon and grammar of a language are productive, allowing one to generate an infinite number of novel sentences.
8.3 Representations of Language
How are the rules of grammar, the mental lexicon, and other constituents of language represented in the mind? Which regions of the brain support these mental representations? Are linguistic representations to some extent prewired through genetic predispositions, or are they entirely learned? To what extent are the cognitive processes that manipulate linguistic representations specific to language, and to what extent are they general processes that operate throughout perception, attention, memory, and thinking? In this section, some preliminary answers to these root questions about the mental representation of language are provided.
The language or languages that you heard and learned to speak as a young child shaped many aspects of your knowledge about language. For example, the word in your mental lexicon that you use to refer to the family pet might be dog, chien, or Hund, depending on whether you acquired English, French, or German. However, other aspects of language are culturally invariant. Linguistic universals are properties that are shared by all natural languages in diverse cultures around the world. For example, at about 3 or 4 months of age, babies all over the planet start producing sounds that are similar to adult speech but quite meaningless. This babbling peaks before a child’s first birthday, when often the first understandable word will be uttered (de Villiers & de Villiers, 1978). The first words are invariably composed of a single syllable made up of a consonant and a vowel (CV) or two syllables that repeat the same two sounds (CVCV), as in mama or dada.
Universal grammar refers to the hypothesis that genetically determined knowledge of human language allows children in all cultures to rapidly acquire the language to which they are exposed. Many aspects of semantics and pragmatics vary across languages and are not universal. However, the syntactic structure of languages may, at least in part, reflect an innate universal grammar. A language acquisition device (LAD) is the innate mechanism that presumably analyzes the linguistic inputs to which we are exposed and adjusts its parameters to fit that language. Universal grammar and the LAD are thought to reflect an innate cognitive module that is independent of other cognitive systems (Chomsky, 1986; Fodor, 1983). According to this theory, the universal grammar allows certain parameters of variation among languages, and these parameters are set during the acquisition process.
During parameter setting, a child exposed to a language like Italian would learn to put a positive setting on the pronoun omission parameter. In Italian, in contrast to English, it is permissible to omit the pronoun before a verb because the inflection on the verb conveys the necessary information about the subject of the sentence. For example, “I love” can be expressed as “Amo.” On the other hand, a child exposed to English instead of Italian would learn a negative setting for the parameter of pronoun omission.
To take a different illustration, consider the syntactic order of subject (S), verb (V), and object (O) introduced earlier. A word order parameter in the universal grammar would allow certain combinations and not others. Greenberg (1966) concluded from his examination of natural languages that only four of the six possible orders are used, and that one of these (VOS) is quite rare. The common orders are SOV, SVO, and VSO. In theory, the developing infant would come equipped at birth with the implicit knowledge that natural languages never follow the OVS and OSV orders. Children the world over would come prepared to examine whether the language (or languages) to which they are exposed conform to one of the four possible word orders.
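Greenberg’s observation can be stated as a tiny piece of combinatorics: of the 3! = 6 logically possible orderings of S, V, and O, only four appear in natural languages. A minimal sketch (the sets merely restate the claim in the text):

```python
from itertools import permutations

# The six logically possible orderings of subject (S), verb (V), object (O).
all_orders = {"".join(p) for p in permutations("SVO")}

# Orders attested in natural languages per Greenberg (1966): three common,
# one (VOS) quite rare; OVS and OSV are claimed never to occur.
common = {"SOV", "SVO", "VSO"}
rare = {"VOS"}
unattested = all_orders - common - rare

print(sorted(unattested))  # the orders a universal grammar would rule out
```

On this view, an infant would come equipped to test the input against only the four attested orders, never the two in `unattested`.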
Languages also differ in the degree of word order variation allowed. Russian, for example, tolerates more variation in the order of words in a sentence than does English. Pinker (1990) hypothesized that children are programmed to assume that the grammar of their native language demands a fixed order of words. The evidence suggests that early utterances indeed follow a strict ordering, regardless of the language being learned. In the case of English, these early utterances approximate the grammatically correct order. For Russian children, however, their utterances initially fail to show the full scope of possible word orders. It appears that an innate language acquisition device guides children to try out a fixed order first.
Universal grammar refers to the hypothesis that a genetically determined knowledge of human language allows children in all cultures to rapidly acquire the language to which they are exposed. It remains unclear exactly how genetic predispositions and learning processes combine to support language acquisition.
Absence of Input.
Two kinds of arguments have been advanced in favor of universal grammar, citing atypical cases in which infants fail to receive language input. In the first case, congenitally deaf children have never heard spoken language, and some are not taught standard sign language, either. Despite the absence of speech or sign input, such children invent their own gestural language that reflects the same properties of speech acquired by children with normal hearing (Goldin-Meadow & Mylander, 1990). For example, one-word utterances by normal children occur at about 18 months of age, and these are later followed by two- and three-word utterances. Deaf children similarly invent one-sign gestures at 18 months, followed later by two- and three-sign gestures. Presumably, an innate language acquisition device dictates this common pattern of development.
In the second case, there may be a critical period during which the LAD is open to input (Lenneberg, 1967). Feral children have been found living with animals without any contact with a community of speaking human beings. If found after 5 years of age, such children are typically unable to learn the phonology of human speech. Without exposure to the phonemes of a particular language, the representations that start out as babbling and eventually develop into speech seem to be lost and unrecoverable after the early years of life. There may be a critical period for learning a second language as well, although if so, it is less severe. If you have not learned the phonology of a second language by the time of puberty, then native speakers can detect your foreign accent with ease (Nespor, 1999).
One case that raises questions about the critical period hypothesis occurred during the 1970s, when a child named Genie, who had suffered from severe neglect comparable to that of any “wolf child,” was found in Los Angeles (Curtiss, 1977). From about 20 months of age until she was discovered at age 13, her parents isolated her in a small closed and curtained room. Her mother visited her only a few minutes each day to feed her. The child had no exposure to radio or television. Her father beat her for making noise and barked at her, as well as her brother, like a dog rather than speaking to her. They believed that she was severely retarded. After her rescue, she was tested for language comprehension and was shown to have no knowledge of grammar. Because she was past puberty, the critical period hypothesis would predict that Genie should fail to learn once put in a language-rich environment. However, the results were mixed. Genie began producing single words 5 months after her rescue and two-word utterances at 8 months. Her learning of phonology looked very much like normal language acquisition. She started with consonant-vowel monosyllables, for example, and then worked up to longer words. By contrast, her grammatical development was poor; Genie never did master syntax. The small number of feral children discovered and the possibility that some have mental retardation not specific to language prevent drawing strong conclusions (McDonald, 1997).
The hypothesis that language acquisition draws on a universal grammar has spurred research into its genetic basis. Molecular studies of a three-generation family, known as KE, showed that 15 of the 30 family members suffered from a major dysfunction of the brain mechanisms controlling facial and mouth movements that began early in childhood (Lai, Fisher, Hurst, Vargha-Khadem, & Monaco, 2001). This dysfunction impaired the development of speech, including grammar. Articulation of speech was profoundly impaired; in addition, linguistic deficits were observed in writing, indicating that the underlying cognitive representations of language were affected and not just the speech articulation system. Also supporting the view that central language capacities were impaired, the dysfunction disrupted both the expression of language and its reception or comprehension (Lai, Gerrelii, Monaco, Fisher, & Copp, 2003).
The cause of the language disorder is a mutation of a specific gene, named FOXP2, in the 15 affected family members. Lai et al. (2001) discovered that this gene is expressed early during brain development in several subcortical regions as well as in the cerebellum. These same regions have been identified as the sites of brain pathology in neuroimaging studies of the KE family members with the disorder. The researchers suggested that the affected brain regions are consistent with the hypothesis that the FOXP2 gene is involved in procedural learning mechanisms important to the development of nondeclarative memory. A deficit in motor skill learning could disrupt both the normal sequencing of oral and facial movements and the capacity for sequencing words and morphemes in the correct grammatical order. Although discovering the role of FOXP2 is a good start, the search for the genetic basis of language acquisition is just beginning. Whether the genetic mechanisms discovered are better interpreted as support for an LAD based on universal grammar or for general skill-learning processes also remains to be decided.
In sum, by the age of 4 children typically can utter a remarkable range of unique, grammatically correct sentences employing a vocabulary of a few thousand words. Although social learning through exposure to the utterances of parents and other adults clearly plays a role, native language acquisition occurs without formal instruction, and at approximately the same rate for most children despite considerable variation in the amount of speech directed at them. The hypothesis that an LAD exists that is dependent on universal grammar provides one explanation, but it must be understood as unproven and still controversial. For instance, Tomasello (2008) argued strongly against the existence of a universal grammar and in favor of a social-learning explanation of language acquisition. Christiansen and Kirby (2003) acknowledged the importance of biological evolution in providing predispositions for language acquisition, but they emphasized that an innate grammar is only one of several possible explanations for such predispositions. To illustrate, the structure of the mouth and vocal tract is a biological adaption that allows human beings to articulate the phonemes of speech (Lieberman, 1984).
Whatever turns out to be the truth about the human biological capacity for language, its genetic basis must be related to learning processes that operate within the lifetime of a single individual. At the same time, the ways in which language is passed on through cultural transmission across multiple generations of human beings are just as important to completing the picture. For example, the English we speak and write today has evolved in syntax and semantics since the Old English of Beowulf. In the final analysis, scholars of language must come to terms with biological evolution and genetics, procedural and social learning processes, and changes over historical time through cultural evolution.
The localization of language was proposed very early in the scientific study of the brain. In 1861, Broca reported on a patient who had lost his ability to produce meaningful speech but who retained his ability to hear and comprehend speech (McCarthy & Warrington, 1990). The patient received the nickname “Tan” because he uttered only this sound. Broca observed that the muscles of the vocal apparatus were not at fault, for Tan could eat and drink without difficulty. Broca speculated that Tan suffered from damage to a specific area in his brain that controlled speech, located in the third convolution of the frontal lobe in the left hemisphere. As it turned out, Tan suffered brain damage in many areas, but we still refer to this part of the brain as Broca’s area, in honor of his early investigations (see Figure 8.3). Broca’s aphasia refers to an inability to produce fluent, effortless, grammatically correct speech. Speech is halting and often consists of short sequences of nouns so that the grammatical structure of a sentence is broken. Dronkers, Redfern, and Knight (2000) provided this example of the speech of a Broca’s aphasia patient in a picture description task:
O, yeah. Det’s a boy an’ a girl . . . an’ . . . a . . . car . . . house . . . light po’ (pole). Dog an’ a . . . boat. ‘N det’s a . . . mm . . . a . . . coffee, an’ reading. (p. 951)
The patient with Broca’s aphasia can, however, comprehend single words and short grammatical sentences.
In 1874, Wernicke reported on patients who could speak easily (albeit unintelligibly) but who failed to comprehend speech (McCarthy & Warrington, 1990). They tended to pronounce phonemes in a jumble, sometimes uttering novel words or neologisms. Postmortem examination of one such patient revealed a lesion in the area just behind or posterior to Broca’s area. Wernicke’s aphasia refers to a comprehension dysfunction. Speech is fluent and effortless, although often semantically meaningless. Dronkers et al. (2000) provided the following example of the speech pattern of a Wernicke’s aphasia patient:
Figure 8.3 Broca’s area and Wernicke’s area in the left hemisphere.
SOURCE: Adapted from Goodglass (1993).
Ah, yes, it’s, ah . . . several things. It’s a girl . . . uncurl . . . on a boat. A dog . . . ’S is another dog . . . uh-oh . . . long’s . . . on a boat. The lady, it’s a young lady. An’ a man. They eatin.’ ‘S be place there. This . . . a tree! A boat. No, this is a . . . It’s a house. Over in here . . . a cake. An it’s, it’s a lot of water. Ah, all right. I think I mentioned about that boat. (p. 951)
The specific kinds of speech errors made by fluent or Wernicke’s aphasics have been of interest as a way of understanding how language is normally produced.
Figure 8.4 shows a connectionist model of word production (Dell & O’Seaghdha, 1992). The semantic properties of the concepts of cat, dog, and truck are stored at the highest level. Below that is the level of lexical nodes where abstract representations of each word in the mental lexicon reside. Whether the word is a noun or a verb is stored at the lexical level. So, too, is grammatical gender in languages that distinguish between masculine and feminine nouns, such as Spanish, Italian, French, and German. At the lowest level are the phonological segments that make up how a word sounds when pronounced. It is important to note that the phonological form of the word is stored separately from the abstract lexical node. Note also that the model includes feedforward of information from the phonological to the lexical and then to the semantic level as well as feedback in the reverse direction. This model was applied to the errors made by fluent aphasic patients by creating global “lesions” in the computer model of the neural network that reduced the strength of connections among all three levels and accelerated the loss of activation through decay. In other words, the connections in the computer model of the neural network were damaged to mimic the effects of lesions in real brains. As R. C. Martin (2003) summarized, damaging the connectionist model in this way nicely accounted for errors in phonological substitutions (e.g., golf for glove) and phonologically similar nonwords (e.g., brind for bread). Yet such “lesioning” of the model could not explain the production errors of semantically related words (e.g., parsley for carrot). Attempts to lesion connections locally between adjacent levels proved somewhat more effective but still left out patients who made largely semantic errors.
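The lesioning procedure can be made concrete with a toy simulation. The sketch below is a drastically simplified, feedforward-only stand-in for the Dell and O’Seaghdha model: the node names, weights, and decay values are invented for illustration, and the published model also includes semantic nodes and feedback connections.

```python
# Toy spreading-activation sketch of lexical -> phonological word production,
# loosely inspired by Dell & O'Seaghdha (1992). All names and parameter
# values are illustrative assumptions, not the published model's.
lexical_to_phon = {
    "cat": ["k", "ae", "t"],   # target word
    "mat": ["m", "ae", "t"],   # phonologically similar competitor
}

def spread(activations, connections, weight, decay):
    """One time step: send activation along connections, then decay every node."""
    new = dict(activations)
    for src, targets in connections.items():
        for tgt in targets:
            new[tgt] = new.get(tgt, 0.0) + activations.get(src, 0.0) * weight
    return {node: act * (1.0 - decay) for node, act in new.items()}

def run(weight, decay, steps=5):
    acts = {"cat": 1.0}  # the intended lexical node starts fully active
    for _ in range(steps):
        acts = spread(acts, lexical_to_phon, weight, decay)
    return acts

# A global "lesion": weaken all connections and speed up decay, as in the
# simulations described in the text.
intact = run(weight=0.5, decay=0.2)
lesioned = run(weight=0.1, decay=0.6)
print(intact["k"], lesioned["k"])
```

Comparing `intact["k"]` with `lesioned["k"]` shows how weakened connections and faster decay starve the target’s phonemes of activation, which is the mechanism the text invokes to explain phonological substitution errors.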
Semantic Versus Phonological Storage.
It appears that some patients have damage specifically at the semantic level so that they are deficient in understanding the semantic properties of concepts (R. C. Martin, 2003). However, other patients experience damage below the semantic level. They are unable to retrieve the phonological form of a word even though they understand the concept. As can be seen in Plate 11, the middle temporal gyrus and inferior temporal gyrus of the left hemisphere hold the long-term memory representations of semantic information (R. C. Martin, 2005). The phonological knowledge about words is instead held in the posterior section of the superior temporal gyrus. Using this knowledge in the short-term phonological loop of working memory, on the other hand, is mediated by a different, adjacent region known as the supramarginal gyrus. This area retains for a brief period of time the phonological code of incoming speech during the process of speech recognition. Of interest, this input phonological code is not the only kind of verbal short-term store in working memory. An entirely separate brain region appears to hold the phonological code needed for speech output (R. C. Martin, 2005).
Figure 8.4 A connectionist model of word production.
SOURCE: From Dell, G., & O’Seaghdha, P. (1992). Stages of lexical access in language production. Cognition, 42, 287–314, copyright © 1992. Reprinted with permission from Elsevier.
Finally, notice the location of Broca’s area in the left frontal lobe. Just anterior and inferior to it, there is a region implicated in the short-term storage of semantic information. Some evidence points to a fourth, semantic store in short-term memory, in addition to the verbal, visual, and spatial stores discussed earlier. For example, some brain-injured patients are able to retain the phonological codes in verbal working memory without difficulty, but they show abnormalities in their short-term memory performance of a different sort. Typically, it is easier to retain words compared with nonwords, as you discovered in Learning Activity 1.1. However, patients with a semantic short-term memory deficit show no such advantage for words over nonwords. Although their semantic knowledge of the meaning of the words is intact, they are unable to use this lexical and semantic information to help with retaining and recalling the words in a test of short-term memory.
The tip of the tongue (TOT) phenomenon was discussed earlier as a failure to retrieve information that is available in long-term memory but temporarily inaccessible. The TOT can be seen more precisely now in the context of language. When one knows a word, a lexical representation is available in the mental lexicon of long-term memory. The lexical-semantic system of long-term memory includes knowledge of the concept and its linguistic name along with the properties needed to use the name in syntactic constructions of sentences (e.g., noun vs. verb vs. adjective). Even when the lexical node is activated, however, it is at times difficult to retrieve the phonological form required to pronounce the word. During a TOT state, fragments from the phonological level are retrieved, such as the initial consonant or vowel or the number of syllables (Brown & McNeill, 1966), but the phonology of the whole word remains inaccessible. Vigliocco, Antonini, and Garrett (1997) found that the grammatical gender of the lexical node can also be retrieved even though the phonological form cannot. When testing native speakers of Italian, they asked participants to generate words that fit a series of dictionary-type definitions. Every time participants failed to come up with the right word, the researchers asked them a series of questions to evaluate the TOT state (e.g., Could they guess the number of syllables, any letters, or the gender of the word?). After responding to several probe questions, participants were shown the target word and asked to confirm that it was the word they were trying to remember. In these cases of positive TOT states, the researchers found that grammatical gender was correctly identified 80% of the time, well above chance (50%). Thus, the lexical node, with its storage of grammatical information, was recognizable even when the phonological form of the word remained inaccessible in the TOT state.
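The claim that 80% correct gender report is well above the 50% chance level can be checked with a simple binomial calculation. The trial count below is hypothetical, since the text does not report the number of positive TOT states; only the shape of the computation matters.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): probability of k or more correct guesses."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 50 positive TOT states, 40 of them (80%) with the
# grammatical gender correctly reported. How often would pure guessing do
# at least this well?
p_by_chance = binom_tail(50, 40)
print(f"P(at least 80% correct by guessing alone) = {p_by_chance:.2e}")
```

Even at this modest assumed sample size, the probability of reaching 80% by guessing alone is far below conventional significance thresholds, which is why the result supports genuine access to the lexical node.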
Although Broca’s area is specialized for the production of speech sounds, it also plays a role in the perception and comprehension of speech. As speech sounds or phonemes are taken in by the auditory-processing regions of the temporal lobe, they activate regions in the inferior frontal lobe of the brain. Broca’s area is included among these regions, most likely as a way to maintain the phonological codes of the incoming speech (Gernsbacher & Kaschak, 2003). These are the same codes used to prepare for the articulation of speech, either aloud or silently in the form of inner speech. The motor regions of speech production in Broca’s area are thus recruited as part of the speech perception process (see Chapter 2). Recent research has shown that this perceptual-motor link is not present in newborn infants but begins to emerge by the age of 6 months (Imada et al., 2006). The neural linkage between speech sound processing in the temporal lobe and in Broca’s area is significantly strengthened by the time the infant is 1 year of age, as the infant begins to comprehend and produce phonemes and first words.
Hemispheric dominance or brain lateralization in human beings means that one hemisphere controls key motor and cognitive functions. Approximately 90% of people have a dominant left hemisphere, meaning that they are right-handed. Recall that the brain shows contralateral control; this means that the motor and sensory nerves of the right side of the body are controlled by the left hemisphere of the brain. Right-handedness is found universally across diverse cultures (Corballis, 1989). Moreover, language is localized in the left hemisphere of virtually all right-handed individuals. When right-handed people suffer damage to their left hemisphere, the frequency of aphasia is high (McCarthy & Warrington, 1990). Only rarely does a right-handed individual lose language function as a result of damage to the right hemisphere.
Researchers have investigated the localization of language in a remarkable series of studies involving split-brain patients (Gazzaniga, 1970, 1995; Gazzaniga, Bogen, & Sperry, 1965). These individuals suffered horrendous seizures from epilepsy that could not be controlled by the usual therapies. During the 1950s, physicians treating such severe cases successfully controlled the seizures by cutting the connective tissue between hemispheres, called the corpus callosum. An epileptic seizure can be likened to an electrical storm; by severing the hemispheric bridge, the surgeons isolated the storm in one hemisphere, reducing the amount of damage done by the seizure. After recovering from surgery, these patients behaved quite normally and revealed no cognitive deficits to casual observers. Yet careful testing revealed highly selective deficits.
If a right-handed, split-brain patient was given a common object, such as a coin, the patient’s ability to verbalize the name of the object depended on the hand used. If the coin was placed in the patient’s right hand, then because of contralateral control, all information about it would be processed by the left hemisphere. Because the language centers reside in the left hemisphere, the patient could readily name the object as a coin. But if the coin were placed in the patient’s left hand, thereby sending the information to the right hemisphere, then the patient was unable to name the object. When pressed to point to the object just placed in the left hand, the patient could do so—but only by pointing with the left hand. This astonishing outcome showed that the right hemisphere indeed knew what the object was but could not name it because language use depended on involving the left hemisphere.
Further experiments verified these observations by taking advantage of the fact that objects in the left visual field project only to the right hemisphere, as discussed in Chapter 2. The split-brain patient sat in front of a display screen and fixated on a central point. A stimulus was flashed briefly (100–200 milliseconds) in the left visual field so that it was received and processed only by the right hemisphere. The patient was unable to name the object, suggesting that language requires processing in the left hemisphere. If asked to pick up the object with the left hand from among some alternatives behind a screen, the patient was usually successful because the right hemisphere had successfully recognized the object.
Thus, in right-handed individuals, language critically depends on processing in the dominant left hemisphere. However, the situation for left-handed individuals is much more complicated. The various tests noted above indicate that most left-handed individuals show speech functions in both hemispheres. Some left-handers show speech localized in the left hemisphere, and very few reveal localization in the right hemisphere (McCarthy & Warrington, 1990).
It is of great interest that activation is observed in the left hemisphere, including Broca’s area, when deaf individuals who learned ASL as their first language observe someone making signs (Neville & Bavelier, 2000). As shown in Plate 12, the pattern is very similar to that observed in the left hemisphere of hearing individuals as they read English. Although ASL does not involve speech, it has a complex grammar expressed through hand motions and spatial locations (Poizner, Bellugi, & Klima, 1990). Thus, the fMRI results imply that there is a strong biological predisposition for the grammar of a language—whether it involves sounds or visual signs—to be represented in the left hemisphere. The native users of ASL in Neville and Bavelier’s study learned English as a second language late in life, after the critical period for grammar acquisition. As can be seen in Plate 12, reading English resulted primarily in right- rather than left-hemisphere activation. This result suggests that the bias for left-hemisphere language representation is not expressed when a language is learned after a critical period of development.
Although Broca’s and Wernicke’s areas in the left hemisphere are rightfully viewed as necessary for language in most human beings, it is incorrect to think of language as narrowly localized. Many other cortical and subcortical regions are also necessary for comprehension and expression of language, particularly when the reading and writing of visible language is considered in addition to spoken language (Goodglass, 1993). The language zone is pictured in Figure 8.5 from a lateral view of the left hemisphere. It extends beyond Broca’s and Wernicke’s areas anteriorly into the frontal lobe and posteriorly into the parietal lobe. It includes regions that are both superior and inferior to Broca’s and Wernicke’s areas in terms of their vertical location in the left hemisphere. Neuroimaging of sentence comprehension has revealed that, in addition to Wernicke’s area’s involvement in word/phonological processing and Broca’s area’s involvement in production/syntactic processing, other temporal and frontal regions are involved in phonological and lexical-semantic processing (Gernsbacher & Kaschak, 2003). Parallel regions in the right hemisphere are also active, particularly in the comprehension of multiple sentences linked together in discourse. Thus, language is a prime example of how a complex cognitive function involves multiple distributed regions of the brain.
8.4 Comprehension of Language
Recognizing the words of a sentence is necessary for comprehension, but it is only a beginning. One must also recognize the syntactic relations among the words to build a mental structure of the sentence’s meaning. Similarly, to grasp the relation between one sentence and the next, it is necessary to build mental structures that represent meaning. The structures built by a listener or reader start at the local level of words and sentences and proceed to the global units found in extended discourse. In building these structures, one must “read between the lines,” or infer meanings that are not explicitly stated. One must also identify the intended meaning of a sentence when words with more than one literal meaning are used or when words are used in nonliteral ways, such as in metaphors.
Figure 8.5 The language zone in the left hemisphere.
SOURCE: From Goodglass, H., Understanding Aphasia, copyright © 1993. Reprinted with permission from Academic Press.
As in building any structure, the laying of a foundation is critical (Gernsbacher, 1990). The time and effort needed to develop mental structures that incorporate the meaning of a text provide useful information about the process. For example, the first sentence of a paragraph takes longer to read than do later sentences because the reader uses it to lay the foundation for a mental structure (Cirilo, 1981; Cirilo & Foss, 1980). This result occurs even when the topic sentence comes later in the paragraph (Kieras, 1978). So the extra time reflects foundation building, not just the time needed to process the most important or informative sentence.
The problem of speech recognition was treated in Chapter 2 as a perceptual problem of identifying words that are run together in spoken sentences and phonemes that are coarticulated. However, a memory-retrieval problem must also be addressed. Assuming that people know anywhere from 30,000 to 80,000 words, how do they retrieve the right representation so quickly?
Data-driven processes work to recognize visual patterns from the bottom up. In the case of words, three distinct domains of features must be identified (Graesser, Hoffman, & Clark, 1980; Perfetti, 1985; Stanovich, Cunningham, & Feeman, 1984). For example, consider the features associated with the word bird. When one hears a spoken word, the sounds are identified as phonological features. These are the phonemes used by the speaker to pronounce bird. Orthographic features refer to the letters used to spell a word in a visual format. When one reads a word, the individual letters and the visual shape of the word as a whole are processed as orthographic features, and one must identify the graphemes used to visually represent the phonemes of a written language. As shown in Figure 8.6, the identification of graphemes can also activate phonological features. This can happen not only when one reads a word aloud but also when one reads it silently. The identification of the phonological and/or orthographic features drives the bottom-up identification of the lexical-semantic features—the meaning of the word. Words or morphemes are verbal labels for underlying concepts. Morphemes and the concepts to which they refer constitute the lexical-semantic domain.
A PET study has shown that specialized areas in the left hemisphere respond to words and pseudowords, both of which look like words in that they follow the orthographic rules of English (Petersen, Fox, Snyder, & Raichle, 1990). As shown in Plate 13, false fonts and letter strings activate the visual cortex, just as do words and pseudowords. All four kinds of stimuli involve a low-level analysis of visual features by the visual cortex at the rear of the brain in both hemispheres. But only the words and pseudowords prompt an analysis by a specialized system in the left hemisphere that analyzes the visual form of words as such. Reading well demands that one handle words effectively and efficiently.
Figure 8.6 Domains of features activated in the recognition of written words.
SOURCE: Adapted from Caramazza (1991).
Equally convincing evidence has stressed the role of top-down or conceptually driven processes, as seen in the word superiority effect and other findings discussed in Chapter 2. By using world knowledge and the context in which a word is encountered, it is possible to form hypotheses and make good guesses regarding a word’s identity (Palmer, MacLeod, Hunt, & Davidson, 1985; Thorndike, 1973–1974). In fact, context enables one to identify words even when critical data are missing, as seen in the following sentence: Rexmaxkaxly xt ix poxsixle xo rxplxce xvexy txirx lextex of x sextexce xitx an x, anx yox stxll xan xanxge xo rxad xt—wixh sxme xifxicxltx (Anderson, 1990; Lindsay & Norman, 1977).
When words do not fit the expectations of conceptually driven processes, extra effort is required to analyze the data from the bottom up. This added effort can be detected by monitoring brain waves during sentence comprehension. An event-related potential (ERP) occurs as a negative voltage change that reaches its peak amplitude 400 milliseconds after the unexpected word appears (Kutas & Hillyard, 1980, 1984). The ERP is labeled an N400. Kutas and Hillyard (1980) presented readers with a set of mundane sentences that occasionally included an anomalous or low-probability word. For example, compare these two sentences:
He likes ice cream and sugar in his socks.
He likes ice cream and sugar in his tea.
Recording from a region in the parietal lobe, Kutas and Hillyard observed a significant negative component voltage 400 milliseconds after the last word of the first sentence but not of the second sentence. In the context of these sentences, the word socks is semantically anomalous, whereas tea is predictable from conceptually driven processes.
In addition, a large N400 component occurs following the first word of each sentence (He), and smaller ones occur after each succeeding word (Kutas, Van Petten, & Besson, 1988). Notice that the first word of a sentence, like the unexpected final word socks, must be processed from the bottom up and fit into a mental structure for the sentence. The N400, then, is sensitive to the meaning of a word and is triggered when a word’s meaning is unpredictable. The largest N400 is obtained for a semantically unpredictable word, regardless of whether it comes in the middle or at the end of a sentence.
Complex sentences are harder to comprehend than simple sentences. For example, a sentence that negates an assertion is harder to comprehend than one that asserts a proposition, because the listener or reader must first presuppose a positive proposition and then deny it. Clark and Chase (1972) presented readers with a picture like that shown in
Figure 8.7, accompanied by one of four sentences:
The star is above the plus. (true affirmative)
The plus is above the star. (false affirmative)
The plus is not above the star. (true negative)
The star is not above the plus. (false negative)
As you can see from Figure 8.7, Sentences 1 and 3 are true statements, whereas Sentences 2 and 4 are false statements. Clark and Chase (1972) argued that the negative sentences (Sentences 3 and 4) require the reader to engage in more processing than do the affirmative assertions of Sentences 1 and 2. Specifically, they contended that the negatives entail both the supposition that the star is above the plus and the assertion that this supposition is false. To expose the additional effort required by the reader, Clark and Chase measured the time required to verify each type of sentence.
Figure 8.7 A sentence comprehension task.
SOURCE: From Clark, H. H., & Chase, W. G., On the process of comparing sentences against pictures. Cognitive Psychology, 3, 472–517, copyright © 1972. Reprinted with permission from Elsevier.
If the negative sentences require the reader to presuppose the positive assertion, then the time needed to comprehend this assertion must be factored into total verification time. From the observed times for each of the four sentences, Clark and Chase (1972) estimated that comprehension of the simple assertion (the star is above the plus) took slightly more than 1,450 milliseconds. All four sentences required this amount of time. The researchers further estimated that the time needed to deny the assertion added roughly another 300 milliseconds. Only Sentences 3 and 4 needed this extra time.
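The additive-stage reasoning above can be expressed as simple arithmetic. The ~1,450 millisecond comprehension estimate and ~300 millisecond negation cost come from the text's summary of Clark and Chase (1972); the two-stage simplification and the function name are illustrative, not the authors' full model.

```python
# Back-of-envelope sketch of the additive-stages account described above.
COMPREHEND_MS = 1450   # time to encode the core proposition (all four sentences)
NEGATE_MS = 300        # extra time to deny the supposition (negatives only)

def predicted_verification_ms(negative: bool) -> int:
    """Predicted verification time under the two stages named in the text."""
    return COMPREHEND_MS + (NEGATE_MS if negative else 0)

for label, negative in [("true affirmative", False),
                        ("false affirmative", False),
                        ("true negative", True),
                        ("false negative", True)]:
    print(f"{label}: {predicted_verification_ms(negative)} ms")
```

On this sketch, both affirmatives take about 1,450 milliseconds, while both negatives take about 1,750 milliseconds, matching the pattern described above.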
Another dimension of complexity is voice, with active voice easier to comprehend than passive voice. Still another is the use of simple sentences with one independent clause versus a sentence with a dependent clause in addition to an independent clause. Just, Carpenter, Keller, Eddy, and Thulborn (1996) presented people with sentences of different complexities and recorded brain activation using fMRI. The simplest sentence to comprehend was written in active voice and conjoined two clauses without embedding a relative clause. An example of an active conjoined sentence is “The reporter attacked the senator and admitted the error.” A somewhat more complex sentence can be constructed from the same words by embedding a relative clause after the subject of the sentence, which interrupts the main clause. An example of a subject relative clause sentence is “The reporter that attacked the senator admitted the error.” Finally, in the most complex sentence, the first noun serves as both the subject of the sentence and the object of the relative clause. An example of an object relative clause sentence is “The reporter that the senator attacked admitted the error.” Although this type of sentence is grammatical, it is complex in structure.
After reading each sentence, participants in the Just et al. (1996) study answered a question to measure whether comprehension was successful (e.g., “The reporter attacked the senator, true or false?”). The results of the study showed that as the sentences increased in grammatical complexity, there was an increase in processing time and in the probability of incorrectly answering the comprehension question. In addition, activation levels in Wernicke’s area in the left hemisphere showed a systematic increase across the three kinds of sentences. Of interest, the activation levels were substantially lower in this same brain region in the opposite right hemisphere, but even there a reliable increase was obtained as the sentences became more difficult to understand. Furthermore, a similar pattern of results was found for Broca’s area. Just et al. noted that the role of Broca’s area in comprehension is unknown, but that it may generate articulatory codes for the words of the sentence or may assist with syntactic processing.
Bridging Inferences. Anaphora is the use of a word to substitute for a preceding word or phrase. The following three examples illustrate this idea (adapted from Gernsbacher, 1990, pp. 108–109):
1. William went for a walk in Verona, frustrated with his play about star-crossed lovers. William meandered through Dante’s square, when the balcony scene came to him suddenly.
2. William went for a walk in Verona, frustrated with his play about star-crossed lovers. The Bard meandered through Dante’s square, when the balcony scene came to him suddenly.
3. William went for a walk in Verona, frustrated with his play about star-crossed lovers. He meandered through Dante’s square, when the balcony scene came to him suddenly.
Writers frequently use anaphora to establish referential coherence, especially anaphoric pronouns, as illustrated by Example 3. Of the 50 most common words that appear in print in the English language, nearly one third are pronouns (Kucera & Francis, 1967).
The given-new strategy in reading assumes that writers mark information already understood and information meant to be a new assertion.
Clark (1977) theorized that readers (and listeners) employ the given-new strategy to assist them in making correct inferences. This strategy is based on the assumption that writers cooperate with readers to make their meanings understood, just as speakers do in conversations. Specifically, writers clearly mark information that the readers already understand—that is, the given information that provides a shared basis for communication between writers and readers. Writers also mark what they are now making an assertion about—that is, the new information that they want readers to grasp.
On coming to the second sentence in Example 1, the reader determines what is being asserted as new information (someone is having trouble writing a play) and what is old information (the person in question is William). The reader next identifies a unique antecedent for the given information in working memory. The new information can then be fit into a new structure that links the predicates of both sentences to the same person.
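The steps just described can be sketched in a few lines of code. The dictionary representation of working memory and the function name are assumptions made for illustration; the sketch follows the logic of the given-new strategy rather than any specific implementation.

```python
# Minimal sketch of the given-new strategy: find a unique antecedent in
# "working memory" for the given information, then attach the new assertion.
working_memory = [{"entity": "William", "predicates": ["frustrated with play"]}]

def integrate(given, new_predicate):
    """Attach new information to the unique antecedent of the given info."""
    matches = [e for e in working_memory if e["entity"] == given]
    if len(matches) != 1:
        raise ValueError("no unique antecedent: a bridging inference is needed")
    matches[0]["predicates"].append(new_predicate)
    return matches[0]

# Second sentence of Example 1: the given "William" finds a unique match.
integrate("William", "meandered through Dante's square")
print(working_memory[0]["predicates"])
```

Note that an anaphor with no unique match (such as He when several antecedents are active) fails the lookup, which is exactly where a bridging inference is required.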
Haviland and Clark (1974) found that the time needed to read and comprehend a sentence varied with the explicitness of the anaphora. This would be expected if readers used the given-new strategy and experienced more or less difficulty in identifying a unique antecedent for the given information. In Example 1, a unique referent is specified by repeating the name verbatim. In Example 2, the term The Bard identifies the antecedent only if one applies knowledge of English literature and the writings of William Shakespeare. The pronoun He in Example 3 could, in a longer text, match more than one antecedent, making it the least explicit reference. In Examples 2 and 3, then, the reader must infer a link that is not explicitly given. Haviland and Clark aptly called these bridging inferences because the reader must build a bridge between two ideas to grasp their relation.
So far, comprehension has been discussed as if each word in a sentence can mean only one thing. Polysemy is the property of language that a single word can have more than one meaning. When a reader encounters words with more than one interpretation, how do cognitive processes arrive at the right meanings? Homonyms and metaphors illustrate the problem that polysemy poses for comprehension.
Suppose that the reader comes across a homonym, such as bug or watch, that can be interpreted in different ways. The preceding words might bias the interpretation (e.g., spiders, roaches), or the syntax might accomplish the same purpose (e.g., “I like the watch,” vs. “I like to watch.”). But what becomes of the other meaning? One possibility is that it simply decays with time (Anderson, 1983). However, the findings of Gernsbacher and Faust (1991) indicate that unintended meaning is actively suppressed and lost sooner than would be expected by decay alone. When only one meaning of a homonym was supported by the context (e.g., “Pam was diagnosed by a quack.”), the inappropriate meaning (i.e., the sound of a duck) was no longer active after 350 milliseconds. But when the context failed to bias a specific meaning (e.g., “Pam was annoyed by the quack.”), both meanings remained activated for up to 850 milliseconds. Thus, it is likely that an active suppression of the inappropriate meanings takes place when the context steers the interpretation process away from those meanings.
Similarly, to comprehend a metaphor (e.g., “Time flies.”), one must ignore or suppress the literal meaning of words in order to grasp the intended meaning. It is possible that readers first try a literal interpretation and then look for nonliteral meanings. Alternatively, they may use the context of the metaphor wisely and immediately capture the nonliteral interpretation. Experiments have shown that with the proper context, the nonliteral meaning of a metaphor is grasped without first trying the literal interpretation (Glucksberg, Gildea, & Bookin, 1982; Inhoff, Lima, & Carroll, 1984). For example, Inhoff et al. measured how long readers spent looking at and trying to comprehend a sentence with a metaphorical interpretation, such as the meaning of choked in the following sentence:
1. The directors mercifully choked smaller companies.
For some readers, this sentence was preceded by an appropriate metaphoric context, such as the following:
2. The company used competitive tactics.
For others, the sentence was preceded by a context designed to encourage a literal reading of the term choked, such as the following:
3. The company used murderous tactics.
Inhoff et al. (1984) found that readers spent less time in comprehending the metaphoric Sentence 1 when it was preceded by Sentence 2 than when it was preceded by Sentence 3. The metaphoric context of Sentence 2 primed activation of a nonliteral interpretation of choked.
Words in a text must be actively interpreted because they can have more than one meaning. In comprehending metaphors, for example, the nonliteral meaning—not the literal meaning—must be activated.
Structures develop at multiple levels: words, sentences, and discourse (Foss, 1988). Numerous models have been proposed regarding the high level of global structures that characterize discourse as a whole (Kintsch & van Dijk, 1978; Meyer, 1975; Thorndyke, 1977). But just what is meant by the term discourse? When does a collection of sentences constitute true discourse versus just a bunch of sentences? The answer, according to Johnson-Laird (1983), is that discourse occurs when the references in each sentence are locally coherent with one another and when the sentences can be fit into a global framework of causes and effects. We first consider the issue of referential coherence and then examine Kintsch and van Dijk’s model of the global structure of discourse.
When the words and phrases of one sentence in a paragraph refer unambiguously to those of other sentences in the paragraph, the sentences possess referential coherence. Johnson-Laird (1983) offered the following three paragraphs to illustrate this property of true discourse:
(1) It was the Christmas party at Heighton that was one of the turning points in Perkins’ life. The duchess had sent him a three-page wire in the hyperbolical style of her class, conveying a vague impression that she and the Duke had arranged to commit suicide together if Perkins didn’t “chuck” any previous engagement he had made. And Perkins had felt in a slipshod sort of way—for at least at that period he was incapable of ordered thought—he might as well be at Heighton as anywhere. (from Perkins and Mankind by Max Beerbohm)
(2) Scripps O’Neil had two wives. To tip or not to tip? Dawn crept over the Downs like a sinister white animal, followed by the snarling cries of a wind eating its way between the black boughs of the thorns. When I had reached my eighteenth year I was recalled by my parents to my paternal roof in Wales.
(3) The field buys a tiny rain. The rain hops. It burns the noisy sky in some throbbing belt. It buries some yellow wind under it. The throbbing belt freezes some person on it. The belt dies of it. It freezes the ridiculous field. (pp. 356–357)
Which passage do you judge to be true discourse? Which is the least qualified for the title?
The difference between Paragraphs (1) and (2) is easy to detect. The sentences in Paragraph (1) relate to each other in a coherent fashion, whereas those in Paragraph (2) plainly do not. But what about the sentences in Paragraph (3), which Johnson-Laird generated using a computer program? Although each sentence is nonsensical, the paragraph seems structured. The pronouns of one sentence seem to refer to a previous sentence. Nouns are repeated from one sentence to the next. Words related in meaning—here, words describing weather—are laced throughout. These are among the cohesive ties that a writer uses in creating true discourse (Halliday & Hasan, 1976).
A paragraph is referentially coherent when the words and phrases of one sentence refer unambiguously to those of the other sentences.
The nonsensical, computer-generated sentences in Paragraph (3) illustrate that there is more to referential coherence than local cohesion between one word and the next within and between adjacent sentences. An organizing topic, theme, or global structure is also needed. Theme is absent in the following paragraph from Johnson-Laird (1983), even though each sentence makes sense on its own and the local links between sentences are interpretable:
My daughter works in a library in London. London is the home of a good museum of natural history. The museum is organized on the basis of cladistic theory. This theory concerns the classification of living things. Living things evolved from inanimate matter. (p. 379)
Kintsch and van Dijk (1978) suggested that schemas for different types of discourse guide the construction of a global framework. The individual propositions expressed by the sentences of a text were described by Kintsch and van Dijk as micropropositions. The more densely a text is packed with micropropositions, and the more often it demands bridging inferences among these propositions, the harder it is to read (Kintsch, 1974; Miller & Kintsch, 1980). Moreover, the longer a proposition is held active in working memory while establishing referential coherence during comprehension, the more likely it will be encoded successfully into long-term memory and recalled later on (Kintsch & Keenan, 1973).
Kintsch and van Dijk (1978) contrasted micropropositions with macropropositions, defined as the schema-based generalizations that summarize the main ideas or gist of the story. Telling a story, arguing a case, or recalling an episode from memory would each, presumably, invoke a different schema. The pertinent schema would establish certain goals for the reader and sort through which micropropositions are relevant to those goals. The schema would also generalize the form of the relevant propositions to arrive at a useful summary of the main ideas or gist of the text. The net result is a summary of the text—a macrostructure—that guides comprehension and memory. As shown in
Figure 8.8, reading involves more than establishing cohesion at a local level of the text by finding overlapping arguments in micropropositions. It also involves constructing macropropositions that provide a global framework for understanding the text.
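Local cohesion through overlapping arguments, as in Kintsch and van Dijk's account, can be sketched with toy propositions. The tuple format and the cohesion check below are illustrative assumptions, not the model's actual representation.

```python
# Toy (predicate, arguments) tuples for the micropropositions of a short text.
micropropositions = [
    ("went-for-walk", ("William", "Verona")),
    ("frustrated", ("William", "play")),
    ("meandered", ("William", "square")),
]

def locally_cohesive(props):
    """True if each proposition shares at least one argument with an earlier one."""
    seen = set(props[0][1])
    for _, args in props[1:]:
        if not seen & set(args):
            return False      # no argument overlap: a bridging inference is needed
        seen |= set(args)
    return True

print(locally_cohesive(micropropositions))  # → True (shared argument: William)
```

A text that fails this overlap test at many points demands more bridging inferences, which, as noted above, makes it harder to read.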
Literacy involves both reading and writing. However, cognitive psychologists have studied the comprehension of written language through reading far more extensively than the composition of written texts. One reason for this discrepancy is that production tasks are generally harder to study in the laboratory than reception tasks, in which the experimenter controls the stimuli presented to participants and responses are limited and simple (e.g., pressing a button to record comprehension time). In written composition, the investigator may present a specific writing prompt, but the participant is free to produce an infinite variety of possible sentences in response. A second reason why writing has been less studied than reading is that it combines the complexities of problem solving and decision making with the complexities of language production (Kellogg, 1994). Unless the writing task is highly simplified, such as jotting down a grocery list, composition makes serious demands on thinking as well as language skills. As a result, progress in understanding writing has been slower than progress in understanding reading. In this section, two key topics from the literature on reading will be highlighted to illustrate the cognitive psychology of literacy.
Figure 8.8 Reading requires establishing local cohesion and a global framework.
Speed Versus Comprehension
College courses require extensive reading. Given that most people read relatively easy-to-comprehend material at a rate of about 250 to 300 words per minute, the notion of speed reading at four times that rate, or more, is attractive. Commercial claims for training regimens that can accelerate reading speed to more than 1,000 words per minute with no loss of comprehension sound almost too good to be true. It turns out that cognitive psychologists have extensively investigated the interaction of perceptual processes and higher-order processes of comprehension in working memory during reading (Just & Carpenter, 1980, 1992). A careful consideration of these perceptual and cognitive processes provides insight into the feasibility of speed reading.
To begin with, each fixation of the eyes on a word or on part of a word provides a discrete input to the visual system. The duration of most fixations ranges from 200 to 350 milliseconds, with great variability (Pollatsek & Rayner, 1989). A rapid eye movement called a saccade jumps the focus of foveal vision to a new point in between these fixations. When one reads a book, a saccadic eye movement typically would advance (or at times regress) foveal vision about 5 to 9 character spaces within 15 to 40 milliseconds. The reader, in essence, gains a series of snapshots of information about the text as the eyes jump across and down the page. The span of each snapshot appears to be biased to the right of the fixation point. The reader extracts information from 4 characters to the left of the fixation point and up to 15 characters to the right (McConkie & Rayner, 1975). The reader gains no information during a saccade, and is unaware of the movement.
Reading typically involves fixating on about 80% of the content words (nouns, verbs, and modifiers) and 20% of the function words. High-frequency words are fixated in less time than are low-frequency words, indicating that the lexical or semantic properties of the language control eye movements. High-frequency words are often only a few letters in length, but even with word length held constant, the greater the frequency, the shorter the fixation required. Words that are frequent, short, and predictable in the context of the text are often skipped over altogether. Also, the processing of words begins prior to fixating on them. This is possible by previewing a word in parafoveal or peripheral vision. In some cases, the fixation on the word serves primarily to complete processing that had already begun in the periphery of an earlier fixation (Reichle, Pollatsek, Fisher, & Rayner, 1998). Taking into account the time needed to fixate on the input characters, to recognize each word, and to build the necessary mental structures in working memory, it is not surprising that most people read at a rate of about 250 to 300 words per minute. The rate might be much slower with especially challenging text.
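The figures above support a back-of-envelope estimate of normal reading rate. The representative durations and the assumed overall proportion of fixated words are illustrative values chosen within the ranges the text cites, not measurements from any one study.

```python
# Rough arithmetic linking the eye-movement figures to the ~250-300 wpm rate.
FIXATION_MS = 250   # within the 200-350 ms range cited in the text
SACCADE_MS = 30     # within the 15-40 ms range cited in the text
P_FIXATED = 0.7     # assumed overall share of words that receive a fixation,
                    # blending the ~80% content-word and ~20% function-word figures

def words_per_minute(fixation_ms=FIXATION_MS, saccade_ms=SACCADE_MS,
                     p_fixated=P_FIXATED):
    """Estimate reading rate from average per-word eye-movement time."""
    ms_per_word = p_fixated * (fixation_ms + saccade_ms)
    return 60_000 / ms_per_word

print(round(words_per_minute()))  # lands near the 250-300 wpm range
```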
Two theoretical assumptions allow researchers to link eye fixations to comprehension (Just & Carpenter, 1980). First, the immediacy assumption holds that the reader assigns an interpretation to each word as it is fixated. Readers sometimes need to revise their interpretation based on a subsequent fixation. For example, in a sentence beginning “Mary loves Jonathan . . . ,” readers might initially assign one meaning and then repair it when the fourth and final word of the sentence turns out to be “. . . apples.”
Second, the eye-mind assumption holds that the duration of fixation varies with the amount of information that must be processed in working memory at that instant. In other words, the work of comprehension takes place during the fixation; the next saccadic eye movement is suppressed until the reader is ready to move forward. At times, regressive eye movements are needed so that the reader can go back and reprocess information that was misinterpreted initially. Such a regressive movement would probably take place in reading “Mary loves Jonathan apples.” But the reader does not take in several snapshots of data, hold them in memory, and then pause for an extended period of time to comprehend the data.
Just and Carpenter (1980, 1992) found that the more difficult a section of text is to read, the longer individuals fixate. Good readers with large working memory capacities take extra time to resolve ambiguities of interpretation. The authors’ model and findings are consistent with the idea that reading is a time-consuming, high-level cognitive skill. If that is so, then how is speed reading possible? How can one possibly read 1,000 or more words per minute without loss of comprehension?
One method taught in speed-reading courses is to start fixations slightly to the right of the first word of each line and to take the last fixation to the left of the final word of each line. In other words, one should reduce fixations to a minimum by picking up characters to the left and right of the fixation point as much as possible. Efficiency is further increased by focusing fixations on content words rather than on function words and by drawing inferences to fill in the gaps. So long as the key lexical (e.g., he, walk, dog, friend) and syntactic (e.g., -ed) information is fixated, the other function words of the following sentence can be inferred without fixating on them (e.g., “He walked the dog of a friend.”). Finally, in speed reading, the reader tries to avoid making regressive eye movements. Instead, the speed reader tries to force fixations forward at a rapid rate. Training regimens sometimes rapidly present words or phrases on a computer screen in a serial manner, thus precluding the possibility of looking back because the earlier text is already gone. Another way to implement this strategy is by rapidly scanning the index finger across the line of text and moving it forward without pausing. If the eyes can keep up with the finger, then backward fixations can be avoided. Of course, backward fixations do have a cognitive function—they are necessary to resolve ambiguities in the text that stand in the way of full comprehension. Thus, avoiding backward fixations can be a problem if the material is not immediately understood.
Unfamiliar words, words with multiple meanings, and words that must be linked by means of suppositions or inferences all are going to be misrepresented if the reader blazes ahead without adequate time and effort to build the proper mental structures. The same problem arises in integrating larger units such as paragraphs. Like other claims that sound too good to be true, speed reading is not really reading (Masson, 1983). Reading at 1,000 or 2,000 words per minute results in a loss of comprehension and memory for the text. Therefore, “speed reading” is better referred to as “trained skimming.”
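A quick calculation shows why rates of 1,000 or 2,000 words per minute are incompatible with the fixation durations described earlier; the function here is plain arithmetic, not part of any cited study.

```python
def ms_per_word(target_wpm: int) -> float:
    """Display or processing time available per word at a target reading rate."""
    return 60_000 / target_wpm

# At 1,000 wpm each word gets only 60 ms, and at 2,000 wpm only 30 ms,
# far below the 200-350 ms fixations that comprehension normally requires.
for wpm in (300, 1000, 2000):
    print(f"{wpm} wpm -> {ms_per_word(wpm):.0f} ms per word")
```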
When comprehension and memory are assessed carefully in the laboratory using well-designed recognition tests (i.e., where the correct answer cannot be readily guessed without full comprehension) or recall tests, increasing the reading rate invariably causes a loss in comprehension. This is especially easy to detect when the text is unfamiliar and difficult to read. Nonetheless, trained skimming is a valuable cognitive tool that is worth mastering. Trained skimming allows one to scan texts rapidly to get their gist, if not their details. It is also useful in visually searching for a specific piece of information. In doing research for composing a paper, for example, it is often necessary to process large amounts of textual material when searching for facts or citations or when assessing whether a book or article is relevant. Trained skimming can also be useful when studying a text to get an overview of the main topics. Also, when using the 3R study strategy of read, recite, and review, trained skimming allows one to review the text quickly and find passages that require further careful and slow reading.
Reading speed is normally about 250 to 300 words per minute. This rate is constrained by perceptual factors in fixating on the text and by cognitive factors in building mental structures.
Learning to speak and understand one’s native language typically proceeds without difficulties. Acquiring literacy—the ability to read and write—is a more challenging learning task, and developmental deficits in these skills are more common than with spoken language. Dyslexia refers to an impairment in the ability to recognize printed words. Dyslexia can be acquired in adulthood from injury to the brain. It can also appear as developmental dyslexia, in which the ability to learn to read words is impaired from the start. Reading comprehension is strongly correlated with word recognition ability, at least in the early grades of school, and so one would expect that dyslexia will contribute to problems in comprehending a text (Hulme & Snowling, 2011). However, some children have difficulty understanding a text, even though they are able to read aloud with a normal level of fluency and accuracy. This implies that other aspects of extracting the micropropositions and building a global framework for the text are at fault. Despite being able to read a passage aloud without problems, the children cannot respond to questions regarding the text’s meaning. Thus, children who are not dyslexic—their decoding of the words on the page is normal—may still have difficulties with reading comprehension.
Developmental dyslexia affects between 5% and 10% of children in the primary grades (first through fifth) in the United States (Shaywitz, Escobar, Shaywitz, Fletcher, & Makuch, 1992). Typically, the disorder is defined in terms of how well the child reads relative to the child’s expected reading ability based on intelligence testing (IQ). This definition aims to take into account the possibility that reading difficulties would be expected from a relatively low IQ, even in the absence of any specific difficulty with decoding printed words. The intervention needed might be different in this case compared with dyslexia. On the other hand, it is well recognized that poor readers display a phonological deficit. They have great difficulty in converting graphemes into phonemes so as to gain access to the phonological lexicon (see Figure 8.6) and are diagnosed as lacking adequate phonological awareness. Thus, one of the pathways to meaning normally used by readers is blocked for dyslexic children. It could be, then, that training aimed at improving their facility with this pathway—to eliminate the phonological deficit—would help poor readers, regardless of their IQs.
Tanaka et al. (2011) sought to determine using fMRI whether the phonological processing regions of the brain were less active in poor readers, regardless of whether they had normal or relatively low IQs. A rhyming task was administered in the scanner to assess phonological processing. On each trial, a pair of printed words was presented that either rhymed (e.g., gate-bait) or not (price-miss) and the participant judged yes or no. The results are shown in Plate 14. Nondiscrepant poor readers were those with relatively low IQ scores (in the 25th percentile or less). In other words, one might anticipate poor reading in these individuals even in the absence of problems with phonological awareness. The discrepant poor readers were those with IQ scores in the normal range. As can be seen from the plotted data, it really made no difference: Both nondiscrepant and discrepant readers displayed reduced activation in the brain regions responsible for phonological processing compared with typical readers. The left inferior parietal lobe (LtIPL) can be seen in the brain images shown in Plate 14. Also shown is the left fusiform gyrus (LtFG), which lies at the base of the brain near the juncture of the parietal and occipital lobes. In samples taken at Carnegie Mellon University and at Stanford University, the plotted data plainly reveal a phonological deficit in both nondiscrepant and discrepant poor readers in both the LtIPL and the LtFG. Tanaka et al. concluded that the reduced activation of the LtFG disrupts the visual analysis of print, while the LtIPL deficit disrupts converting the printed letter to sounds. Their results suggest that treating the resulting phonological awareness deficit ought to be helpful to any poor readers, regardless of their IQs. The study provides a good example of how brain imaging can guide the understanding and potential treatment of a developmental cognitive impairment.
1. Language is a system of symbols that are used to communicate ideas between two or more individuals. It uses both mental and external representations, such as printed text. Language uses arbitrary symbols that refer to events displaced in time and space. The mental lexicon and grammar of a language are productive, allowing one to generate an infinite number of novel sentences. A language must be learnable by children, it must be able to be spoken and understood readily by adults, it must capture the ideas that people normally communicate, and it must enable communication among groups of people in a social and cultural context. All human languages make use of 50 or so speech sounds or phonological segments produced by our vocal apparatus. Each such utterance is a phoneme, defined as a basic speech sound that makes a difference in meaning. A morpheme is a minimal unit of speech used repeatedly in a language to code a specific meaning; it is made up of one or more phonemes, such as a word or a suffix.
2. Languages differ in terms of their semantics, syntax, and pragmatics. Semantics concerns the use of symbols to refer to objects, events, and ideas in the world. The words used in language make up the lexicon that must be represented mentally in fluent speakers. Syntax concerns the grammatical rules for ordering words to construct meaningful and acceptable sentences in a language. Pragmatics concerns the use of language within social contexts. People command, inform, warn, and otherwise communicate their intentions as direct speech acts (e.g., “Open the window.”) or as indirect speech acts (e.g., “Dreadfully hot in here, don’t you think?”). An implicit agreement, called the cooperative principle, governs conversations to ensure that participants say appropriate things and end the conversation at a mutually agreeable point.
3. Universal grammar refers to the genetically determined knowledge of human language that allows children in all cultures to rapidly acquire the language to which they are exposed. The question of whether language is innate has been hotly debated and remains unresolved. Language is localized in the left hemisphere of virtually all right-handed individuals. Damage to Broca’s area in the left hemisphere causes a language disorder, or aphasia. Broca’s aphasia refers to an inability to speak fluently, effortlessly, and with correct grammar. By contrast, damage to Wernicke’s area disrupts language comprehension. Speech is fluent and effortless in Wernicke’s aphasia, although it is often semantically meaningless.
4. Text comprehension (i.e., reading) has been investigated much more extensively than writing. The theme of cognition as active construction is well illustrated by the processes of reading. The reader builds mental structures at the local level of micropropositions expressed in words and sentences as well as at the global, macropropositional level of paragraphs and discourse. Sentences possess referential coherence when the words and phrases of one sentence refer unambiguously to those of other sentences in the same paragraph. In building mental structures, the reader uses more than the literal words on the page. For example, readers use their knowledge about the world to make plausible bridging inferences during comprehension.
5. Normal reading speed is about 250 to 300 words per minute. This rate is constrained by the perceptual factors in fixating on the text and the cognitive factors in building mental structures. The reader appears to assign an interpretation to words, including ambiguous ones, as soon as they are encountered. The amount of time spent fixating on a word corresponds with the difficulty encountered in processing and assigning an interpretation to it.
Key Terms
· mental lexicon
· speech act
· cooperative principle
· universal grammar
· Broca’s aphasia
· Wernicke’s aphasia
· brain lateralization
· corpus callosum
· given-new strategy
· bridging inferences
· referential coherence
· immediacy assumption
· eye-mind assumption
Questions for Thought
· Record a 5-minute conversation with a friend and then listen to the recording. Are the utterances grammatically correct sentences, or are they largely telegraphic or fragmented? In what ways does the conversation illustrate the cooperative principle and pragmatics?
· In reading this chapter, in what specific places did you need to make bridging inferences? Did you need to pause or go back and reread the previous sentence or two at these points? Describe some of the micropropositions asserted in the text. What are some of the macropropositions that convey the basic gist of the chapter?
· Consider learning a second language as a college student. In what ways are your utterances in a foreign tongue similar to those of a patient with Broca’s aphasia? How are your comprehension and production similar to those of a patient with Wernicke’s aphasia?