{"id":4063,"date":"2014-01-13T15:54:20","date_gmt":"2014-01-13T14:54:20","guid":{"rendered":"http:\/\/emilkirkegaard.dk\/en\/?p=4063"},"modified":"2014-10-09T18:49:42","modified_gmt":"2014-10-09T17:49:42","slug":"4063","status":"publish","type":"post","link":"https:\/\/emilkirkegaard.dk\/en\/2014\/01\/4063\/","title":{"rendered":"Review of Introduction to psycholinguistics, Understanding Language Science by MJ Traxler"},"content":{"rendered":"<p>Overall an interesting introduction. Some chapters were much more interesting to me than others, which were somewhere between kinda boring and boring. Generally, the book is way too light on the statistical features of the studies cited. When I hear of a purportedly great study, I want to know the sample size and the significance of the results.<\/p>\n<p>http:\/\/gen.lib.rus.ec\/book\/index.php?md5=2f8ef6d7091552204272a754eca2d7dc&#038;open=0<\/p>\n<p>&nbsp;<\/p>\n<p>&#8212;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Why does Pirah\u00e3 lack recursion? Everett\u2019s (2008) answer is that Pirah\u00e3 lacks recursion <\/span><\/p>\n<p><span style=\"color: #800000;\">because recursion introduces statements into a language that do not make direct assertions <\/span><\/p>\n<p><span style=\"color: #800000;\">about the world. When you say, Give me the nails that Dan bought, that statement presupposes<\/span><\/p>\n<p><span style=\"color: #800000;\">that it is true that Dan bought the nails, but it does not say so outright. In Pirah\u00e3, each of the <\/span><\/p>\n<p><span style=\"color: #800000;\">individual sentences is a direct statement or assertion about the world. \u201cGive me the nails\u201d <\/span><\/p>\n<p><span style=\"color: #800000;\">is a command equivalent to \u201cI want the nails\u201d (an assertion about the speaker\u2019s mental state). 
<\/span><\/p>\n<p><span style=\"color: #800000;\">\u201cDan bought the nails\u201d is a direct assertion of fact, again expressing the speaker\u2019s mental <\/span><\/p>\n<p><span style=\"color: #800000;\">state (\u201cI know Dan bought those nails\u201d). \u201cThey are the same\u201d is a further statement of fact. <\/span><\/p>\n<p><span style=\"color: #800000;\">Everett describes the Pirah\u00e3 as being a very literal-minded people. They have no creation <\/span><\/p>\n<p><span style=\"color: #800000;\">myths. They do not tell fictional stories. They do not believe assertions made by others <\/span><\/p>\n<p><span style=\"color: #800000;\">about past events unless the speaker has direct knowledge of the events, or knows someone <\/span><\/p>\n<p><span style=\"color: #800000;\">who does. As a result, they are very resistant to conversion to Christianity, or any other faith <\/span><\/p>\n<p><span style=\"color: #800000;\">that requires belief in things unseen. Everett argues that these cultural principles determine <\/span><\/p>\n<p><span style=\"color: #800000;\">the form of Pirah\u00e3 grammar. Specifically, because the Pirah\u00e3 place great store in first-hand <\/span><\/p>\n<p><span style=\"color: #800000;\">knowledge, sentences in the language must be assertions. Nested statements, like relative <\/span><\/p>\n<p><span style=\"color: #800000;\">clauses, require presuppositions (rather than assertions) and are therefore ruled out. If <\/span><\/p>\n<p><span style=\"color: #800000;\">Everett is right about this, then Pirah\u00e3 grammar is shaped by Pirah\u00e3 culture. The form their <\/span><\/p>\n<p><span style=\"color: #800000;\">language takes is shaped by their cultural values and the way they relate to one another <\/span><\/p>\n<p><span style=\"color: #800000;\">socially. 
If this is so, then Everett\u2019s study of Pirah\u00e3 grammar would overturn much of the received wisdom on where grammars come from and why they take the form they do. <\/span><\/p>\n<p><span style=\"color: #800000;\">Which leads us to \u2026<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>Interesting hypothesis.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">In an attempt to gather further evidence regarding these possibilities, Savage-Rumbaugh raised a chimp named Panpanzee and a bonobo named Panbanisha, starting when they were infants, in a language-rich environment. Chimpanzees are the closest species to humans. The last common ancestor of humans and chimpanzees lived between about 5 million and 8 million years ago. Bonobos are physically similar to chimpanzees, although bonobos are a bit smaller on average. Bonobos as a group also have social characteristics that distinguish them from chimpanzees. They tend to show less intra-species aggression and are less dominated by male members of the species. Despite the physical similarities, the two species are biologically distinct. By testing both a chimpanzee and a bonobo, Savage-Rumbaugh could hold environmental factors constant while observing change over time (ontogeny) and differences across the two species (phylogeny). 
If the two animals acquired the same degree of language skill, this would suggest that cultural or environmental factors have the greatest influence on their language development. Differences between them would most likely reflect phylogenetic biological differences between the two species. Differences in skill over time would most likely reflect ontogenetic or maturational factors.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>The author clearly forgets about individual differences in ability. These are also found in monkeys (indeed, in any animal in which <em>g<\/em> is a polygenic trait).<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">The fossil record shows that human ancestors before Homo sapiens emerged, between about 70,000 and 200,000 years ago, had some of the cultural and physical characteristics of modern humans, including making tools and cooking food. If we assume that modern language emerged sometime during the Homo sapiens era, then it would be nice to know why it emerged then, and not before. One possibility is that a general increase in brain size relative to body weight in Homo sapiens led to an increase in general intelligence, and this increase in general intelligence triggered a language revolution. On this account, big brain comes first and language emerges later. 
This hypothesis leaves a number of questions unanswered, however, such as, what was that big brain doing before language emerged? If the answer is \u201cnot that much,\u201d then why was large brain size maintained in the species (especially when you consider that the brain demands a huge proportion of the body\u2019s resources)? And if language is an optional feature of big, sapiens brains, why is it a universal characteristic among all living humans? Also, why do some groups of humans who have smaller sized brains nonetheless have fully developed language abilities?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>Interesting that he doesn&#8217;t cite a reference for this claim, or the brain size general intelligence claim from earlier. What I&#8217;m wondering, though, is whether there really is no difference in language sophistication between groups with different brain sizes (and <em>g<\/em>). I&#8217;m thinking of spoken language. Perhaps it&#8217;s time to revise that claim. We know that <em>g<\/em> has huge effects on people&#8217;s vocabulary size (vocabulary is one of the most <em>g<\/em>-loaded subtests), which bears on both spoken and written language. 
However, the grammars and morphologies of many languages found in low-<em>g<\/em> countries are indeed very sophisticated.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">As a result of concerns like those raised by Pullum, as well as studies showing that speakers of different languages perceive the world similarly, many language scientists have viewed linguistic determinism as being dead on arrival (see, e.g., Pinker, 1994). Many of them would argue that language serves thought, rather than dictating to it. If we ask the question, what is language good for? one of the most obvious answers is that language allows us to communicate our thoughts to other people. That being the case, we would expect language to adapt to the needs of thought, rather than the other way around. If an individual or a culture discovers something new to say, the language will expand to fit the new idea (as opposed to preventing the new idea from being hatched, as the Whorfian hypothesis suggests). This anti-Whorfian position does enjoy a certain degree of support from the vocabularies of different languages, and different subcultures within individual languages. 
For example, the class of words that refer to objects and events (open class) changes rapidly in cultures where there is rapid technological or social changes (such as most Western cultures). The word internet did not exist when I was in college, mumble mumble years ago. The word Google did not exist 10 years ago. When it first came into the language, it was a noun referring to a particular web-browser. Soon after, it became a verb that meant \u201cto search the internet for information.\u201d In this case, technological, cultural, and social developments caused the language to change. Thought drove language. But did language also drive thought? Certainly. If you hear people saying \u201cGoogle,\u201d you are going to want to know what they mean. You are likely to engage with other speakers of your language until this new concept becomes clear to you. Members of subcultures, such as birdwatchers or dog breeders, have many specialist terms that make their communication more efficient, but there is no reason to believe that you need to know the names for different types of birds before you can perceive the differences between them\u2014a bufflehead looks different than a pintail no matter what they\u2019re called.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>The author apparently gets his etymology wrong. 
Google was never a word for a browser; that&#8217;s something computer-illiterate people think, cf. <a href=\"https:\/\/www.youtube.com\/watch?v=o4MwTvtyrUQ\">https:\/\/www.youtube.com\/watch?v=o4MwTvtyrUQ<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>No&#8230; Google&#8217;s Chrome is a browser. Google is a search engine. And \u201cto google\u201d something means to search for it using Google, the search engine. Although now it has changed somewhat to mean \u201csearch the internet for\u201d. Similar to the Kleenex meaning change.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Different languages express numbers in different ways, so language could influence the way children in a given culture acquire number concepts (Hunt &amp; Agnoli, 1991; Miller &amp; Stigler, 1987). Chinese number words differ from English and some other languages (e.g., Russian) because the number words for 11\u201319 are more transparent in Chinese than in English. In particular, Chinese number words for the teens are the equivalent of \u201cten-one,\u201d \u201cten-two,\u201d \u201cten-three\u201d and so forth. This makes the relationship between the teens and the single digits more obvious than equivalent English terms, such as twelve. As a result, children who speak Chinese learn to count through the teens faster than children who speak English. This greater accuracy at producing number words leads to greater accuracy when children are given sets of objects and are asked to say how many objects are in the set. 
Chinese-speaking children performed this task more accurately than their English-speaking peers, largely because they made very few errors in producing number words while counting up the objects. One way to interpret these results is to propose that the Chinese language makes certain relationships more obvious (that numbers come in groups of ten; that there\u2019s a relationship between different numbers that end in the word \u201cone\u201d), and making those relationships more obvious makes the counting system easier to learn.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>This hypothesis is certainly plausible, but the average <em>g<\/em> difference between Chinese children and American children is a confound that needs to be dealt with.<\/p>\n<p>&nbsp;<\/p>\n<p>It would be interesting to see a Scandinavian comparison, because <a href=\"http:\/\/www.sf.airnet.ne.jp\/ts\/language\/number\/danish.html\">Danish<\/a> has a horrible numeral system, while Swedish has a better one. They both have awkward teen numbers, but the tens are much more transparent in <a href=\"https:\/\/en.wikibooks.org\/wiki\/Swedish\/Numerals\">Swedish<\/a>: e.g. fem-tio (five-ten) for 50, vs. Danish halvtreds, a remnant of a base-20 system: halvtredje (\u201chalf-third\u201d, i.e. 2.5) times 20 = 50.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">So how are word meanings (senses, that is) represented in the mental lexicon? And what research tools are appropriate to investigating word representations? 
One approach to investigating word meaning relies on introspection\u2014thinking about word meanings and drawing conclusions from subjective experience. It seems plausible, based on introspection, that entries in the mental lexicon are close analogs to dictionary entries. If so, the lexical representation of a given word would incorporate information about its grammatical function (what category does it belong to, verb, noun, adjective, etc.), which determines how it can combine with other words (adverbs go with verbs, adjectives with nouns). Using words in this sense involves the assumption that individual words refer to types\u2014that the core meaning of a word is a pointer to a completely interchangeable set of objects in the world (Gabora, Rosch, &amp; Aerts, 2008). Each individual example of a category is a token. So, team is a type, and Yankees, Twins, and Mudhens are tokens of that type.2<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>He has misunderstood the type-token terminology. 
I will quote SEP:<\/p>\n<p><a href=\"http:\/\/plato.stanford.edu\/entries\/types-tokens\/#DisBetTypTok\">http:\/\/plato.stanford.edu\/entries\/types-tokens\/#DisBetTypTok<\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><a name=\"WhaDis\"><\/a><span style=\"color: #800000;\">1.1 What the Distinction Is<\/span><\/h3>\n<p><span style=\"color: #800000;\">The distinction between a <\/span><em><span style=\"color: #800000;\">type<\/span><\/em><span style=\"color: #800000;\"> and its <\/span><em><span style=\"color: #800000;\">tokens<\/span><\/em><span style=\"color: #800000;\"> is an ontological one between a general sort of thing and its particular concrete instances (to put it in an intuitive and preliminary way). So for example consider the number of words in the Gertrude Stein line from her poem <\/span><em><span style=\"color: #800000;\">Sacred Emily<\/span><\/em><span style=\"color: #800000;\"> on the page in front of the reader&#8217;s eyes:<\/span><\/p>\n<blockquote><p><span style=\"color: #800000;\">Rose is a rose is a rose is a rose.<\/span><\/p><\/blockquote>\n<p><span style=\"color: #800000;\">In one sense of \u2018word\u2019 we may count three different words; in another sense we may count ten different words. C. S. Peirce (1931-58, sec. 4.537) called words in the first sense \u201ctypes\u201d and words in the second sense \u201ctokens\u201d. Types are generally said to be abstract and unique; tokens are concrete particulars, composed of ink, pixels of light (or the suitably circumscribed lack thereof) on a computer screen, electronic strings of dots and dashes, smoke signals, hand signals, sound waves, etc. A study of the ratio of written types to spoken types found that there are twice as many word types in written Swedish as in spoken Swedish (Allwood, 1998). If a pediatrician asks how many words the toddler has uttered and is told \u201cthree hundred\u201d, she might well enquire \u201cword types or word tokens?\u201d because the former answer indicates a prodigy. 
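<\/span><\/p>\n<p>Peirce&#8217;s two counts can be checked mechanically. A minimal sketch of my own (not from SEP), lower-casing so that &#8220;Rose&#8221; and &#8220;rose&#8221; count as one type:<\/p>

```python
# Word tokens = concrete occurrences; word types = distinct word forms.
line = 'Rose is a rose is a rose is a rose.'
tokens = line.lower().replace('.', '').split()
types = set(tokens)
print(len(tokens), len(types))  # prints: 10 3
```

<p><span style=\"color: #800000;\">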
A headline that reads \u201cFrom the Andes to Epcot, the Adventures of an 8,000 year old Bean\u201d might elicit \u201cIs that a bean type or a bean token?\u201d.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>He seems to be talking about members of sets.<\/p>\n<p>&nbsp;<\/p>\n<p>Or maybe not, perhaps his usage is just idiosyncratic, for in the note \u201c2\u201d to the above, he writes:<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Team itself can be a token of a more general category, like organization (team, company, army). Type and token are used differently in the speech production literature. There, token is often used to refer to a single instance of a spoken word; type is used to refer to the abstract representation of the word that presumably comes into play every time an individual produces that word<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">We could look at a corpus and count up every time the word dogs appears in exactly that form. We could count up the number of times that cats appears in precisely that form. In that case we would be measuring surface frequency\u2014how often the exact word occurs. But the words dogs and cats are both related to other words that share the same root morpheme. We could decide to ignore minor differences in surface form and instead concentrate on how often the family of related words appears. 
If so, we would treat dog, dogs, dog-tired, and dogpile as being a single large class, and we would count up the number of times any member of the class appears in the corpus. In that case, we would be measuring root frequency\u2014how often the shared word root appears in the language. Those two ways of counting frequency can come up with very different estimates. For example, perhaps the exact word dog appears very often, but dogpile appears very infrequently. If we base our frequency estimate on surface frequency, dogpile is very infrequent. But if we use root frequency instead, dogpile is very frequent, because it is in the class of words that share the root dog, which appears fairly often.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">If we use these different frequency estimates (surface frequency and root frequency) to predict how long it will take people to respond on a reaction time task, root frequency makes better predictions than surface frequency does. A word that has a low surface frequency will be responded to quickly if its root frequency is high (Bradley, 1979; Taft, 1979, 1994). 
This outcome is predicted by an account like FOBS that says that word forms are accessed via their roots, and not by models like logogen where each individual word form has a separate entry in the mental lexicon.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Further evidence for the morphological decomposition hypothesis comes from priming studies involving words with real and pseudo-affixes. Many polymorphemic words are created when derivational affixes are added to a root. So, we can take the verb grow and turn it into a noun by adding the derivational suffix -er. A grower is someone who grows things. There are a lot of words that end in -er and have a similar syllabic structure to grower, but that are not real polymorphemic words. For example, sister looks a bit like grower. They both end in -er and they both have a single syllable that precedes -er. According to the FOBS model, we have to get rid of the affixes before we can identify the root. So, anything that looks or sounds like it has a suffix is going to be treated like it really does have a suffix, even when it doesn\u2019t. Even though sister is a monomorphemic word, the lexical access process breaks it down into a pseudo- (fake) root, sist, and a pseudo-suffix, -er. 
<\/span><\/p>\n<p><span style=\"color: #800000;\">After the affix stripping process has had a turn at breaking down sister into a root and a suffix, the lexical access system will try to find a bin that matches the pseudo-root sist. This process will fail, because there is no root morpheme in English that matches the input sist. In that case, the lexical access system will have to re-search the lexicon using the entire word sister. This extra process should take extra time; therefore the affix stripping hypothesis predicts that pseudo-suffixed words (like sister) should take longer to process than words that have a real suffix (like grower). This prediction has been confirmed in a number of reaction time studies\u2014people do have a harder time recognizing pseudo-suffixed words than words with real suffixes (Lima, 1987; Smith &amp; Sterling, 1982; Taft, 1981). People also have more trouble rejecting pseudo-words that are made up of a prefix (e.g., de) and a real root morpheme (e.g., juvenate) than a comparable pseudo-word that contains a prefix and a non-root (e.g., pertoire). This suggests that morphological decomposition successfully accesses a bin in the dejuvenate case, and people are able to rule out dejuvenate as a real word only after the entire bin has been fully searched (Taft &amp; Forster, 1975). 
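<\/span><\/p>\n<p>The strip-then-fall-back search described above can be sketched in a few lines. This is a toy illustration with an invented mini-lexicon (the names ROOTS, WHOLE_WORDS, and lexical_access are mine), not the actual FOBS implementation:<\/p>

```python
# Toy affix stripping: strip -er, try the root bin, then fall back
# to a whole-word search (the slower path taken by pseudo-suffixed words).
ROOTS = {'grow', 'dog', 'cat'}        # invented root lexicon
WHOLE_WORDS = {'sister', 'grower'}    # invented whole-word entries

def lexical_access(word):
    if word.endswith('er'):
        root = word[:-2]              # 'grower' -> 'grow', 'sister' -> 'sist'
        if root in ROOTS:
            return ('root', root)     # fast path: root bin found
    if word in WHOLE_WORDS:           # re-search with the entire word
        return ('whole word', word)
    return ('no entry', word)

print(lexical_access('grower'))  # ('root', 'grow')
print(lexical_access('sister'))  # ('whole word', 'sister'), after extra work
```

<p>On this sketch, <em>sister<\/em> costs an extra lookup because its pseudo-root <em>sist<\/em> matches no root bin, mirroring the longer reaction times reported above.<\/p>
<p><span style=\"color: #800000;\">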
Morphological structure may also play a role in word learning. When people are exposed to novel words that are made up of real morphemes, such as genvive (related to the morpheme vive, as in revive), they rate that stimulus as being a better English word and they recognize it better than an equally complex stimulus that does not incorporate a familiar root (such as gencule) (Dorfman, 1994, 1999).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>In which case English is a pretty bad language, as it tends not to re-use roots. E.g. \u201cgarlic\u201d vs. Danish \u201chvidl\u00f8g\u201d (white-onion), or \u201cedible\u201d vs. \u201ceatable\u201d (eat-able). Esperanto should do pretty well in such a comparison.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">When comprehenders demonstrate sensitivity to subcategory preference information (the fact that some structures are easier to process than others when a sentence contains a particular verb), they are behaving in ways that are consistent with the tuning hypothesis. The tuning hypothesis says, \u201cthat structural ambiguities are resolved on the basis of stored records relating to the prevalence of the resolution of comparable ambiguities in the past\u201d (Mitchell, Cuetos, Corley, &amp; Brysbaert, 1995, p. 470; see also Bates &amp; MacWhinney, 1987; Ford, Bresnan, &amp; Kaplan, 1982; MacDonald et al., 1994). 
In other words, people keep track of how often they encounter different syntactic structures, and when they are uncertain about how a particular string of words should be structured, they use this stored information to rank the different possibilities. In the case of subcategory preference information, the frequencies of different structures are tied to specific words\u2014verbs in this case. The next section will consider the possibility that frequencies are tied to more complicated configurations of words, rather than to individual words.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>This seems like a plausible account of why practicing can boost reading speed.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">The other way that proposition is defined in construction\u2013integration theory is, \u201cThe smallest unit of meaning that can be assigned a truth value.\u201d Anything smaller than that is a predicate or an argument. Anything bigger than that is a macroproposition. So, wrote is a predicate, and wrote the company is a predicate and one of its arguments. Neither is a proposition, because neither can be assigned a truth value. 
That is, it doesn\u2019t make sense <\/span><\/p>\n<p><span style=\"color: #800000;\">to ask, \u201cTrue or false: wrote the company?\u201d But it does make sense to ask, \u201cTrue or false: The <\/span><\/p>\n<p><span style=\"color: #800000;\">customer wrote the company?\u201d To answer that question, you would consult some <\/span><\/p>\n<p><span style=\"color: #800000;\">representation of the real or an imaginary world, and the statement would either accurately <\/span><\/p>\n<p><span style=\"color: #800000;\">describe the state of affairs in that world (i.e., it would be true) or it would not (i.e., it would <\/span><\/p>\n<p><span style=\"color: #800000;\">be false).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Although the precise mental mechanisms that are involved in converting the surface <\/span><\/p>\n<p><span style=\"color: #800000;\">form to a set of propositions have not been worked out, and there is considerable debate <\/span><\/p>\n<p><span style=\"color: #800000;\">about the specifics of propositional representation (see, e.g., Kintsch, 1998; Perfetti &amp; Britt, <\/span><\/p>\n<p><span style=\"color: #800000;\">1995), a number of experimental studies have supported the idea that propositions are a <\/span><\/p>\n<p><span style=\"color: #800000;\">real element of comprehenders\u2019 mental representations of texts (van Dijk &amp; Kintsch, 1983). <\/span><\/p>\n<p><span style=\"color: #800000;\">In other words, propositions are psychologically real\u2014there really are propositions in the <\/span><\/p>\n<p><span style=\"color: #800000;\">head. For example, Ratcliff and McKoon (1978) used priming methods to find out how <\/span><\/p>\n<p><span style=\"color: #800000;\">comprehenders\u2019 memories for texts are organized. There are a number of possibilities. 
It <\/span><\/p>\n<p><span style=\"color: #800000;\">could be that comprehenders\u2019 memories are organized to capture pretty much the verbatim <\/span><\/p>\n<p><span style=\"color: #800000;\">information that the text conveyed. In that case, we would expect that information that is <\/span><\/p>\n<p><span style=\"color: #800000;\">nearby in the verbatim form of the text would be very tightly connected in the comprehender\u2019s <\/span><\/p>\n<p><span style=\"color: #800000;\">memory of that text. So, for example, if you had a sentence like (2) (from Ratcliff &amp; McKoon, <\/span><\/p>\n<p><span style=\"color: #800000;\">1978)<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">(2) The geese crossed the horizon as the wind shuffled the clouds.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">the words horizon and wind are pretty close together, as they are separated by only two short <\/span><\/p>\n<p><span style=\"color: #800000;\">function words. If the comprehender\u2019s memory of the sentence is based on remembering it <\/span><\/p>\n<p><span style=\"color: #800000;\">as it appeared on the page, then horizon should be a pretty good retrieval cue for wind (and <\/span><\/p>\n<p><span style=\"color: #800000;\">vice versa).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">If we analyze sentence (2) as a set of propositions, however, we would make a different <\/span><\/p>\n<p><span style=\"color: #800000;\">prediction. Sentence (2) represents two connected propositions, because there are two <\/span><\/p>\n<p><span style=\"color: #800000;\">predicates, crossed and shuffled. If we built a propositional representation of sentence (2), <\/span><\/p>\n<p><span style=\"color: #800000;\">we would have a macroproposition (a proposition that is itself made up of other propositions), <\/span><\/p>\n<p><span style=\"color: #800000;\">and two micropropositions (propositions that combine to make up macropropositions). 
The <\/span><\/p>\n<p><span style=\"color: #800000;\">macroproposition is:<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">as (Proposition 1, Proposition 2)<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">The micropropositions are:<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Proposition 1: crossed [geese, the horizon]<\/span><\/p>\n<p><span style=\"color: #800000;\">Proposition 2: shuffled [the wind, the clouds]<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">Notice that the propositional representation of sentence (2) has horizon in one proposition, <\/span><\/p>\n<p><span style=\"color: #800000;\">and wind in another. According to construction\u2013integration theory, all of the elements <\/span><\/p>\n<p><span style=\"color: #800000;\">that go together to make a proposition should be more tightly connected in memory to each <\/span><\/p>\n<p><span style=\"color: #800000;\">other than to anything else in the sentence. As a result, two words from the same proposition <\/span><\/p>\n<p><span style=\"color: #800000;\">should make better retrieval cues than two words from different propositions. Those <\/span><\/p>\n<p><span style=\"color: #800000;\">predictions can be tested by asking subjects to read sentences like (2), do a distractor task <\/span><\/p>\n<p><span style=\"color: #800000;\">for a while, and then write down what they can remember about the sentences later on. On <\/span><\/p>\n<p><span style=\"color: #800000;\">each trial, one of the words from the sentence will be used as a retrieval cue or reminder. So, <\/span><\/p>\n<p><span style=\"color: #800000;\">before we ask the subject to remember sentence (2), we will give her a hint. 
The hint <\/span><\/p>\n<p><span style=\"color: #800000;\">(retrieval cue) might be a word from proposition 1 (like horizon) or a word from proposition<\/span><\/p>\n<p><span style=\"color: #800000;\">2 (like clouds), and the dependent measure would be the likelihood that the participant will <\/span><\/p>\n<p><span style=\"color: #800000;\">remember a word from the second proposition (like wind). Roger Ratcliff and Gail McKoon <\/span><\/p>\n<p><span style=\"color: #800000;\">found that words that came from the same proposition were much better retrieval cues <\/span><\/p>\n<p><span style=\"color: #800000;\">(participants were more likely to remember the target word) than words from different <\/span><\/p>\n<p><span style=\"color: #800000;\">propositions, even when distance in the verbatim form was controlled. In other words, it <\/span><\/p>\n<p><span style=\"color: #800000;\">does not help that much to be close to the target word in the verbatim form of the sentence <\/span><\/p>\n<p><span style=\"color: #800000;\">unless the reminder word is also from the same proposition as the target word (see also <\/span><\/p>\n<p><span style=\"color: #800000;\">Wanner, 1975; Weisberg, 1969).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>I&#8217;m surprised to see empirical evidence for this, but it is very neat when science does that &#8211; converge on the same result from two different angles (in this case metaphysics and linguistics).<\/p>\n<p>&nbsp;<\/p>\n<p>As for his microproposition\/macroproposition terminology, logicians normally call these atomic and compound (non-atomic) propositions, respectively.<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">How does suppression work? Is it as automatic as enhancement? There are a number of <\/span><\/p>\n<p><span style=\"color: #800000;\">reasons to think that suppression is not just a mirror image of enhancement. 
First, <\/span><\/p>\n<p><span style=\"color: #800000;\">suppression takes a lot longer to work than enhancement does. Second, while knowledge <\/span><\/p>\n<p><span style=\"color: #800000;\">activation (enhancement) occurs about the same way for everyone, not everyone is equally <\/span><\/p>\n<p><span style=\"color: #800000;\">good at suppressing irrelevant information, and this appears to be a major contributor to <\/span><\/p>\n<p><span style=\"color: #800000;\">differences in comprehension ability between different people (Gernsbacher, 1993; <\/span><\/p>\n<p><span style=\"color: #800000;\">Gernsbacher &amp; Faust, 1991; Gernsbacher et al., 1990). For example, Gernsbacher and her <\/span><\/p>\n<p><span style=\"color: #800000;\">colleagues acquired Verbal SAT scores for a large sample of students at the University of <\/span><\/p>\n<p><span style=\"color: #800000;\">Oregon (similar experiments have been done on Air Force recruits in basic training, who <\/span><\/p>\n<p><span style=\"color: #800000;\">are about the same age as the college students). Verbal SAT scores give a pretty good <\/span><\/p>\n<p><span style=\"color: #800000;\">indication of how well people are able to understand texts that they read, and there are <\/span><\/p>\n<p><span style=\"color: #800000;\">considerable differences between the highest and lowest scoring people in the sample. This <\/span><\/p>\n<p><span style=\"color: #800000;\">group of students was then asked to judge whether target words like ace were semantically <\/span><\/p>\n<p><span style=\"color: #800000;\">related to a preceding sentence like (15), above. Figure 5.4 presents representative data from <\/span><\/p>\n<p><span style=\"color: #800000;\">one of these experiments. 
The left-hand bars show that the ace meaning was highly activated <\/span><\/p>\n<p><span style=\"color: #800000;\">for both good comprehenders (the dark bars) and poorer comprehenders (the light bars) <\/span><\/p>\n<p><span style=\"color: #800000;\">immediately after the sentence. After a delay of one second (a very long time in language <\/span><\/p>\n<p><span style=\"color: #800000;\">processing terms), the good comprehenders had suppressed the contextually inappropriate <\/span><\/p>\n<p><span style=\"color: #800000;\">\u201cplaying card\u201d meaning of spade, but the poor comprehenders still had that meaning <\/span><\/p>\n<p><span style=\"color: #800000;\">activated (shown in the right-hand bars of Figure 5.4).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/emilkirkegaard.dk\/en\/wp-content\/uploads\/supression.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-4068\" title=\"supression\" src=\"http:\/\/emilkirkegaard.dk\/en\/wp-content\/uploads\/supression-300x210.png\" alt=\"\" width=\"300\" height=\"210\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>Very neat! Didn&#8217;t know about this, but it fits very nicely in the ECT (elementary cognitive test) tradition of Jensen. I should probably review this evidence and publish a review in Journal of Intelligence.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&#8211;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">To determine whether something is a cause, comprehenders apply the necessity in the <\/span><\/p>\n<p><span style=\"color: #800000;\">circumstances heuristic (which is based on the causal analysis of the philosopher Hegel). 
<\/span><\/p>\n<p><span style=\"color: #800000;\">The necessity in the circumstances heuristic says that \u201cA causes B, if, in the circumstances <\/span><\/p>\n<p><span style=\"color: #800000;\">of the story, B would not have occurred if A had not occurred, and if A is sufficient for B to <\/span><\/p>\n<p><span style=\"color: #800000;\">occur.\u201d<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>Sounds more like a logical fallacy, i.e. denying the antecedent:<\/p>\n<p>1. A\u2192B<br \/>\n2. \u00acA<br \/>\nThus, 3. \u00acB<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\">The importance of causal structure in the mental processing of texts can be demonstrated <\/span><\/p>\n<p><span style=\"color: #800000;\">in a variety of ways. First, the propositional structure of texts can be described as a network <\/span><\/p>\n<p><span style=\"color: #800000;\">of causal connections. Some of the propositions in a story will be on the central causal chain <\/span><\/p>\n<p><span style=\"color: #800000;\">that runs from the first proposition in the story (Once upon a time \u2026) to the last (\u2026 and <\/span><\/p>\n<p><span style=\"color: #800000;\">they lived happily ever after). Other propositions will be on causal dead-ends or side-plots. <\/span><\/p>\n<p><span style=\"color: #800000;\">In Cinderella, her wanting to go to the ball, the arrival of the fairy godmother, the loss of the <\/span><\/p>\n<p><span style=\"color: #800000;\">glass slipper, and the eventual marriage to the handsome prince, are all on the central causal <\/span><\/p>\n<p><span style=\"color: #800000;\">chain. Many of the versions of the Cinderella story do not bother to say what happens to the <\/span><\/p>\n<p><span style=\"color: #800000;\">evil stepmother and stepsisters after Cinderella gets married. 
Those events are off the <\/span><\/p>\n<p><span style=\"color: #800000;\">central causal chain and, no matter how they are resolved, they do not affect the central <\/span><\/p>\n<p><span style=\"color: #800000;\">causal chain. As a result, if non-central events are explicitly included in the story, they are <\/span><\/p>\n<p><span style=\"color: #800000;\">not remembered as well as more causally central elements (Fletcher, 1986; Fletcher &amp; <\/span><\/p>\n<p><span style=\"color: #800000;\">Bloom, 1988; Fletcher et al., 1990).<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>A nice model for memetic evolution.<\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">Korean Air had a big problem (Kirk, 2002). Their planes were dropping out of <\/span><\/p>\n<p><span style=\"color: #800000;\">the sky like ducks during hunting season. They had the worst safety record of <\/span><\/p>\n<p><span style=\"color: #800000;\">any major airline. Worried company executives ordered a top-to-bottom <\/span><\/p>\n<p><span style=\"color: #800000;\">review of company policies and practices to find out what was causing all the <\/span><\/p>\n<p><span style=\"color: #800000;\">crashes. An obvious culprit would be faulty aircraft or bad maintenance <\/span><\/p>\n<p><span style=\"color: #800000;\">practices. But their review showed that Korean Air\u2019s aircraft were well <\/span><\/p>\n<p><span style=\"color: #800000;\">maintained and mechanically sound. So what was the problem? It turned out <\/span><\/p>\n<p><span style=\"color: #800000;\">that the way members of the flight crew talked to one another was a major <\/span><\/p>\n<p><span style=\"color: #800000;\">contributing factor in several air disasters. As with many airlines, Korean Air <\/span><\/p>\n<p><span style=\"color: #800000;\">co-pilots were generally junior to the pilots they flew with. 
Co-pilots\u2019 <\/span><\/p>\n<p><span style=\"color: #800000;\">responsibilities included, among other things, helping the pilot monitor the <\/span><\/p>\n<p><span style=\"color: #800000;\">flight instruments and communicating with the pilot when a problem occurred, <\/span><\/p>\n<p><span style=\"color: #800000;\">including when the pilot might be making an error flying the plane. But in the <\/span><\/p>\n<p><span style=\"color: #800000;\">wider Korean culture, younger people treat older people with great deference <\/span><\/p>\n<p><span style=\"color: #800000;\">and respect, and this social norm influences the way younger and older people <\/span><\/p>\n<p><span style=\"color: #800000;\">talk to one another. Younger people tend to defer to older people and feel <\/span><\/p>\n<p><span style=\"color: #800000;\">uncomfortable challenging their judgment or pointing out when they are <\/span><\/p>\n<p><span style=\"color: #800000;\">about to fly a jet into the side of a mountain. In the air, co-pilots were waiting <\/span><\/p>\n<p><span style=\"color: #800000;\">too long to point out pilot errors, and when they did voice their concerns, their <\/span><\/p>\n<p><span style=\"color: #800000;\">communication style, influenced by a lifetime of cultural conditioning, made it <\/span><\/p>\n<p><span style=\"color: #800000;\">more difficult for pilots to realize when something was seriously wrong. To <\/span><\/p>\n<p><span style=\"color: #800000;\">correct this problem, pilots and co-pilots had to re-learn how to talk to one <\/span><\/p>\n<p><span style=\"color: #800000;\">another. Pilots needed to learn to pay closer attention when co-pilots voiced <\/span><\/p>\n<p><span style=\"color: #800000;\">their opinions, and co-pilots had to learn to be more direct and assertive when <\/span><\/p>\n<p><span style=\"color: #800000;\">communicating with pilots. 
After instituting these and other changes, Korean <\/span><\/p>\n<p><span style=\"color: #800000;\">Air\u2019s safety record improved and they stopped losing planes.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p>This one is probably not true. No source is given either. Perhaps it is just a case of statistical regression toward the mean: any airline that does badly for chance reasons will tend to recover.<\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">To date, experiments on statistical learning in infants have been based on highly <\/span><\/p>\n<p><span style=\"color: #800000;\">simplified mini-languages with very rigid statistical properties. For example, transitional <\/span><\/p>\n<p><span style=\"color: #800000;\">probabilities between syllables are set to 1.0 for \u201cwords\u201d in the language, and .33 for pairs of <\/span><\/p>\n<p><span style=\"color: #800000;\">syllables that cut across \u201cword\u201d boundaries.17 Natural languages have a much wider range of <\/span><\/p>\n<p><span style=\"color: #800000;\">transitional probabilities between syllables, the vast majority of which are far lower than <\/span><\/p>\n<p><span style=\"color: #800000;\">1.0. Researchers have used mathematical models to simulate learning of natural languages, <\/span><\/p>\n<p><span style=\"color: #800000;\">using samples of real infant-directed speech to train the simulated learner (Yang, 2004). <\/span><\/p>\n<p><span style=\"color: #800000;\">When the model has to rely on transitional probabilities alone, it fails to segment speech <\/span><\/p>\n<p><span style=\"color: #800000;\">accurately. 
However, when the model makes two simple assumptions about prosody\u2014that <\/span><\/p>\n<p><span style=\"color: #800000;\">each word has a single stressed syllable, and that the prevailing pattern for bisyllables is <\/span><\/p>\n<p><span style=\"color: #800000;\">trochaic (STRONG\u2013weak)\u2014the model is about as accurate in its segmentation decisions as <\/span><\/p>\n<p><span style=\"color: #800000;\">7\u00bd-month-old infants. This result casts doubt on whether the statistical learning strategy is <\/span><\/p>\n<p><span style=\"color: #800000;\">sufficient for infants to learn how to segment naturally occurring speech (and if the strategy <\/span><\/p>\n<p><span style=\"color: #800000;\">is not sufficient, it can not be necessary either).<\/span><\/p>\n<p>More logic errors? Let&#8217;s translate the talk of sufficient and necessary conditions into logic:<\/p>\n<p>A is a sufficient condition for B, is the same as, A\u2192B<\/p>\n<p>A is a necessary condition for B, is the same as, B\u2192A<\/p>\n<p>The claim that if A is not sufficient for B, then A is not necessary for B, is thus the same as: \u00ac(A\u2192B)\u2192\u00ac(B\u2192A). Clearly not valid: take A true and B false, so the antecedent holds; then B\u2192A is vacuously true, and the consequent fails.<\/p>\n<p><a href=\"http:\/\/emilkirkegaard.dk\/en\/wp-content\/uploads\/invalid_argument.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4069\" title=\"invalid_argument\" src=\"http:\/\/emilkirkegaard.dk\/en\/wp-content\/uploads\/invalid_argument.png\" alt=\"\" width=\"144\" height=\"144\" \/><\/a><\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">So, if a child already knows the name of a concept, she will reject a second label as referring <\/span><\/p>\n<p><span style=\"color: #800000;\">to the same concept. Children can use this principle to figure out the meanings of new <\/span><\/p>\n<p><span style=\"color: #800000;\">words, because applying the principle of contrast rules out possible meanings. 
If you <\/span><\/p>\n<p><span style=\"color: #800000;\">already know that gavagai means \u201crabbit,\u201d and your guide points at a rabbit and says, blicket, <\/span><\/p>\n<p><span style=\"color: #800000;\">you will not assume that gavagai and blicket are synonyms. Instead, you will consider the <\/span><\/p>\n<p><span style=\"color: #800000;\">possibility that blicket refers to a salient part of the rabbit (its ears, perhaps) or a type of <\/span><\/p>\n<p><span style=\"color: #800000;\">rabbit or some other salient property of rabbits (that they\u2019re cute, maybe). In the lab, <\/span><\/p>\n<p><span style=\"color: #800000;\">children who are taught two new names while attending to an unfamiliar object interpret <\/span><\/p>\n<p><span style=\"color: #800000;\">the first name as referring to the entire object and the second name as referring to a salient <\/span><\/p>\n<p><span style=\"color: #800000;\">part of the object. For somewhat older children (3\u20134 years old), parents often provide an <\/span><\/p>\n<p><span style=\"color: #800000;\">explicit contrast when introducing children to new words that label parts of an object <\/span><\/p>\n<p><span style=\"color: #800000;\">(Saylor, Sabbagh, &amp; Baldwin, 2002). So, an adult might point to Flopsy and say, See <\/span><\/p>\n<p><span style=\"color: #800000;\">the bunny? These are his ears. Children do not need such explicit instruction, however, <\/span><\/p>\n<p><span style=\"color: #800000;\">as they appear to spontaneously apply the principle of contrast to deduce meanings for <\/span><\/p>\n<p><span style=\"color: #800000;\">subcomponents of objects (e.g., ears) and substances that objects are made out of (e.g., <\/span><\/p>\n<p><span style=\"color: #800000;\">wood, naugahyde, duck tape).<\/span><\/p>\n<p>Dogs (some of them) apparently can also do this. 
<a href=\"http:\/\/www.sciencemag.org\/content\/304\/5677\/1682.short\">http:\/\/www.sciencemag.org\/content\/304\/5677\/1682.short<\/a><\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">When Chinese was thought of as a pictographic script, it made sense to think that <\/span><\/p>\n<p><span style=\"color: #800000;\">Chinese script might be processed much differently than English script. But it turns out <\/span><\/p>\n<p><span style=\"color: #800000;\">that there are many similarities in how the two scripts are processed. For one thing, reading <\/span><\/p>\n<p><span style=\"color: #800000;\">both scripts leads to the rapid and automatic activation of phonological (sound) codes. <\/span><\/p>\n<p><span style=\"color: #800000;\">When we read English, we use groups of letters to activate phonological codes automatically <\/span><\/p>\n<p><span style=\"color: #800000;\">(this is one of the sources of the inner voice that you often hear when you read). The fact <\/span><\/p>\n<p><span style=\"color: #800000;\">that phonological codes are automatically activated in English reading is shown by <\/span><\/p>\n<p><span style=\"color: #800000;\">experiments involving semantic categorization tasks where people have to judge whether a <\/span><\/p>\n<p><span style=\"color: #800000;\">word is a member of a category. Heterophonic (multiple pronunciations) homographs (one <\/span><\/p>\n<p><span style=\"color: #800000;\">spelling), such as wind, take longer to read than comparably long and frequent regular <\/span><\/p>\n<p><span style=\"color: #800000;\">words, because reading wind activates two phonological representations (as in the wind was <\/span><\/p>\n<p><span style=\"color: #800000;\">blowing vs. wind up the clock) (Folk &amp; Morris, 1995). A related consistency effect involves <\/span><\/p>\n<p><span style=\"color: #800000;\">words that have spelling patterns that have multiple pronunciations. 
The word have contains <\/span><\/p>\n<p><span style=\"color: #800000;\">the letter \u201ca,\u201d which in this case is pronounced as a \u201cshort\u201d \/a\/ sound. But most of the time <\/span><\/p>\n<p><span style=\"color: #800000;\">-ave is pronounced with the \u201clong\u201d a sound, as in cave, and save. So, the words have, cave, <\/span><\/p>\n<p><span style=\"color: #800000;\">and save, are said to be inconsistent because the same string of letters can have multiple <\/span><\/p>\n<p><span style=\"color: #800000;\">pronunciations. Words of this type take longer to read than words that have entirely <\/span><\/p>\n<p><span style=\"color: #800000;\">consistent letter\u2013pronunciation patterns (Glushko, 1979), and the extra reading time <\/span><\/p>\n<p><span style=\"color: #800000;\">reflects the costs associated with selecting the correct phonological code from a number of <\/span><\/p>\n<p><span style=\"color: #800000;\">automatically activated candidates.<\/span><\/p>\n<p>Some potential good reasons for a &#8216;shallow&#8217; (i.e. good) orthography here! A bad spelling system is literally causing us to take longer to read, not just to learn to read.<\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">Phonemic awareness is an important precursor of literacy (the ability to read and write). <\/span><\/p>\n<p><span style=\"color: #800000;\">It is thought to play a causal role in reading success, because differences in phonemic <\/span><\/p>\n<p><span style=\"color: #800000;\">awareness can be measured in children who have not yet begun to read. 
Those prereaders\u2019 <\/span><\/p>\n<p><span style=\"color: #800000;\">phonemic awareness test scores then predict how successfully and how quickly they will <\/span><\/p>\n<p><span style=\"color: #800000;\">master reading skills two or three years down the line when they begin to read (Torgesen <\/span><\/p>\n<p><span style=\"color: #800000;\">et al., 1999, 2001; Wagner &amp; Torgesen, 1987; Wagner, Torgesen, &amp; Rashotte, 1994; <\/span><\/p>\n<p><span style=\"color: #800000;\">Wagner et al., 1997; see Wagner, Piasta, &amp; Torgesen, 2006, for a review; but see Castles &amp; <\/span><\/p>\n<p><span style=\"color: #800000;\">Coltheart, 2004, for a different perspective). Phonemic awareness can be assessed in a <\/span><\/p>\n<p><span style=\"color: #800000;\">variety of ways, including the elision, sound categorization, and blending tasks (Torgesen <\/span><\/p>\n<p><span style=\"color: #800000;\">et al., 1999), among others, but the best assessments of phonemic awareness involve multiple <\/span><\/p>\n<p><span style=\"color: #800000;\">measures. In the elision task, children are given a word such as cat and asked what it would <\/span><\/p>\n<p><span style=\"color: #800000;\">sound like if you got rid of the \/k\/ sound. Sound categorization involves listening to sets of <\/span><\/p>\n<p><span style=\"color: #800000;\">words, such as pin, bun, fun, and gun, and identifying the word \u201cthat does not sound like the <\/span><\/p>\n<p><span style=\"color: #800000;\">others\u201d (in this case, pin; Torgesen et al., 1999, p. 76). In blending tasks, children hear an <\/span><\/p>\n<p><span style=\"color: #800000;\">onset (word beginning) and a rime (vowel and consonant sound at the end of a syllable), <\/span><\/p>\n<p><span style=\"color: #800000;\">and say what they would sound like when they are put together. 
Children\u2019s composite scores <\/span><\/p>\n<p><span style=\"color: #800000;\">on tests of phonemic awareness are strongly correlated with the development of reading <\/span><\/p>\n<p><span style=\"color: #800000;\">skill at later points in time. Children who are less phonemically aware will experience <\/span><\/p>\n<p><span style=\"color: #800000;\">greater difficulty learning to read, but effective interventions have been developed to <\/span><\/p>\n<p><span style=\"color: #800000;\">enhance children\u2019s phonemic awareness, and hence to increase the likelihood that they will <\/span><\/p>\n<p><span style=\"color: #800000;\">acquire reading skill within the normal time frame (Ehri, Nunes, Willows, et al., 2001).18<\/span><\/p>\n<p>These should work as early IQ tests.<\/p>\n<p>And they do (even if this is a weak paper): <a href=\"http:\/\/www.psy.cuhk.edu.hk\/psy_media\/Cammie_files\/016.correlates%20of%20phonological%20awareness%20implications%20for%20gifted%20education.pdf\">http:\/\/www.psy.cuhk.edu.hk\/psy_media\/Cammie_files\/016.correlates%20of%20phonological%20awareness%20implications%20for%20gifted%20education.pdf<\/a><\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">There are different kinds of neighborhoods, and the kind of neighborhood a word <\/span><\/p>\n<p><span style=\"color: #800000;\">inhabits affects how easy it is to read that word. Different orthographic neighborhoods are <\/span><\/p>\n<p><span style=\"color: #800000;\">described as being consistent or inconsistent, based on how the different words in the <\/span><\/p>\n<p><span style=\"color: #800000;\">neighborhood are pronounced. If they are all pronounced alike, then the neighborhood is <\/span><\/p>\n<p><span style=\"color: #800000;\">consistent. If some words in the neighborhood are pronounced one way, and others are <\/span><\/p>\n<p><span style=\"color: #800000;\">pronounced another way, then the neighborhood is inconsistent. 
The neighborhood that <\/span><\/p>\n<p><span style=\"color: #800000;\">made inhabits is consistent, because all of the other members of the neighborhood (wade, <\/span><\/p>\n<p><span style=\"color: #800000;\">fade, etc.) are pronounced with the long \/a\/ sound. On the other hand, hint lives in an <\/span><\/p>\n<p><span style=\"color: #800000;\">inconsistent neighborhood because some of the neighbors are pronounced with the short <\/span><\/p>\n<p><span style=\"color: #800000;\">\/i\/ sound (mint, lint, tint), but some are pronounced with the long \/i\/ sound (pint). Words <\/span><\/p>\n<p><span style=\"color: #800000;\">from inconsistent neighborhoods take longer to pronounce than words from consistent <\/span><\/p>\n<p><span style=\"color: #800000;\">neighborhoods, and this effect extends to non-words as well (Glushko, 1979; see also Jared, <\/span><\/p>\n<p><span style=\"color: #800000;\">McRae, &amp; Seidenberg, 1990; Seidenberg, Plaut, Petersen, McClelland, &amp; McRae, 1994). So, <\/span><\/p>\n<p><span style=\"color: #800000;\">it takes you less time to say tade than it takes you to say bint. Why would this be?<\/span><\/p>\n<p>Bad spelling even makes us speak slower&#8230;<\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">The single-route models would seem to enjoy a parsimony advantage, since they can <\/span><\/p>\n<p><span style=\"color: #800000;\">produce frequency and regularity effects, as well as their interaction, on the basis of a single <\/span><\/p>\n<p><span style=\"color: #800000;\">mechanism.25 However, recent studies have indicated that the exact position in a word that <\/span><\/p>\n<p><span style=\"color: #800000;\">leads to inconsistent spelling\u2013sound mappings affects how quickly the word can be read <\/span><\/p>\n<p><span style=\"color: #800000;\">aloud. 
As noted above, it takes longer to read a word with an inconsistency at the beginning <\/span><\/p>\n<p><span style=\"color: #800000;\">(e.g., general, where hard \/g\/ as in goat is more common) than a word with an inconsistency <\/span><\/p>\n<p><span style=\"color: #800000;\">at the end (e.g., bomb, where the b is silent). This may be more consistent with the DRC <\/span><\/p>\n<p><span style=\"color: #800000;\">serial mapping of letters to sounds than the parallel activation posited by PDP-style single-route models (Coltheart &amp; Rastle, 1994; Cortese, 1998; Rastle &amp; Coltheart, 1999b; Roberts, <\/span><\/p>\n<p><span style=\"color: #800000;\">Rastle, Coltheart, &amp; Besner, 2003).<\/span><\/p>\n<p>In practical terms, this means that we should begin with words that have problematic beginnings and endings. Words like \u201cmnemonic\u201d and \u201cpsychology\u201d.<\/p>\n<p>&#8211;<\/p>\n<p><span style=\"color: #800000;\">Treatment options for aphasia include pharmacological therapy (drugs) and various <\/span><\/p>\n<p><span style=\"color: #800000;\">forms of speech therapy.18 Let\u2019s review pharmacological therapy before turning to speech <\/span><\/p>\n<p><span style=\"color: #800000;\">therapy. One of the main problems that happens following strokes is that damage to the <\/span><\/p>\n<p><span style=\"color: #800000;\">blood vessels in the brain reduces the blood flow to perisylvian brain regions, and <\/span><\/p>\n<p><span style=\"color: #800000;\">hypometabolism\u2014less than normal activity\u2014in those regions likely contributes to aphasic <\/span><\/p>\n<p><span style=\"color: #800000;\">symptoms. Therefore, some pharmacological treatments focus on increasing the blood <\/span><\/p>\n<p><span style=\"color: #800000;\">supply to the brain, and those treatments have been shown to be effective in some studies <\/span><\/p>\n<p><span style=\"color: #800000;\">(Kessler, Thiel, Karbe, &amp; Heiss, 2001). 
The period immediately following the stroke appears <\/span><\/p>\n<p><span style=\"color: #800000;\">to be critical in terms of intervening to preserve function. For example, aphasia symptoms <\/span><\/p>\n<p><span style=\"color: #800000;\">can be alleviated by drugs that increase blood pressure if they are administered very rapidly <\/span><\/p>\n<p><span style=\"color: #800000;\">when the stroke occurs (Wise, Sutter, &amp; Burkholder, 1972). During this period, aphasic <\/span><\/p>\n<p><span style=\"color: #800000;\">symptoms will reappear if blood pressure is allowed to fall, even if the patient\u2019s blood <\/span><\/p>\n<p><span style=\"color: #800000;\">pressure is not abnormally low. In later stages of recovery, blood pressure can be reduced <\/span><\/p>\n<p><span style=\"color: #800000;\">without causing the aphasic symptoms to reappear. Other treatment options capitalize on <\/span><\/p>\n<p><span style=\"color: #800000;\">the fact that the brain has some ability to reorganize itself following an injury (this ability is <\/span><\/p>\n<p><span style=\"color: #800000;\">called neural plasticity). It turns out that stimulant drugs, including amphetamines, appear <\/span><\/p>\n<p><span style=\"color: #800000;\">to magnify or boost brain reorganization. When stimulants are taken in the period <\/span><\/p>\n<p><span style=\"color: #800000;\">immediately following a stroke, and patients are also given speech-language therapy, their <\/span><\/p>\n<p><span style=\"color: #800000;\">language function improves more than control patients who receive speech-language <\/span><\/p>\n<p><span style=\"color: #800000;\">therapy and a placebo in the six months after their strokes (Walker-Batson et al., 2001).<\/span><\/p>\n<p>Very interesting application of amphetamines.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overall an interesting introduction. 
Some chapters were much more interesting to me than others, which were somewhere between kinda boring and boring. Generally, the book is way too light on the statistical features of the studies cited. When I hear of a purportedly great study, I want to know the sample size and the significance [&hellip;]<\/p>\n","protected":false},"author":17,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1660,1653],"tags":[1067],"class_list":["post-4063","post","type-post","status-publish","format-standard","hentry","category-linguisticslanguage","category-psychology","tag-review","entry"],"_links":{"self":[{"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/posts\/4063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/users\/17"}],"replies":[{"embeddable":true,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/comments?post=4063"}],"version-history":[{"count":5,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/posts\/4063\/revisions"}],"predecessor-version":[{"id":4371,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/posts\/4063\/revisions\/4371"}],"wp:attachment":[{"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/media?parent=4063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/categories?post=4063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/emilkirkegaard.dk\/en\/wp-json\/wp\/v2\/tags?post=4063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}