Analyzing Grammar: An Introduction (Cambridge University Press, June 2005)


Overall, there is not much to say about this book. It covers most of the standard material. Neither particularly good or interesting, nor particularly bad or uninteresting, IMO.

For example, what is the meaning of the word hello? What information does it convey? It is a very difficult word to define, but every speaker of English knows how to use it: for greeting an acquaintance, answering the telephone, etc. We might say that hello conveys the information that the speaker wishes to acknowledge the presence of, or initiate a conversation with, the hearer. But it would be very strange to answer the phone or greet your best friend by saying “I wish to acknowledge your presence” or “I wish to initiate a conversation with you.” What is important about the word hello is not its information content (if any) but its use in social interaction.

In the Teochew language (a “dialect” of Chinese), there is no word for ‘hello’. The normal way for one friend to greet another is to ask: “Have you already eaten or not?” The expected reply is: “I have eaten,” even if this is not in fact true.

In our comparison of English with Teochew, we saw that both languages employ a special form of sentence for expressing Yes–No questions. In fact, most, if not all, languages have a special sentence pattern which is used for asking such questions. This shows that the linguistic form of an utterance is often closely related to its meaning and its function. On the other hand, we noted that the grammatical features of a Yes–No question in English are not the same as in Teochew. Different languages may use very different grammatical devices to express the same basic concept. So understanding the meaning and function of an utterance will not tell us everything we need to know about its form.

interesting for me because of my work on a logic of questions and answers.

Both of the hypotheses we have reached so far about Lotuko words are based on the assumption that the meaning of a sentence is composed in some regular way from the meanings of the individual words. That is, we have been assuming that sentence meanings are compositional. Of course, every language includes numerous expressions where this is not the case. Idioms are one common example. The English phrase kick the bucket can mean ‘die,’ even though none of the individual words has this meaning. Nevertheless, the compositionality of meaning is an important aspect of the structure of all human languages.

for more on compositionality see:
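a toy sketch of the idea (my own illustration, with invented data): word meanings as sets of entities, intersective composition for phrases, and one idiom stored whole:

```python
# Toy illustration of compositional vs. idiomatic meaning.
# Word meanings are modeled as sets of entities; an intersective
# adjective-noun combination composes by set intersection.
# Idioms, by contrast, must be stored whole: their meaning is not
# derivable from their parts.

WORD_MEANINGS = {
    "dog":   {"rex", "fido"},
    "cat":   {"tom"},
    "brown": {"rex", "tom", "acorn"},
}

IDIOMS = {
    ("kick", "the", "bucket"): "die",  # meaning listed, not computed
}

def compose(adjective, noun):
    """Intersective composition: [[brown dog]] = [[brown]] ∩ [[dog]]."""
    return WORD_MEANINGS[adjective] & WORD_MEANINGS[noun]

print(compose("brown", "dog"))            # {'rex'}
print(IDIOMS[("kick", "the", "bucket")])  # die
```

the point of the contrast: compose() derives “brown dog” from its parts by a regular rule, while the idiom’s meaning can only be looked up.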

We have discussed three types of reasoning that can be used to identify the meaningful elements of an utterance (whether parts of a word or words in a sentence): minimal contrast, recurring partials, and pattern-matching. In practice, when working on a new body of data, we often use all three at once, without stopping to think which method we use for which element. Sometimes, however, it is important to be able to state explicitly the pattern of reasoning which we use to arrive at certain conclusions. For example, suppose that one of our early hypotheses about the language is contradicted by further data. We need to be able to go back and determine what evidence that hypothesis was based on so that we can re-evaluate that evidence in the light of additional information. This will help us to decide whether the hypothesis can be modified to account for all the facts, or whether it needs to be abandoned entirely. Grammatical analysis involves an endless process of “guess and check” – forming hypotheses, testing them against further data, and modifying or abandoning those which do not work.

quite a lot of science works like that. conjecture and refutation, pretty much (Popper)
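the “recurring partials” reasoning can even be mimicked mechanically. a toy sketch (my own; the word–gloss pairs are invented, not from any real language):

```python
# Minimal "recurring partials" sketch: given word-gloss pairs (invented
# data), hypothesize a morpheme for any substring that recurs across
# words, pairing it with whatever meaning component all its glosses share.

from collections import defaultdict

DATA = [  # hypothetical forms, not from any real language
    ("kafoto", "my foot"),
    ("kanini", "my hand"),
    ("mifoto", "your foot"),
]

def recurring_partials(data, length=2):
    """Map each recurring substring to the gloss words shared by all
    of its occurrences - the candidate 'meaning' of that partial."""
    seen = defaultdict(list)
    for form, gloss in data:
        for i in range(len(form) - length + 1):
            seen[form[i:i+length]].append(set(gloss.split()))
    return {sub: set.intersection(*glosses)
            for sub, glosses in seen.items() if len(glosses) > 1}

hypotheses = recurring_partials(DATA)
print(hypotheses["ka"])   # {'my'}   - 'ka-' recurs with the meaning 'my'
print(hypotheses["fo"])   # {'foot'}
```

like the book says, these are only hypotheses: further data may contradict them, and then one goes back to the evidence (here, the gloss sets) and re-evaluates. guess and check.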

What do we mean when we say that a certain form, such as Zapotec ka–, is a “morpheme”? Charles Hockett (1958) gave a definition of this term which is often quoted:

Morphemes are the smallest individually meaningful elements in the utterances of a language.

There are two crucial aspects of this definition. First, a morpheme is meaningful. A morpheme normally involves a consistent association of phonological form with some aspect of meaning, as seen in (7) where the form ñee was consistently associated with the concept ‘foot.’ However, this association of form with meaning can be somewhat flexible. We will see various ways in which the phonological shape of a morpheme may be altered to some extent in particular environments, and there are some morphemes whose meaning may depend partly on context.

obviously does not work for

what is the solution to this inconsistency in terminology?

In point (c) above we noted that a word which contains no plural marker is always singular. The chart in (17) shows that the plural prefix is optional, and that when it is present it indicates plurality; but it doesn’t say anything about the significance of the lack of a prefix. One way to tidy up this loose end is to assume that the grammar of the language includes a default rule which says something like the following: “a countable noun which contains no plural prefix is interpreted as being singular.”

Another possible way to account for the same fact is to assume that singular nouns carry an “invisible” (or null) prefix which indicates singular number. That would mean that the number prefix is actually obligatory for this class of noun. Under this approach, our chart would look something like (18):

the default-rule theory is more plausible than positing invisible morphemes.
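the two competing analyses can be made concrete in a small sketch (my own illustration, using a hypothetical plural prefix ma- rather than the book's actual data):

```python
# Two analyses of the absence of a plural prefix, sketched with a
# hypothetical prefix "ma-" (invented; the book's actual data differ).

def number_default_rule(noun):
    """Default-rule analysis: no plural prefix -> interpret as singular."""
    return "plural" if noun.startswith("ma-") else "singular"

def number_null_prefix(noun):
    """Null-prefix analysis: the number prefix is obligatory; absence
    IS a prefix - an invisible singular morpheme we segment explicitly."""
    if noun.startswith("ma-"):
        return ("ma-", "plural")
    return ("0-", "singular")   # posited null (zero) morpheme

print(number_default_rule("ma-kuda"))   # plural
print(number_null_prefix("kuda"))       # ('0-', 'singular')
```

both functions assign the same interpretations; the difference is purely in the bookkeeping, which is part of why the invisible-morpheme move feels gratuitous.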

since the book continues to use Malay as an example, including the word <orang>, i’m compelled to mention that it is not a coincidence that it is similar to <orangutan>.

The name “orangutan” (also written orang-utan, orang utan, orangutang, and ourang-outang) is derived from the Malay and Indonesian words orang meaning “person” and hutan meaning “forest”,[1] thus “person of the forest”.[2] Orang Hutan was originally not used to refer to apes, but to forest-dwelling humans. The Malay words used to refer specifically to the ape are maias and mawas, but it is unclear if those words refer to just orangutans, or to all apes in general. The first attestation of the word to name the Asian ape is in Jacobus Bontius’ 1631 Historiae naturalis et medicae Indiae orientalis – he described that Malaysians had informed him the ape was able to talk, but preferred not to “lest he be compelled to labour”.[3] The word appeared in several German-language descriptions of Indonesian zoology in the 17th century. The likely origin of the word comes specifically from the Banjarese variety of Malay.[4]

The word was first attested in English in 1691 in the form orang-outang, and variants with -ng instead of -n as in the Malay original are found in many languages. This spelling (and pronunciation) has remained in use in English up to the present, but has come to be regarded as incorrect.[5][6][7] The loss of “h” in Utan and the shift from n to -ng has been taken to suggest that the term entered English through Portuguese.[4] In 1869, British naturalist Alfred Russel Wallace, co-creator of modern evolutionary theory, published his account of Malaysia’s wildlife: The Malay Archipelago: The Land of the Orang-Utan and the Bird of Paradise.[3]

Traditional definitions for parts of speech are based on “notional” (i.e. semantic) properties such as the following:

(17) A noun is a word that names a person, place, or thing.
A verb is a word that names an action or event.
An adjective is a word that describes a state.

However, these characterizations fail to identify nouns like destruction, theft, beauty, heaviness. They cannot distinguish between the verb love and the adjective fond (of), or between the noun fool and the adjective foolish. Note that there is very little semantic difference between the two sentences in (18).

(18) They are fools.

They are foolish.

it is easy to fix 17a to include abstractions. all his counter-examples are abstractions.

<love> is both a noun and a verb, but by the definitions in 17, which is it?

the 18 ex. seems weak too. what about interpreting 18b as claiming merely that they are foolish? this does not mean that they are fools. it may be a temporary situation (drunk, perhaps), or isolated to specific areas of reality (ex. religion).

not that i’m especially happy about semantic definitions, it’s just that the argumentation above is not convincing.

Third, the head is more likely to be obligatory than the modifiers or other non-head elements. For example, all of the elements of the subject noun phrase in (22a) can be omitted except the head word pigs. If this word is deleted, as in (22e), the result is ungrammatical.

(22) a [The three little pigs] eat truffles.

b [The three pigs] eat truffles.

c [The pigs] eat truffles.

d [Pigs] eat truffles.

e *[The three little] eat truffles.

not so quick. if the context makes it clear that they are speaking about pigs, or children, or whatever, 22e is perfectly understandable, since context ‘fills out’ the missing information, grammatically speaking. but the author is right in that it is incomplete, and without context to fill in, one would be forced to ask “three little what?”. but still, that one will actually respond like this shows that the utterance was understood, at least in part.

Of course, English noun phrases do not always contain a head noun. In certain contexts a previously mentioned head may be omitted because it is “understood,” as in (23a). This process is called ellipsis. Moreover, in English, and in many other languages, adjectives can sometimes be used without any head noun to name classes of people, as in (23b,c). But, aside from a few fairly restricted patterns like these, heads of phrases in English tend to be obligatory.

(23) a [The third little pig] was smarter than [the second ].

b [the good], [the bad] and [the ugly]

c [The rich] get richer and [the poor] get children.

i was going to write that the author doesn’t seem to understand the word “obligatory”, but then another interpretation dawned upon me. i think he means that under most conditions one cannot leave out the noun in a noun phrase (NP), but sometimes one can. confusing wording.

As we can already see from example (5), different predicates require different numbers of arguments: hungry and snores require just one, loves and slapping require two. Some predicates may not require any arguments at all. For example, in many languages comments about the weather (e.g. It is raining, or It is dark, or It is hot) could be expressed by a single word, a bare predicate with no arguments.

it is worth mentioning that there is a name for this: predicates taking no arguments are called avalent (weather verbs being the standard example).

It is important to remember that arguments can also be optional. For example, many transitive verbs allow an optional beneficiary argument (18a), and most transitive verbs of the agent–patient type allow an optional instrument argument (18b). The crucial fact is that adjuncts are always optional. So the inference “if obligatory then argument” is valid; but the inference “if optional then adjunct” is not.

strictly speaking, this is using the terminology incorrectly. conditionals are not inferences. the author should have written e.g. “the inference ‘obligatory, therefore, argument’ is valid”, or alternatively “the conditional ‘if obligatory, then argument’ is true”.

confusing inferences with conditionals leads to all kinds of confusions in logic.
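in standard notation, the distinction i mean is between a sequent (an inference) and a formula (a conditional):

```latex
% a valid inference: from "x is obligatory" one may conclude "x is an argument"
\mathrm{Obligatory}(x) \vdash \mathrm{Argument}(x)

% the corresponding conditional: a sentence, which is true or false
\mathrm{Obligatory}(x) \rightarrow \mathrm{Argument}(x)

% and the invalid converse inference the book warns against:
\mathrm{Optional}(x) \nvdash \mathrm{Adjunct}(x)
```

the turnstile ⊢ relates premises to a conclusion; the arrow → builds a single sentence. conflating them is exactly the slip in the quoted passage.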

Another way of specifying the transitivity of a verb is to ask, how many term (subject or object) arguments does it take? The number of terms, or direct arguments, is sometimes referred to as the valence of the verb. Since most verbs can be said to have a subject, the valence of a verb is normally one greater than the number of objects it takes: an intransitive verb has a valence of one, a transitive verb has a valence of two, and a ditransitive verb has a valence of three.

the author is just talking about how many operands the expressed predicate has. there are also verbs which can express predicates with four operands. consider <transfer>, e.g. “Peter transfers 5USD from Mike to Jim.” there Peter is subject and agent; 5USD is object and theme; Jim is a recipient; Mike is a source (an anti-recipient?).

The distinctions between OBJ2 and OBL make little to no sense to me.

It is important to notice that the valence of the verb (in this sense) is not the same as the number of arguments it takes. For example, the verb donate takes three semantic arguments, as illustrated in (8). However, donate has a valence of two because it takes only two term arguments, SUBJ and OBJ. With this predicate, the recipient is always expressed as an oblique.

(8) a Michael Jackson donated his sunglasses to the National Museum.
    b donate < agent, theme, recipient >
                 |      |       |
               SUBJ    OBJ     OBL

Some linguists use the term “semantic valence” to refer to the number of semantic arguments which a predicate takes, and “syntactic valence” to specify the number of terms which a verb requires. In this book we will use the term “valence” primarily in the latter (syntactic) sense.

doesn’t help.
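maybe a sketch helps, though. here is my own toy rendering of the semantic vs. syntactic valence distinction, with invented lexicon entries following the book’s role/relation pairings:

```python
# Semantic vs. syntactic valence, sketched with the book's 'donate'
# example: three semantic arguments, but only two terms (SUBJ, OBJ),
# since the recipient surfaces as an oblique (OBL).

LEXICON = {
    # verb: list of (semantic role, grammatical relation) pairs
    "snore":  [("agent", "SUBJ")],
    "slap":   [("agent", "SUBJ"), ("patient", "OBJ")],
    "donate": [("agent", "SUBJ"), ("theme", "OBJ"),  ("recipient", "OBL")],
    "give":   [("agent", "SUBJ"), ("theme", "OBJ2"), ("recipient", "OBJ")],
}

TERMS = {"SUBJ", "OBJ", "OBJ2"}  # the "direct arguments"

def semantic_valence(verb):
    """Number of semantic arguments the predicate takes."""
    return len(LEXICON[verb])

def syntactic_valence(verb):
    """Number of term (direct) arguments - the book's 'valence'."""
    return sum(1 for _, gr in LEXICON[verb] if gr in TERMS)

print(semantic_valence("donate"), syntactic_valence("donate"))  # 3 2
print(semantic_valence("give"), syntactic_valence("give"))      # 3 3
```

so donate and give have the same semantic valence but different syntactic valences, which is the whole point of the two-term/three-term contrast.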

We have already seen that some verbs can be used in more than one way. In chapter 4, for example, we saw that the verb give occurs in two different clause patterns, as illustrated in (10). We can now see that these two uses of the verb involve the same semantic roles but a different assignment of Grammatical Relations, i.e. different subcategorization. This difference is represented in (11). The lexical entry for give must allow for both of these configurations.3

(10) a John gave Mary his old radio.
     b John gave his old radio to Mary.

(11) a give < agent, theme, recipient >
                |      |       |
              SUBJ   OBJ2     OBJ
     b give < agent, theme, recipient >
                |      |       |
              SUBJ    OBJ     OBL

it seems to me that there is something wholly wrong with a theory that treats 10a-b so differently. those two sentences mean the same thing, their structure is similar, and only one word makes the difference. this word seems to just have the function of allowing for another order of the operands of the verb.

A number of languages have grammatical processes which, in effect, “change” an oblique argument into an object. The result is a change in the valence of the verb. This can be illustrated by the sentences in (19). In (19a), the beneficiary argument is expressed as an OBL, but in (19b) the beneficiary is expressed as an OBJ. So (19b) contains one more term than (19a), and the valence of the verb has increased from two to three; but there is no change in the number of semantic arguments. Grammatical operations which increase or decrease the valence of a verb are a topic of great interest to syntacticians. We will discuss a few of these operations in chapter 14.

(19) a John baked a cake for Mary.

b John baked Mary a cake.

IMO, these two have the exact same number of operands: both have 3. the word <for> just allows for a different ordering, i.e., it is a syntax-modifier.

at least, that’s one reading. 19a seems to be a less clear case of my alternative theory. one reading of 19a is that Mary was tasked with baking a cake, but John baked it for her. another reading has the same meaning as 19b.

(20) a #The young sausage likes the white dog.

b #Mary sings a white cake.

c #A small dog gives Mary to the young tree.

(21) a *John likes.

b *Mary gives the young boy.

c *The girl yawns Mary.

The examples in (20) are grammatical but semantically ill-formed – they don’t make sense.4

the footnote is: “One reason for saying that examples like (20) and (22) are grammatical, even though they sound so odd, is that it would often be possible to invent a context (e.g. in a fairy tale or a piece of science fiction) in which these sentences would be quite acceptable. This is not possible for ungrammatical sentences like those in (21).”

i can think of several contexts where 21b makes sense. think of a situation where everybody is required to give something/someone to someone. after it is mentioned that several other people give this and that, 21b follows. in that context it makes sense just fine. however, that is because the recipient is implicit, since it is unnecessary (economy principle) to mention the recipient in every single sentence or clause.

21c is interpretable if one considers “the girl” an utterance that Mary utters while yawning.

21a is almost common on Facebook. ”John likes this”, shortened to ”John likes”.

not that i think the author is wrong, i’m just being creative. :)

The famous example in (23) was used by Chomsky (1957) to show how a sentence can be grammatical without being meaningful. What makes this sentence so interesting is that it contains so many collocational clashes: something which is green cannot be colorless; ideas cannot be green, or any other color, but we cannot call them colorless either; ideas cannot sleep; sleeping is not the kind of thing one can do furiously; etc.

(23) #Colorless green ideas sleep furiously.

it is writings such as this that result in so much confusion. clearly the different <cannot>’s in the above are not about the same kind of impossibility. let’s consider them:

<something which is green cannot be colorless> this is logical impossibility. these two predicates are logically incompatible, that is, each implies the lack of the other: ∀x(Green(x) → ¬Colorless(x)). but actually this predicate has an internal negation. we can make it more explicit like this: ∀x(Green(x) → Colorful(x)), and ∀x(Colorful(x) ↔ ¬Colorless(x)).

<ideas cannot be green, or any other color, but we cannot call them colorless either; ideas cannot sleep; sleeping is not the kind of thing one can do furiously> this is semantic impossibility. it concerns the meaning of the sentence. there is no meaning, and hence nothing expressed that can be true or false. from that it follows that there is nothing that can be impossible, since impossibility implies falsity. hence, if there is something connected with that sentence that is impossible, it has to be something else.

This kind of annotated tree diagram allows us to see at once what is wrong with the ungrammatical examples in (21) above: (21b) is incomplete, as demonstrated in (34a), while (21c) is incoherent, as demonstrated in (34b).

a better pair of terms is perhaps <undersaturated> and <oversaturated>. there is nothing inconsistent about the second that isn’t also inconsistent in the first, and hence using that term is misleading. <incomplete> does capture an essential feature, which is that something is missing. the other example has something extra. one could go for <incomplete> and <overcomplete> but it sounds odd. hence my choice of different terms.

The pro-form one can be used to refer to the head noun when it is followed by an adjunct PP, as in (6a), but not when it is followed by a complement PP as in (6b).

(6) a The [student] with short hair is dating the one with long hair.

b ∗The [student] of Chemistry was older than the one of Physics.

6b seems fine to me.

There is no fixed limit on how many modifiers can appear in such a sequence. But in order to represent an arbitrarily long string of alternating adjectives and intensifiers, it is necessary to treat each such pair as a single unit. The “star” notation used in (15) is one way of representing arbitrarily long sequences of the same category. For any category X, the symbol “X∗” stands for “a sequence of any number (zero or more) of Xs.” So the symbol “AP∗” stands for “a sequence of zero or more APs.” It is easy to modify the rule in (12b) to account for examples like (14b); this analysis is shown in (15b). Under the analysis in (12a), we would need to write a more complex rule something like (15a).3 Because simplicity tends to be favored in grammatical systems, (12b) and (15b) provide a better analysis for this pattern.

(15) a NP → Det ((Adv) A)∗ N (PP)
     b NP → Det AP∗ N (PP)

for those that are wondering where this use of the asterisk comes from: it is the Kleene star, from the theory of regular languages.
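rule (15b) can even be run as an actual regular expression over a toy part-of-speech tag sequence (my own sketch; the tag inventory is invented):

```python
# Rule (15b)  NP -> Det AP* N (PP)  rendered as a regex over POS tags,
# where AP itself is (Adv) A. The tag inventory is a toy one.
import re

NP_RULE = re.compile(r"^Det (?:(?:Adv )?A )*N( PP)?$")

def is_np(tags):
    """Check whether a sequence of tags matches the NP rule."""
    return bool(NP_RULE.match(" ".join(tags)))

print(is_np(["Det", "N"]))                         # True: zero APs allowed
print(is_np(["Det", "Adv", "A", "A", "N", "PP"]))  # True: star = any number
print(is_np(["Det", "Adv", "N"]))                  # False: an Adv needs an A
```

the `*` quantifier in the regex is exactly the Kleene star of the phrase-structure notation: zero or more repetitions of the preceding unit.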

In English, a possessor phrase functions as a kind of determiner. We can see this because possessor phrases do not normally occur together with other determiners in the same NP:

(19) a the new motorcycle

b Mary’s new motorcycle

c ∗Mary’s the new motorcycle

d ∗the Mary’s new motorcycle

looks more like it is because they are using proper nouns in their example. if one used a common noun, then it works just fine:

19e: The dog’s new bone.

Another kind of evidence comes from the fact that predicate complement NPs cannot appear in certain constructions where direct objects can. For example, an object NP can become the subject of a passive sentence (44b) or of certain adjectives (like hard, easy, etc.) which require a verbal or clausal complement (44c). However, predicate complement NPs never occur in these positions, as illustrated in (45).

(44) a Mary tickled an elephant.

b An elephant was tickled (by Mary).

c An elephant is hard (for Mary) to tickle.

(45) a Mary became an actress.

b *An actress was become (by Mary).

c *An actress is hard (for Mary) to become.

45c is grammatical with the optional element in place: “An actress is hard for Mary to become.” altho it is ofc archaic in syntax.

mi amamas. ‘I am happy.’

yu amamas. ‘You (sg) are happy.’

em i amamas. ‘He/she is happy.’

yumi amamas. ‘We (incl.) are happy.’

mipela i amamas. ‘We (excl.) are happy.’

yupela i amamas. ‘You (pl) are happy.’

ol i amamas. ‘They are happy.’

it is difficult not to like this system, except for the arbitrary requirement of “i” in some places and not others. it’s clearly English-inspired. inclusive “we” is interesting: “you+me” :D

This constituent is normally labeled S′ or S̄ (pronounced “S-bar”). It contains two daughters: COMP (for “complementizer”) and S (the complement clause itself). This structure is illustrated in the tree diagram in (15), which represents a sentence containing a finite clausal complement.

how to make this fit perfectly with the other use of N-bar terminology. in the case of noun phrases, we have NP on top, then N’ (with DET and adj) and then N at the bottom. it seems that we need to introduce some analogue to NP with S. the only level left is the entire sentence. SP sounds like a contradiction in terms or oxymoron though, ”sentence phrase”.

From here.


Frankly I cannot answer your question about Lacan because I really don’t understand what he is saying. However, let me ask you, in turn, what you think about the following quotation from Wittgenstein’s Philosophical Investigations. I think it is relevant to this discussion.

We are under the illusion that what is peculiar, profound, essential in our investigation, resides in its trying to grasp the incomparable essence of language. That is, the order existing between the concepts of proposition, word, proof, truth, experience, and so on. This order is a super-order between – so to speak – super-concepts. Whereas, of course, if the words “language,” “experience,” “world,” have a use, it must be as humble a one as that of the words “table,” “lamp,” “door.” (p. 44e)


It is funny that you bring up W. in this, Ken, as he wrote most incomprehensibly! Perhaps he was doing analytic philosophy but it is certainly extremely hard to understand anything he wrote. It’s not like reading Hume, which is also hard to understand. H. is hard to understand because the texts he wrote were written 250 years ago or so. W. wrote only some 50-70 years ago and yet I can’t understand him easily. I can understand other persons from the same era just fine (Clifford, W. James, Quine, Russell, etc.).


W. wrote aphoristically (like Lichtenberg) so you have to get used to his style. But what of the passage? Do you understand that?


No, I have no clue what it means. I didn’t read PI yet so maybe that is why. I read the Tractatus.


Well, he says that philosophers should not think that words like “knowledge” or “reality” have a different kind of meaning than, and need a different kind of understanding from, ordinary words like “lamp” and “table”. “Philosophical” words are not special. Their meanings are to be discovered in how they are ordinarily used. (That does not, I think, presuppose you have read PI.)


Alright. Then why didn’t he just write what you just wrote? I suppose this is the paradigmatic thesis of ordinary language philosophy.


First of all it was in German. And second, it wasn’t his style. But I don’t think it was particularly hard to get that out of it. Yes, it is ordinary language philosophy. But, going beyond interpretation (I hope) don’t you think it is true? Why should “knowledge” (say) be treated differently from “lamp”?


I think it is. Especially for a person that hasn’t read much of W.’s works. You have read a lot more than I have.

I agree with it, yes.


There are lots of people who think that words like “knowledge” and “information” are superconcepts which have a special philosophical meaning they do not have in ordinary discourse (and which it is beneath philosophy to treat like the word, “lamp”) That’s why they are interested in what some particular philosopher means by, “knowledge”. They think there is some “incomparable essence of language” that philosophers are “trying to grasp”.


Ok. But some words do have meanings in philosophical contexts that they do not have in other, normal contexts. Think of “valid” as an example.


Yes, of course. But in that sense, “valid” is a technical term. “Knowledge” is not a technical term in the ordinary sense. It doesn’t have some deep philosophical meaning in addition to its ordinary meaning, nor is its ordinary meaning some deep meaning detached from its usual meaning. What meaning could Lacan find that was the real philosophical meaning? Where would that meaning even come from? Heidegger does the same thing. He ignores what a word means, and then finds (invents) a deep philosophical meaning for it. But he uses etymology to do that. It is wrong-headed from the word “go”. If you read Plato’s Cratylus you find how Socrates makes fun of this view of meaning (although Plato here is making fun of himself, because he really originated this idea that the meaning of a word is its essence, which is hidden).

Wittgenstein’s positive point is, of course, the ordinary language thing. But his negative point (which I think is more important for this discussion) is that terms like “knowledge” or “truth” do not have special meanings to be dug out by philosophers who are supposed to have some special faculty for spying them. Lacan has no particular insight into the essence of knowledge hidden from the rest of us which, if we understand him, will provide us with philosophical enlightenment. Why should he?


There is a risk in all of this that by excluding the idea of the ‘super concept’ in W’s sense, or insisting that it must simply have the same kind of meaning as ‘lamp’ or ‘table’ that you also exclude what is most distinctive about philosophy. Surely we can acknowledge that there is a distinction between abstract and concrete expression. ‘The lamp is on the table’ is a different kind of expression to ‘knowledge has limits’.

When we ‘discuss language’ we are on a different level of explanation to merely ‘using language’. I mean, using language, you can explain many things, especially concrete and specific things, like ‘this is how to fix a lamp’ or ‘this is how to build a table’. But when it comes to discussing language itself, we are up against a different order of problem, not least of which is that we are employing the subject of the analysis to conduct the analysis. (I have a feeling that Wittgenstein said this somewhere.)

So it is important to recognise what language is for and what it can and can’t do. There are some kinds of speculations which can be articulated and might be answerable. But there are others which you can say, but might not really be possible to answer, even though they seem very simple (such as, what is number/meaning/the nature of being). Of which Wittgenstein said, that of which we cannot speak, of that we must remain silent. So knowing what not to say must be part of this whole consideration.


“Lamp” is a term for a concrete object. “Knowledge” is a term for an abstract object. But the central point is that neither has a hidden meaning that only a philosopher can ferret out. The meanings of both are their use(s) by fluent speakers of the language. It is not necessary to go to Lacan or Nietzsche to discover what “knowledge” really means any more than it is to discover what “lamp” really means. As Wittgenstein wrote, “nothing is hidden”. Philosophy is not science. It is not necessary to go underneath the phenomena to discover what there really is. It is ironic that interpretationists accuse analytic philosophy of “scientism” when it is they who think that philosophy is a kind of science.


I interpret Wittgenstein as saying that the philosophical language-game is not a privileged language game. To say that something isn’t hidden is not to say that everyone finds it. This is just figurative language. Wittgenstein should be read by the light of Wittgenstein. His game is one more game, the game of describing the game. I interpret him as shattering the hope (for himself and those whom he persuades) for some unified authority on meaning.
Also he stressed the relationship of language and social practice. He finally took a more holistic view of language, and dropped his reductive Tractatus views. (This is not to deny the greatness of the Tractatus. Witt is one of my favorites, early and late.)
I associate Wittgenstein with a confession of the impossibility of closure. I don’t think language is capable of tying itself up.


To say that “nothing is hidden” is to say that words like “truth” or “knowledge” do not have, in addition to their ordinary everyday meanings, some secret meanings that only philosophers are able to discover. There are no secret meanings. There is no “what the word really means” that Lacan or Heidegger has discovered.



Well my reason is that a lot of what goes on in this life seems perfectly meaningless and in the true sense of the word, irrational. Many things which seem highly valued by a lot of people seem hardly worth the effort of pursuing, we live our three score years and ten, if we’re lucky, and then vanish into the oblivion from whence we came. None of it seems to make much sense to me. I am the outcome, or at least an expression, of a process which started billions of years ago inside some star somewhere. For what? Watch television? Work until I die?

That’s my reason.


Just what are you questioning? (One sense of the word, “meaningless” may well be something like “irrational”. But that is not the true sense of the word. What about all the other senses of the word, “meaningless”? ). By the way, I think that “non-rational” would be a better term than “irrational”. And, just one more thing: what would it be for what goes on in this world to be rational? If you could tell me that, then I would have a better idea of what it is you are saying when you say it is irrational or it is non-rational. What is it that it is not? What would it be for you to discover that what goes on is rational?


Have you ever looked out at life and thought ‘boy what does it all mean? Isn’t there more to it than just our little lives and personalities and the things we do and have?’ You know, asked The Big Questions. That’s really what I see philosophy as being. So now I am beginning to understand why we always seem to be arguing at cross purposes.

Dunno. Maybe I shouldn’t say this stuff. Maybe I am being too personal or too earnest.


In my opinion, it is the belief that philosophers are supposed to ask only the Big Questions that partly fuels the view that philosophy gets nowhere and is a lot of nonsense, and is a big waste of time. And that would be right if that is what philosophy is.

Where would science have got if scientists had not rolled up their sleeves and asked many little questions?


From what I know of Heidegger, I very much admire his philosophy. There are many philosophers I admire, and many of them do deal with profound questions; and I know there are many kindred spirits on the forum. But, each to his own; I don’t want to labour the point.


How about “deal with seemingly profound questions”? But one of the philosopher’s seminal jobs is to ask whether a seemingly profound question is really all that profound, what the question means, and what it presupposes is true. Philosophers should have Hume’s “tincture of scepticism” even in regard to questions.

Negation_in_English_and_Other_Languages pdf download ebook free

This book is actually very advanced for its age. It contains lots of material of interest to logicians and linguists, even those reading it today. The thing that annoys me the most is the poor quality of the scan, which makes reading a hassle. Second to that come the untranslated quotes from other languages (German, French, Greek, Latin, Danish, although Danish isn’t a problem for me, of course). A third but small annoyance is the difficulty of the reference system used.



About the existence of double negatives


My own pet theory is that neither is right; logically one negative suffices, but two or three in the same sentence cannot be termed illogical; they are simply a redundancy, that may be superfluous from a stylistic point of view, just as any repetition in a positive sentence (every and any, always and on all occasions, etc.), but is otherwise unobjectionable. Double negation arises because under the influence of a strong feeling the two tendencies specified above, one to attract the negative to the verb as nexal negative, and the other to prefix it to some other word capable of receiving this element, may both be gratified in the same sentence. But repeated negation seems to become a habitual phenomenon only in those languages in which the ordinary negative element is comparatively small in regard to phonetic bulk, as ne and n- in OE and Russian, en and n- in MHG., ou (sounded u) in Greek, s- or n- in Magyar. The insignificance of these elements makes it desirable to multiply them so as to prevent their being overlooked. Hence also the comparative infrequency of this repetition in English and German, after the fuller negatives not and nicht have been thoroughly established – though, as already stated, the logic of the schools and the influence of Latin has had some share in restricting the tendency to this particular kind of redundancy. It might, however, finally be said that it requires greater mental energy to content oneself with one negative, which has to be remembered during the whole length of the utterance both by the speaker and by the hearer, than to repeat the negative idea (and have it repeated) whenever an occasion offers itself.


seems legit



Jespersen came close to one of the Gricean maxims


If we say, according to the general rule, that “not four” means “different from four”, this should be taken with a certain qualification, for in practice it generally means, not whatever is above or below 4 in the scale, but only what is below 4, thus less than 4, something between 4 and 0, just as “not everything” means something between everything and nothing (and as “not good” means ‘inferior’, but does not comprise ‘excellent’). Thus in “He does not read three books in a year” | “the hill is not two hundred feet high” | “his income is not 200 a year” | “he does not see her once a week”.


This explains how ‘not one’ comes to be the natural expression in many languages for ‘none, no’, and ‘not one thing’ for ‘nothing’, as in OE nan = ne-an, whence none and no, OE nan thing, whence nothing, ON eingi, whence Dan. ingen, G. kein, etc. Cf. also Tennyson 261 That not one life shall be destroy’d . . . That not a worm is cloven in vain; see also p. 49. In French similarly: Pas un bruit n’interrompit le silence (‘Not a sound interrupted the silence’), etc.


When not + a numeral is exceptionally to be taken as ‘more than’, the numeral has to be strongly stressed, and generally to be followed by a more exact indication: “the hill is not ‘two hundred feet high, but three hundred” | “his income is not 200, but at least 300 a year” | Locke S. 321 Not one invention, but fifty – from a corkscrew to a machine-gun | Defoe R. 342 not once, but two or three times | Gissing R. 149 books that well merit to be pored over, not once but many a time | Benson A. 220 he would bend to kiss her, not once, not once only.


But not once or twice always means ‘several times’, as in Tennyson 220 Not once or twice in our rough island-story The path of duty was the way to glory.


In Russian, on the other hand, ne raz ‘not (a) time’, thus really without a numeral, means ‘several times, sometimes’, and in the same way ne odin ‘not one’ means ‘more than one’; corresponding phenomena are found in other languages as well, see a valuable little article by Schuchardt, An Aug. Leskien zum 4. Juli 1894 (privately printed). He rightly connects this with the use in Russian of the stronger negative ni with a numeral to signify ‘less than’: ni odin ‘not even one’.


What the exact import is of a negative quantitative indication may in some instances depend on what is expected, or what is the direction of thought in each case. While the two sentences “he spends 200 a year” and “he lives on 200 a year” are practically synonymous, everything is changed if we add not: “he doesn’t spend 200 a year” means ‘less than’; “he doesn’t live on 200 a year” means ‘more than’; because in the former case we expect an indication of a maximum, and in the latter of a minimum.


And actually the discussion continues from here; it is worth reading.


Also, normal formulations of the maxim don’t take account of the phenomenon pointed out in the last paragraph.
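Jespersen’s generalizations about negated numerals could be stated as a small defeasible rule. The sketch below is entirely my own toy model, not Jespersen’s or Grice’s formulation, and the parameter names are invented:

```python
def interpret_negated_numeral(n, expectation="maximum", stressed=False):
    """Toy default-interpretation rule for 'not n' applied to a quantity.
    - Normally 'not n' is read as 'less than n' (Jespersen's observation).
    - If the context sets up a minimum ('he doesn't live on 200 a year'),
      the reading flips to 'more than n'.
    - If the numeral is strongly stressed ('not TWO hundred, but three'),
      the reading is also 'more than n'."""
    if stressed or expectation == "minimum":
        return (">", n)
    return ("<", n)

print(interpret_negated_numeral(200))                         # ('<', 200)
print(interpret_negated_numeral(200, expectation="minimum"))  # ('>', 200)
print(interpret_negated_numeral(200, stressed=True))          # ('>', 200)
```

This makes explicit that the literal meaning (“different from n”) underdetermines the interpretation; pragmatic context picks the side of the scale.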



Negative words or formulas may in some combinations be used in such a way that the negative force is almost vanishing. There is scarcely any difference between questions like “Will you have a glass of beer?” and “Won’t you have a glass of beer?”, because the real question is “Will you, or will you not, have....”; therefore in offering one a glass both formulas may be employed indifferently, though a marked tone of surprise can make the two sentences into distinct contrasts: “Will you have a glass of beer?” then coming to mean ‘I am surprised at your wanting it’, and “Won’t you have a glass of beer?” the reverse. (In this case really is often added.)


In the same way in Dan. “Vil De ha et glas øl?” and “Vil De ikke ha et glas øl?” (‘Will you / Won’t you have a glass of beer?’). A Dutch lady once told me how surprised she was at first in Denmark at having questions like “Vil De ikke række mig saltet?” (‘Won’t you pass me the salt?’) asked her at table in a boarding-house; she took the ikke literally and did not pass the salt. Ikke is also used in indirect (reported) questions, as in Faber Stegek. 28 saa har madammen bedt Giovanni, om han ikke vil passe lidt paa barnet (‘so the madam has asked Giovanni whether he won’t look after the child a little’).


True, it doesn’t make a lot of sense; the ikke / not has almost no meaning. It seems to create a kind of “please” meaning in the utterance.



In writing the forms in n’t make their appearance about 1660 and are already frequent in Dryden’s, Congreve’s, and Farquhar’s comedies. Addison in the Spectator nr. 135 speaks of mayn’t, can’t, sha’n’t, won’t, and the like as having “very much untuned our language, and clogged it with consonants”. Swift also (in the Tatler nr. 230) brands as examples of “the continual corruption of our English tongue” such forms as cou’dn’t, ha’n’t, can’t, shan’t; but nevertheless he uses some of them very often in his Journal to Stella.






This is another of those ideas that I’ve had independently, and that it turned out others had thought of before me, by thousands of years in this case. The idea is that longer expressions of language are made out of smaller parts of language, and that the meaning of the whole is determined by the parts and their structure. This is rather close to the formulation used on SEP. Here’s the introduction from SEP:


Anything that deserves to be called a language must contain meaningful expressions built up from other meaningful expressions. How are their complexity and meaning related? The traditional view is that the relationship is fairly tight: the meaning of a complex expression is fully determined by its structure and the meanings of its constituents—once we fix what the parts mean and how they are put together we have no more leeway regarding the meaning of the whole. This is the principle of compositionality, a fundamental presupposition of most contemporary work in semantics.

Proponents of compositionality typically emphasize the productivity and systematicity of our linguistic understanding. We can understand a large—perhaps infinitely large—collection of complex expressions the first time we encounter them, and if we understand some complex expressions we tend to understand others that can be obtained by recombining their constituents. Compositionality is supposed to feature in the best explanation of these phenomena. Opponents of compositionality typically point to cases when meanings of larger expressions seem to depend on the intentions of the speaker, on the linguistic environment, or on the setting in which the utterance takes place without their parts displaying a similar dependence. They try to respond to the arguments from productivity and systematicity by insisting that the phenomena are limited, and by suggesting alternative explanations.


SEP goes on to discuss some more formal versions of the general idea:


(C) The meaning of a complex expression is determined by its structure and the meanings of its constituents.



(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.
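As a toy illustration of (C′), here is a minimal sketch of my own construction (not from SEP): a tiny arithmetic language where the meaning (denotation) of every complex expression is computed only from the meanings of its immediate constituents and how they are combined.

```python
# Toy compositional semantics: the denotation of a complex expression
# is a function of the denotations of its immediate constituents alone.
# Expressions are numbers (simple) or nested tuples like ("plus", e1, e2).

# Lexicon: meanings of the simple function words.
LEXICON = {
    "plus": lambda x, y: x + y,
    "times": lambda x, y: x * y,
}

def meaning(expr):
    """Compute the meaning of expr compositionally, in the spirit of (C')."""
    if isinstance(expr, (int, float)):   # simple expression: its own meaning
        return expr
    op, left, right = expr               # structure of the complex expression
    return LEXICON[op](meaning(left), meaning(right))

# (2 + 3) * 4 -- the meaning of the whole is fixed by parts plus structure.
print(meaning(("times", ("plus", 2, 3), 4)))  # 20
```

Once the lexicon and the combination rules are fixed, there is no leeway left for the meaning of the whole, which is exactly the “tight relationship” the principle asserts.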


SEP goes on to distinguish between a lot of different versions of this. See the article for details.

The thing I wanted to discuss was the counterexamples offered. I found none of them particularly compelling. They are based mostly on intuition pumps as far as I can tell, and I’m rather wary of such (cf. Every Thing Must Go, amazon).


Here’s SEP’s first example, using chess notation (many other game notations would also work, e.g. Taifho):


Consider the Algebraic notation for chess.[15] Here are the basics. The rows of the chessboard are represented by the numerals 1, 2, … , 8; the columns are represented by the lower case letters a, b, … , h. The squares are identified by column and row; for example b5 is at the intersection of the second column and the fifth row. Upper case letters represent the pieces: K stands for king, Q for queen, R for rook, B for bishop, and N for knight. Moves are typically represented by a triplet consisting of an upper case letter standing for the piece that makes the move and a sign standing for the square where the piece moves. There are five exceptions to this: (i) moves made by pawns lack the upper case letter from the beginning, (ii) when more than one piece of the same type could reach the same square, the sign for the square of departure is placed immediately in front of the sign for the square of arrival, (iii) when a move results in a capture an x is placed immediately in front of the sign for the square of arrival, (iv) the symbol 0-0 represents castling on the king’s side, (v) the symbol 0-0-0 represents castling on the queen’s side. + stands for check, and ++ for mate. The rest of the notation serves to make commentaries about the moves and is inessential for understanding it.

Someone who understands the Algebraic notation must be able to follow descriptions of particular chess games in it and someone who can do that must be able to tell which move is represented by particular lines within such a description. Nonetheless, it is clear that when someone sees the line Bb5 in the middle of such a description, knowing what B, b, and 5 mean will not be enough to figure out what this move is supposed to be. It must be a move to b5 made by a bishop, but we don’t know which bishop (not even whether it is white or black) and we don’t know which square it is coming from. All this can be determined by following the description of the game from the beginning, assuming that one knows what the initial configurations of figures are on the chessboard, that white moves first, and that afterwards black and white move one after the other. But staring at Bb5 itself will not help.


It is exactly the bolded lines I don’t accept. Why must one be able to know that from the meaning alone? Knowing the meaning of expressions does not always make it easy to know what a given noun (or NP) refers to. In this case “B” is a noun referring to a bishop; which one? Well, who knows. There are lots of examples of words referring to different things (people, usually) when used in different contexts. For instance, the word “me” refers to the source of the expression, but when an expression is used by different speakers, “me” refers to different people; cf. indexicals (SEP and Wiki).
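The indexical analogy can be made concrete. A hypothetical sketch (the function names, data layout, and mobility sets are all my own invention, and move legality is grossly simplified): parsing “Bb5” compositionally yields a piece type and a destination square; which bishop is denoted is then resolved from context (the game state), just as “me” is resolved from the speaker.

```python
def parse_move(token):
    """Compositional part: piece type and destination from the token alone."""
    piece = {"K": "king", "Q": "queen", "R": "rook",
             "B": "bishop", "N": "knight"}
    if token[0] in piece:
        return piece[token[0]], token[1:]   # e.g. "Bb5" -> ("bishop", "b5")
    return "pawn", token                    # pawn moves lack the letter

def resolve(move, game_state):
    """Context-dependent part: pick the unique piece of that type which can
    reach the square, given the game so far (like resolving an indexical)."""
    kind, square = move
    candidates = [p for p in game_state
                  if p["kind"] == kind and square in p["reachable"]]
    assert len(candidates) == 1, "the notation presupposes a unique resolution"
    return candidates[0]

# Hypothetical mid-game state: only one bishop can currently reach b5.
state = [
    {"kind": "bishop", "color": "white", "at": "f1", "reachable": {"b5", "c4"}},
    {"kind": "bishop", "color": "white", "at": "c1", "reachable": {"d2", "e3"}},
]
move = parse_move("Bb5")
print(move)                        # ('bishop', 'b5')
print(resolve(move, state)["at"])  # f1
```

The split mirrors the defense in the quote below: the lexical meaning of “B” is thin and compositional, while picking out the particular bishop is a matter of reference in context.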


Of course, my thoughts here are not particularly unique, and SEP mentions the defense that I also thought of:


The second moral is that—given certain assumptions about meaning in chess notation—we can have productive and systematic understanding of representations even if the system itself is not compositional. The assumptions in question are that (i) the description I gave in the first paragraph of this section fully determines what the simple expressions of chess notation mean and also how they can be combined to form complex expressions, and that (ii) the meaning of a line within a chess notation determines a move. One can reject (i) and argue, for example, that the meaning of B in Bb5 contains an indexical component and within the context of a description, it picks out a particular bishop moving from a particular square. One can also reject (ii) and argue, for example, that the meaning of Bb5 is nothing more than the meaning of ‘some bishop moves from somewhere to square b5’—utterances of Bb5 might carry extra information but that is of no concern for the semantics of the notation. Both moves would save compositionality at a price. The first complicates considerably what we have to say about lexical meanings; the second widens the gap between meanings of expressions and meanings of their utterances. Whether saving compositionality is worth either of these costs (or whether there is some other story to be told about our understanding of the Algebraic notation) is by no means clear. For all we know, Algebraic notation might be non-compositional.


I also don’t agree that it widens the gap between meanings of expressions and meanings of utterances. It has to do with referring to stuff, not with meaning in itself.

4.2.1 Conditionals

Consider the following minimal pair:

(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.

A good translation of (1) into a first-order language is (1′). But the analogous translation of (2) would yield (2′), which is inadequate. A good translation for (2) would be (2″) but it is unclear why. We might convert ‘¬∃’ to the equivalent ‘∀¬’ but then we must also inexplicably push the negation into the consequent of the embedded conditional.

(1′) ∀x(x works hard → x will succeed)
(2′) ¬∃x(x goofs off → x will succeed)
(2″) ∀x(x goofs off → ¬(x will succeed))

This gives rise to a problem for the compositionality of English, since it seems rather plausible that the syntactic structure of (1) and (2) is the same and that ‘if’ contributes some sort of conditional connective—not necessarily a material conditional!—to the meaning of (1). But it seems that it cannot contribute just that to the meaning of (2). More precisely, the interpretation of an embedded conditional clause appears to be sensitive to the nature of the quantifier in the embedding sentence—a violation of compositionality.[16]

One response might be to claim that ‘if’ does not contribute a conditional connective to the meaning of either (1) or (2)—rather, it marks a restriction on the domain of the quantifier, as the paraphrases under (1″) and (2″) suggest:[17]

(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.

But this simple proposal (however it may be implemented) runs into trouble when it comes to quantifiers like ‘most’. Unlike (3′), (3) says that those students (in the contextually given domain) who succeed if they work hard are most of the students (in the contextually relevant domain):

(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.

The debate whether a good semantic analysis of if-clauses under quantifiers can obey compositionality is lively and open.[18]


This doesn’t seem particularly difficult to me. When I look at an “if-then” clause, the first thing I do before formalizing is turn it around so that “if” comes first, and I also insert any missing “then”. With their example:


(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.


this results in:


(1)* If he works hard, then everyone will succeed.
(2)* If he goofs off, then no one will succeed.


Both “everyone” and “no one” express a universal quantifier, ∀; the second one has a negation as well. We can translate both to something like “all”, rendering the “no” of “no one” as a “not”. Then we might get:


(1)** If he works hard, then all will succeed.
(2)** If he goofs off, then all will not succeed.


Then, we move the quantifier to the beginning and insert a pronoun, “he”, to match. Then we get something like:


(1)*** For any person, if he works hard, then he will succeed.
(2)*** For any person, if he goofs off, then he will not succeed.


These are equivalent to SEP’s


(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.
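The claimed inadequacy of SEP’s (2′), and its non-equivalence with (2″), can be checked mechanically. Here is a sketch of my own (not from SEP) that brute-forces every interpretation of “goofs off” (G) and “succeeds” (U) over a small finite domain:

```python
from itertools import combinations

DOMAIN = [0, 1, 2]

def subsets(xs):
    """All subsets of xs, used as candidate extensions of a predicate."""
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def two_prime(G, U):      # (2'):  ¬∃x (Gx → Ux), with a material conditional
    return not any((x not in G) or (x in U) for x in DOMAIN)

def two_dbl_prime(G, U):  # (2''): ∀x (Gx → ¬Ux)
    return all((x not in G) or (x not in U) for x in DOMAIN)

# Find interpretations where the two formalizations disagree.
diffs = [(G, U) for G in subsets(DOMAIN) for U in subsets(DOMAIN)
         if two_prime(G, U) != two_dbl_prime(G, U)]
print(len(diffs) > 0)   # True: (2') and (2'') are not equivalent
# Simplest disagreement: G = ∅ makes (2'') vacuously true but (2') false.
print(two_prime(set(), set()), two_dbl_prime(set(), set()))  # False True
```

This confirms SEP’s point that (2′) is not a mere notational variant of (2″): (2′) actually says that everyone goofs off and fails, which is far too strong.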


The difference between (3) and (3′) is interesting, not because of relevance to my method (I think), but because it deals with something beyond first-order logic. Quantifier logic, I suppose? I did a brief Google and Wiki search, but didn’t find what I was looking for. I also tried Graham Priest’s Introduction to Non-Classical Logic, also without luck.


So here goes a system I just invented to formalize the sentences:


(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.


Capital Greek letters are set variables. # is a function that returns the cardinality of a set.


(3)* (∃Γ)(∃Δ)(∀x)(∀y)(Sx↔x∈Γ∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ)→(Wy→Uy))


In English: There is a set, gamma, and there is another set, delta, and for any x, and for any y: x is a student iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then (if y works hard, then y will succeed).


Quite complicated in writing, but the idea is not that complicated. It should be possible to find some simplified writing convention for easier expression of this way of formalizing it.


(3′)* (∃Γ)(∃Δ)(∀x)(∀y)(((Sx∧Wx)↔x∈Γ)∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ→Uy))


In English: there is a set, gamma, and there is another set, delta, and for any x, and for any y: (x is a student and x works hard) iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then y will succeed.


To my logician’s intuition, these are not equivalent, but proving this is left as an exercise to the reader, if he can figure out a way to do so in this set theory + predicate logic system (I might try later).
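The non-equivalence can at least be settled semantically by brute force. A sketch of my own encoding, reading (3) as “most students x satisfy (Wx → Ux)” and (3′) as “most students in W satisfy U”, searching a three-element domain for a countermodel:

```python
from itertools import combinations

STUDENTS = {0, 1, 2}   # toy domain: everyone is a student

def subsets(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(sorted(xs), r)]

def most(xs, pred):
    """'Most xs are pred': strictly more than half of xs satisfy pred."""
    return sum(1 for x in xs if pred(x)) > len(xs) / 2

def sent3(W, U):   # (3): most students succeed if they work hard
    return most(STUDENTS, lambda x: (x not in W) or (x in U))

def sent3p(W, U):  # (3'): most students who work hard succeed
    workers = STUDENTS & W
    return most(workers, lambda x: x in U)

countermodels = [(W, U) for W in subsets(STUDENTS) for U in subsets(STUDENTS)
                 if sent3(W, U) != sent3p(W, U)]
print(bool(countermodels))                     # True: (3) and (3') come apart
# e.g. only student 0 works hard and nobody succeeds: (3) is true
# (2 of 3 students vacuously satisfy W -> U) while (3') is false.
print(sent3({0}, set()), sent3p({0}, set()))   # True False
```

So the intuition checks out under this reading: restricting the domain (3′) is not the same as quantifying “most” over a conditional (3), exactly the point SEP makes about ‘most’.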


4.2.2 Cross-sentential anaphora

Consider the following minimal pair from Barbara Partee:


(4) I dropped ten marbles and found all but one of them. It is probably under the sofa.

(5) I dropped ten marbles and found nine of them. It is probably under the sofa.


There is a clear difference between (4) and (5)—the first one is unproblematic, the second markedly odd. This difference is plausibly a matter of meaning, and so (4) and (5) cannot be synonyms. Nonetheless, the first sentences are at least truth-conditionally equivalent. If we adopt a conception of meaning where truth-conditional equivalence is sufficient for synonymy, we have an apparent counterexample to compositionality.


I don’t accept that premise either; I haven’t since I read Swartz and Bradley years ago. Sentences like


“Canada is north of Mexico”

“Mexico is south of Canada”


are logically equivalent, but are not synonymous. The concept of being north of and the concept of being south of are not the same, even though they stand in a kind of reverse relation. That is to say, xR1y↔yR2x. Not sure what to call such relations; it’s symmetry plus substitution of relations.
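For what it’s worth, a pair satisfying xR1y ↔ yR2x is what logicians call a pair of converse relations. A tiny sketch (the latitude values are toy figures of my own, just to make the relations decidable):

```python
# north_of and south_of as an illustrative converse pair: R2(y, x) == R1(x, y).
# Approximate central latitudes, degrees north (illustrative values only).
LAT = {"Canada": 56, "Mexico": 23}

def north_of(a, b):
    return LAT[a] > LAT[b]

def south_of(a, b):
    return LAT[b] > LAT[a]   # defined as the converse of north_of

places = list(LAT)
# The biconditional xR1y <-> yR2x holds for every pair of places.
print(all(north_of(a, b) == south_of(b, a) for a in places for b in places))  # True
```

The two predicates are extensionally interdefinable, which is why the sentences are logically equivalent even though the concepts differ.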


Sentences like


“Everything that is round, has a shape.”

“Nothing is not identical to itself.”


are logically equivalent but don’t mean the same. And so on; cf. Swartz and Bradley 1979, and SEP on theories of meaning.


Interesting though these cases might be, it is not at all clear that we are faced with a genuine challenge to compositionality, even if we want to stick with the idea that meanings are just truth-conditions. For it is not clear that (5) lacks the normal reading of (4)—on reflection it seems better to say that the reading is available even though it is considerably harder to get. (Contrast this with an example due to—I think—Irene Heim: ‘They got married. She is beautiful.’ This is like (5) because the first sentence lacks an explicit antecedent for the pronoun in the second. Nonetheless, it is clear that the bride is said to be beautiful.) If the difference between (4) and (5) is only this, it is no longer clear that we must accept the idea that they must differ in meaning.


I agree that (4) and (5) mean the same, even if (5) is a rather bad way to express the thing one would normally express with something like (4).


In their bride example, one can also consider same-sex weddings, where “he” and “she” similarly fail to refer to a specific person out of the two newlyweds.

4.2.3 Adjectives

Suppose a Japanese maple leaf, turned brown, has been painted green. Consider someone pointing at this leaf uttering (6):


(6) This leaf is green.


The utterance could be true on one occasion (say, when the speaker is sorting leaves for decoration) and false on another (say, when the speaker is trying to identify the species of tree the leaf belongs to). The meanings of the words are the same on both occasions and so is their syntactic composition. But the meaning of (6) on these two occasions—what (6) says when uttered in these occasions—is different. As Charles Travis, the inventor of this example, puts it: “…words may have all the stipulated features while saying something true, but also while saying something false.”[20]


At least three responses offer themselves. One is to deny the relevant intuition. Perhaps the leaf really is green if it is painted green and (6) is uttered truly in both situations. Nonetheless, we might be sometimes reluctant to make such a true utterance for fear of being misleading. We might be taken to falsely suggest that the leaf is green under the paint or that it is not painted at all.[21] The second option is to point out that the fact that a sentence can say one thing on one occasion and something else on another is not in conflict with its meaning remaining the same. Do we have then a challenge to compositionality of reference, or perhaps to compositionality of content? Not clear, for the reference or content of ‘green’ may also change between the two situations. This could happen, for example, if the lexical representation of this word contains an indexical element.[22] If this seems ad hoc, we can say instead that although (6) can be used to make both true and false assertions, the truth-value of the sentence itself is determined compositionally.[23]


I’m going to bite the bullet again, and just say that the sentence means the same on both occasions. What is different is that in different contexts, one might interpret the same sentence as expressing different propositions. This is not new; it featured before as well, although this time without indexicals. The reason is that although the sentence means the same, one is guessing at which proposition the utterer meant to express with his sentence. Context helps with that.

4.2.4 Propositional attitudes

Perhaps the most widely known objection to compositionality comes from the observation that even if e and e′ are synonyms, the truth-values of sentences where they occur embedded within the clausal complement of a mental attitude verb may well differ. So, despite the fact that ‘eye-doctor’ and ‘ophthalmologist’ are synonyms (7) may be true and (8) false if Carla is ignorant of this fact:


(7) Carla believes that eye doctors are rich.
(8) Carla believes that ophthalmologists are rich.


So, we have a case of apparent violation of compositionality; cf. Pelletier (1994).

There is a sizable literature on the semantics of propositional attitude reports. Some think that considerations like this show that there are no genuine synonyms in natural languages. If so, compositionality (at least the language-bound version) is of course vacuously true. Some deny the intuition that (7) and (8) may differ in truth-conditions and seek explanations for the contrary appearance in terms of implicature.[24] Some give up the letter of compositionality but still provide recursive semantic clauses.[25] And some preserve compositionality by postulating a hidden indexical associated with ‘believe’.[26]


I’m not entirely sure what to do about these propositional attitude reports, but I’m inclined to bite the bullet. Perhaps I will change my mind after I have read the two SEP articles about the matter.


Idiomatic language

The SEP article really didn’t have a proper discussion of idiomatic language use, say, phrases like “don’t mention it”, which can either mean what it literally (i.e., by composition) means, or its idiomatic meaning: it is used as a response to being thanked, suggesting that the help given was no trouble (same source).

Whether idioms are a counterexample depends on what one takes “complex expression” to mean. Recall the principle:


(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.


What is a complex expression? Is any given complex expression made up of either complex expressions themselves or simple expressions? Idiomatic expressions really just are expressions whose meaning is not determined by their parts. One might thus actually take them to be simple expressions themselves. If one does, then the composition principle is pretty close to trivially true.


If one does not take idiomatic expressions to be simple expressions, but complex ones, then the principle of composition is trivially false. I don’t consider that a huge problem: it generally holds, and it explains the things it is required to explain just fine even when it isn’t universally true.


One can also note that idiomatic expressions can be used as parts of larger expressions. Depending on how one thinks about idiomatic expressions, and about constituents, larger expressions which have idiomatic expressions as parts might be trivially non-compositional. This is the case if one takes constituents to mean smallest parts: since the idiomatic expression’s meaning cannot be determined from syntax plus its smallest parts, neither can that of the larger expression. If one instead takes constituents to mean smallest decompositional parts, then idiomatic expressions do not trivially make the larger expressions they are part of non-compositional. Consider the sentence:


“He is pulling your leg”


the sentence is compositional, since its meaning is determinable from “he”, “is”, “pulling your leg”, the syntax, and the meaning function.
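The “idioms as simple expressions” move can be sketched as a lexicon whose entries may be multi-word: the meaning function first tries longest-match lookup, and only then composes. All entries and glosses below are illustrative placeholders of my own, not a real semantics:

```python
# Toy lexicon: idioms are stored whole, as if they were simple expressions.
LEXICON = {
    ("pulling", "your", "leg"): "joking with you",   # idiom, atomic entry
    ("he",): "the male referred to",
    ("is",): "is",
}

def meaning(words):
    """Greedy longest-match segmentation, then naive concatenative composition."""
    parts, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):    # try the longest span first
            key = tuple(words[i:j])
            if key in LEXICON:
                parts.append(LEXICON[key])
                i = j
                break
        else:
            parts.append(words[i])            # unknown word: stands for itself
            i += 1
    return " ".join(parts)

print(meaning(["he", "is", "pulling", "your", "leg"]))
# -> "the male referred to is joking with you"
```

Treating “pulling your leg” as one lexical entry is exactly what makes the larger sentence compositional over its (now coarser) constituents.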


There is a reason I bring up this detail: there is another kind of idiomatic use of language that apparently hasn’t been discussed much in the literature, judging from SEP not mentioning it. It is the use of prepositions. Surely, many prepositions are used in perfectly compositional ways with other words, as in


“the cat is on the mat”


where “on” has the usual meaning of being on top of (something), or being above and resting upon, or some such (it is difficult to avoid circular definitions of prepositions).


However, consider the use of “on” in


“he spent all his time on the internet”


clearly “on” does not mean the same here as above; it doesn’t seem to mean much at all. It is a kind of indefinite relationship. Apparently aware of this fact (and because languages differ in which prepositions are used in such cases), the designer of Esperanto added a preposition for any indefinite relation to the language (“je”). Some languages have lots of such idiomatic preposition+noun phrases, and they have to be learned by heart exactly the same way as the idiomatic expressions mentioned earlier, exactly because they are idiomatic expressions.


As an illustration: in Danish, if one is on the island of Funen, one is “på Fyn” (‘on Funen’), but if one is on the mainland, one is “i Jylland” (‘in Jutland’). I think such usage of prepositions should be considered idiomatic.



I just wanted to look up some stuff on the questions that a teacher had posed. Since I don’t actually have the book, and since one can’t search properly in paper books, I googled around instead, and of course ended up at Wikipedia…


and it took off as usual. Here are the tabs I ended up with (36 tabs):



and with three more longer texts to consume over the next day or so: (which i had discovered independently) (long overdue)


And quite a few other longer texts in pdf form also to be read in the next few days.

Victoria Fromkin, Robert Rodman, Nina Hyams – An Introduction to Language

I thought I’d better read a linguistics textbook before I start studying it formally. Who would want to look like a noob? ;)

I have not read any other textbook on this subject, but I think it was a fairly typical, okay-ish textbook. Many of its faults are mentioned below in this long ‘review’.

Chapter 1

In the Renaissance a new middle class emerged who wanted their children to speak the dialect of the “upper” classes. This desire led to the publication of many prescriptive grammars. In 1762 Bishop Robert Lowth wrote A Short Introduction to English Grammar with Critical Notes. Lowth prescribed a number of new rules for English, many of them influenced by his personal taste. Before the publication of his grammar, practically everyone—upper-class, middle-class, and lower-class—said I don’t have none and You was wrong about that. Lowth, however, decided that “two negatives make a positive” and therefore one should say I don’t have any; and that even when you is singular it should be followed by the plural were. Many of these prescriptive rules were based on Latin grammar and made little sense for English. Because Lowth was influential and because the rising new class wanted to speak “properly,” many of these new rules were legislated into English grammar, at least for the prestige dialect—that variety of the language spoken by people in positions of power.

The view that dialects that regularly use double negatives are inferior cannot be justified if one looks at the standard dialects of other languages in the world. Romance languages, for example, use double negatives, as the following examples from French and Italian show:

French: Je ne veux parler avec personne.
I not want speak with no-one.

Italian: Non voglio parlare con nessuno.
not I-want speak with no-one.

English translation: “I don’t want to speak with anyone.”

Lowth seems to have done a good thing with his reasoning, which was obviously inspired by math: multiplying two negatives does give a positive (-1·-1=+1). The underlying reason is logic, although predicate logic hadn’t been invented in his time (i.e., the 1700s).

Formalizing the double-negative sentence “I don’t have none” yields something like this: ¬∃x¬Hix — it is not the case that there is something such that i dont have it. Which is equivalent to: ∀xHix — for any thing, i have that thing [i.e. i have everything]. Ofc, it may seem that im begging the question with this remark, but im not: the point is just that the formalization wud be closer to the natural language, which is always a good thing.
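The equivalence appealed to here can be checked mechanically. A minimal sketch in Python (the domain and the extension of H are my own toy assumptions, not from the book): it brute-forces every interpretation of H over a three-element domain and confirms that ¬∃x¬H(x) and ∀xH(x) never come apart.

```python
from itertools import chain, combinations

# Toy setup (my assumption): H(x) means "i have x" over a small finite domain.
domain = ["a", "b", "c"]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for extension in powerset(domain):            # every possible extension of H
    H = set(extension)
    not_exists_not = not any(x not in H for x in domain)   # ¬∃x ¬H(x)
    forall = all(x in H for x in domain)                   # ∀x H(x)
    assert not_exists_not == forall

print("equivalent on all", 2 ** len(domain), "interpretations")
```

Since the domain is finite, this exhausts all 2³ interpretations, so the check is a genuine (if tiny) proof of the equivalence for this domain size.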

Furthermore, his rule made the language simpler, as one no longer had to needlessly inflect the frase “anyone” into its negative form “no one”. Simpler languages are better if they have the same expressive power. Doing away with a needless inflection is good per definition: it makes the language simpler without losing expressive power.

He was wrong about the thing with “you was”. It wud have been nice if it had stayed that way. Then english cud have begun moving towards the simplicity of verb conjugation in scandinavian.

When we say in later chapters that a sentence is grammatical we mean that it
conforms to the rules of the mental grammar (as described by the linguist); when
we say that it is ungrammatical, we mean it deviates from the rules in some way.
If, however, we posit a rule for English that does not agree with your intuitions
as a speaker, then the grammar we are describing differs in some way from the
mental grammar that represents your linguistic competence; that is, your lan-
guage is not the one described. No language or variety of a language (called a
dialect) is superior to any other in a linguistic sense. Every grammar is equally
complex, logical, and capable of producing an infinite set of sentences to express
any thought. If something can be expressed in one language or one dialect, it
can be expressed in any other language or dialect. It might involve different
means and different words, but it can be expressed. We will have more to say
about dialects in chapter 10. This is true as well for languages of technologically
underdeveloped cultures. The grammars of these languages are not primitive or
ill formed in any way. They have all the richness and complexity of the gram-
mars of languages spoken in technologically advanced cultures.

Stupid relativism. Of course some dialects and languages are superior to others! The awful german grammar system is much inferior to the simpler scandinavian systems or the english system. It is more difficult to say which of those systems is superior to which. English has gotten rid of grammatical gender (good!) but retains pointless verb conjugations (bad!). In scandinavian there are grammatical genders (bad, but only 2, not 3 as in german) but much less pointless verb conjugation (good!).

Why do the authors write this relativism nonsense? They dislike language puritanists:

Today our bookstores are populated with books by language purists attempt-
ing to “save the English language.” They criticize those who use  enormity to
mean “enormous” instead of “monstrously evil.” But languages change in the
course of time and words change meaning. Language change is a natural pro-
cess, as we discuss in chapter 11. Over time enormity was used more and more
in the media to mean “enormous,” and we predict that now that President
Barack Obama has used it that way (in his victory speech of November 4, 2008),
that usage will gain acceptance. Still, the “saviors” of the English language will
never disappear. They will continue to blame television, the schools, and even
the National Council of Teachers of English for failing to preserve the standard
language, and are likely to continue to dis (oops, we mean disparage) anyone
who suggests that African American English (AAE)4 and other dialects are via-
ble, complete languages.
In truth, human languages are without exception fully expressive, complete,
and logical, as much as they were two hundred or two thousand years ago.
Hopefully (another frowned-upon usage), this book will convince you that all
languages and dialects are rule-governed, whether spoken by rich or poor, pow-
erful or weak, learned or illiterate. Grammars and usages of particular groups
in society may be dominant for social and political reasons, but from a linguistic
(scientific) perspective they are neither superior nor inferior to the grammars
and usages of less prestigious members of society.

They are right to be annoyed at the purists, but they are wrong to completely abandon prescriptive grammar because of it. (Baby, bathwater.)

To hold that animals communicate by systems qualitatively different from
human language systems is not to claim human superiority. Humans are not
inferior to the one-celled amoeba because they cannot reproduce by splitting
in two; they are just different sexually. They are not inferior to hunting dogs,
whose sense of smell is far better than that of human animals. As we will discuss
in the next chapter, the human language ability is rooted in the human brain,
just as the communication systems of other species are determined by their bio-
logical structure. All the studies of animal communication systems, including
those of primates, provide evidence for Descartes’ distinction between other ani-
mal communication systems and the linguistic creative ability possessed by the
human animal.

More relativism. So, humans are not inferior to dogs with regards to smelling.. they are just.. olfactory challenged?

The thing with reproduction is harder. Asexual and (bi)sexual reproduction both have some advantages and disadvantages. Cellular division wud obviously not work for humans (we are too complex), but asexual reproduction might work somewhat. We get to try it out soon when we start cloning people. Im looking forward to when we start digging up the graves of past geniuses to make a clone of them, i.e., harvest some DNA and insert it into an egg, and put that egg into a woman.

In our understanding of the world we are certainly not “at the mercy of what-
ever language we speak,” as Sapir suggested. However, we may ask whether the
language we speak influences our cognition in some way. In the domain of color
categorization, for example, it has been shown that if a language lacks a word
for red, say, then it’s harder for speakers to reidentify red objects. In other words,
having a label seems to make it easier to store or access information in memory.
Similarly, experiments show that Russian speakers are better at discriminating
light blue (goluboy) and dark blue (siniy) objects than English speakers, whose
language does not make a lexical distinction between these categories. These
results show that words can influence simple perceptual tasks in the domain
of color discrimination. Upon reflection, this may not be a surprising finding.
Colors exist on a continuum, and the way we segment into “different” colors
happens at arbitrary points along this spectrum.
Because there is no physical
motivation for these divisions, this may be the kind of situation where language
could show an effect.

But this is simply not true. The segmentations are not at all arbitrary. It is strange that the authors claim this, as they just reviewed information from a language that segments colors into two categories: light and dark colors. These are not arbitrary categories. I learned about this from Lakoff’s Women, Fire, and Dangerous Things (which is hosted somewhere on my site), but see also:

Chapter 2

Additional evidence regarding hemispheric specialization is drawn from Japa-
nese readers. The Japanese language has two main writing systems. One system,
kana, is based on the sound system of the language; each symbol corresponds to
a syllable. The other system, kanji, is ideographic; each symbol corresponds to
a word. (More about this in chapter 12 on writing systems.) Kanji is not based
on the sounds of the language. Japanese people with left-hemisphere damage
are impaired in their ability to read kana, whereas people with right-hemisphere
damage are impaired in their ability to read kanji. Also, experiments with unim-
paired Japanese readers show that the right hemisphere is better and faster than
the left hemisphere at reading kanji, and vice versa.

This is pretty cool! Even better, it fits with the data from the last book i read:

Visual memory is not normally tested in intelligence tests. There have been four studies of the
visual memory of the Japanese, the results of which are summarized in Table 10.7. Row 1
gives a Japanese IQ of 107 for 5-10-year-olds on the MFFT calculated from error scores com-
pared with an American sample numbering 2,676. The MFFT consists of the presentation of
drawings of a series of objects, e.g., a boat, hen, etc. that have to be matched to an identical
drawing among several that are closely similar. The task entails the memorization of the de-
tails of the drawings in order to find the perfect match. Performance on the task correlates
0.38 with the performance scale of the WISC (Plomin and Buss, 1973), so that it is a weak
test of visualization ability and general intelligence as well as a test of visual memory. Row 2
gives a visual memory IQ of 105 for ethnic  Japanese Americans compared with American
Europeans on two tests of visual memory consisting of the presentation of 20 objects for 25
seconds and then removed, and the task was to remember and rearrange their positions. Row 3
shows a visual memory IQ of 110 obtained by comparing a sample of Japanese high school
and university students with a sample of 52 European students at University College, Dublin.
Row 4 shows a visual memory IQ of 113 for the visual reproduction subtests of the Wechsler
Memory Scale-Revised obtained from the Japanese standardization of the test compared with
the American standardization sample. The test involves the drawing from memory of geomet-
ric designs presented for 10 seconds. The authors suggest that the explanation for the Japanese
superiority may be that Japanese children learn kanji, the Japanese idiographic script, and this
develops visual memory capacity. However, this hypothesis was apparently disproved by the
Flaherty and Connolly study (1996) whose results are shown in row 2. Some of the ethnic
Japanese American participants had a knowledge of kanji, while others did not, and there was
no difference in visual memory between those who knew and those who did not know kanji,
disproving the theory that the advantage of East Asians on visualization tasks arises from their
practice on visualizing idiographic scripts. (Richard Lynn, Race differences in intelligence, p. 94)

It fits. Why else wud those people choose a very visual writing system instead of a more sound-focused (i.e. verbal) one? Tests also show that east asians are worse at verbal tasks. This makes perfect sense given their writing system.

Chapter 3

In the foregoing dialogue, Humpty Dumpty is well aware that the prefix un-
means “not,” as further shown in the following pairs of words:
A —————– B
desirable —— undesirable
likely ———- unlikely
inspired ——- uninspired
happy ——— unhappy
developed—– undeveloped
sophisticated – unsophisticated

Thousands of English adjectives begin with un-. If we assume that the most
basic unit of meaning is the word, what do we say about parts of words like
un-, which has a fixed meaning? In all the words in the B column, un- means
the same thing—“not.” Undesirable means “not desirable,” unlikely means “not
likely,” and so on. All the words in column B consist of at least two meaningful
units: un + desirable, un + likely, un + inspired, and so on.

The authors are again wrong. The un prefix does not mean “not” in these examples! An undesirable person is more than just someone who isnt desirable; it is someone who is, well, positively undesirable, someone one wants to avoid. Similarly for likely/unlikely. When one says that something is unlikely, one is saying more than just that it is not likely. One is saying that it has a low probability of happening. The difference here is that the event cud be neither likely nor unlikely, i.e. have a probability around .5 (or whatever, depending on context). An unhappy person is someone who is sad or depressed, not just someone who isnt happy. A neutral person is neither happy nor unhappy. An example of a word where the un prefix has the simple meaning of negation is something like unmarried, which really does mean only “not married”. The un prefix in many if not all of their examples has the function of reversing the quality in question, not negating it.
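The contrast between reversing a quality and merely negating it can be made concrete with a toy model. A sketch (the three-point scale and the predicate names are my own assumptions): on a scale with a neutral middle, “not happy” (contradictory negation) is true of the neutral state, while “unhappy” (contrary negation) is not.

```python
# Toy three-point scale (my assumption, not the book's analysis):
scale = ["unhappy", "neutral", "happy"]

def is_happy(state):
    return state == "happy"

def is_unhappy(state):
    # contrary negation: picks out the opposite pole, not mere absence
    return state == "unhappy"

for state in scale:
    # column 2: "not happy" (contradictory); column 3: "unhappy" (contrary)
    print(state, (not is_happy(state)), is_unhappy(state))
```

The middle of the scale exposes the difference: the neutral state satisfies “not happy” but not “unhappy”, so the two negations are not interchangeable.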

I have pointed this out before, but it was in a forum post on FRDB where i am now banned and therefore cannot search using the built-in search tool.

Chapter 4

Whether a verb takes a complement or not depends on the properties of the
verb. For example, the verb find is a transitive verb. A transitive verb requires an
NP complement (direct object), as in The boy found the ball, but not *The boy
found, or *The boy found in the house. Some verbs like eat are optionally tran-
sitive. John ate and John ate a sandwich are both grammatical.
Verbs select different kinds of complements. For example, verbs like put and
give take both an NP and a PP complement, but cannot occur with either alone:

Sam put the milk in the refrigerator.
*Sam put the milk.
Robert gave the film to his client.
*Robert gave to his client.

Sleep is an intransitive verb; it cannot take an NP complement.
Michael slept.
*Michael slept a fish.

What about: “Sam puts out.” (see meaning #6) That lacks an NP and is grammatical. And how about: “Robert gave a talk.” (see meaning #2) That lacks a PP and is grammatical. It seems that the authors shud have chosen some better example verbs.

Chapter 5

For most sentences it does not make sense to say that they are always true
or always false. Rather, they are true or false in a given situation, as we pre-
viously saw with  Jack swims. But a restricted number of sentences are indeed
always true regardless of the circumstances. They are called  tautologies. (The
term analytic is also used for such sentences.) Examples of tautologies are sen-
tences like Circles are round or A person who is single is not married. Their
truth is guaranteed solely by the meaning of their parts and the way they are
put together. Similarly, some sentences are always false. These are called contra-
dictions. Examples of contradictions are sentences like Circles are square or A
bachelor is married.

Not entirely correct. Analytic sentences are noncontingent sentences, not just noncontingently true sentences.

Later on they write:

The following sentences are either tautologies (analytic), contradictions, or
situationally true or false.

Indicating that they think analytic refers only to noncontingently true propositions/sentences. Also, they shud perhaps have studied some more filosofy, so that they wudn’t have to rely on the homemade term situationally true when there already exists a standard term for this, namely contingently true.

Much of what we know is deduced from what people say alongside our obser-
vations of the world. As we can deduce from the quotation, Sherlock Holmes
took deduction to the ultimate degree. Often, deductions can be made based on
language alone.

Sadly, the authors engage in the common practice of referring to what Sherlock Holmes did as “deduction”. It wasn’t. It was mostly abduction, aka inference to the best explanation.

Generally, entailment goes only in one direction. So while the sentence Jack
swims beautifully entails Jack swims, the reverse is not true. Knowing merely that
Jack swims is true does not necessitate the truth of Jack swims beautifully. Jack
could be a poor swimmer. On the other hand, negating both sentences reverses
the entailment. Jack doesn’t swim entails Jack doesn’t swim beautifully.

They are not negating it properly. They are using what i have earlier called short-form negation. Compare:

“Jack doesn’t swim” (∃!x)(x = j ∧ ¬Sx)
“It is not the case that Jack swims” ¬(∃!x)(x = j ∧ Sx)

These two do not mean the same, strictly speaking. And the distinction does sometimes matter. The first entails that Jack exists and the second does not. This matters when one is talking about sentences such as “The current king of France is bald”. I have explained this before.

The notion of entailment can be used to reveal knowledge that we have about
other meaning relations. For example, omitting tautologies and contradictions,
two sentences are  synonymous (or paraphrases) if they are both true or both
false with respect to the same situations. Sentences like Jack put off the meeting
and Jack postponed the meeting are synonymous, because when one is true the
other must be true; and when one is false the other must also be false. We can
describe this pattern in a more concise way by using the notion of entailment:
Two sentences are synonymous if they entail each other.

The authors conflate ‘meaning the same’ with ‘always having the same truth-value’. These are not the same. Some sentences always have the same truth-value (they belong to the same equivalence class) but do not mean the same. For example:

“Canada is north of the US”
“The US is south of Canada”

These two don’t mean the same, but they belong to the same equivalence class. The relation between the entities is reversed in the second sentence, i.e. “… is north of …” and “… is south of …” do not mean the same. They are converses of each other.

See Swartz and Bradley (1979:35ff) for more examples and a more thoro discussion.
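The Canada/US example can be simulated directly. A sketch (the toy latitude assignments and function names are my own assumptions): the two sentences receive the same truth value in every situation, even though they invoke converse relations.

```python
# Toy situations (my assumption): each "situation" assigns a latitude.

def north_of(a, b, latitude):
    return latitude[a] > latitude[b]

def south_of(a, b, latitude):
    return latitude[a] < latitude[b]

situations = [{"Canada": 60, "US": 40},   # roughly the actual world
              {"Canada": 10, "US": 40}]   # a counterfactual situation

for lat in situations:
    s1 = north_of("Canada", "US", lat)    # "Canada is north of the US"
    s2 = south_of("US", "Canada", lat)    # "The US is south of Canada"
    assert s1 == s2                       # co-true / co-false everywhere

print("same truth value in every situation, yet converse relations")
```

Mutual entailment holds here because `north_of` and `south_of` are converses, not because the two sentences express the same relation; that is precisely the gap between equivalence and synonymy.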

The semantic theory of sentence meaning that we just sketched is not the
only possible one, and it is also incomplete, as shown by the paradoxical sen-
tence This sentence is false. The sentence cannot be true, else it’s false; it cannot
be false, else it’s true. Therefore it has no truth value, though it certainly has
meaning. This notwithstanding, compositional truth-conditional semantics has
proven to be an extremely powerful and useful tool for investigating the seman-
tic properties of natural languages.

Obviously, i’m not going to let this one fly! :) Things are not nearly as simple as they write. I will just point to the recent ph.d. dissertation of my friend, Benjamin Burgis, about the liar paradox and other related problems.

One point tho. Note the authors’ strange inference from “it cannot be true … it cannot be false” to “Therefore, it has no truth value”.

In the previous sections we saw that semantic rules compute sentence meaning
compositionally based on the meanings of words and the syntactic structure that
contains them. There are, however, interesting cases in which compositionality
breaks down, either because there is a problem with words or with the semantic
rules. If one or more words in a sentence do not have a meaning, then obviously
we will not be able to compute a meaning for the entire sentence.
Even if the individual words have meaning but cannot be combined together as
required by the syntactic structure and related semantic rules, we will also not
get to a meaning. We refer to these situations as semantic anomaly. Alternatively,
it might require a lot of creativity and imagination to derive a meaning. This is
what happens in metaphors. Finally, some expressions—called idioms—have a
fixed meaning, that is, a meaning that is not compositional. Applying composi-
tional rules to idioms gives rise to funny or inappropriate meanings.

A bit of clarification is needed here. They are right if they mean the word is used in the sentence. They are wrong if they mean the word is mentioned in the sentence. The unclear frasing “in a sentence” won’t do here. See

The semantic properties of words determine what other words they can be com-
bined with. A sentence widely used by linguists that we encountered in chapter
4 illustrates this fact:

Colorless green ideas sleep furiously.

The sentence obeys all the syntactic rules of English. The subject is  colorless
green ideas and the predicate is sleep furiously. It has the same syntactic struc-
ture as the sentence

Dark green leaves rustle furiously.

but there is obviously something semantically wrong with the sentence. The
meaning of  colorless  includes the semantic feature “without color,” but it is
combined with the adjective green, which has the feature “green in color.” How
can something be both “without color” and “green in color”? Other semantic
violations occur in the sentence. Such sentences are semantically anomalous.

The authors seem to be saying that all sentences that involve contradictions are semantically anomalous. But that is not true, if by that they mean that such sentences are meaningless. Self-contradictory sentences are meaningful alright: otherwise, their negations (which are necessarily true) wud be meaningless too. A grammatically placed negation can never turn a meaningful sentence into a meaningless one, or vice versa.

I have discussed this before. See this essay, and this post (by the good doctor Burgis) and the comments section below.

The authors however do mention later that:

The well-known colorless green ideas sleep furiously is semantically
anomalous because ideas (colorless or not) are not animate.

So, i’m not sure what they think. Perhaps they think that the chomsky sentence is anomalous for both reasons, i.e. 1) that it is self-contradictory, and 2) that it involves a category error with the verb sleep and the subject ideas.

Another part of the meaning of the words baby and child is that they are
“young.” (We will continue to indicate words by using italics and semantic fea-
tures by double quotes.) The word father has the properties “male” and “adult”
as do uncle and bachelor.

(I have restored the authors italicization in the above quote)

First, it bothers me when authors want to put a given word in quotation marks but then include something that doesn’t belong in there with it, typically a comma or a dot. Very annoying!

Second, they are wrong about these semantic features. The word father has the features “parent” and “male”. It has no feature about adulthood, altho fathers are of course usually adults. There is nothing semantically strange or anomalous about calling a person who is 15 years old a father if he has a child. Similar things hold for their other example uncle.

Generally, the count/mass distinction corresponds to the difference between
discrete objects and homogeneous substances. But it would be incorrect to say
that this distinction is grounded in human perception, because different lan-
guages may treat the same object differently. For example, in English the words
hair, furniture, and spaghetti are mass nouns. We say Some hair is curly, Much
furniture is poorly made, John loves spaghetti. In Italian, however, these words
are count nouns, as illustrated in the following sentences:

Ivano ha mangiato molti spaghetti ieri sera.
Ivano ate many spaghettis last evening.
Piero ha comprato un mobile.
Piero bought a furniture.
Luisella ha pettinato i suoi capelli.
Luisella combed her hairs.

We would have to assume a radical form of linguistic determinism (remem-
ber the Sapir-Whorf hypothesis from chapter 1) to say that Italian and English
speakers have different perceptions of hair, furniture, and spaghetti. It is more
reasonable to assume that languages can differ to some extent in the semantic
features they assign to words with the same referent, somewhat independently
of the way they conceptualize that referent. Even within a particular language
we can have different words—count and mass—to describe the same object or
substance. For example, in English we have shoes (count) and footwear (mass),
coins (count) and change (mass).

But what about a nonperfect correlation? The data mentioned above do not disprove the existence of such a thing. It wud be interesting to do a cross-language study to see if there is a correlation. I wud be very surprised if there was no such correlation. I will bet money that something like this is the case: the more discrete an entity is, the higher the chance that the word for it will be a countable noun. It is not surprising that their examples involve things that almost always, but not always, come in bundles. But i’d wager that no language has car as a noncountable noun. The entity is too discrete for that to make sense. Likewise, i’d be surprised if any language had water as a countable noun. Generally, words for fluids are probably always (or nearly so) noncountable nouns, even if the words for the entities that these fluids are made of are countable nouns, e.g. a molecule.

In all languages, the reference of certain words and expressions relies entirely
on the situational context of the utterance, and can only be understood in light
of these circumstances. This aspect of pragmatics is called deixis (pronounced
“dike-sis”). Pronouns are deictic. Their reference (or lack of same) is ultimately
context dependent.
Expressions such as

this person
that man
these women
those children

are also deictic, because they require situational information for the listener to
make a referential connection and understand what is meant. These examples
illustrate person deixis. They also show that the demonstrative articles like this
and that are deictic.
We also have  time deixis and place deixis. The following examples are all
deictic expressions of time:

now then tomorrow
this time that time seven days ago
two weeks from now last week next April

In filosofy, these are called indexicals. Or so i thought; apparently, there is some difference according to Wikipedia. Deixis seems to be a bit broader.

Implicatures are different than entailments. An entailment cannot be can-
celled; it is logically necessary. Implicatures are also different than presupposi-
tions. They are the possible consequences of utterances in their context, whereas
presuppositions are situations that must exist for utterances to be appropriate in
context, in other words, to obey Grice’s Maxims. Further world knowledge may
cancel an implicature, but the utterances that led to it remain sensible and well-
formed, whereas further world knowledge that negates a presupposition—oh,
the team didn’t lose after all—renders the entire utterance inappropriate and in
violation of Grice’s Maxims.

To be fair, they only talked about deductive inferences, or entailment, before. But some entailments may be ‘cancelled’ by further information, or premises as they are called in logic. Logics where new information can make an inference worse or better are called non-monotonic.

Chapter 6

Throughout several centuries English scholars have advocated spelling
reform. George Bernard Shaw complained that spelling was so inconsistent that
fish could be spelled ghoti—gh as in tough, o as in women, and ti as in nation.
Nonetheless, spelling reformers failed to change our spelling habits, and it took
phoneticians to invent an alphabet that absolutely guaranteed a one sound–one
symbol correspondence. There could be no other way to study the sounds of all
human languages scientifically.

It’s not their fault tho. Blame the politicians. As i have repeatedly shown, there are various good ways to reform english spelling. In fact, i’ve begun working on my own ultra minimalistic reform proposal. More on that later. :)

The sounds of all languages fall into two classes: consonants and vowels. Con-
sonants are produced with some restriction or closure in the vocal tract that
impedes the flow of air from the lungs. In phonetics, the terms consonant and
vowel refer to types of sounds, not to the letters that represent them. In speaking
of the alphabet, we may call “a” a vowel and “c” a consonant, but that means
only that we use the letter “a” to represent vowel sounds and the letter “c” to
represent consonant sounds.

Indeed. I recall that when i invented Lyddansk (my danish reform proposal) i had to make this distinction. I called them vowel-letters and consonant-letters (translated).

5.  The following are all English words written in a broad phonetic transcrip-
tion (thus omitting details such as nasalization and aspiration). Write the
words using normal English orthography.
a. [hit]
b. [strok]
c. [fez]
d. [ton]
e. [boni]
f. [skrim]
g. [frut]
h. [pritʃər]
i. [krak]
j. [baks]
k. [θæŋks]
l. [wɛnzde]
m. [krɔld]
n. [kantʃiɛntʃəs]
o. [parləmɛntæriən]
p. [kwəbɛk]
q. [pitsə]
r. [bərak obamə]
s. [dʒɔn məken]
t. [tu θaʊzənd ænd et]

I really, really dislike their strange choice of fonetical symbols. They don’t correspond to major dictionaries online nor the OED. Especially confusing is using /e/ for both /e/ and /eɪ/ as in eight, which they write as /et/ instead of the normal /eɪt/ found in pretty much all dictionaries (example: 1, 2, and the OED gives the same pronunciation).

To those that are wondering, here is what i think the correct answers are:

a. [hit] hit
b. [strok] stroke but their symbolism is confusing, they use /o/ to mean IPA /əʊ/
c. [fez] phase, i.e. /feɪz/ (the final /z/ rules out face)
d. [ton] it is tempting to guess ton until one thinks of their strange use of /o/ to mean /əʊ/, the correct word must be tone /təʊn/
e. [boni] bunny is tempting, but it seems to be bony /ˈbəʊni/
f. [skrim] scream, i.e. /skriːm/ (they also use /i/ for /iː/, cf. preacher)
g. [frut] fruit; they fail to indicate that the vowel is long, i.e. /fruːt/
h. [pritʃər] preacher
i. [krak] crack
j. [baks] backs is tempting, but in the authors’ American transcription /a/ stands for /ɑ/, so it is box, i.e. /bɑks/
k. [θæŋks] thanks
l. [wɛnzde] another strange one, i think it is wednesday i.e. /ˈwɛnzdeɪ/
m. [krɔld] crawled
n. [kantʃiɛntʃəs] conscientious? i.e. /kɒnʃɪˈɛnʃəs/
o. [parləmɛntæriən] parliamentarian
p. [kwəbɛk] Quebec
q. [pitsə] pizza
r. [bərak obamə] Barack Obama
s. [dʒɔn məken] John McCain
t. [tu θaʊzənd ænd et] two thousand and eight, with eight which shud be /eɪt/.

In general, their introduction to fonetics is bad when it disagrees with pretty much all dictionaries. Learn fonetics somewhere else. I learned it from Wikipedia and using lots of dictionaries.

Chapter 7

Nothing interesting to note here.

Chapter 8

Some time after the age of one, the child begins to repeatedly use the same string
of sounds to mean the same thing. At this stage children realize that sounds are
related to meanings. They have produced their first true words. The age of the
child when this occurs varies and has nothing to do with the child’s intelligence.
(It is reported that Einstein did not start to speak until he was three or four
years old.)

It saddens me to see that a textbook with a chapter about children and learning spreads this myth! It is not that hard to google it and discover it to be an urban myth. See:

[bərt]  “(Big) Bird”

Another annoying detail with their chosen fonetical symbols is that they fail to distinguish between schwa /ə/ which is an unstressed vowel, and the similar sounding but potentially stressed vowel /ɜ/. Again, they don’t use the same standards as used by dictionaries, which is annoying! But see: and

1.  Hans hat ein Buch gekauft. “Hans has a book bought.”
2.  Hans kauft ein Buch. “Hans is buying a book.”

I don’t get it. How can a linguistics textbook get the translation wrong? The correct translation of (2) is “Hans buys a book”.

Another experimental technique, called the naming task, asks the subject to
read aloud a printed word. (A variant of the naming task is also used in stud-
ies of people with aphasia, who are asked to name the object shown in a pic-
ture.) Subjects read irregularly spelled words like dough and steak just slightly
more slowly than regularly spelled words like doe and stake, but still faster than
invented strings like cluff. This suggests that people can do two different things
in the naming task. They can look for the string in their mental lexicon, and if
they find it (i.e., if it is a real word), they can pronounce the stored phonologi-
cal representation for it. They can also “sound it out,” using their knowledge
of how certain letters or letter sequences (e.g., “gh,” “oe”) are most commonly
pronounced. The latter is obviously the only way to come up with a pronuncia-
tion for a nonexisting word.
The fact that irregularly spelled words are read more slowly than regularly
spelled real words suggests that the mind “notices” the irregularity. This may be
because the brain is trying to do two tasks—lexical look-up and sounding out
the word—in parallel in order to perform naming as fast as possible. When the
two approaches yield inconsistent results, a conflict arises that takes some time
to resolve.

This is very interesting! I didn’t know that irregularly spelled words were read more slowly. That’s good news, or bad news, depending. :P It is good in that i may now have another argument for spelling reform: it makes people more efficient readers. It is also testable across populations and languages. Everything else equal, are people who read a well-spelled language faster readers than people who read a horribly spelled language (like english and danish)? That’s an interesting question actually. It sounds sufficiently simple and obvius that someone must have done the study. As for the bad news part, if they are right, it means i’m being inefficient becus i’m reading a badly spelled language. Worse, the entire world is being inefficient becus of its ‘choice’ of world language (i.e. english).
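The dual-route account in the quoted passage can be sketched as a toy race model. Everything here is an illustrative assumption: the mini lexicon, the grapheme rules, and the unit timings are made up, not real psycholinguistic data.

```python
# Toy sketch of the dual-route account of the naming task: a lexical
# look-up route and a rule-based sounding-out route run in parallel,
# and a conflict between their outputs costs extra resolution time.
# Lexicon, rules, and timings are all made-up illustrations.

LEXICON = {  # stored pronunciations (lexical look-up route)
    "dough": "do",
    "doe": "do",
    "steak": "stek",
    "stake": "stek",
}

RULES = [  # greedy grapheme-to-sound rules (sounding-out route)
    ("ough", "uf"),   # as in "rough" -- misfires on "dough"
    ("eak", "ik"),    # as in "speak" -- misfires on "steak"
    ("oe", "o"),
    ("ake", "ek"),
    ("d", "d"), ("s", "s"), ("t", "t"),
    ("c", "k"), ("l", "l"), ("u", "u"), ("f", "f"),
]

def sound_out(word):
    """Pronounce a string by rule alone, longest matching rule first."""
    out, i = "", 0
    while i < len(word):
        for graph, sound in RULES:
            if word.startswith(graph, i):
                out += sound
                i += len(graph)
                break
        else:
            i += 1  # skip letters no rule covers
    return out

def naming_time(word, base=300, conflict_cost=50):
    """Return (pronunciation, time). A conflict between the two routes
    adds resolution time; non-words must rely on rules alone."""
    by_rule = sound_out(word)
    if word in LEXICON:
        stored = LEXICON[word]
        extra = conflict_cost if stored != by_rule else 0
        return stored, base + extra
    return by_rule, base + 2 * conflict_cost  # slowest: no stored form

print(naming_time("doe"))    # regular word: routes agree
print(naming_time("dough"))  # irregular word: routes conflict, slower
print(naming_time("cluff"))  # non-word: sounding out only, slowest
```

With these made-up numbers the model reproduces the ordering the passage reports: regular words fastest, irregular words slower, invented strings slowest.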

Chapter 9

Some systems draw on formal logic for semantic representations. You put up
the switch would be represented in a function/argument form, which is its logi-
cal form:

PUT UP (YOU, THE SWITCH)
where PUT UP is a “two-place predicate,” in the jargon of logicians, and the
arguments are YOU and THE SWITCH. The lexicon indicates the appropriate
relationships between the arguments of the predicate PUT UP.

I really, really dislike the term argument when used to mean the thing that one puts into functions or predicates. It is a very bad choice of words in that context (logic), where argument already has a rather precise meaning. I prefer the term variable, but there is another term that i like even better, which i can’t seem to recall right now.
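The function/argument representation from the quote can be sketched as a small data structure. The role labels and the lexicon entry for PUT_UP are my own illustrative assumptions, not the book’s formalism.

```python
from dataclasses import dataclass

# Minimal sketch of a function/argument ("logical form") representation.
# The role names (agent, theme) and the PUT_UP lexicon entry are
# illustrative assumptions.

@dataclass(frozen=True)
class Predicate:
    name: str
    arity: int

@dataclass(frozen=True)
class LogicalForm:
    predicate: Predicate
    arguments: tuple

LEXICON = {
    # A two-place predicate: who does the putting, what gets put up.
    "PUT_UP": {"predicate": Predicate("PUT_UP", 2),
               "roles": ("agent", "theme")},
}

def build_lf(pred_name, *args):
    """Build a logical form, checking the predicate's arity."""
    entry = LEXICON[pred_name]
    pred = entry["predicate"]
    if len(args) != pred.arity:
        raise ValueError(f"{pred_name} needs {pred.arity} arguments")
    return LogicalForm(pred, args)

# "You put up the switch" -> PUT_UP(YOU, THE_SWITCH)
lf = build_lf("PUT_UP", "YOU", "THE_SWITCH")
print(lf.predicate.name, lf.arguments)
```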

 A keyword as general as bird may return far more information than could be
read in ten lifetimes if a thorough search of the Web occurs. (A search on the
day of this writing produced 200 million hits, compared to 122 million four
years prior.) […]

I re-did the search. 1,100 million hits.

Chapter 10

It is not always easy to decide whether the differences between two speech
communities reflect two dialects or two languages. Sometimes this rule-of-
thumb definition is used: When dialects become mutually unintelligible—when
the speakers of one dialect group can no longer understand the speakers of
another dialect group—these dialects become different languages.
However, this rule of thumb does not always jibe with how languages are
officially recognized, which is determined by political and social considerations.
For example, Danes speaking Danish and Norwegians speaking Norwegian and
Swedes speaking Swedish can converse with each other. Nevertheless, Danish
and Norwegian and Swedish are considered separate languages because they are
spoken in separate countries and because there are regular differences in their
grammars. Similarly, Hindi and Urdu are mutually intelligible “languages” spo-
ken in Pakistan and India, although the differences between them are not much
greater than those between the English spoken in America and the English spo-
ken in Australia.

Not citing any sources for such claims is bad. The mutual intelligibility is not that high between the scandinavian languages. It is much higher for written text between norwegian (bokmål) and danish. Etc. See the Wikipedia link.

English is the most widely spoken language in the world (as a first or second
language). It is the national language of several countries, including the United
States, large parts of Canada, the British Isles, Australia, and New Zealand. For
many years it was the official language in countries that were once colonies of
Britain, including India, Nigeria, Ghana, Kenya, and the other “anglophone”
countries of Africa. There are many other phonological differences in the vari-
ous dialects of English used around the globe.

This is certainly false. Look at Wikipedia. Mandarin is the most spoken native language. English is probably the most spoken non-native language.
ETA: But then later they write

The Sino-Tibetan family includes Mandarin, the most populous language in
the world, spoken by more than one billion Chinese. This family also includes
all of the Chinese languages, as well as Burmese and Tibetan.

So, i don’t know what they think.

Even though every language is a composite of dialects, many people talk and
think about a language as if it were a well-defined fixed system with various
dialects diverging from this norm. This is false, although it is a falsehood that is
widespread. One writer of books on language accused the editors of Webster’s
Third New International Dictionary, published in 1961, of confusing “to the
point of obliteration the older distinction between standard, substandard, collo-
quial, vulgar, and slang,” attributing to them the view that “good and bad, right
and wrong, correct and incorrect no longer exist.” In the next section we argue
that such criticisms are ill founded.

It’s time for the authors to again say negative things about language standardization, and promote a very relativistic view of languages and dialects. I will defend my views against their criticisms of such views.

I don’t know about a ‘fixed’ system; if they mean an unchanging system, then i ofc don’t agree that there is any unchanging system of standard english (or standard danish etc.). However, there is a kind of danish that is the most standard. It may be a good idea to speak as normal a version of a language as possible, becus this makes it the easiest for the listeners to understand what one is saying. The general idea is to avoid things that are peculiar to a small minority of the speakers of the relevant language. This includes everything: syntax, grammar, word choice, pronunciation, etc. Speaking a language in the most common way is speaking the standard version of that language, nothing else. It is possible that no regional dialect speaks that way, but that doesn’t matter. A standard version of a language need not be a regional dialect.

A standard version of a language is also a necessity if one wants a relatively fonetic spelling system without lots of alternative forms. The idea is that one spells after the sound of the standard version of the language.

No dialect, however, is more expressive, less corrupt, more logical, more
complex, or more regular than any other dialect or language. They are sim-
ply different. More precisely, dialects represent different set of rules or lexical
items represented in the minds of its speakers. Any judgments, therefore, as to
the superiority or inferiority of a particular dialect or language are social judg-
ments, which have no linguistic or scientific basis.
To illustrate the arbitrariness of “standard usage,” consider the English r-drop
rule discussed earlier. Britain’s prestigious RP accent omits the r in words such
as “car,” “far,” and “barn.” Thus an r-less pronunciation is thought to be better
than the less prestigious rural dialects that maintain the r. However, r-drop in the
northeast United States is generally considered substandard, and the more pres-
tigious dialects preserve the r, though this was not true in the past when r-drop
was considered more prestigious. This shows that there is nothing inherently bet-
ter or worse about one pronunciation over another, but simply that one variant is
perceived of as better or worse depending on a variety of social factors.

I don’t care about the typical purist stuff like ‘corruption’, but they are certainly wrong to claim that no dialect is more complex or regular than another. I really don’t know what makes people make these claims when they are so obviously false. I’ll give a very brief example. Consider a language that has a verb which is irregular in one dialect and regular in another. If everything else is equal, then clearly the second dialect is more regular (and less complex) than the first, and indeed, better.

Their illustration is strange. First they say that they want to illustrate it, but then end up concluding that their example “shows that there is nothing inherently better or worse about one pronunciation over another, but simply that one variant is perceived of as better or worse depending on a variety of social factors” which is either trivially true becus of the clause about “social factors” (such clauses are almost never explained, in typical sociology fashion), or false becus these differences matter. If the difference is such that other speakers of the language from other dialects fail to understand one, then that is indeed worse, since the purpose of language is generally to be able to communicate. Obviously, if one is not trying to communicate with everyone using the language, this point is irrelevant.

Constructions with multiple negatives akin to AAE He don’t know nothing are
commonly found in languages of the world, including French, Italian, and the
English of Chaucer, as illustrated in the epigraph from The Canterbury Tales. The
multiple negatives of AAE are governed by rules of syntax and are not illogical.

While perhaps not ‘illogical’, they are redundant and so increase the complexity of a language without adding any expressive power. This is a bad thing.

The authors spend some time discussing various differences between african american english (AAE) and standard american english (SAE). Some of these differences have relevance to complexity and expressive power, but i’m not knowledgeable enuf to comment on all of their points.

The first—the whole-word approach—teaches children to recognize a vocab-
ulary of some fifty to one hundred words by rote learning, often by seeing the
words used repeatedly in a story, for example, Run, Spot, Run from the Dick
and Jane series well-known to people who learned to read in the 1950s. Other
words are acquired gradually. This approach does not teach children to “sound
out” words according to the individual sounds that make up the words. Rather,
it treats the written language as though it were a logographic system, such as
Chinese, in which a single written character corresponds to a whole word or
word root. In other words, the whole-word approach fails to take advantage
of the fact that English (and the writing systems of most literate societies) is
based on an alphabet, in which the symbols correspond to the individual sounds
(roughly phonemes) of the language. This is ironic because alphabetic writing
systems are the easiest to learn and are maximally efficient for transcribing any
human language. (my bolding)

So much for their language relativism.

Chapter 12

Another simplification is that the “dead ends”—languages that evolved and
died leaving no offspring—are not included. We have already mentioned Hittite
and Tocharian as two such Indo-European languages. The family tree also fails
to show several intermediate stages that must have existed in the evolution of
modern languages. Languages do not evolve abruptly, which is why comparisons
with the genealogical trees of biology have limited usefulness. Finally, the dia-
gram fails to show some Indo-European languages because of lack of space.

The authors give the impression that in biology, species somehow do evolve abruptly. But they do no such thing. The analogy works fine in that area. The main problem with the analogy is that languages can share ‘genes’ (words, etc.) between ‘species’, which does not generally happen in biology (except via horizontal gene transfer, as in bacteria).

 The term sound writing is sometimes used in place of alphabetic writing, but
it does not truly represent the principle involved in the use of alphabets. One-
sound ↔ one-letter is inefficient and unintuitive, because we do not need to
represent the [pʰ] in pit and the [p] in spit by two different letters. It is confusing
to represent nonphonemic differences in writing because the sounds are seldom
perceptible to speakers. Except for the phonetic alphabets, whose function is
to record the sounds of all languages for descriptive purposes, most, if not all,
alphabets have been devised on the phonemic principle.

This is a good observation. I hadn’t thought of that. I shud update my Lyddansk to use the fonemic principle instead of the fonetic principle (in danish ofc). Another way of putting it in ordinary language is: one sound↔one symbol, but include only differences in sounds that are relevant.
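The fonemic principle in the quote (one symbol per distinctive sound, ignoring allophonic detail) can be sketched as a simple mapping. The phone-to-phoneme table is a made-up fragment, not a full analysis of English.

```python
# Toy sketch of the phonemic principle: several phones (actual sounds)
# collapse onto one phoneme (distinctive sound), and only phonemes get
# their own letter. The table is an illustrative fragment.

PHONE_TO_PHONEME = {
    "pʰ": "p",  # aspirated p, as in "pit"
    "p":  "p",  # unaspirated p, as in "spit" -- same phoneme, same letter
    "tʰ": "t",
    "t":  "t",
    "s":  "s",
    "ɪ":  "i",
}

def phonemic_spelling(phones):
    """Spell a word from its phones, one letter per phoneme."""
    return "".join(PHONE_TO_PHONEME[p] for p in phones)

# "pit" [pʰɪt] and "spit" [spɪt] reuse the same letter for [pʰ] and [p],
# becus the aspiration difference is not distinctive in English:
print(phonemic_spelling(["pʰ", "ɪ", "t"]))      # pit
print(phonemic_spelling(["s", "p", "ɪ", "t"]))  # spit
```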

If writing represented the spoken language perfectly, spelling reforms would
never have arisen. In chapter 6 we discussed some of the problems in the En  glish
orthographic system. These problems prompted George Bernard Shaw to observe

[I]t was as a reading and writing animal that Man achieved his human
eminence above those who are called beasts. Well, it is I and my like who
have to do the writing. I have done it professionally for the last sixty
years as well as it can be done with a hopelessly inadequate alphabet
devised centuries before the English language existed to record another
and very different language. Even this alphabet is reduced to absurdity
by a foolish orthography based on the notion that the business of spelling
is to represent the origin and history of a word instead of its sound and
meaning. Thus an intelligent child who is bidden to spell debt, and very
properly spells it d-e-t, is caned for not spelling it with a b because Julius
Caesar spelt the Latin word for it with a b.

The source of the quote is given as: Shaw, G. B. 1948. Preface to R. A. Wilson, The miraculous birth of language.

Anyway, this particular etymology is actually wrong too! There are many such false etymologies that people have based spellings on. Utterly foolish. Quoting Wikipedia:

From the 16th century onward, English writers who were scholars of Greek and Latin literature tried to link English words to their Graeco-Latin counterparts. They did this by adding silent letters to make the real or imagined links more obvious. Thus det became debt (to link it to Latin debitum), dout became doubt (to link it to Latin dubitare), sissors became scissors and sithe became scythe (as they were wrongly thought to come from Latin scindere), iland became island (as it was wrongly thought to come from Latin insula), ake became ache (as it was wrongly thought to come from Greek akhos), and so forth.[5][6]
I cudnt find it online due to the dam copyright trolls.
Same with this one.
Noam Chomsky on nonsense pomo.

Since no one has succeeded in showing me what I’m missing, we’re left with the second option: I’m just incapable of understanding. I’m certainly willing to grant that it may be true, though I’m afraid I’ll have to remain suspicious, for what seem good reasons. There are lots of things I don’t understand — say, the latest debates over whether neutrinos have mass or the way that Fermat’s last theorem was (apparently) proven recently. But from 50 years in this game, I have learned two things: (1) I can ask friends who work in these areas to explain it to me at a level that I can understand, and they can do so, without particular difficulty; (2) if I’m interested, I can proceed to learn more so that I will come to understand it. Now Derrida, Lacan, Lyotard, Kristeva, etc. — even Foucault, whom I knew and liked, and who was somewhat different from the rest — write things that I also don’t understand, but (1) and (2) don’t hold: no one who says they do understand can explain it to me and I haven’t a clue as to how to proceed to overcome my failures. That leaves one of two possibilities: (a) some new advance in intellectual life has been made, perhaps some sudden genetic mutation, which has created a form of “theory” that is beyond quantum theory, topology, etc., in depth and profundity; or (b) … I won’t spell it out.

Which is a rather long essay on pomo. The author is not critical enough in my view, perhaps becus he is an epistemologist who focuses on skepticism?
Thomas Nagel, the filosofer, reviews Sokal et al’s book. He is generally positive about the book and hostile to pomo nonsense.