The redundant use of “myself”, “personally”, and perhaps others

Here’s a random example [source is this book]:

A database (DB) is simply a collection of data, placed into an arbitrary
structured format. The most common DB is a relational database;
tables are used to store the data and relationships can be defined
between different tables. SQL (Structured Query Language) is the
language used to work with most DBs. (SQL can either be pronounced
as discrete letters “S-Q-L” or as a word “sequel”. I personally use…

Note the word “personally” above (bolded in the original). What meaning does it add to the sentence? Nothing. “I” already identifies who the person is – namely, the writer/speaker.

Sometimes people do the same with “myself”; a random example from Google [Romans 9:3]:

For I could wish that I myself were cursed and cut off from Christ for the sake of my brothers, those of my own race,

Sometimes it gets extra annoying when both of them are used [from here]:

I myself personally like women with a nice bush and feel in the minority. So am I or what?

Triple indication much?

Yes, yes, i understand that this is used for emfasis. I just dont like redundancy. :(

I do recall at least one structure where one cannot leave out “myself”, and i dont just mean those where an object is needed and it happens to be the same as the subject which is “I”. I think i stumbled upon a few of such cases, but i dont specifically recall them right now.

Some thoughts about the principle of compositionality

This is another of those ideas that ive had independently, and that it turned out that others had thought of before me, by thousands of years in this case. The idea is that longer expressions of language are made out of smaller parts of language, and that the meaning of the whole is determined by the parts and their structure. This is rather close to the formulation used on SEP. Heres the introduction on SEP:


Anything that deserves to be called a language must contain meaningful expressions built up from other meaningful expressions. How are their complexity and meaning related? The traditional view is that the relationship is fairly tight: the meaning of a complex expression is fully determined by its structure and the meanings of its constituents—once we fix what the parts mean and how they are put together we have no more leeway regarding the meaning of the whole. This is the principle of compositionality, a fundamental presupposition of most contemporary work in semantics.

Proponents of compositionality typically emphasize the productivity and systematicity of our linguistic understanding. We can understand a large—perhaps infinitely large—collection of complex expressions the first time we encounter them, and if we understand some complex expressions we tend to understand others that can be obtained by recombining their constituents. Compositionality is supposed to feature in the best explanation of these phenomena. Opponents of compositionality typically point to cases when meanings of larger expressions seem to depend on the intentions of the speaker, on the linguistic environment, or on the setting in which the utterance takes place without their parts displaying a similar dependence. They try to respond to the arguments from productivity and systematicity by insisting that the phenomena are limited, and by suggesting alternative explanations.


SEP goes on to discuss some more formal versions of the general idea:


(C) The meaning of a complex expression is determined by its structure and the meanings of its constituents.



(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.


SEP goes on to distinguish between a lot of different versions of this. See the article for details.

The thing i wanted to discuss was the counterexamples offered. I found none of them particularly compelling. They are based mostly on intuition pumps as far as i can tell, and im rather wary of such (cf. Every Thing Must Go, amazon).


Heres SEP’s first example, using chess notation (many other game notations wud also work, e.g. Taifho):


Consider the Algebraic notation for chess.[15] Here are the basics. The rows of the chessboard are represented by the numerals 1, 2, … , 8; the columns are represented by the lower case letters a, b, … , h. The squares are identified by column and row; for example b5 is at the intersection of the second column and the fifth row. Upper case letters represent the pieces: K stands for king, Q for queen, R for rook, B for bishop, and N for knight. Moves are typically represented by a triplet consisting of an upper case letter standing for the piece that makes the move and a sign standing for the square where the piece moves. There are five exceptions to this: (i) moves made by pawns lack the upper case letter from the beginning, (ii) when more than one piece of the same type could reach the same square, the sign for the square of departure is placed immediately in front of the sign for the square of arrival, (iii) when a move results in a capture an x is placed immediately in front of the sign for the square of arrival, (iv) the symbol 0-0 represents castling on the king’s side, (v) the symbol 0-0-0 represents castling on the queen’s side. + stands for check, and ++ for mate. The rest of the notation serves to make commentaries about the moves and is inessential for understanding it.

Someone who understands the Algebraic notation must be able to follow descriptions of particular chess games in it and someone who can do that must be able to tell which move is represented by particular lines within such a description. Nonetheless, it is clear that when someone sees the line Bb5 in the middle of such a description, knowing what B, b, and 5 mean will not be enough to figure out what this move is supposed to be. It must be a move to b5 made by a bishop, but we don’t know which bishop (not even whether it is white or black) and we don’t know which square it is coming from. All this can be determined by following the description of the game from the beginning, assuming that one knows what the initial configurations of figures are on the chessboard, that white moves first, and that afterwards black and white move one after the other. But staring at Bb5 itself will not help.


It is exactly the bolded claims that i dont accept. Why must one be able to know that from the meaning alone? Knowing the meaning of expressions does not always make it easy to know what a given noun (or NP) refers to. In this case “B” is a noun referring to a bishop; which one? Well, who knows. There are lots of examples of words referring to different things (people, usually) when used in different contexts. For instance, the word “me” refers to the source of the expression, but when an expression is used by different speakers, then “me” refers to different people, cf. indexicals (SEP and Wiki).
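The underdetermination the quoted passage describes can be made concrete with a minimal parser for the basic move triplets (my own sketch, not anything from SEP; it ignores castling, check marks, and disambiguation prefixes). The departure square simply is not part of the string, so no amount of staring at Bb5 will produce it:

```python
import re

# Minimal parser for the basic triplets described above (a sketch).
# Assumptions: no castling, no check/mate marks, no disambiguation prefix.
MOVE = re.compile(r"^(?P<piece>[KQRBN]?)(?P<capture>x?)(?P<to>[a-h][1-8])$")

def parse(move):
    m = MOVE.match(move)
    if m is None:
        raise ValueError(f"unsupported move string: {move}")
    return {
        "piece": m["piece"] or "pawn",
        "capture": bool(m["capture"]),
        "to": m["to"],
        "from": None,  # not in the string: only the game history can supply it
    }

print(parse("Bb5"))  # piece and target square known; origin square is not
```

Whether one then says the meaning of “B” has an indexical component, or that the line’s meaning just is “some bishop moves to b5”, is exactly the choice SEP discusses below.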


Ofc, my thoughts about this are not particularly unique, and SEP mentions the defense that i also thought of:


The second moral is that—given certain assumptions about meaning in chess notation—we can have productive and systematic understanding of representations even if the system itself is not compositional. The assumptions in question are that (i) the description I gave in the first paragraph of this section fully determines what the simple expressions of chess notation mean and also how they can be combined to form complex expressions, and that (ii) the meaning of a line within a chess notation determines a move. One can reject (i) and argue, for example, that the meaning of B in Bb5 contains an indexical component and within the context of a description, it picks out a particular bishop moving from a particular square. One can also reject (ii) and argue, for example, that the meaning of Bb5 is nothing more than the meaning of ‘some bishop moves from somewhere to square b5’—utterances of Bb5 might carry extra information but that is of no concern for the semantics of the notation. Both moves would save compositionality at a price. The first complicates considerably what we have to say about lexical meanings; the second widens the gap between meanings of expressions and meanings of their utterances. Whether saving compositionality is worth either of these costs (or whether there is some other story to be told about our understanding of the Algebraic notation) is by no means clear. For all we know, Algebraic notation might be non-compositional.


I also dont agree that it widens the gap between meanings of expressions and meanings of utterances. It has to do with referring to stuff, not meaning in itself.


4.2.1 Conditionals

Consider the following minimal pair:

(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.

A good translation of (1) into a first-order language is (1′). But the analogous translation of (2) would yield (2′), which is inadequate. A good translation for (2) would be (2″) but it is unclear why. We might convert ‘¬∃’ to the equivalent ‘∀¬’ but then we must also inexplicably push the negation into the consequent of the embedded conditional.

(1′) ∀x(x works hard → x will succeed)
(2′) ¬∃x(x goofs off → x will succeed)
(2″) ∀x(x goofs off → ¬(x will succeed))

This gives rise to a problem for the compositionality of English, since it seems rather plausible that the syntactic structure of (1) and (2) is the same and that ‘if’ contributes some sort of conditional connective—not necessarily a material conditional!—to the meaning of (1). But it seems that it cannot contribute just that to the meaning of (2). More precisely, the interpretation of an embedded conditional clause appears to be sensitive to the nature of the quantifier in the embedding sentence—a violation of compositionality.[16]

One response might be to claim that ‘if’ does not contribute a conditional connective to the meaning of either (1) or (2)—rather, it marks a restriction on the domain of the quantifier, as the paraphrases under (1″) and (2″) suggest:[17]

(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.

But this simple proposal (however it may be implemented) runs into trouble when it comes to quantifiers like ‘most’. Unlike (3′), (3) says that those students (in the contextually given domain) who succeed if they work hard are most of the students (in the contextually relevant domain):

(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.

The debate whether a good semantic analysis of if-clauses under quantifiers can obey compositionality is lively and open.[18]
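SEP’s claim that (2′) is inadequate while (2″) is the good translation can be verified mechanically. The sketch below (mine; it assumes the material conditional) brute-forces every model with two individuals and counts those where the two formalizations disagree:

```python
from itertools import product

def implies(p, q):
    """Material conditional."""
    return (not p) or q

# Every individual gets a pair (goofs_off, succeeds); try all 2-person models.
differing = []
for world in product(product([False, True], repeat=2), repeat=2):
    two_prime = not any(implies(g, s) for g, s in world)  # (2′) ¬∃x(Gx → Sx)
    two_dblpr = all(implies(g, not s) for g, s in world)  # (2″) ∀x(Gx → ¬Sx)
    if two_prime != two_dblpr:
        differing.append(world)

print(len(differing))  # → 8 of the 16 models separate the two formulas
```

For instance, in a model where nobody goofs off, (2″) holds but (2′) fails, since Gx → Sx is vacuously true of everyone.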


Doesnt seem particularly difficult to me. When i look at an “if-then” clause, the first thing i do before formalizing is to turn it around so that the “if” comes first, and i also insert any missing “then”. With their example:


(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.


this results in:


(1)* If he works hard, then everyone will succeed.
(2)* If he goofs off, then no one will succeed.


Both “everyone” and “no one” express a universal quantifier, ∀. The second one has a negation as well. We can translate “everyone” to something like “all”, and the “no” part to “not”. Then we might get:


(1)** If he works hard, then all will succeed.
(2)** If he goofs off, then all will not succeed.


Finally, we move the quantifier to the beginning and insert a pronoun, “he”, to match. We get something like:


(1)*** For any person, if he works hard, then he will succeed.
(2)*** For any person, if he goofs off, then he will not succeed.


These are equivalent to SEP’s


(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.
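The claimed equivalence can also be checked by brute force (my sketch again, reading the conditional materially and reading “no one who goofs off will succeed” as ¬∃x(Gx ∧ Sx)):

```python
from itertools import product

# (2)*** ∀x(Gx → ¬Sx)  vs  (2″) read as ¬∃x(Gx ∧ Sx), on all 2-person models
for world in product(product([False, True], repeat=2), repeat=2):
    paraphrase = all((not g) or (not s) for g, s in world)  # (2)***
    no_one = not any(g and s for g, s in world)             # (2″)
    assert paraphrase == no_one

print("equivalent in every model checked")
```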


The difference between (3) and (3′) is interesting, not becus of relevance to my method above (i think), but since it deals with something beyond first-order logic. Quantification logic, i suppose? I did a brief Google and Wiki search, but didnt find what i was looking for. I also tried Graham Priest’s Introduction to Non-Classical Logic, also without luck.


So here goes some system i just invented to formalize the sentences:


(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.


Capital greek letters are set variables. # is a function that returns the cardinality of a set.


(3)* (∃Γ)(∃Δ)(∀x)(∀y)(Sx↔x∈Γ∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ)→(Wy→Uy))


In english: There is a set, gamma, and there is another set, delta, and for any x, and for any y, x is a student iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then (if y works hard, then y will succeed).


Quite complicated in writing, but the idea is not that complicated. It shud be possible to find some simplified writing convention for easier expression of this way of formalizing it.


(3′)* (∃Γ)(∃Δ)(∀x)(∀y)(((Sx∧Wx)↔x∈Γ)∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ→Uy))


In english: there is a set, gamma, and there is another set, delta, and for any x, and for any y, (x is a student and x works hard) iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then y will succeed.


To my logician intuition, these are not equivalent, but proving this is left as an exercise to the reader if he can figure out a way to do so in this set theory+predicate logic system (i might try later).
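A concrete witness to the non-equivalence is easy to find. The toy population below is my own invention, with (3) read materially as “most students satisfy (works hard → succeeds)” and (3′) as “most of the hard-working students succeed”; the two readings come apart:

```python
# Four hypothetical students: (name, works_hard, succeeds)
students = [
    ("ann", False, False),
    ("bob", False, False),
    ("cat", True,  True),
    ("dan", True,  False),
]

def most(subset, domain):
    """#subset > #domain / 2, mirroring the cardinality condition above."""
    return len(subset) > len(domain) / 2

# (3): most students satisfy (works hard -> succeeds), material reading;
# non-workers satisfy the conditional vacuously, so 3 of 4 count.
three = most([s for s in students if (not s[1]) or s[2]], students)

# (3'): most students who work hard succeed; only 1 of the 2 workers does.
workers = [s for s in students if s[1]]
three_prime = most([s for s in workers if s[2]], workers)

print(three, three_prime)  # → True False
```

So at least on the material reading, (3) and (3′) are not equivalent, which matches my intuition above.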



4.2.2 Cross-sentential anaphora

Consider the following minimal pair from Barbara Partee:


(4) I dropped ten marbles and found all but one of them. It is probably under the sofa.

(5) I dropped ten marbles and found nine of them. It is probably under the sofa.


There is a clear difference between (4) and (5)—the first one is unproblematic, the second markedly odd. This difference is plausibly a matter of meaning, and so (4) and (5) cannot be synonyms. Nonetheless, the first sentences are at least truth-conditionally equivalent. If we adopt a conception of meaning where truth-conditional equivalence is sufficient for synonymy, we have an apparent counterexample to compositionality.


I dont accept that premise either. I havent done so since i read Swartz and Bradley years ago. Sentences like


“Canada is north of Mexico”

“Mexico is south of Canada”


are logically equivalent, but are not synonymous. The concept of being north of and the concept of being south of are not the same, even tho they stand in a kind of reverse relation. That is to say, xR1y↔yR2x. Not sure what to call such relations (converses, perhaps?). It’s symmetry+substitution of relations.


Sentences like


“Everything that is round, has a shape.”

“Nothing is not identical to itself.”


are logically equivalent but dont mean the same. And so on, cf. Swartz and Bradley 1979, and SEP on theories of meaning.


Interesting though these cases might be, it is not at all clear that we are faced with a genuine challenge to compositionality, even if we want to stick with the idea that meanings are just truth-conditions. For it is not clear that (5) lacks the normal reading of (4)—on reflection it seems better to say that the reading is available even though it is considerably harder to get. (Contrast this with an example due to—I think—Irene Heim: ‘They got married. She is beautiful.’ This is like (5) because the first sentence lacks an explicit antecedent for the pronoun in the second. Nonetheless, it is clear that the bride is said to be beautiful.) If the difference between (4) and (5) is only this, it is no longer clear that we must accept the idea that they must differ in meaning.


I agree that (4) and (5) mean the same, even if (5) is a rather bad way to express the thing one normally wud express with something like (4).


In their bride example, one can also consider homosexual weddings, where “he” and “she” similarly fail to refer to a specific person out of the two newlyweds.


4.2.3 Adjectives

Suppose a Japanese maple leaf, turned brown, has been painted green. Consider someone pointing at this leaf uttering (6):


(6) This leaf is green.


The utterance could be true on one occasion (say, when the speaker is sorting leaves for decoration) and false on another (say, when the speaker is trying to identify the species of tree the leaf belongs to). The meanings of the words are the same on both occasions and so is their syntactic composition. But the meaning of (6) on these two occasions—what (6) says when uttered in these occasions—is different. As Charles Travis, the inventor of this example puts it: “…words may have all the stipulated features while saying something true, but also while saying something false.”[20]


At least three responses offer themselves. One is to deny the relevant intuition. Perhaps the leaf really is green if it is painted green and (6) is uttered truly in both situations. Nonetheless, we might be sometimes reluctant to make such a true utterance for fear of being misleading. We might be taken to falsely suggest that the leaf is green under the paint or that it is not painted at all.[21] The second option is to point out that the fact that a sentence can say one thing on one occasion and something else on another is not in conflict with its meaning remaining the same. Do we have then a challenge to compositionality of reference, or perhaps to compositionality of content? Not clear, for the reference or content of ‘green’ may also change between the two situations. This could happen, for example, if the lexical representation of this word contains an indexical element.[22] If this seems ad hoc, we can say instead that although (6) can be used to make both true and false assertions, the truth-value of the sentence itself is determined compositionally.[23]


Im going to bite the bullet again, and just say that the sentence means the same on both occasions. What is different is that in different contexts, one might interpret the same sentence to express different propositions. This is nothing new, as it was already featured before as well, altho this time it is without indexicals. The reason is that altho the sentence means the same, one is guessing at which proposition the utterer meant to express with his sentence. Context helps with that.


4.2.4 Propositional attitudes

Perhaps the most widely known objection to compositionality comes from the observation that even if e and e′ are synonyms, the truth-values of sentences where they occur embedded within the clausal complement of a mental attitude verb may well differ. So, despite the fact that ‘eye-doctor’ and ‘ophthalmologist’ are synonyms (7) may be true and (8) false if Carla is ignorant of this fact:


(7) Carla believes that eye doctors are rich.
(8) Carla believes that ophthalmologists are rich.


So, we have a case of apparent violation of compositionality; cf. Pelletier (1994).

There is a sizable literature on the semantics of propositional attitude reports. Some think that considerations like this show that there are no genuine synonyms in natural languages. If so, compositionality (at least the language-bound version) is of course vacuously true. Some deny the intuition that (7) and (8) may differ in truth-conditions and seek explanations for the contrary appearance in terms of implicature.[24] Some give up the letter of compositionality but still provide recursive semantic clauses.[25] And some preserve compositionality by postulating a hidden indexical associated with ‘believe’.[26]


Im not entirely sure what to do about these propositional attitude reports, but im inclined to bite the bullet. Perhaps i will change my mind after i have read the two SEP articles about the matter.


Idiomatic language

The SEP article really didnt have a proper discussion of idiomatic language use. Say, frases like “dont mention it” which can either mean what it literally (i.e., by composition) means, or its idiomatic meaning: This is used as a response to being thanked, suggesting that the help given was no trouble (same source).

It depends on what one takes “complex expression” to mean. Recall the principle:


(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.


What is a complex expression? Is any given complex expression made up of either complex expressions themselves or simple expressions? Idiomatic expressions really just are expressions whose meaning is not determined by their parts. One might thus actually take them to be simple expressions themselves. If one does, then the composition principle is pretty close to trivially true.


If one does not take idiomatic expressions to be complex expressions or simple expressions, then the principle of composition is trivially false. I dont consider that a huge problem; it generally holds, and explains the things it is required to explain just fine even when it isnt universally true.


One can also note that idiomatic expressions can be used as parts of larger expressions. Depending on how one thinks about idiomatic expressions, and about constituents, larger expressions which have idiomatic expressions as parts might be trivially non-compositional. This is the case if one takes constituents to mean smallest parts: since the idiomatic expressions’ meanings cannot be determined from syntax+smallest parts, neither can the larger expression’s. If one on the other hand takes constituents to mean smallest decompositional parts, then idiomatic expressions do not trivially make the larger expressions they are part of non-compositional. Consider the sentence:


“He is pulling your leg”


the sentence is compositional since its meaning is determinable from “he”, “is”, “pulling your leg”, the syntax, and the meaning function.
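One way to cash this out is a meaning function that stores the idiom as a single, unanalysed lexical entry, so that evaluation of the larger sentence stays compositional. This is a toy sketch of my own, not a serious semantics; the lexicon entries and the predication rule are invented for illustration:

```python
# Toy lexicon: the idiom is one unanalysed entry, everything else composes.
LEXICON = {
    "he": "the male under discussion",
    "pulling your leg": "joking with you",  # idiomatic meaning, stored whole
}

def meaning(tree):
    """(C'): the meaning of a complex expression from its parts and structure."""
    if isinstance(tree, str):   # simple expression: look it up
        return LEXICON[tree]
    if tree[0] == "is":         # predication: combine subject and predicate
        return f"{meaning(tree[1])} is {meaning(tree[2])}"
    raise ValueError("unknown structure")

print(meaning(("is", "he", "pulling your leg")))
# → the male under discussion is joking with you
```

On this treatment “pulling your leg” is a constituent but not a decomposable one, which is exactly the “smallest decompositional parts” reading above.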


There is a reason i bring up this detail, and that is that there is another kind of idiomatic use of language that apparently hasnt been mentioned so much in the literature, judging from SEP not mentioning it. It is the use of prepositions. Surely, many prepositions are used in perfectly compositional ways with other words, like in


“the cat is on the mat”


where “on” has the usual meaning of being on top of (something), or being above and resting upon or somesuch (difficult to avoid circular definitions of prepositions).


However, consider the use of “on” in


“he spent all his time on the internet”


clearly “on” does not mean the same as above here, it doesnt seem to mean much, it is a kind of indefinite relationship. Apparently aware of this fact (and becus languages differ in which prepositions are used in such cases), the designer of esperanto added a preposition for any indefinite relation to the language (“je”). Some languages have lots of such idiomatic preposition+noun frases, and they have to be learned by heart exactly the same way as the idiomatic expressions mentioned earlier, exactly becus they are idiomatic expressions.


As an illustration, in danish if one is on an island, one is “på Fyn” [on Funen], but if one is on the mainland, then one is “i Jylland” [in Jutland]. I think such usage of prepositions shud be considered idiomatic.

Another linguistics trip on Wiki



I just wanted to look up some stuff on the questions that a teacher had posed. Since i dont actually have the book, and since one cant search properly in paper books, i googled around instead, and ofc ended up at Wikipedia…


and it took off as usual. Here are the tabs i ended up with (36 tabs):



and with three more longer texts to consume over the next day or so: (which i had discovered independently) (long overdue)


And quite a few other longer texts in pdf form also to be read in the next few days.

Some Wikipedia links and remarks to them

It is somewhat depressing that we do not have EO (esperanto) taught as a foreign language in all european countries. The evidence cited in this article is undeniable and overwhelming. We shud immediately put EO on the school curriculum. The good thing about it is that we don’t even need to sacrifice any time, since the investment earns itself back with time. If it can help to increase enthusiasm for learning foreign languages, that’s just a bonus! Learning foreign languages does not have to consist of ludicrous memorization of which words are which gender. Yes, i hate grammatical gender with a passion. I cannot bring myself to study any language with senseless grammatical genders (this excludes scandinavian since i can do it by heart becus i’m a native).

Where are the rational politicians when we need them? This brings me, incidentally, to the next link…


Last week i think it was, i wrote about evidence-based policy. It sounds good, but, ofc, politicians have immediately found a way to fuck it up. Meet policy-based evidence making!

“[Ministers] should certainly not seek selectively to pick pieces of evidence which support an already agreed policy, or even commission research in order to produce a justification for policy: so-called “policy-based evidence making” (see paragraphs 95–6). Where there is an absence of evidence, or even when the Government is knowingly contradicting the evidence—maybe for very good reason—this should be openly acknowledged.
Paragraph 89, House of Commons Science and Technology Committee: Scientific Advice, Risk and Evidence Based Policy Making” (quote used in the Wikipedia article)

It is kinda sad. There was a case of this in Denmark some years ago. A researcher had been told to investigate whether some policy wud work, and after some time she discovered that it didn’t. The government then ignored her and the report and put the policy in motion anyway. The making of the report was apparently just a case of what the quote above is about. Unfortunately, i don’t recall what the topic was besides that it had to do with justice and harder sentencing, and the researcher was a woman.

ETA: Apparently, that was enough for me to find it via Google.


A good comparison between the two dialects. Since i’m planning to propose another spelling reform for english, this comes very much in handy. There are already two reform proposals to my knowledge.

The first of them (New Spelling, see my review of it here) goes very far towards a perfect fonemic system with 1 to 1 correspondence between symbol and foneme. “symbol” here includes digrafs which are necessary to avoid diacritical signs.

The second of them (Cut Spelling) doesn’t go quite as far, but still goes pretty far, making it kinda difficult to read things written in it. It does so mostly by getting rid of silent or otherwise unnecessary letters.

And my proposed proposal? What makes it different? My idea is that a minimalist proposal is missing, one that exploits variation that already exists within the language (broad sense, includes english used different places) to guide the language in the right direction. So, i plan on taking a look at lists of commonly mispelled (or is it misspelled?) words and see if any of the variations are better ways to spell the respective words. So, i will focus on things such as youu, Ii.

The differences between american and british spellings are almost always such that the american ones are better. This is no surprise since they were explicitly made for that reason by Noah Webster, who supported reforms of english. Unfortunately, he didn’t get to put all his ideas into effect. Otherwise, american english wud have looked very much different from british english now. See his essay on the subject dating to the 1700s!



A modal fallacy in linguistics

I’m writing this piece as i have gotten rather tired of explaining this point over and over. Writing an article about it saves me time.

The form of reasoning goes something like this:

1. This person uses some other spelling than the standard one for a word.
Therefore, 2. This person does not know how to spell the word.

It shud be relatively easy to see that this does not follow. Obviously, if one is familiar with spelling reform ideas, then that makes for easy counter-examples. But even people who have not thought/read about spelling reforms shud be somewhat familiar with the use of non-standard spellings in their native language. For instance, when speed is important, people may use alternative spellings becus they are shorter, such as y for yes. Altho someone might see this as using abbreviations and not non-standard spellings. It can be rather difficult to distinguish between the two. Is becus an abbreviation that doesn’t count? How about ’cus?

Another realm of counter-examples is when people deliberately ‘misspell’ a word for some other purpose, e.g. humor (making a pun), or to signal dialect (writing aks instead of ask), or murika instead of America thereby noting that the word is pronounced like that by many americans. Clearly, many people who do these things are aware of the standard spelling.

As an inductive inference?

In the above, i assumed that the argument was deductive, as in, the conclusion was supposed to follow with necessity by the reasoner. However, one might think of it as a probabilistic inference. Does it fare much better this way? Sort of. Certainly, sometimes people do try to write the standard spelling of some word, but for some reason write something else. This can be for many reasons: hitting the wrong key on the keyboard, being distracted at the moment of typing (which typically results in typing the word one just heard), or just genuinely making a mistake (genuinely making a mistake! :D) becus one was wrong about how the word is spelled in the standard spelling.

Generally, tho, there are some patterns that one can use to make better guesses at whether people really did make a mistake becus they didn’t know the correct spelling, or something else is the case. There are lists of commonly misspelled words (example), if a person used one of the common but nonstandard spellings, then it increases the probability that it was a mistake. How does one spot typos? Usually, the character that is part of the nonstandard pattern is located near the intended one. This produces things like I luke you instead of I like you. Finally, the function words of a language are rarely misspelled becus they occur in high frequency. For that exact reason they also have the poorest spellings. If someone uses nonstandard spellings for such words, then that increases the probability that it is on purpose. Examples of this are words like could should would you I which might be written cud shud wud u i for various reasons.
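The keyboard-adjacency heuristic can be sketched in a few lines (my own sketch; it assumes a standard US QWERTY layout and only handles single-letter substitutions, and the function names are invented):

```python
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_POS = {c: (r, i) for r, row in enumerate(QWERTY_ROWS) for i, c in enumerate(row)}

def adjacent(a, b):
    """True if two letter keys neighbour each other on the layout."""
    (r1, c1), (r2, c2) = KEY_POS[a], KEY_POS[b]
    return a != b and abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1

def looks_like_slip(written, intended):
    """A single substitution by a neighbouring key suggests a typo, not ignorance."""
    if len(written) != len(intended):
        return False
    diffs = [(a, b) for a, b in zip(written, intended) if a != b]
    return len(diffs) == 1 and adjacent(*diffs[0])

print(looks_like_slip("luke", "like"))  # → True  (u sits next to i)
print(looks_like_slip("wud", "would"))  # → False (deliberate respelling)
```

This captures exactly the asymmetry above: luke for like patterns as a slip, while wud for would does not.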

Under which conditions wud the inference actually work deductively?

Perhaps if one added some extra premises then the inference wud work. Any candidates? Yes. Adding something like: Every person is always trying to use the standard spelling for every word. Implausible? Very! And it isn’t even enuf. There remains the possibilities of being distracted and hitting the wrong keys, and perhaps some other things i haven’t thought of.

Is it really a modal fallacy?


Modal logic is that branch of logic which studies logical relations involving modalities. Modalities are ways, so to speak, in which propositions can be true or false. The most commonly studied modalities are necessity and possibility, which are modalities because some propositions are necessarily true/false and others are possibly true/false. (source)

Name for the fallacy?

Can’t think of anything good. It shud be short and relevant. Things like the anti language reformist fallacy is not short but at least it’s descriptive.

Thoughts about: An Introduction to Language (Fromkin et al)

Victoria Fromkin, Robert Rodman, Nina Hyams – An Introduction to Language

I thought i better read a linguistics textbook before i start studying it formally. Who wud want to look like a noob? ;)

I have not read any other textbook on this subject, but i think it was a fairly typical okish textbook. Many of the faults with it are mentioned below in this long ‘review’.

Chapter 1


In the Renaissance a new middle class emerged who wanted their children
to speak the dialect of the “upper” classes. This desire led to the publication of
many prescriptive grammars. In 1762 Bishop Robert Lowth wrote A Short Intro-
duction to English Grammar with Critical Notes. Lowth prescribed a number
of new rules for English, many of them influenced by his personal taste. Before
the publication of his grammar, practically everyone—upper-class, middle-class,
and lower-class—said I don’t have none and You was wrong about that. Lowth,
however, decided that “two negatives make a positive” and therefore one should
say I don’t have any; and that even when you is singular it should be followed by
the plural were. Many of these prescriptive rules were based on Latin grammar
and made little sense for English. Because Lowth was influential and because
the rising new class wanted to speak “properly,” many of these new rules were
legislated into English grammar, at least for the prestige dialect—that variety of
the language spoken by people in positions of power.
The view that dialects that regularly use double negatives are inferior can-
not be justified if one looks at the standard dialects of other languages in the
world. Romance languages, for example, use double negatives, as the following
examples from French and Italian show:

French: Je ne veux parler avec personne.
I not want speak with no-one.

Italian: Non voglio parlare con nessuno.
not I-want speak with no-one.

English translation: “I don’t want to speak with anyone.”

Lowth seems to have done a good thing with his reasoning, which was obviously inspired by math: multiplying two negatives does give a positive (-1*-1=+1). The real reason is logic, specifically predicate logic, which hadn’t been invented in his time (i.e., the 1700s).

Formalizing the AAE sentence “I don’t have none” yields something like this: ¬∃x¬Hix, i.e., it is not the case that there is something such that i dont have it. This is equivalent with: ∀xHix, i.e., for any thing, i have that thing [i.e. i have everything]. It may seem that with this remark im begging the question, but im not: the formalization is simply closer to the natural language, which is always a good thing.
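The equivalence appealed to here is just the standard quantifier-negation duality; as a sketch in LaTeX notation:

```latex
% Pushing the outer negation through the existential quantifier:
\neg \exists x\, \neg Hix
  \;\equiv\; \forall x\, \neg\neg Hix
  \;\equiv\; \forall x\, Hix
```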

Furthermore, his rule made the language simpler, as one no longer had to needlessly inflect the frase “anyone” into its negative form “no one”. Simpler languages are better if they have the same expressive power, and doing away with a needless inflection is good per definition: it makes the language simpler without losing expressive power.

He was wrong about the thing with “you was”. It wud have been nice if it had stayed that way. Then english cud have begun moving towards the simplicity of verb conjugation in scandinavian.

When we say in later chapters that a sentence is grammatical we mean that it
conforms to the rules of the mental grammar (as described by the linguist); when
we say that it is ungrammatical, we mean it deviates from the rules in some way.
If, however, we posit a rule for English that does not agree with your intuitions
as a speaker, then the grammar we are describing differs in some way from the
mental grammar that represents your linguistic competence; that is, your lan-
guage is not the one described. No language or variety of a language (called a
dialect) is superior to any other in a linguistic sense. Every grammar is equally
complex, logical, and capable of producing an infinite set of sentences to express
any thought. If something can be expressed in one language or one dialect, it
can be expressed in any other language or dialect. It might involve different
means and different words, but it can be expressed. We will have more to say
about dialects in chapter 10. This is true as well for languages of technologically
underdeveloped cultures. The grammars of these languages are not primitive or
ill formed in any way. They have all the richness and complexity of the gram-
mars of languages spoken in technologically advanced cultures.

Stupid relativism. Of course some dialects and languages are superior to others! The awful german grammar system is much inferior to the simpler scandinavian systems or the english system. It is more difficult to say which of those systems is superior to which. English has gotten rid of grammatical gender (good!) but retains pointless verb conjugations (bad!). In scandinavian there are grammatical genders (bad, but only 2, not 3 as in german) but much less pointless verb conjugation (good!).

Why do the authors write this relativist nonsense? They dislike language purists:

Today our bookstores are populated with books by language purists attempt-
ing to “save the English language.” They criticize those who use  enormity to
mean “enormous” instead of “monstrously evil.” But languages change in the
course of time and words change meaning. Language change is a natural pro-
cess, as we discuss in chapter 11. Over time enormity was used more and more
in the media to mean “enormous,” and we predict that now that President
Barack Obama has used it that way (in his victory speech of November 4, 2008),
that usage will gain acceptance. Still, the “saviors” of the English language will
never disappear. They will continue to blame television, the schools, and even
the National Council of Teachers of English for failing to preserve the standard
language, and are likely to continue to dis (oops, we mean disparage) anyone
who suggests that African American English (AAE)4 and other dialects are via-
ble, complete languages.
In truth, human languages are without exception fully expressive, complete,
and logical, as much as they were two hundred or two thousand years ago.
Hopefully (another frowned-upon usage), this book will convince you that all
languages and dialects are rule-governed, whether spoken by rich or poor, pow-
erful or weak, learned or illiterate. Grammars and usages of particular groups
in society may be dominant for social and political reasons, but from a linguistic
(scientific) perspective they are neither superior nor inferior to the grammars
and usages of less prestigious members of society.

They are right to be annoyed at the purists; they are wrong to completely abandon prescriptive grammar because of it. (Baby, bathwater.)


To hold that animals communicate by systems qualitatively different from
human language systems is not to claim human superiority. Humans are not
inferior to the one-celled amoeba because they cannot reproduce by splitting
in two; they are just different sexually. They are not inferior to hunting dogs,
whose sense of smell is far better than that of human animals. As we will discuss
in the next chapter, the human language ability is rooted in the human brain,
just as the communication systems of other species are determined by their bio-
logical structure. All the studies of animal communication systems, including
those of primates, provide evidence for Descartes’ distinction between other ani-
mal communication systems and the linguistic creative ability possessed by the
human animal.

More relativism. So, humans are not inferior to dogs with regard to smelling… they are just… olfactorily challenged?

The thing with reproduction is harder. Asexual and (bi)sexual reproduction both have some advantages and disadvantages. Cellular division wud obviously not work for humans (we are too complex), but asexual reproduction might work somewhat. We get to try it out soon when we start cloning people. Im looking forward to when we start digging up the graves of past geniuses to make clones of them, i.e., harvest some DNA and insert it into an egg, and put that egg into a woman.


In our understanding of the world we are certainly not “at the mercy of what-
ever language we speak,” as Sapir suggested. However, we may ask whether the
language we speak influences our cognition in some way. In the domain of color
categorization, for example, it has been shown that if a language lacks a word
for red, say, then it’s harder for speakers to reidentify red objects. In other words,
having a label seems to make it easier to store or access information in memory.
Similarly, experiments show that Russian speakers are better at discriminating
light blue (goluboy) and dark blue (siniy) objects than English speakers, whose
language does not make a lexical distinction between these categories. These
results show that words can influence simple perceptual tasks in the domain
of color discrimination. Upon reflection, this may not be a surprising finding.
Colors exist on a continuum, and the way we segment into “different” colors
happens at arbitrary points along this spectrum.
Because there is no physical
motivation for these divisions, this may be the kind of situation where language
could show an effect.

But this is simply not true. The segmentations are not at all arbitrary. It is strange that the authors claim this, as they just reviewed information from a language that segments colors into two categories: light and dark colors. These are not arbitrary categories. I learned about this from Lakoff’s Women, Fire, and Dangerous Things (which is hosted somewhere on my site), but see also:


Chapter 2

Additional evidence regarding hemispheric specialization is drawn from Japa-
nese readers. The Japanese language has two main writing systems. One system,
kana, is based on the sound system of the language; each symbol corresponds to
a syllable. The other system, kanji, is ideographic; each symbol corresponds to
a word. (More about this in chapter 12 on writing systems.) Kanji is not based
on the sounds of the language. Japanese people with left-hemisphere damage
are impaired in their ability to read kana, whereas people with right-hemisphere
damage are impaired in their ability to read kanji. Also, experiments with unim-
paired Japanese readers show that the right hemisphere is better and faster than
the left hemisphere at reading kanji, and vice versa.

This is pretty cool! Even better, it fits with the data from the last book i read:

Visual memory is not normally tested in intelligence tests. There have been four studies of the
visual memory of the Japanese, the results of which are summarized in Table 10.7. Row 1
gives a Japanese IQ of 107 for 5-10-year-olds on the MFFT calculated from error scores com-
pared with an American sample numbering 2,676. The MFFT consists of the presentation of
drawings of a series of objects, e.g., a boat, hen, etc. that have to be matched to an identical
drawing among several that are closely similar. The task entails the memorization of the de-
tails of the drawings in order to find the perfect match. Performance on the task correlates
0.38 with the performance scale of the WISC (Plomin and Buss, 1973), so that it is a weak
test of visualization ability and general intelligence as well as a test of visual memory. Row 2
gives a visual memory IQ of 105 for ethnic  Japanese Americans compared with American
Europeans on two tests of visual memory consisting of the presentation of 20 objects for 25
seconds and then removed, and the task was to remember and rearrange their positions. Row 3
shows a visual memory IQ of 110 obtained by comparing a sample of Japanese high school
and university students with a sample of 52 European students at University College, Dublin.
Row 4 shows a visual memory IQ of 113 for the visual reproduction subtests of the Wechsler
Memory Scale-Revised obtained from the Japanese standardization of the test compared with
the American standardization sample. The test involves the drawing from memory of geomet-
ric designs presented for 10 seconds. The authors suggest that the explanation for the Japanese
superiority may be that Japanese children learn kanji, the Japanese idiographic script, and this
develops visual memory capacity. However, this hypothesis was apparently disproved by the
Flaherty and Connolly study (1996) whose results are shown in row 2. Some of the ethnic
Japanese American participants had a knowledge of kanji, while others did not, and there was
no difference in visual memory between those who knew and those who did not know kanji,
disproving the theory that the advantage of East Asians on visualization tasks arises from their
practice on visualizing idiographic scripts. (Richard Lynn, Race differences in intelligence, p. 94)

It fits. Why else wud those people choose a very visual writing system instead of a more sound (i.e. verbal) focused one? Tests also show that east asians are worse at verbal tasks. This makes perfect sense with their writing system.


Chapter 3

In the foregoing dialogue, Humpty Dumpty is well aware that the prefix un-
means “not,” as further shown in the following pairs of words:
A —————– B
desirable —— undesirable
likely ———- unlikely
inspired ——- uninspired
happy ——— unhappy
developed—– undeveloped
sophisticated – unsophisticated

Thousands of English adjectives begin with un-. If we assume that the most
basic unit of meaning is the word, what do we say about parts of words like
un-, which has a fixed meaning? In all the words in the B column, un- means
the same thing—“not.” Undesirable means “not desirable,” unlikely means “not
likely,” and so on. All the words in column B consist of at least two meaningful
units: un + desirable, un + likely, un + inspired, and so on.

The authors are again wrong. The un prefix does not mean “not” in these examples! An undesirable person is more than just someone that isnt desirable; it is someone who is, well, positively undesirable, someone one wants to avoid. Similarly for likely/unlikely. When one says that something is unlikely, one is saying more than just that it is not likely. One is saying that it has a low probability of happening. The difference here is that the event cud be neither likely nor unlikely, i.e. have a probability around .5 (or whatever, depends on context). An unhappy person is someone who is sad or depressed, not just someone who isnt happy. A neutral person is neither happy nor unhappy. An example of a word where the un prefix has the simple meaning of negation is something like unmarried, which really does mean only “not married”. The un prefix in many if not all of their examples has the function of reversing the quality in question, not negating it.
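The point can be put in terms of contrary versus contradictory opposition: the un- of these examples is stronger than plain negation. As a sketch:

```latex
% 'unhappy' entails 'not happy', but not the other way around:
\text{unhappy}(x) \rightarrow \neg\,\text{happy}(x)
\qquad\text{but}\qquad
\neg\,\text{happy}(x) \not\rightarrow \text{unhappy}(x)
```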

I have pointed this out before, but it was in a forum post on FRDB where i am now banned and therefore cannot search using the built-in search tool.

Chapter 4

Whether a verb takes a complement or not depends on the properties of the
verb. For example, the verb find is a transitive verb. A transitive verb requires an
NP complement (direct object), as in The boy found the ball, but not *The boy
found, or *The boy found in the house. Some verbs like eat are optionally tran-
sitive. John ate and John ate a sandwich are both grammatical.
Verbs select different kinds of complements. For example, verbs like put and
give take both an NP and a PP complement, but cannot occur with either alone:

Sam put the milk in the refrigerator.
*Sam put the milk.
Robert gave the film to his client.
*Robert gave to his client.

Sleep is an intransitive verb; it cannot take an NP complement.
Michael slept.
*Michael slept a fish.

What about “Sam puts out”? (see meaning #6) That lacks an NP and is grammatical. And how about “Robert gave a talk”? (see meaning #2) That lacks a PP and is grammatical. It seems the authors shud have chosen some better example verbs.


Chapter 5

For most sentences it does not make sense to say that they are always true
or always false. Rather, they are true or false in a given situation, as we pre-
viously saw with  Jack swims. But a restricted number of sentences are indeed
always true regardless of the circumstances. They are called  tautologies. (The
term analytic is also used for such sentences.) Examples of tautologies are sen-
tences like Circles are round or A person who is single is not married. Their
truth is guaranteed solely by the meaning of their parts and the way they are
put together. Similarly, some sentences are always false. These are called contra-
dictions. Examples of contradictions are sentences like Circles are square or A
bachelor is married.

Not entirely correct. Analytic sentences are noncontingent sentences, not just noncontingently true sentences.

Later on they write:

The following sentences are either tautologies (analytic), contradictions, or
situationally true or false.

Indicating that they think analytic refers only to noncontingently true propositions/sentences. Also, they shud perhaps have studied some more filosofy, so that they wudn’t have to rely on the homemade term situationally true when there already exists a standard term for this, namely contingently true.


Much of what we know is deduced from what people say alongside our obser-
vations of the world. As we can deduce from the quotation, Sherlock Holmes
took deduction to the ultimate degree. Often, deductions can be made based on
language alone.

Sadly, the authors engage in the common practice of referring to what Sherlock Holmes did as “deduction”. It wasn’t. It was mostly abduction, aka inference to the best explanation.


Generally, entailment goes only in one direction. So while the sentence Jack
swims beautifully entails Jack swims, the reverse is not true. Knowing merely that
Jack swims is true does not necessitate the truth of Jack swims beautifully. Jack
could be a poor swimmer. On the other hand, negating both sentences reverses
the entailment. Jack doesn’t swim entails Jack doesn’t swim beautifully.

They are not negating it properly. They are using what i before called short-form negation. Compare:

“Jack doesn’t swim”: (∃!x)(x = j) ∧ ¬Sj
“It is not the case that Jack swims”: ¬((∃!x)(x = j) ∧ Sj)

These two do not mean the same, strictly speaking, and the distinction does sometimes matter. The first entails that Jack exists; the second does not. This matters when one is talking about sentences such as “The current king of France is bald”. I have explained this before.
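The difference can be made vivid by evaluating both readings over a toy domain (dropping the uniqueness clause of ∃! for brevity; the domain and names are my own illustration):

```python
# Toy model evaluation illustrating the scope distinction between
# "Jack doesn't swim" and "It is not the case that Jack swims".

def swims(x):
    return x in {"alice"}          # in this model, only Alice swims

def reading_1(domain):
    """'Jack doesn't swim': Jack exists AND doesn't swim."""
    return any(x == "jack" and not swims(x) for x in domain)

def reading_2(domain):
    """'It is not the case that Jack swims': no existence claim."""
    return not any(x == "jack" and swims(x) for x in domain)

# In a domain WITHOUT Jack, the two readings come apart:
no_jack = {"alice", "bob"}
print(reading_1(no_jack))  # False: reading 1 entails Jack exists
print(reading_2(no_jack))  # True: vacuously, no swimming Jack here
```

When Jack is in the domain and doesn’t swim, both readings agree; they diverge exactly in the empty-name cases the “king of France” example trades on.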


The notion of entailment can be used to reveal knowledge that we have about
other meaning relations. For example, omitting tautologies and contradictions,
two sentences are  synonymous (or paraphrases) if they are both true or both
false with respect to the same situations. Sentences like Jack put off the meeting
and Jack postponed the meeting are synonymous, because when one is true the
other must be true; and when one is false the other must also be false. We can
describe this pattern in a more concise way by using the notion of entailment:
Two sentences are synonymous if they entail each other.

The authors conflate ‘meaning the same’ with ‘having the same truth-value’. These are not the same thing. Some sentences always have the same truth-value (they belong to the same equivalence class) but do not mean the same. For example:

“Canada is north of the US”
“The US is south of Canada”

These two don’t mean the same, but they belong to the same equivalence class. The relation among the entities is reversed in the other sentence i.e. “… is north of …” and “… is south of …” do not mean the same. They mean the opposite of each other.

See Swartz and Bradley (1979:35ff) for more examples and a more thoro discussion.


The semantic theory of sentence meaning that we just sketched is not the
only possible one, and it is also incomplete, as shown by the paradoxical sen-
tence This sentence is false. The sentence cannot be true, else it’s false; it cannot
be false, else it’s true. Therefore it has no truth value, though it certainly has
meaning. This notwithstanding, compositional truth-conditional semantics has
proven to be an extremely powerful and useful tool for investigating the seman-
tic properties of natural languages.

Obviously, i’m not going to let this one fly! :) Things are not nearly as simple as they write. I will just point to my friend Benjamin Burgis’s recent PhD dissertation about the liar paradox and other related problems.

One point tho. Note the authors’ strange inference to “Therefore, it has no truth value”.


In the previous sections we saw that semantic rules compute sentence meaning
compositionally based on the meanings of words and the syntactic structure that
contains them. There are, however, interesting cases in which compositionality
breaks down, either because there is a problem with words or with the semantic
rules. If one or more words in a sentence do not have a meaning, then obviously
we will not be able to compute a meaning for the entire sentence.
Even if the individual words have meaning but cannot be combined together as
required by the syntactic structure and related semantic rules, we will also not
get to a meaning. We refer to these situations as semantic anomaly. Alternatively,
it might require a lot of creativity and imagination to derive a meaning. This is
what happens in metaphors. Finally, some expressions—called idioms—have a
fixed meaning, that is, a meaning that is not compositional. Applying composi-
tional rules to idioms gives rise to funny or inappropriate meanings.

A bit of clarification is needed here. They are right if they mean that the word is used in the sentence. They are wrong if they mean that the word is merely mentioned in the sentence. The unclear frasing “in a sentence” won’t do here. See


The semantic properties of words determine what other words they can be com-
bined with. A sentence widely used by linguists that we encountered in chapter
4 illustrates this fact:

Colorless green ideas sleep furiously.

The sentence obeys all the syntactic rules of English. The subject is  colorless
green ideas and the predicate is sleep furiously. It has the same syntactic struc-
ture as the sentence

Dark green leaves rustle furiously.

but there is obviously something semantically wrong with the sentence. The
meaning of  colorless  includes the semantic feature “without color,” but it is
combined with the adjective green, which has the feature “green in color.” How
can something be both “without color” and “green in color”? Other semantic
violations occur in the sentence. Such sentences are semantically anomalous.

The authors seem to be saying that all sentences that involve contradictions are semantically anomalous. But that is not true, if by that they mean that such sentences are meaningless. Self-contradictory sentences are meaningful alright. Otherwise, their negations (which are necessarily true) wud be meaningless too. A grammatically correctly placed negation can never turn a meaningful sentence into a meaningless one, or vice versa.

I have discussed this before. See this essay, and this post (by the good doctor Burgis) and the comments section below.

The authors however do mention later that:

The well-known colorless green ideas sleep furiously is semantically
anomalous because ideas (colorless or not) are not animate.

So, i’m not sure what they think. Perhaps they think that the chomsky sentence is anomalous for both reasons, i.e. 1) that it is self-contradictory, and 2) that it involves a category error with the verb sleep and the subject ideas.


Another part of the meaning of the words baby and child is that they are
“young.” (We will continue to indicate words by using italics and semantic fea-
tures by double quotes.) The word father has the properties “male” and “adult”
as do uncle and bachelor.

(I have restored the authors italicization in the above quote)

First, it bothers me when authors want to put a given word in quotation marks but then include something that doesn’t belong in there with it, typically a comma or a dot. Very annoying!

Second, they are wrong about these semantic features. The word father has the features “parent” and “male”. It has no feature about adulthood, altho adulthood is often the case. There is nothing semantically strange or anomalous about calling a person who is 15 years old a father if he has a child. Similar things hold for their other example, uncle.


Generally, the count/mass distinction corresponds to the difference between
discrete objects and homogeneous substances. But it would be incorrect to say
that this distinction is grounded in human perception, because different lan-
guages may treat the same object differently. For example, in English the words
hair, furniture, and spaghetti are mass nouns. We say Some hair is curly, Much
furniture is poorly made, John loves spaghetti. In Italian, however, these words
are count nouns, as illustrated in the following sentences:

Ivano ha mangiato molti spaghetti ieri sera.
Ivano ate many spaghettis last evening.
Piero ha comprato un mobile.
Piero bought a furniture.
Luisella ha pettinato i suoi capelli.
Luisella combed her hairs.

We would have to assume a radical form of linguistic determinism (remem-
ber the Sapir-Whorf hypothesis from chapter 1) to say that Italian and English
speakers have different perceptions of hair, furniture, and spaghetti. It is more
reasonable to assume that languages can differ to some extent in the semantic
features they assign to words with the same referent, somewhat independently
of the way they conceptualize that referent. Even within a particular language
we can have different words—count and mass—to describe the same object or
substance. For example, in English we have shoes (count) and footwear (mass),
coins (count) and change (mass).

But what about a nonperfect correlation? The data mentioned above do not disprove the existence of such a thing. It wud be interesting to do a cross-language study to see if there is a correlation. I wud be very surprised if there was no such correlation. I will bet money that something like this is the case: the more discrete an entity is, the higher the chance that the word for it will be a countable noun. It is not surprising that their examples involve things that almost always, but not always, come in bundles. But i’d wager that no language has car as a noncountable noun. The entity is too discrete for that to make sense. Likewise, i’d be surprised if any language had water as a countable noun. Generally, words for fluids are probably always (or nearly so) noncountable nouns, even if the words for the entities that these fluids are made of are countable nouns, e.g. a molecule.


In all languages, the reference of certain words and expressions relies entirely
on the situational context of the utterance, and can only be understood in light
of these circumstances. This aspect of pragmatics is called deixis (pronounced
“dike-sis”). Pronouns are deictic. Their reference (or lack of same) is ultimately
context dependent.
Expressions such as

this person
that man
these women
those children

are also deictic, because they require situational information for the listener to
make a referential connection and understand what is meant. These examples
illustrate person deixis. They also show that the demonstrative articles like this
and that are deictic.
We also have  time deixis and place deixis. The following examples are all
deictic expressions of time:

now then tomorrow
this time that time seven days ago
two weeks from now last week next April

In filosofy, these are called indexicals. Or so i thought; apparently, there is some difference according to Wikipedia. Deixis seems to be a bit broader.


Implicatures are different than entailments. An entailment cannot be can-
celled; it is logically necessary. Implicatures are also different than presupposi-
tions. They are the possible consequences of utterances in their context, whereas
presuppositions are situations that must exist for utterances to be appropriate in
context, in other words, to obey Grice’s Maxims. Further world knowledge may
cancel an implicature, but the utterances that led to it remain sensible and well-
formed, whereas further world knowledge that negates a presupposition—oh,
the team didn’t lose after all—renders the entire utterance inappropriate and in
violation of Grice’s Maxims.

To be fair, they only talked about deductive inferences, or entailment, before. But some entailments may be ‘cancelled’ by further information, or premises as they are called in logic. Logics where new information can make an inference worse or better are called non-monotonic.
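A minimal sketch of what non-monotonicity amounts to, using the classic birds-fly default (the example is mine, not the book’s):

```python
# Minimal sketch of non-monotonic (defeasible) inference: a default
# conclusion that gets retracted when new premises arrive.

def flies(facts):
    """Default rule: birds fly, unless known to be a penguin."""
    return "bird" in facts and "penguin" not in facts

print(flies({"bird"}))             # default conclusion holds
print(flies({"bird", "penguin"}))  # new information cancels it
```

In a monotonic logic, adding premises can never remove a conclusion; here the extra premise cancels it, which parallels how an implicature, unlike an entailment, can be cancelled.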


Chapter 6

Throughout several centuries English scholars have advocated spelling
reform. George Bernard Shaw complained that spelling was so inconsistent that
fish could be spelled ghoti—gh as in tough, o as in women, and ti as in nation.
Nonetheless, spelling reformers failed to change our spelling habits, and it took
phoneticians to invent an alphabet that absolutely guaranteed a one sound–one
symbol correspondence. There could be no other way to study the sounds of all
human languages scientifically.

It’s not their fault tho. Blame the politicians. As i have repeatedly shown, there are various good ways to reform english spelling. In fact, i’ve begun working on my own ultra minimalistic reform proposal. More on that later. :)


The sounds of all languages fall into two classes: consonants and vowels. Con-
sonants are produced with some restriction or closure in the vocal tract that
impedes the flow of air from the lungs. In phonetics, the terms consonant and
vowel refer to types of sounds, not to the letters that represent them. In speaking
of the alphabet, we may call “a” a vowel and “c” a consonant, but that means
only that we use the letter “a” to represent vowel sounds and the letter “c” to
represent consonant sounds.

Indeed. I recall that when i invented Lyddansk (my danish reform proposal) i had to make this distinction. I called them vowel-letters and consonant-letters (translated).


5.  The following are all English words written in a broad phonetic transcrip-
tion (thus omitting details such as nasalization and aspiration). Write the
words using normal English orthography.
a. [hit]
b. [strok]
c. [fez]
d. [ton]
e. [boni]
f. [skrim]
g. [frut]
h. [pritʃər]
i. [krak]
j. [baks]
k. [θæŋks]
l. [wɛnzde]
m. [krɔld]
n. [kantʃiɛntʃəs]
o. [parləmɛntæriən]
p. [kwəbɛk]
q. [pitsə]
r. [bərak obamə]
s. [dʒɔn məken]
t. [tu θaʊzənd ænd et]

I really, really dislike their strange choice of fonetical symbols. They correspond neither to major online dictionaries nor to the OED. Especially confusing is using /e/ for both /e/ and /eɪ/, as in eight, which they write as /et/ instead of the normal /eɪt/ found in pretty much all dictionaries (example: 1, 2, and the OED gives the same pronunciation).

To those that are wondering, here is what i think the correct answers are:

a. [hit] hit
b. [strok] stroke, but their symbolism is confusing: they use /o/ to mean IPA /əʊ/
c. [fez] phase, i.e. /feɪz/ (face wud be [fes])
d. [ton] it is tempting to guess ton until one thinks of their strange use of /o/ to mean /əʊ/; the correct word must be tone /təʊn/
e. [boni] bunny is tempting, but it seems to be bony /bəʊni/
f. [skrim] scream, given that they use /i/ to mean /iː/ (cf. preacher below)
g. [frut] fruit, tho they fail to indicate that the vowel is long, i.e. /fruːt/
h. [pritʃər] preacher
i. [krak] crock (crack wud be [kræk], cf. [θæŋks] below)
j. [baks] backs is tempting, but it appears to be box, i.e. /bɑks/ (it cannot be barks, as the book transcribes its r’s, cf. [parləmɛntæriən])
k. [θæŋks] thanks
l. [wɛnzde] another strange one, i think it is wednesday, i.e. /wɛnzdeɪ/
m. [krɔld] crawled
n. [kantʃiɛntʃəs] conscientious? i.e. /kɒnʃɪˈɛnʃəs/
o. [parləmɛntæriən] parliamentarian
p. [kwəbɛk] Quebec
q. [pitsə] pizza
r. [bərak obamə] Barack Obama
s. [dʒɔn məken] John McCain
t. [tu θaʊzənd ænd et] two thousand and eight, with eight again written without the diphthong; it shud be /eɪt/.

In general, their introduction to fonetics is bad when it disagrees with pretty much all dictionaries. Learn fonetics somewhere else. I learned it from Wikipedia and using lots of dictionaries.
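For the curious, the complaint can be summarized as a symbol mapping. This is my own reconstruction from the examples above; it covers only a few vowels and ignores complications such as word-final /i/:

```python
# A sketch of a converter from the textbook's broad transcription to
# the dictionary-style (British) IPA used in the answers above.
# The mapping is my own reconstruction and is deliberately partial.

BROAD_TO_IPA = {
    "e": "eɪ",   # their /e/ is dictionary /eɪ/ (as in 'eight')
    "o": "əʊ",   # their /o/ is dictionary /əʊ/ (as in 'tone')
    "i": "iː",   # their /i/ is dictionary /iː/ (as in 'preacher')
    "u": "uː",   # their /u/ is dictionary /uː/ (as in 'fruit')
}

def to_dictionary_ipa(broad):
    """Replace each mapped symbol; leave everything else untouched."""
    return "".join(BROAD_TO_IPA.get(ch, ch) for ch in broad)

print(to_dictionary_ipa("et"))    # eɪt  ('eight')
print(to_dictionary_ipa("ton"))   # təʊn ('tone')
```

A real converter wud of course need context-sensitive rules (stress, rhoticity, schwa vs /ɜ/), which is exactly why one-symbol-one-sound claims in the book are misleading.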


Chapter 7

Nothing interesting to note here.

Chapter 8

Some time after the age of one, the child begins to repeatedly use the same string
of sounds to mean the same thing. At this stage children realize that sounds are
related to meanings. They have produced their first true words. The age of the
child when this occurs varies and has nothing to do with the child’s intelligence.
(It is reported that Einstein did not start to speak until he was three or four
years old.)

It saddens me to see that a textbook with a chapter about children and learning spread this myth! It is not that hard to google it and discover it to be an urban myth. See:


[bərt]  “(Big) Bird”

Another annoying detail with their chosen fonetical symbols is that they fail to distinguish between schwa /ə/ which is an unstressed vowel, and the similar sounding but potentially stressed vowel /ɜ/. Again, they don’t use the same standards as used by dictionaries, which is annoying! But see: and


1.  Hans hat ein Buch gekauft. “Hans has a book bought.”
2.  Hans kauft ein Buch. “Hans is buying a book.”

I don’t get it. How can a linguistics textbook get the translation wrong? The correct translation of (2) is “Hans buys a book.”.


Another experimental technique, called the naming task, asks the subject to
read aloud a printed word. (A variant of the naming task is also used in stud-
ies of people with aphasia, who are asked to name the object shown in a pic-
ture.) Subjects read irregularly spelled words like dough and steak just slightly
more slowly than regularly spelled words like doe and stake, but still faster than
invented strings like cluff. This suggests that people can do two different things
in the naming task. They can look for the string in their mental lexicon, and if
they find it (i.e., if it is a real word), they can pronounce the stored phonologi-
cal representation for it. They can also “sound it out,” using their knowledge
of how certain letters or letter sequences (e.g., “gh,” “oe”) are most commonly
pronounced. The latter is obviously the only way to come up with a pronuncia-
tion for a nonexisting word.
The fact that irregularly spelled words are read more slowly than regularly
spelled real words suggests that the mind “notices” the irregularity. This may be
because the brain is trying to do two tasks—lexical look-up and sounding out
the word—in parallel in order to perform naming as fast as possible. When the
two approaches yield inconsistent results, a conflict arises that takes some time
to resolve.

 This is very interesting! I didn’t know that badly spelled words were read more slowly. That’s good news, or bad news, depending. :P It is good in that i may now have another argument for spelling reform: it makes people more efficient readers. It is also testable between populations+languages. Everything else equal, are people that read a well-spelled language faster readers than people that read a horribly spelled language (like english and danish)? That’s an interesting question actually. It sounds sufficiently simple and obvius that someone must have done the study. As for the bad news part, if they are right, it means i’m being inefficient becus i’m reading in a bad language. Worse, the entire world is being inefficient becus of its ‘choice’ of world language (i.e. english).
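The quoted two-route account is easy to turn into a toy race model. Everything below (the lexicon, the "regular" pronunciations, the millisecond costs) is invented by me purely for illustration; it is not the authors' model:

```python
# Toy sketch of the dual-route account: lexical look-up and "sounding out"
# run in parallel, and a conflict between them costs extra resolution time.
# All pronunciations and timings are invented for illustration only.
LEXICON = {"doe": "dəʊ", "dough": "dəʊ", "stake": "steɪk", "steak": "steɪk"}

# What the commonest letter-sound rules would yield (wrong for irregulars).
SOUNDED_OUT = {"doe": "dəʊ", "dough": "daʊ", "stake": "steɪk",
               "steak": "stiːk", "cluff": "klʌf"}

LOOKUP_MS, RULE_ONLY_MS, CONFLICT_MS = 500, 600, 40

def naming_time(word):
    stored = LEXICON.get(word)
    if stored is None:
        return RULE_ONLY_MS              # nonword: only the rule route answers
    if stored != SOUNDED_OUT[word]:
        return LOOKUP_MS + CONFLICT_MS   # routes disagree: irregular penalty
    return LOOKUP_MS
```

With these made-up numbers, naming_time("doe") < naming_time("steak") < naming_time("cluff"), reproducing the ordering the authors report: regular word, then irregular word, then invented string.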

Chapter 9

Some systems draw on formal logic for semantic representations. You put up
the switch would be represented in a function/argument form, which is its logi-
cal form:


where PUT UP is a “two-place predicate,” in the jargon of logicians, and the
arguments are YOU and THE SWITCH. The lexicon indicates the appropriate
relationships between the arguments of the predicate PUT UP.

I really, really dislike the term argument when used to mean the thing that one puts into functions or predicates. It is really a very, very bad choice of words for the context (logic). argument already has a rather precise meaning in that context. I prefer the term variable but there is another and better term that i prefer more, but i can’t seem to recall it right now.


 A keyword as general as bird may return far more information than could be
read in ten lifetimes if a thorough search of the Web occurs. (A search on the
day of this writing produced 200 million hits, compared to 122 million four
years prior.) [...]

I re-did the search. 1,100 million hits.


 Chapter 10

It is not always easy to decide whether the differences between two speech
communities reflect two dialects or two languages. Sometimes this rule-of-
thumb definition is used: When dialects become mutually unintelligible—when
the speakers of one dialect group can no longer understand the speakers of
another dialect group—these dialects become different languages.
However, this rule of thumb does not always jibe with how languages are
officially recognized, which is determined by political and social considerations.
For example, Danes speaking Danish and Norwegians speaking Norwegian and
Swedes speaking Swedish can converse with each other. Nevertheless, Danish
and Norwegian and Swedish are considered separate languages because they are
spoken in separate countries and because there are regular differences in their
grammars. Similarly, Hindi and Urdu are mutually intelligible “languages” spo-
ken in Pakistan and India, although the differences between them are not much
greater than those between the English spoken in America and the English spo-
ken in Australia.

Not citing any sources for such claims is bad. The mutual intelligibility is not that high between the scandinavian languages. It is much higher for written text between norwegian (bokmål) and danish. Etc. See the Wikipedia link.


English is the most widely spoken language in the world (as a first or second
language). It is the national language of several countries, including the United
States, large parts of Canada, the British Isles, Australia, and New Zealand. For
many years it was the official language in countries that were once colonies of
Britain, including India, Nigeria, Ghana, Kenya, and the other “anglophone”
countries of Africa. There are many other phonological differences in the vari-
ous dialects of English used around the globe.

This is certainly false. Look at Wikipedia. Mandarin is the most spoken native language. English is probably the most spoken non-native language.
ETA: But then later they write

The Sino-Tibetan family includes Mandarin, the most populous language in
the world, spoken by more than one billion Chinese. This family also includes
all of the Chinese languages, as well as Burmese and Tibetan.

So, i don’t know what they think.


Even though every language is a composite of dialects, many people talk and
think about a language as if it were a well-defined fixed system with various
dialects diverging from this norm. This is false, although it is a falsehood that is
widespread. One writer of books on language accused the editors of Webster’s
Third New International Dictionary, published in 1961, of confusing “to the
point of obliteration the older distinction between standard, substandard, collo-
quial, vulgar, and slang,” attributing to them the view that “good and bad, right
and wrong, correct and incorrect no longer exist.” In the next section we argue
that such criticisms are ill founded.

It’s time for the authors to again say negative things about language standardization, and promote a very relativistic view of languages and dialects. I will defend my views against their criticisms of such views.

I don’t know about a ‘fixed’ system; if they meant an unchanging system, then i ofc don’t agree that there is any unchanging system of standard english (or standard danish etc.). However, there is a kind of danish that is the most standard. It may be a good idea to speak as normal a version of a language as possible, becus this makes it the easiest for the listeners to understand what one is saying. The general idea is to avoid things that are peculiar to a small minority of the speakers of the relevant language. This includes everything: syntax, grammar, word choice, pronunciation, etc. Speaking a language in the most common way is speaking the standard version of that language, nothing else. It is actually possible that there is no regional dialect that speaks that way, but that doesn’t matter. A standard version of a language need not be a regional dialect.

A standard version of a language is also a necessity if one wants a relatively fonetic spelling system without lots of alternative forms. The idea is that one spells after the sound of the standard version of the language.


No dialect, however, is more expressive, less corrupt, more logical, more
complex, or more regular than any other dialect or language. They are sim-
ply different. More precisely, dialects represent different set of rules or lexical
items represented in the minds of its speakers. Any judgments, therefore, as to
the superiority or inferiority of a particular dialect or language are social judg-
ments, which have no linguistic or scientific basis.
To illustrate the arbitrariness of “standard usage,” consider the English r-drop
rule discussed earlier. Britain’s prestigious RP accent omits the r in words such
as “car,” “far,” and “barn.” Thus an r-less pronunciation is thought to be better
than the less prestigious rural dialects that maintain the r. However, r-drop in the
northeast United States is generally considered substandard, and the more pres-
tigious dialects preserve the r, though this was not true in the past when r-drop
was considered more prestigious. This shows that there is nothing inherently bet-
ter or worse about one pronunciation over another, but simply that one variant is
perceived of as better or worse depending on a variety of social factors.

I don’t care about the typical purist stuff like ‘corruption’, but they are certainly wrong that some dialects are not more complex or regular than others. I really don’t know what makes people make these claims when they are so obviously false. I’ll give a very brief example. Consider a language that has a verb. As it happens, this verb is irregular in one dialect and not so in another. If everything else is equal, then clearly the one dialect is more regular than the other (and less complex), and indeed, better.
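My verb example can be made concrete in a few lines of code. The two "dialects" below are invented for illustration (tho holp and clomb are real archaic strong past forms):

```python
# If the same verb is regular in one dialect and irregular in another, the two
# dialects differ measurably in regularity. The dialects here are invented;
# "holp" and "clomb" are real archaic strong past forms.
def regular_past(verb):
    return verb + "ed"

dialect_a = {"help": "helped", "climb": "climbed"}   # fully regular
dialect_b = {"help": "holp",   "climb": "clomb"}     # irregular forms

def irregularity(dialect):
    """How many past forms a learner must memorise as exceptions."""
    return sum(1 for verb, past in dialect.items()
               if past != regular_past(verb))
```

Here irregularity(dialect_a) is 0 and irregularity(dialect_b) is 2: everything else equal, dialect_a is the more regular (and less complex) one.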

Their illustration is strange. First they say that they want to illustrate it, but then end up concluding that their example “shows that there is nothing inherently better or worse about one pronunciation over another, but simply that one variant is perceived of as better or worse depending on a variety of social factors” which is either trivially true becus of the clause about “social factors” (such clauses are almost never explained, in typical sociology fashion), or false becus these differences matter. If the difference is such that other speakers of the language from other dialects fail to understand one, then that is indeed worse, since the purpose of language is generally to be able to communicate. Obviously, if one is not trying to communicate with everyone using the language, this point is irrelevant.


Constructions with multiple negatives akin to AAE He don’t know nothing are
commonly found in languages of the world, including French, Italian, and the
English of Chaucer, as illustrated in the epigraph from The Canterbury Tales. The
multiple negatives of AAE are governed by rules of syntax and are not illogical.

While perhaps not ‘illogical’, they are redundant and so increase the complexity of a language without adding any increased expressiveness. This is a bad thing.


The authors spend some time discussing various differences between african american english (AAE) and standard american english (SAE). Some of these differences have relevance to complexity and expressive power, but i’m not knowledgeable enuf to comment on all of their points.


The first—the whole-word approach—teaches children to recognize a vocab-
ulary of some fifty to one hundred words by rote learning, often by seeing the
words used repeatedly in a story, for example, Run, Spot, Run from the Dick
and Jane series well-known to people who learned to read in the 1950s. Other
words are acquired gradually. This approach does not teach children to “sound
out” words according to the individual sounds that make up the words. Rather,
it treats the written language as though it were a logographic system, such as
Chinese, in which a single written character corresponds to a whole word or
word root. In other words, the whole-word approach fails to take advantage
of the fact that English (and the writing systems of most literate societies) is
based on an alphabet, in which the symbols correspond to the individual sounds
(roughly phonemes) of the language. This is ironic because alphabetic writing
systems are the easiest to learn and are maximally efficient for transcribing any
human language. (my bolding)

So much for their language relativism.


Chapter 12

Another simplification is that the “dead ends”—languages that evolved and
died leaving no offspring—are not included. We have already mentioned Hittite
and Tocharian as two such Indo-European languages. The family tree also fails
to show several intermediate stages that must have existed in the evolution of
modern languages. Languages do not evolve abruptly, which is why comparisons
with the genealogical trees of biology have limited usefulness. Finally, the dia-
gram fails to show some Indo-European languages because of lack of space.

The authors give the impression that in biology, species do somehow evolve abruptly. But they do no such thing. The analogy works fine in that area. The main problem with the analogy is that languages can share ‘genes’ (words, etc.) between ‘species’, which generally does not happen in biology (the exception being horizontal gene transfer, as in bacteria).


 The term sound writing is sometimes used in place of alphabetic writing, but
it does not truly represent the principle involved in the use of alphabets. One-
sound ↔ one-letter is inefficient and unintuitive, because we do not need to
represent the [pʰ] in pit and the [p] in spit by two different letters. It is confusing
to represent nonphonemic differences in writing because the sounds are seldom
perceptible to speakers. Except for the phonetic alphabets, whose function is
to record the sounds of all languages for descriptive purposes, most, if not all,
alphabets have been devised on the phonemic principle.

This is a good observation. I hadn’t thought of that. I shud update my Lyddansk to fix the fonetic principle to the fonemic principle (in danish ofc). Another way of putting it in ordinary language is: one sound↔one symbol, but include only differences in sounds that are relevant.
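The fonemic principle in the quote can even be written down as a tiny program: collapse allofones to their fonemes before assigning letters. The allofone table below is a small illustrative fragment of my own, not a full analysis of english:

```python
# One letter per phoneme, not per phone: [pʰ] in "pit" and [p] in "spit" are
# allophones of /p/, so they share a letter. Tiny illustrative fragment only.
ALLOPHONE_TO_PHONEME = {"pʰ": "p", "tʰ": "t", "kʰ": "k"}

def phonemic(phones):
    """Collapse a phone sequence to the phonemes a spelling should encode."""
    return [ALLOPHONE_TO_PHONEME.get(ph, ph) for ph in phones]
```

Both ["pʰ", "ɪ", "t"] (pit) and ["s", "p", "ɪ", "t"] (spit) come out with the same "p", so no second letter is needed for the aspirated variant.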


If writing represented the spoken language perfectly, spelling reforms would
never have arisen. In chapter 6 we discussed some of the problems in the En  glish
orthographic system. These problems prompted George Bernard Shaw to observe

[I]t was as a reading and writing animal that Man achieved his human
eminence above those who are called beasts. Well, it is I and my like who
have to do the writing. I have done it professionally for the last sixty
years as well as it can be done with a hopelessly inadequate alphabet
devised centuries before the English language existed to record another
and very different language. Even this alphabet is reduced to absurdity
by a foolish orthography based on the notion that the business of spelling
is to represent the origin and history of a word instead of its sound and
meaning. Thus an intelligent child who is bidden to spell debt, and very
properly spells it d-e-t, is caned for not spelling it with a b because Julius
Caesar spelt the Latin word for it with a b.

The source of the quote is given as: Shaw, G. B. 1948. Preface to R. A. Wilson, The miraculous birth of language.

Anyway, this particular etymology is actually wrong too! There are many such false etymologies that people have based their spelling on. Very utterly foolish. Quoting Wikipedia:

From the 16th century onward, English writers who were scholars of Greek and Latin literature tried to link English words to their Graeco-Latin counterparts. They did this by adding silent letters to make the real or imagined links more obvious. Thus det became debt (to link it to Latin debitum), dout became doubt (to link it to Latin dubitare), sissors became scissors and sithe became scythe (as they were wrongly thought to come from Latin scindere), iland became island (as it was wrongly thought to come from Latin insula), ake became ache (as it was wrongly thought to come from Greek akhos), and so forth.[5][6]



Methods for discovering which language is the hardest to learn

Languages differ in how hard they are to learn; they are not just ‘different’ or some other relativistic nonsense. Aside from intuitive estimates, is there some systematic way to measure how hard a language is to learn? The answer is “yes”. The first time i talked about this with another person, i was amazed to hear that he thought it was impossible to rank languages in order of difficulty to learn, even conceptually. I can think of a number of ways to figure this out, some better than others.

Clarifying the question

But first it is a good idea to clarify exactly what we mean by “the hardest language to learn”. Are we talking about learning it as a native or a foreign language? I want to include both, but it is possible to separate them if one so wants.

Method 1: the National Virtual Translation Center-inspired method

The US collects data about how many weeks of training one needs to learn a foreign language well enough to work at an embassy in the country where it is spoken. The idea is simply that one can collect data like this for all other language combinations (or the ones we’re interested in, anyway), and see which languages often end up in the “takes a long time” category etc.

If they do not teach the language to a particular skill level but just give people a course that takes the same amount of time, one can still use this method altho slightly modified. One can test these students after they have completed their training and see how good they are at the language. The better they are, the easier the language is to learn.
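As computation, this method is just a sort: collect the weeks-to-proficiency figures and order the languages by them. A sketch; the numbers below are invented placeholders, not real training data:

```python
# Method 1 as computation: rank languages by weeks of training needed to
# reach a fixed proficiency level. The figures are invented placeholders.
weeks_to_proficiency = {"Spanish": 24, "German": 30, "Russian": 44,
                        "Japanese": 88}

def rank_by_difficulty(weeks):
    """Easiest language (fewest weeks) first."""
    return sorted(weeks, key=weeks.get)
```

With these placeholder numbers the ranking comes out Spanish, German, Russian, Japanese; with real data one wud see which languages keep landing at the hard end.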

Method 2: Comparing the students of two languages learning each other’s language

For every two given languages, there are students that have the first or the second as their native language and are learning the other one. One can measure how good these students are at the language they are learning. One will need to correct for exposure to the language outside class, which is mostly a problem with english since it is the world language, but otherwise not generally a problem.

Method 3: Asking students of two languages learning each other’s language

Similar to the second method, one can ask these students which language they think is the hardest to learn: their native one or the one they are currently learning. I wud think that this wud point in the correct direction, even tho the native language has a bias in its favor (becus of earlier exposure). Still, for the easier languages, one will see more people responding that they think that the foreign language is easier.

The best thing about this method is that it is relatively easy to do and costs virtually no money. One can even do it with surveys on the internet. One will need a huge amount of data to draw conclusions, and obviously such data will be missing on smaller languages, but the largest, say 20 languages, in the world shud be easy to rank according to the data.

Method 4: carefully analyzed expert opinion

The idea is to use the same method as used in this paper1 to judge various languages about various things in them, such as writing script, fonology, (foneticness of the) orthografy, grammar (grammatical gender, inflections, etc.) and probably some other things i havent thought of. The method is designed to reduce bias and prejudice. This wud probably give some results that are close to truth.



1. Drug harms in the UK: a multicriteria decision analysis

Vocab size of non-native english speakers by country

Somewhat surprisingly, Denmark and Norway (the difference is perhaps not statistically significant) are at the top. I was expecting to see France, because most english words have french or latin (both romance languages) origin.


However, now that i see the data, i can think of some reasons why scandinavians are good at english. First, the scandinavian languages are pretty similar to english (all are germanic languages), and they also share a large number of words with english, danish sharing the most because danish was under german influence for hundreds of years and many english words have german origin. They are also structurally and grammatically similar, making the learning easier, which perhaps makes people want to learn the language more.

Second, people that speak a language that relatively few other people speak (this applies to all scandinavian languages) have a greater need to learn another language (i.e. english in this day of the internet) so that they can communicate with the outside world. People who speak french have a smaller need to learn english because there are so many people that speak french. Of the three large scandinavian languages, those with the fewest speakers are also those whose speakers are the best at english (i.e. danish and norwegian with about 5 million speakers each). There is no data about the Faroe Islands, but i suspect that their situation is similar to that of Iceland (discussed below).

There are varius problems with my explanations:

First, my points above also apply to Icelandic, but the data shows that people from Iceland are not that good at english (10.9k). Why is that? Perhaps it is becus people from Iceland focus on learning a scandinavian language first (danish), and thus devote less time to learning english. This is not the case for danes, norwegians or swedes, who all study english as their second language. Also, there is a very puristic language movement in Iceland, which results in Icelandic introducing fewer of the loanwords that wud speed up vocabulary acquisition. (Wiki)

Second, Finland. Finnish is not related to english (or any scandinavian language), is definitely not very similar to it, and shares few words with it, but people from Finland are still great at english (about as good as swedes). Obviusly, my first explanation above does not work, but the second still applies, and applies even more strongly actually. The reason is that finnish is closely related only to estonian (and more distantly hungarian), while danish, swedish and norwegian are pretty similar to each other, with some mutual intelligibility.

Review of New Spelling

For clarity, I feel that I should write this review in standard English. (How boring!) The book can be downloaded here: or from my mirror (in one file) here: New Spelling book

First, the book contains an unusually high number of mistakes. From their style, it seems that they are the result not of OCR software but of a human typist’s errors. Since the book is from the 1940s, it was not written on a computer (well, that is technically possible but unlikely). These errors should be fixed, as they can sometimes throw the reader off. This is especially important because this is the kind of material where the spelling is important. (For an example see the two tables from the book, reproduced below.)

Second, when the authors present the reformed/nonstandard/proposed new spelling of a word, they often do not write the old spelling which sometimes makes it very hard to guess which word they are referring to. This is pretty annoying.

Third, the book lacks phonetic symbolism making it somewhat harder to know which sounds the authors are referring to, especially given the second problem above. The book contains two summaries of the proposal. I will try to put them in phonetic symbols and with the standard spelling of the respelled words. Here they are first in unaltered form:

Table 1

Table 2

And here is my table:

Table 3 – New Spelling proposal (with extra examples by me)

Sound Spelling Examples
p p pin
b b bin
t t tin
d d din
k k kin, cat→kat, can→kan
g g got
f f fat
v v vat
s s set
z z zest
ʧ ch chat
ʤ j jet, jinn, ginger→jinjer
h h hot
l l lot
r r rot
w w win
wh wh whim
y y yet
m m met
n n net
ŋ ng (nk) sing (thank)
x kh Loch→lokh
ʃ sh shut
ʒ zh vision→vizhon
θ th thing
ð dh this→dhis
short vowels
æ a bat
ɛ e bet, net, let, fed
ɪ i pit, fit, nit, lit
ɒ o pot, lot, slot
ʌ u but, cut→kut
ʊ oo good, should→shood
ə(r) er sister, mister, lister
long vowels/diphthongs
ɑː aa father→faadher
ɑːr ar far, starry→stary
eɪ ae made→maed
ɛə ae fair→faer
iː ee feel, peel, seal→seel
ɪə ee fear→feer
ɔː au haul, draw→drau
ɔː (?) or short, horn, north
əʊ oe foe, low, go→goe, so→soe
uː uu noon→nuun, soon→suun
ʊə uu poor→puur
aɪ ie lie, fie, pie, cry→krie
aɪə ie lier, fire→fier, mire→mier
aʊ ou count→kount, town→toun
aʊə ou sour, our, flour, flower→flour
ɔɪ oi coin→koin
ɔɪə oi employer→emploir
juː ue hue, new→nue, few→fue
jʊə ue pure→puer,
ɜː ur fur, purr→pur, sir→sur

-OR-, -AU-, -AW-

Some of the proposed changes are strange, perhaps owing to the time gap between when this book was written and English (England-English) pronunciation today. I don’t agree that there is a distinction between the vowel in “short” and in “haul”, and the OED agrees with me. Supposing that there is no important difference here, which spelling should be used? I think that it is best to settle on one of the proposed spellings, and simply choose the one that is used the most. From their data on p. 57, it seems one has to choose between -OR-, -AU-, -AW-. Each of them has problems, but I think -OR- is by far the commonest, both in terms of how many words contain it and how often those words are used. The very common word “or” has it. Choosing either of the other proposals results in a respelling of that very common word, unless we want to keep its spelling inconsistent with the rest of the system (more on this later). Still, choosing -OR- results in lots of homographs: “ore”→“or”, “awe”→“or” (resulting in things like “stand in or”). Perhaps context will clear it up, I suspect so, but it will be odd, very odd to begin with.
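The two senses of “used the most” (how many distinct words contain a spelling, versus how often those words actually occur) can both be counted mechanically. A sketch; the corpus below is a toy one I invented for illustration:

```python
# "Used the most" can mean type frequency (how many distinct words contain
# the spelling) or token frequency (how often those words occur in text).
# Both are countable. The corpus is a toy one, invented for illustration.
corpus = "or short or north for the haul we saw or drew".split()

def frequencies(corpus, spelling):
    """Return (type count, token count) for words containing a spelling."""
    tokens = [word for word in corpus if spelling in word]
    return len(set(tokens)), len(tokens)
```

On this toy corpus, frequencies(corpus, "or") gives (4, 6) while "au" and "aw" each give (1, 1), so -OR- wins on both counts, as it plausibly would on real data.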

-OO- and -UU- and -U-

Then there is the deal with -OO- and -UU- and -U-. Some of the simplifications that are made in Cut Spelling cannot be made with this proposal, e.g. “should”→“shud”, “could”→“cud/kud”, “true”→“tru”, “would”→“wud”. I’m not so sure it is a good idea to go in this maximalistic phonetic direction when there are some useful, already-in-use reformed spellings that we would have to discard. Perhaps it is just better to make -U- ambiguous between /ʊ/ and /ʌ/. This gives shorter spellings, although less phonetic ones, and it also results in things like “book”→“buk”. Perhaps in the future, after such a reform, one could introduce a diacritical sign to distinguish them if they are a problem. We need not go for a perfect system to begin with.

They, their, there

The proposed changes result in three homographs for some very commonly used words. This result seems particularly disturbing when reading and writing text in New Spelling. One gets things like “Dhae see dhe pursons oever dhaer and thaer animals”. I.e., New Spelling results in the same homograph with “there” and “their” as does Cut Spelling, and with “they” coming close as well. Perhaps it is best to adopt a word-sign/logograph for such words? Or just a strange spelling to avoid confusing homographs. Phonetics does not automatically weigh more than other problems with a language. It is important to be pragmatic when reforming a language. The authors do in fact discuss adopting word-signs/logographs; see page 101 in the appendix.
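Checking a respelling scheme for new homographs is itself mechanical: respell every word and report collisions. The respellings below are the New Spelling forms discussed in this review; the grouping code is my own sketch:

```python
# Check a respelling scheme for new homographs by grouping words that come
# out identical. The respellings are the New Spelling forms discussed above.
from collections import defaultdict

new_spelling = {"there": "dhaer", "their": "dhaer", "they": "dhae",
                "or": "or", "ore": "or", "awe": "or"}

def homographs(respelling):
    """Map each colliding respelled form to the words that share it."""
    groups = defaultdict(list)
    for word, spelled in respelling.items():
        groups[spelled].append(word)
    return {form: words for form, words in groups.items() if len(words) > 1}
```

Running this over a full word list before adopting a reform would reveal every “dhaer”-style collision in advance, so the word-sign/logograph question could be settled case by case.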

-TH- and -DH-

The authors propose to distinguish in the spelling between these two very common sounds. They choose the least economic solution, i.e. to represent /ð/ with -DH-, a completely new digraph. This results, ofc, in a staggering number of changes, especially to a very common (the commonest?) word in english: “the”. This gives a very strong reason not to change anything here, or at least to choose a more economic solution. The authors note that this is linguistically the most optimal solution (page 29), but it is so staggeringly uneconomic that it has to be the last alternative chosen. There are two other options worth considering: 1) Keep the current system. Yes, the spelling -TH- will be ambiguous. But is this really a problem? I seldom if ever come across someone who has chosen the wrong sound here. 2) Adopt the reverse change, i.e., use -DH- for /θ/, and keep -TH- for /ð/. This does not make much sense linguistically (especially not in relation to IPA), but it is very economical and still results in a very phonetic system. In light of the above, I think it is best to keep the current system.

-S and -Z in plurals

The authors propose to add different letters in the plural inflexion so that the spellings fit with the distinction between /s/ and /z/. This seems to me to be completely unnecessary. I would rather keep the current -S system no matter whether the sound is /s/ or /z/. The preceding consonant will indicate whether the sound is voiced or not anyway.
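The claim that the voicing is already indicated can be stated as a rule, which shows that a single -S loses no information. The three-way pattern below is the standard description of the regular English plural; the code itself is my sketch:

```python
# The regular English plural is predictable from the word's final sound:
# /ɪz/ after sibilants, /s/ after other voiceless sounds, /z/ otherwise.
# Since it is predictable, a single written -S loses no information.
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}
VOICELESS = {"p", "t", "k", "f", "θ"}

def plural_sound(final_sound):
    """Predict the pronunciation of the regular plural ending."""
    if final_sound in SIBILANTS:
        return "ɪz"
    if final_sound in VOICELESS:
        return "s"
    return "z"
```

For example, cat (final /t/) takes /s/, dog (final /g/) takes /z/, and bus (final /s/) takes /ɪz/, all recoverable from one written -S.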

Theoretical thoughts

Perhaps the best part of the proposal is the remarks made in the introduction (pages 11-17) about how to reform a language. They almost exactly mirror the remarks I laid out when I wrote my proposal about how to reform the Danish orthography.

Final remarks

There is much interesting material to be found in this reform proposal, and that, along with its useful theoretical remarks and its short length, makes it a must-read for any language reformer. It is especially useful to compare it with the Cut Spelling proposal.

The French languaj

Wikipedia pajes of interest: <– he is a prety kool guy. wich system is the best? I tend to kopy the US/EN system eeven tho in DA the usaj is reversed.


Reeding such material is wat i do wen i ‘shud’ be reeding up on my metafysiks exam due tomorow. I probably lerned mor from reeding Wiki than reeding Heidegger or watever trash they intend to talk about.