Thoughts about polymathy

Case in point, i’m taking a break from my reading of an introduction to relativity theory to write this post. I wrote it in a hurry becus i was in the flow, and have only cleaned out the worst mistakes.

Q: How do i become a polymath?

There are various ways to learn stuff. The current method used in universities is a combination of verbal instruction and reading. One can do either in isolation if one wishes. Some people find reading too boring (i.e. they can’t keep focus) but still want to learn. Altho learning is generally much faster if one only reads, there is still hope for people who learn by listening only or nearly so.

Verbal learning

Recently there has been alot of focus on a new online learning service, Coursera. It is basically an online university where one can take classes on a wide (and growing) variety of subjects for free.

www.coursera.org/

en.wikipedia.org/wiki/Coursera

There seem to be others who have had the same idea, altho with a slightly more community-based system.

www.whatispolymath.com/

Other universities have been offering online courses for some time. AFAIK, these courses do not yet grant actual degrees or credit but that is just a matter of time.

One can also find alot of lectures and talks on torrent sites and various video streaming sites.

torrentz.eu/search?q=lecture

www.ted.com/ <- site with lots of talks about pretty much everything

vimeo.com/user187904/videos <- example of an author who has put alot of his lectures on a video streaming site

Non-verbal learning

As for learning by reading, there are many options. Wikipedia is obviously a great place to start pretty much any quest for knowledge. Many people think that Wikipedia is rather unreliable, but that hasn’t been the case for years. Wikipedia is surprisingly good for being written by volunteers.

en.wikipedia.org/wiki/Reliability_of_Wikipedia

Probably the best choice after Wikipedia is textbooks in ebook format (becus ebooks can be had for free). LessWrong (another good site for learning rationality-focused stuff) has an ongoing project to discover the best textbook on any given subject. That is a good idea, since one wants to get the most out of one’s time when reading.

lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

An intermediate between textbooks and Wikipedia is something like Oxford’s A Very Short Introduction to… series. It consists of relatively short books (100-200 pages) that introduce the reader to some new area. They are generally quite good and can be found on torrent sites.

torrentz.eu/670c0dd5d40bb3e33ca082f2c1bf1ef290eafbdc

Another great source of learning is following good-quality blogs. Following a blog is a bit like following the thoughts of an expert in some field, and a good way to keep up with the recent important papers and news in that field. A good example of such a blog is Gene Expression, whose author writes about behavioural genetics, population genetics, intelligence research, and history, and presents alot of survey data.

blogs.discovermagazine.com/gnxp/ (i subscribe to lots of RSS feeds, 9 danish newspapers, 1 meta-newspaper, 7 sites about internet, general science, and technology related news, and a bunch of other things including 5 webcomics. I will post a list of these some day on the front page)

Other great resources are online encyclopedias and wikis that focus on some narrower subject. I will use filosofy as an example becus i have experience with it.

plato.stanford.edu/

www.iep.utm.edu/ (down as of writing)

Yet another source is lecture notes from professors. Professors sometimes put their lecture notes on the internet for free. Sometimes also their books. Some examples:

www.sfu.ca/~swartz/ <- lecture notes

www.free-culture.cc/ <- free ebook

Autodidacticism

Some background knowledge

en.wikipedia.org/wiki/Autodidacticism

Becoming a polymath requires autodidacticism, in some sense or another. One needs to have an intellectual drive, curiosity, and a willingness to work hard. Acquiring so much information takes time and skill. One needs a good filter for sorting good, bad, and useless information: good information helps one understand things; bad information confuses one and makes understanding things harder; useless information is not good for anything besides perhaps entertainment. Examples of the three wud be: 1) statistics, 2) social constructivism, 3) much filosofy, especially about metafysics.

The key to autodidacticism is efficiency. One may want to drop certain habits that are time-consuming and unproductive. Such habits include excessive gaming (computer or otherwise), watching television (one shud probably get rid of the television altogether), and watching mediocre films and series (many people follow something like 10 series, which is very time-consuming).

More links

www.projectpolymath.org/ <- a plan to create a university that focuses on interdisciplinary learning/polymathy. Good idea.

moreintelligentlife.com/content/edward-carr/last-days-polymath <- a good article about the history of polymaths, which also discusses whether it is actually possible to be a polymath in today’s world with all the specialization going on. The author seems rather skeptical. I disagree. It has never been easier to acquire information than it is today. He is correct about specialization, but this is somewhat offset by the fact i just mentioned. There is also the effect that comes from studying many different things: one will find that it makes it easier to grasp other things that seem unrelated. Another good thing about having broad knowledge is that one might see similarities between fields that lead to new discoveries, similarities that others missed becus they focused on one narrow field. I think that de Grey mentions a few examples of this in his book Ending Aging (de Grey himself being an autodidact and a bit of a polymath).

Some more background knowledge about polymaths:

en.wikipedia.org/wiki/Polymath

www.martinfrost.ws/htmlfiles/Polymath.html

en.wikipedia.org/wiki/List_of_people_who_have_been_called_a_%22polymath%22

I notice the sex ratio immediately. I did a search for “her” and “she” and found only one female on the list. The page does not say how many people are on it, but there are at least a hundred.

A general reason to be a polymath rather than a monomath (someone who focuses on one thing) is that the difference between being an expert in some field of study and a master of it has rather small practical implications. This is just to say that the law of diminishing returns holds for information in any given field, at least in general.

en.wikipedia.org/wiki/Diminishing_returns
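As a toy numerical illustration (the model and the numbers are made up for this post, not taken from any study): if understanding of a field grew roughly logarithmically with hours invested, each additional block of study wud yield less than the previous one:

```python
import math

# Toy model with made-up numbers: understanding grows ~ log(hours).
def understanding(hours):
    return math.log(1 + hours)

# Marginal gain from each additional 100 hours of study.
gains = [understanding(h + 100) - understanding(h) for h in (0, 100, 200, 300)]

# Each successive 100-hour block yields less than the one before it.
assert all(a > b for a, b in zip(gains, gains[1:]))
```

The exact curve does not matter for the argument; any concave growth curve gives the same qualitative result.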

Thoughts re. A very short introduction to The Elements (Philip Ball)

What this really means is that the classical elements are familiar
representatives of the different physical states that matter can
adopt. Earth represents not just soil or rock, but all solids. Water is
the archetype of all liquids; air, of all gases and vapours. Fire is a
strange one, for it is indeed a unique and striking phenomenon.
Fire is actually a dancing plasma of molecules and molecular
fragments, excited into a glowing state by heat. It is not a substance
as such, but a variable combination of substances in a particular
and unusual state caused by a chemical reaction. In experiential
terms, fire is a perfect symbol of that other, intangible aspect of
reality: light.

The ancients saw things this way too: that elements were types, not
to be too closely identified with particular substances. When Plato
speaks of water the element, he does not mean the same thing as
the water that flows in rivers. River water is a manifestation of
elementary water, but so is molten lead. Elementary water is ‘that
which flows’. Likewise, elementary earth is not just the stuff in the
ground, but flesh, wood, metal.

I was only recently made aware of this interpretation. It makes the theory much more plausible, and makes it more believable that bright people held it to be true.

Lavoisier’s belief reveals that he still held a somewhat traditional
view of elements. They were generally regarded as being rather like
colours or spices, having intrinsic properties that remain evident in
a mixture. But this is not so. A single element can exhibit very
different characteristics depending on what it is combined with.
Chlorine is a corrosive, poisonous gas; combined with sodium in
table salt, it is completely harmless. Carbon, oxygen, and
nitrogen are the stuff of life, but carbon monoxide and cyanide
(a combination of carbon and nitrogen) are deadly. This was a hard
notion for chemists to accept. Lavoisier himself came under attack
for claiming that water was composed of oxygen and hydrogen: for
water puts out fires (it is ‘the most powerful antiphlogistic we
possess’, according to one critic), whereas hydrogen is hideously
flammable.

An early example of the fallacy of division/composition.

Thus there is nothing optimal or ideal about living on an oxygen-
rich planet; it is simply the way things turned out. Oxygen is, after
all, an extremely abundant element: the third most abundant in
the universe, and the most abundant (47 per cent of the total) in
the Earth’s crust. On the other hand, the living world (the
biosphere) has contrived to maintain the proportion of oxygen in
the atmosphere at more or less the perfect level for aerobic
(oxygen-breathing) organisms like us. If there was less than 17
per cent oxygen in the air, we would be asphyxiated. If there was
more than 25 per cent, all organic matter would be highly
flammable: it would combust at the slightest provocation, and
wildfires would be uncontrollable. A concentration of 35 per cent
oxygen would have been enough to destroy most life on Earth in
global fires in the past. (NASA switched to using normal air
rather than pure oxygen in their spacecrafts for this reason, after
the tragic and fatal conflagration during the first Apollo tests in
1967.) So the current proportion of 21 per cent achieves a good
compromise.

This constancy of the oxygen concentration in air lends support to
the hypothesis that the biological and geological systems of the
Earth conspire to adjust the atmosphere and environment so that
they are well suited to sustain life – the so-called Gaia hypothesis.
Oxygen levels have fluctuated since the air became oxygen rich, but
not by much. In addition, today’s proportion of atmospheric oxygen
is large enough to support the formation of the ozone layer in the
stratosphere, which protects life from the worst of the sun’s harmful
ultraviolet rays. Ozone is a UV-absorbing form of pure oxygen in
which the atoms are joined not in pairs, as in oxygen gas, but in
triplets.

This smells like the ‘backwards’ thinking that fuels arguments from design. The reason that the oxygen level is ‘just right’ for organisms is that they have evolved to fit the current (or recent ancestral) levels of oxygen in the air.

Also, the claims sound rather fishy, and i cud neither confirm nor disconfirm them when i tried with Wikipedia and Google.

And the crowning irony is that gold is the most useless of metals,
prized like a fashion model for its ability to look beautiful and do
nothing. Unlike metals such as iron, copper, magnesium,
manganese, and nickel, gold has no natural biological role. It is too
soft for making tools; it is inconveniently heavy. And yet people
have searched for it tirelessly, they have burrowed and blasted
through the earth and sifted through mountains of gravel to claim
an estimated 100,000 tonnes in the past five hundred years alone.
‘Gold’, says Jacob Bronowski, ‘is the universal prize in all countries,
in all cultures, in all ages.’

That doesn’t seem right to me. The symbolism section on Wikipedia seems to be almost exclusively about indo-european cultures, with no data about, say, pre-contact African cultures. However, after quickly googling around, i didn’t find any more data about this.

The metals are the most familiar and recognizable of the chemical
elements to non-scientists – for everyone senses the uniqueness of
stolid iron, soft and ruddy copper, mercury’s liquid mirror. And
among these ponderous substances no element has more resonance
and rich associations than gold. It is an enduring symbol of
eminence and purity. The best athletes win gold medals (in a trio of
metals that echoes that of the oldest coinage); the best rock bands
win golden discs. A band of gold seals the wedding vows, and fifty
years later the metal valorizes the most exalted anniversary of
married bliss. Associations of gold sell everything from credit cards
to coffee. Platinum is rarer and more expensive, and some attempts
have been made to give it even grander status than gold. But it will
not work, because there are no legends or myths to support it. There
can be no other element than gold whose chemical characteristics
have been so responsible for lodging it firmly in our cultural
traditions.

Yes, they do. I have seen many such examples. The first three that came to mind: 1) in the Crash Bandicoot games, the player is rewarded with a platinum relic, which is better than the gold relic (random video of this), 2) in Starcraft 2, the Platinum league is higher (better) than the Gold league, 3) in music sales certification, platinum is better than gold. I’m sure there are tons more examples.

So how many elements are there? I do not know, and neither does
anyone else. Oh, they can tell you how many natural elements there
are – how many we can expect to find at large in the universe. That
series stops around uranium, element number 92.* But as to how
many elements are possible – well, name a number. We have no idea
what the limit might be.

* Elements slightly heavier than uranium, produced by radioactive decay (see
later), are found in tiny amounts in natural uranium ores. Plutonium (element 94)
has also been found in nature, a product of the element-forming processes that
happen in dying stars. So it is a tricky matter to put a precise number on the
natural elements.

I thought that only the elements up to uranium were natural, but apparently not. Wikipedia lists 98 elements that are currently known to occur naturally, either on Earth or in some distant star. I had also conflated natural elements with elements that have at least one stable isotope. On reflection, i see that i was simply wrong, since radon (Rn) occurs naturally but doesn’t have a stable isotope. A few other natural elements also lack a stable isotope (as far as we know, anyway).

Polar ice contains tiny bubbles of trapped ancient air, within which
scientists can measure the amounts of minor (‘trace’) gases such as
carbon dioxide and methane. These are greenhouse gases, which
warm the planet by absorbing heat radiated from the Earth’s
surface. The ice cores show that levels of greenhouse gases in the
atmosphere, controlled in the past by natural processes such as
plant growth on land and in the sea, have risen and fallen in near-
perfect synchrony with temperature changes. This provides strong
evidence that the greenhouse effect regulates the Earth’s climate,
and helps us to anticipate the magnitude of the changes we might
expect by adding further greenhouse gases to the atmosphere.

No it doesn’t. The causal relation cud run entirely the other way. He might be right, but reasoning like that is a causal fallacy.

An isotope of the rare element technetium, denoted 99mTc, is widely
used to form images of the heart, brain, lungs, spleen, and other
organs. Here the ‘m’ indicates that the isotope, formed by decay of a
radioactive molybdenum isotope created by bombardment with
neutrons, is ‘metastable’, meaning only transiently stable. It decays
to ‘normal’ 99Tc by emitting two gamma rays, with a half-life of six
hours. This is a nuclear process that does not change either the
atomic number or the atomic mass of the nucleus – it just sheds
some excess energy.

As a compound of 99mTc spreads through the body, the gamma
radiation produces an image of where the radioisotope has travelled.
Because the two gamma rays are emitted simultaneously and in
different directions, their paths can be traced back to locate the
emitting atom precisely at the point of crossing. This enables three-
dimensional images of organs to be constructed (Fig. 16). Scientists
are devising new technetium compounds that remain localized in
specific organs. Eventually, the technetium is simply excreted in urine.

This is cool. I had never heard of metastable isotopes before.

 

Re. Thinking in foreign language makes decisions more rational (Ars Technica)

arstechnica.com/science/news/2012/04/thinking-in-foreign-language-makes-decisions-more-rational.ars

This surely sounds like one of those dubious psychology experiments. U know the type: one, or perhaps two, small experiments that came in slightly below p<0.05 and were therefore publishable. So, i decided to take a look.

The Foreign-Language Effect: Thinking in a Foreign Tongue Reduces Decision Biases

In general, there is not much to note about the study, except that, in one small technical detail, it actually shows the opposite of what the authors think. The reason is that they gave the percentages as 33.3% and 66.6% instead of the correct 66.7%. Technically, this makes the secure option always the better one, by a small margin.
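To see the arithmetic behind this point, assume the standard framing of such gamble experiments: a secure option of keeping exactly one third, versus a gamble with the printed 33.3% chance of keeping everything. The stake of 3000 below is an arbitrary illustrative number, not a figure from the paper:

```python
# Hypothetical stake; the actual amount doesn't matter for the point.
total = 3000

# Secure option: keep exactly one third.
secure = total / 3            # 1000.0

# Gamble as printed: "33.3%" chance of keeping everything
# (the complement shud have been 66.7%, not 66.6%).
gamble_ev = 0.333 * total     # just under 1000

# With the rounded-down percentage, the secure option always wins
# in expectation, if only by a hair.
assert gamble_ev < secure
```

Had they written 33⅓% exactly, the two options wud have identical expected value, which is the intended setup.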

Anyway, i’d like to see some other people reproduce this effect with a larger sample size. I’m not keen on these 40-200 sample sizes in psychology.

The implications are interesting. Suppose we now know that one makes more rational decisions (at least with regard to one cognitive bias) when thinking in a foreign language; what do we use this knowledge for? It is perhaps the most interesting reason to learn a foreign language i’ve heard of so far. Promoting rational thinking thru.. learning a new language. Strange, but it seems legit. Anyway, one cud make people consider financial decisions in a foreign language, such as when taking a loan. I’d like to see more research about this effect with regard to other biases.

Thoughts about: An Introduction to Language (Fromkin et al)

Victoria Fromkin, Robert Rodman, Nina Hyams – An Introduction to Language

I thought i better read a linguistics textbook before i start studying it formally. Who wud want to look like a noob? ;)

I have not read any other textbook on this subject, but i think it was a fairly typical, okish textbook. Many of its faults are mentioned below in this long ‘review’.

Chapter 1

In the Renaissance a new middle class emerged who wanted their children
to speak the dialect of the “upper” classes. This desire led to the publication of
many prescriptive grammars. In 1762 Bishop Robert Lowth wrote A Short
Introduction to English Grammar with Critical Notes. Lowth prescribed a number
of new rules for English, many of them influenced by his personal taste. Before
the publication of his grammar, practically everyone—upper-class, middle-class,
and lower-class—said I don’t have none and You was wrong about that. Lowth,
however, decided that “two negatives make a positive” and therefore one should
say I don’t have any; and that even when you is singular it should be followed by
the plural were. Many of these prescriptive rules were based on Latin grammar
and made little sense for English. Because Lowth was influential and because
the rising new class wanted to speak “properly,” many of these new rules were
legislated into English grammar, at least for the prestige dialect—that variety of
the language spoken by people in positions of power.
The view that dialects that regularly use double negatives are inferior
cannot be justified if one looks at the standard dialects of other languages in the
world. Romance languages, for example, use double negatives, as the following
examples from French and Italian show:

French: Je ne veux parler avec personne.
I not want speak with no-one.

Italian: Non voglio parlare con nessuno.
not I-want speak with no-one.

English translation: “I don’t want to speak with anyone.”

Lowth seems to have done a good thing with his reasoning here, which was obviously inspired by math: multiplying two negatives does give a positive (-1*-1=+1). The underlying reason is logic, altho predicate logic hadn’t been invented in his time (i.e., the 1700s).

Formalizing the sentence “I don’t have none” yields something like ¬∃x¬Hix, i.e. it is not the case that there is something such that i don’t have it. This is equivalent to ∀xHix: for any thing, i have that thing (i.e. i have everything). Ofc, it may seem that with this remark i’m begging the question, but the formalization wud be closer to the natural language, which is always a good thing. I’m not begging the question with that remark.
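The quantifier equivalence this formalization relies on, ¬∃x¬H(x) ≡ ∀xH(x), can be brute-force checked over small finite domains. A minimal sketch in Python, where the predicate H and the 3-element domain are arbitrary stand-ins of my own, not anything from the textbook:

```python
from itertools import product

def not_exists_not(H, domain):
    # encodes the formula ¬∃x ¬H(x)
    return not any(not H(x) for x in domain)

def forall(H, domain):
    # encodes the formula ∀x H(x)
    return all(H(x) for x in domain)

domain = [0, 1, 2]

# Try every possible predicate H on this domain (2^3 = 8 truth assignments)
# and confirm the two formulas always agree.
for values in product([False, True], repeat=len(domain)):
    table = dict(zip(domain, values))
    H = table.__getitem__
    assert not_exists_not(H, domain) == forall(H, domain)
```

A finite check is of course not a proof for infinite domains, but the equivalence is a standard law of classical predicate logic (quantifier duality plus double negation).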

Furthermore, his rule made the language simpler, as one no longer had to needlessly inflect the frase “anyone” into its negative form “no one”. Simpler languages are better if they have the same expressive power, and doing away with a needless inflection makes the language simpler without losing any expressive power, so it is good per definition.

He was wrong about the “you was” thing, tho. It wud have been nice if it had stayed that way; then english cud have begun moving towards the simplicity of verb conjugation in scandinavian.

When we say in later chapters that a sentence is grammatical we mean that it
conforms to the rules of the mental grammar (as described by the linguist); when
we say that it is ungrammatical, we mean it deviates from the rules in some way.
If, however, we posit a rule for English that does not agree with your intuitions
as a speaker, then the grammar we are describing differs in some way from the
mental grammar that represents your linguistic competence; that is, your
language is not the one described. No language or variety of a language (called a
dialect) is superior to any other in a linguistic sense. Every grammar is equally
complex, logical, and capable of producing an infinite set of sentences to express
any thought. If something can be expressed in one language or one dialect, it
can be expressed in any other language or dialect. It might involve different
means and different words, but it can be expressed. We will have more to say
about dialects in chapter 10. This is true as well for languages of technologically
underdeveloped cultures. The grammars of these languages are not primitive or
ill formed in any way. They have all the richness and complexity of the
grammars of languages spoken in technologically advanced cultures.

Stupid relativism. Of course some dialects and languages are superior to others! The awful german grammar system is much inferior to the simpler scandinavian systems or the english system. It is more difficult to say which of those systems is superior to which. English has gotten rid of grammatical gender (good!) but retains pointless verb conjugations (bad!); in scandinavian there are grammatical genders (bad, but only 2, not 3 as in german) but much less pointless verb conjugation (good!).

Why do the authors write this relativist nonsense? They dislike language purists:

Today our bookstores are populated with books by language purists
attempting to “save the English language.” They criticize those who use enormity to
mean “enormous” instead of “monstrously evil.” But languages change in the
course of time and words change meaning. Language change is a natural pro-
cess, as we discuss in chapter 11. Over time enormity was used more and more
in the media to mean “enormous,” and we predict that now that President
Barack Obama has used it that way (in his victory speech of November 4, 2008),
that usage will gain acceptance. Still, the “saviors” of the English language will
never disappear. They will continue to blame television, the schools, and even
the National Council of Teachers of English for failing to preserve the standard
language, and are likely to continue to dis (oops, we mean disparage) anyone
who suggests that African American English (AAE) and other dialects are
viable, complete languages.
In truth, human languages are without exception fully expressive, complete,
and logical, as much as they were two hundred or two thousand years ago.
Hopefully (another frowned-upon usage), this book will convince you that all
languages and dialects are rule-governed, whether spoken by rich or poor,
powerful or weak, learned or illiterate. Grammars and usages of particular groups
in society may be dominant for social and political reasons, but from a linguistic
(scientific) perspective they are neither superior nor inferior to the grammars
and usages of less prestigious members of society.

They are right to be annoyed at the purists; they are wrong to completely abandon prescriptive grammar because of it. (Baby, bathwater.)

To hold that animals communicate by systems qualitatively different from
human language systems is not to claim human superiority. Humans are not
inferior to the one-celled amoeba because they cannot reproduce by splitting
in two; they are just different sexually. They are not inferior to hunting dogs,
whose sense of smell is far better than that of human animals. As we will discuss
in the next chapter, the human language ability is rooted in the human brain,
just as the communication systems of other species are determined by their
biological structure. All the studies of animal communication systems, including
those of primates, provide evidence for Descartes’ distinction between other
animal communication systems and the linguistic creative ability possessed by the
human animal.

More relativism. So, humans are not inferior to dogs with regard to smelling.. they are just.. olfactorily challenged?

The thing with reproduction is harder. Asexual and (bi)sexual reproduction both have advantages and disadvantages. Cellular division wud obviously not work for humans (we are too complex), but asexual reproduction might work somewhat. We get to try it out soon, when we start cloning people. I’m looking forward to when we start digging up the graves of past geniuses to make clones of them, i.e. harvest some DNA, insert it into an egg, and put that egg into a woman.

In our understanding of the world we are certainly not “at the mercy of
whatever language we speak,” as Sapir suggested. However, we may ask whether the
language we speak influences our cognition in some way. In the domain of color
categorization, for example, it has been shown that if a language lacks a word
for red, say, then it’s harder for speakers to reidentify red objects. In other words,
having a label seems to make it easier to store or access information in memory.
Similarly, experiments show that Russian speakers are better at discriminating
light blue (goluboy) and dark blue (siniy) objects than English speakers, whose
language does not make a lexical distinction between these categories. These
results show that words can influence simple perceptual tasks in the domain
of color discrimination. Upon reflection, this may not be a surprising finding.
Colors exist on a continuum, and the way we segment into “different” colors
happens at arbitrary points along this spectrum.
Because there is no physical
motivation for these divisions, this may be the kind of situation where language
could show an effect.

But this is simply not true. The segmentations are not at all arbitrary. It is strange that the authors claim this, as they just reviewed information from a language that segments colors into two categories: light and dark colors. These are not arbitrary categories. I learned about this from Lakoff’s Women, Fire, and Dangerous Things (which is hosted somewhere on my site), but see also: en.wikipedia.org/wiki/Linguistic_relativity_and_the_color_naming_debate.

Chapter 2

Additional evidence regarding hemispheric specialization is drawn from
Japanese readers. The Japanese language has two main writing systems. One system,
kana, is based on the sound system of the language; each symbol corresponds to
a syllable. The other system, kanji, is ideographic; each symbol corresponds to
a word. (More about this in chapter 12 on writing systems.) Kanji is not based
on the sounds of the language. Japanese people with left-hemisphere damage
are impaired in their ability to read kana, whereas people with right-hemisphere
damage are impaired in their ability to read kanji. Also, experiments with
unimpaired Japanese readers show that the right hemisphere is better and faster than
the left hemisphere at reading kanji, and vice versa.

This is pretty cool! Even better, it fits with the data from the last book i read:

Visual memory is not normally tested in intelligence tests. There have been four studies of the
visual memory of the Japanese, the results of which are summarized in Table 10.7. Row 1
gives a Japanese IQ of 107 for 5-10-year-olds on the MFFT calculated from error scores
compared with an American sample numbering 2,676. The MFFT consists of the presentation of
drawings of a series of objects, e.g., a boat, hen, etc. that have to be matched to an identical
drawing among several that are closely similar. The task entails the memorization of the
details of the drawings in order to find the perfect match. Performance on the task correlates
0.38 with the performance scale of the WISC (Plomin and Buss, 1973), so that it is a weak
test of visualization ability and general intelligence as well as a test of visual memory. Row 2
gives a visual memory IQ of 105 for ethnic Japanese Americans compared with American
Europeans on two tests of visual memory consisting of the presentation of 20 objects for 25
seconds and then removed, and the task was to remember and rearrange their positions. Row 3
shows a visual memory IQ of 110 obtained by comparing a sample of Japanese high school
and university students with a sample of 52 European students at University College, Dublin.
Row 4 shows a visual memory IQ of 113 for the visual reproduction subtests of the Wechsler
Memory Scale-Revised obtained from the Japanese standardization of the test compared with
the American standardization sample. The test involves the drawing from memory of
geometric designs presented for 10 seconds. The authors suggest that the explanation for the Japanese
superiority may be that Japanese children learn kanji, the Japanese idiographic script, and this
develops visual memory capacity. However, this hypothesis was apparently disproved by the
Flaherty and Connolly study (1996) whose results are shown in row 2. Some of the ethnic
Japanese American participants had a knowledge of kanji, while others did not, and there was
no difference in visual memory between those who knew and those who did not know kanji,
disproving the theory that the advantage of East Asians on visualization tasks arises from their
practice on visualizing idiographic scripts. (Richard Lynn, Race differences in intelligence, p. 94)

It fits. Why else wud those people choose a very visual writing system instead of a more sound-focused (i.e. verbal) one? Tests also show that east asians are worse at verbal tasks. This makes perfect sense given their writing system.

Chapter 3

In the foregoing dialogue, Humpty Dumpty is well aware that the prefix un-
means “not,” as further shown in the following pairs of words:
A —————– B
desirable —— undesirable
likely ———- unlikely
inspired ——- uninspired
happy ——— unhappy
developed—– undeveloped
sophisticated – unsophisticated

Thousands of English adjectives begin with un-. If we assume that the most
basic unit of meaning is the word, what do we say about parts of words like
un-, which has a fixed meaning? In all the words in the B column, un- means
the same thing—“not.” Undesirable means “not desirable,” unlikely means “not
likely,” and so on. All the words in column B consist of at least two meaningful
units: un + desirable, un + likely, un + inspired, and so on.

The authors are again wrong. The un- prefix does not mean “not” in these examples! An undesirable person is more than just someone who isn’t desirable; it is someone who is, well, positively undesirable, someone one wants to avoid. Similarly for likely/unlikely. When one says that something is unlikely, one is saying more than that it is not likely: one is saying that it has a low probability of happening. The event cud be neither likely nor unlikely, i.e. have a probability around .5 (or whatever, depending on context). An unhappy person is someone who is sad or depressed, not just someone who isn’t happy; a neutral person is neither happy nor unhappy. An example of a word where the un- prefix really is a simple negation is unmarried, which does mean only “not married”. In many if not all of their examples, the un- prefix reverses the quality in question rather than negating it.
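The reversal-vs-negation point can be made vivid with a toy scalar model (my own sketch, nothing from the book): treat happiness as a position on a scale, so unhappy picks out the low end while not happy merely excludes the high end.

```python
# Toy scalar model of "un-" as reversal vs plain negation (my own sketch).
# Scores run from -1 (very unhappy) to +1 (very happy); 0 is neutral.

def happy(score):
    return score > 0.3          # clearly on the positive end of the scale

def unhappy(score):
    return score < -0.3         # clearly on the negative end: a reversal

def not_happy(score):
    return not happy(score)     # mere negation: everything else

neutral = 0.0
assert not happy(neutral) and not unhappy(neutral)  # neither happy nor unhappy
assert not_happy(neutral)                           # but still "not happy"
```

The neutral score is the whole point: plain negation covers it, the reversed adjective does not.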

I have pointed this out before, but it was in a forum post on FRDB, from which i am now banned, so i cannot find it with the built-in search tool.

Chapter 4

Whether a verb takes a complement or not depends on the properties of the
verb. For example, the verb find is a transitive verb. A transitive verb requires an
NP complement (direct object), as in The boy found the ball, but not *The boy
found, or *The boy found in the house. Some verbs like eat are optionally tran-
sitive. John ate and John ate a sandwich are both grammatical.
Verbs select different kinds of complements. For example, verbs like put and
give take both an NP and a PP complement, but cannot occur with either alone:

Sam put the milk in the refrigerator.
*Sam put the milk.
Robert gave the film to his client.
*Robert gave to his client.

Sleep is an intransitive verb; it cannot take an NP complement.
Michael slept.
*Michael slept a fish.

What about: “Sam puts out.” (see meaning #6) That lacks an NP and is grammatical. And how about: “Robert gave a talk.” (see meaning #2) That lacks a PP and is grammatical. It seems that the authors shud have chosen better example verbs.
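The underlying idea of complement selection (subcategorization) is easy to model, tho. Here is a minimal sketch with an invented toy lexicon of my own:

```python
# Toy subcategorization lexicon (my own invented data): each verb lists
# the complement frames it allows, as tuples of phrase categories.
FRAMES = {
    "find":  [("NP",)],                      # transitive only
    "eat":   [(), ("NP",)],                  # optionally transitive
    "put":   [("NP", "PP")],                 # needs both NP and PP
    "give":  [("NP", "PP"), ("NP", "NP")],   # "gave the film to X" / "gave X a talk"
    "sleep": [()],                           # intransitive
}

def grammatical(verb, complements):
    """Check whether a verb accepts the given complement sequence."""
    return tuple(complements) in FRAMES.get(verb, [])

assert grammatical("put", ["NP", "PP"])      # Sam put the milk in the fridge
assert not grammatical("put", ["NP"])        # *Sam put the milk
assert not grammatical("sleep", ["NP"])      # *Michael slept a fish
```

Note that listing ("NP", "NP") for give is exactly how one repairs their over-strict claim about “Robert gave a talk”.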

Chapter 5

For most sentences it does not make sense to say that they are always true
or always false. Rather, they are true or false in a given situation, as we pre-
viously saw with  Jack swims. But a restricted number of sentences are indeed
always true regardless of the circumstances. They are called  tautologies. (The
term analytic is also used for such sentences.) Examples of tautologies are sen-
tences like Circles are round or A person who is single is not married. Their
truth is guaranteed solely by the meaning of their parts and the way they are
put together. Similarly, some sentences are always false. These are called contra-
dictions. Examples of contradictions are sentences like Circles are square or A
bachelor is married.

Not entirely correct. Analytic sentences are noncontingent sentences, not just noncontingently true sentences.

plato.stanford.edu/entries/analytic-synthetic/

Later on they write:

The following sentences are either tautologies (analytic), contradictions, or
situationally true or false.

Indicating that they think analytic refers only to noncontingently true propositions/sentences. Also, they shud perhaps have studied some more filosofy, so that they wudn’t have to rely on the homemade term situationally true when a standard term already exists for this, namely contingently true.

Much of what we know is deduced from what people say alongside our obser-
vations of the world. As we can deduce from the quotation, Sherlock Holmes
took deduction to the ultimate degree. Often, deductions can be made based on
language alone.

Sadly, the authors engage in the common practice of referring to what Sherlock Holmes did as “deduction”. It wasn’t deduction; it was mostly abduction, aka inference to the best explanation.

plato.stanford.edu/entries/abduction/

Generally, entailment goes only in one direction. So while the sentence Jack
swims beautifully entails Jack swims, the reverse is not true. Knowing merely that
Jack swims is true does not necessitate the truth of Jack swims beautifully. Jack
could be a poor swimmer. On the other hand, negating both sentences reverses
the entailment. Jack doesn’t swim entails Jack doesn’t swim beautifully.

They are not negating it properly. They are using what i earlier called short-form negation. Compare:

“Jack doesn’t swim”: (∃!x)(x = j) ∧ ¬Sj
with
“It is not the case that Jack swims”: ¬((∃!x)(x = j) ∧ Sj)

These two do not mean the same, strictly speaking, and the distinction sometimes matters. The first entails that Jack exists; the second does not. This matters when one is talking about sentences such as “The current king of France is bald”. I have explained this before.
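The scope difference can even be checked mechanically by evaluating both readings over two tiny models, one with and one without Jack (a sketch of my own, using plain sets as models):

```python
# Evaluating narrow- vs wide-scope negation over toy models (my own sketch).
# A model is a set of individuals plus the set of swimmers.

def exists_unique_jack(model):
    return sum(1 for x in model["individuals"] if x == "jack") == 1

def narrow(model):
    # "Jack doesn't swim": Jack exists and is not a swimmer.
    return exists_unique_jack(model) and "jack" not in model["swimmers"]

def wide(model):
    # "It is not the case that Jack swims": negation of the whole claim.
    return not (exists_unique_jack(model) and "jack" in model["swimmers"])

with_jack = {"individuals": {"jack", "jill"}, "swimmers": {"jill"}}
no_jack   = {"individuals": {"jill"},         "swimmers": {"jill"}}

assert narrow(with_jack) and wide(with_jack)   # both true: Jack exists, doesn't swim
assert not narrow(no_jack)                     # narrow scope fails: no Jack
assert wide(no_jack)                           # wide scope is (vacuously) true
```

The model without Jack is exactly the king-of-France situation: the wide-scope negation comes out true while the narrow-scope one does not.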

The notion of entailment can be used to reveal knowledge that we have about
other meaning relations. For example, omitting tautologies and contradictions,
two sentences are  synonymous (or paraphrases) if they are both true or both
false with respect to the same situations. Sentences like Jack put off the meeting
and Jack postponed the meeting are synonymous, because when one is true the
other must be true; and when one is false the other must also be false. We can
describe this pattern in a more concise way by using the notion of entailment:
Two sentences are synonymous if they entail each other.

The authors conflate ‘meaning the same’ with ‘having the same truth-value’. These are not the same. Some sentences always have the same truth-value (they belong to the same equivalence class) but do not mean the same. For example:

“Canada is north of the US”
and
“The US is south of Canada”

These two don’t mean the same, but they belong to the same equivalence class. The relation between the entities is reversed from one sentence to the other: “… is north of …” and “… is south of …” do not mean the same; they are each other’s converses.

See Swartz and Bradley (1979:35ff) for more examples and a more thoro discussion.
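One can make the equivalence-class point concrete by evaluating both sentences over a set of situations (my own toy sketch): they agree in truth-value everywhere, yet they are built from different relation words.

```python
# Two sentences that agree in truth-value in every situation without
# meaning the same (my own sketch). A situation fixes who is north of whom.

def north_of(a, b, situation):
    return (a, b) in situation["north_pairs"]

def canada_north_of_us(situation):
    return north_of("canada", "us", situation)

def us_south_of_canada(situation):
    # "x is south of y" is the converse relation: y is north of x
    return north_of("canada", "us", situation)

situations = [
    {"north_pairs": {("canada", "us")}},    # actual geography
    {"north_pairs": set()},                 # a counterfactual geography
]

# Same truth-value everywhere: they belong to one equivalence class ...
assert all(canada_north_of_us(s) == us_south_of_canada(s) for s in situations)
# ... yet "north of" and "south of" plainly differ in meaning.
```

Mutual entailment fixes the truth conditions, not the meaning; that is the whole complaint.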

The semantic theory of sentence meaning that we just sketched is not the
only possible one, and it is also incomplete, as shown by the paradoxical sen-
tence This sentence is false. The sentence cannot be true, else it’s false; it cannot
be false, else it’s true. Therefore it has no truth value, though it certainly has
meaning. This notwithstanding, compositional truth-conditional semantics has
proven to be an extremely powerful and useful tool for investigating the seman-
tic properties of natural languages.

Obviously, i’m not going to let this one fly! :) Things are not nearly as simple as they write. I will just point to my friend Benjamin Burgis’s recent ph.d. dissertation about the liar paradox and other related problems.

One point tho: note the authors’ strange inference from “it cannot be true … it cannot be false” to “Therefore it has no truth value”.

In the previous sections we saw that semantic rules compute sentence meaning
compositionally based on the meanings of words and the syntactic structure that
contains them. There are, however, interesting cases in which compositionality
breaks down, either because there is a problem with words or with the semantic
rules. If one or more words in a sentence do not have a meaning, then obviously
we will not be able to compute a meaning for the entire sentence.
Moreover,
even if the individual words have meaning but cannot be combined together as
required by the syntactic structure and related semantic rules, we will also not
get to a meaning. We refer to these situations as semantic anomaly. Alternatively,
it might require a lot of creativity and imagination to derive a meaning. This is
what happens in metaphors. Finally, some expressions—called idioms—have a
fixed meaning, that is, a meaning that is not compositional. Applying composi-
tional rules to idioms gives rise to funny or inappropriate meanings.

A bit of clarification is needed here. They are right if they mean a word that is used in the sentence; they are wrong if they mean a word that is merely mentioned in it. The unclear frasing “in a sentence” won’t do here. See plato.stanford.edu/entries/quotation/#2.2

The semantic properties of words determine what other words they can be com-
bined with. A sentence widely used by linguists that we encountered in chapter
4 illustrates this fact:

Colorless green ideas sleep furiously.

The sentence obeys all the syntactic rules of English. The subject is  colorless
green ideas and the predicate is sleep furiously. It has the same syntactic struc-
ture as the sentence

Dark green leaves rustle furiously.

but there is obviously something semantically wrong with the sentence. The
meaning of  colorless  includes the semantic feature “without color,” but it is
combined with the adjective green, which has the feature “green in color.” How
can something be both “without color” and “green in color”? Other semantic
violations occur in the sentence. Such sentences are semantically anomalous.

The authors seem to be saying that all sentences that involve contradictions are semantically anomalous. But that is not true, if by anomalous they mean meaningless. Self-contradictory sentences are meaningful alright; otherwise their negations, which are necessarily true, wud be meaningless too, and a grammatically placed negation can never turn a meaningful sentence into a meaningless one or vice versa.

I have discussed this before. See this essay, and this post (by the good doctor Burgis) and the comments section below.

The authors however do mention later that:

The well-known colorless green ideas sleep furiously is semantically
anomalous because ideas (colorless or not) are not animate.

So, i’m not sure what they think. Perhaps they think that the chomsky sentence is anomalous for both reasons, i.e. 1) it is self-contradictory, and 2) it involves a category error between the verb sleep and the subject ideas.

Another part of the meaning of the words baby and child is that they are
“young.” (We will continue to indicate words by using italics and semantic fea-
tures by double quotes.) The word father has the properties “male” and “adult”
as do uncle and bachelor.

(I have restored the authors’ italicization in the above quote)

First, it bothers me when authors want to put a given word in quotation marks but then include something that doesn’t belong there with it, typically a comma or a period. Very annoying!

Second, they are wrong about these semantic features. The word father has the features “parent” and “male”. It has no feature about adulthood, altho that is often the case. There is nothing semantically strange or anomalous about calling a 15-year-old a father if he has a child. Similar things hold for their other example, uncle.

Generally, the count/mass distinction corresponds to the difference between
discrete objects and homogeneous substances. But it would be incorrect to say
that this distinction is grounded in human perception, because different lan-
guages may treat the same object differently. For example, in English the words
hair, furniture, and spaghetti are mass nouns. We say Some hair is curly, Much
furniture is poorly made, John loves spaghetti. In Italian, however, these words
are count nouns, as illustrated in the following sentences:

Ivano ha mangiato molti spaghetti ieri sera.
Ivano ate many spaghettis last evening.
Piero ha comprato un mobile.
Piero bought a furniture.
Luisella ha pettinato i suoi capelli.
Luisella combed her hairs.

We would have to assume a radical form of linguistic determinism (remem-
ber the Sapir-Whorf hypothesis from chapter 1) to say that Italian and English
speakers have different perceptions of hair, furniture, and spaghetti. It is more
reasonable to assume that languages can differ to some extent in the semantic
features they assign to words with the same referent, somewhat independently
of the way they conceptualize that referent. Even within a particular language
we can have different words—count and mass—to describe the same object or
substance. For example, in English we have shoes (count) and footwear (mass),
coins (count) and change (mass).

But what about a nonperfect correlation? The data mentioned above do not disprove the existence of such a thing. It wud be interesting to do a cross-language study to see if there is a correlation; i wud be very surprised if there were none. I will bet money that something like this is the case: the more discrete an entity is, the higher the chance that the word for it is a countable noun. It is not surprising that their examples involve things that almost always, but not always, come in bundles. I’d wager that no language has car as a noncountable noun; the entity is too discrete for that to make sense. Likewise, i’d be surprised if any language had water as a countable noun. Generally, words for fluids are probably always (or nearly so) noncountable nouns, even if the words for the entities those fluids are made of, e.g. molecule, are countable nouns.
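The study i am proposing cud be sketched like this. The numbers below are INVENTED illustrative figures, not real survey data; the point is only the shape of the test.

```python
# Sketch of the proposed cross-language test, with INVENTED numbers:
# discreteness score (0..1) vs the share of sampled languages that
# treat the noun as countable.
data = {
    "water":     (0.05, 0.02),
    "spaghetti": (0.60, 0.40),
    "hair":      (0.65, 0.50),
    "furniture": (0.70, 0.55),
    "car":       (0.98, 1.00),
}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

xs, ys = zip(*data.values())
r = pearson(xs, ys)
assert r > 0.9  # with these toy numbers, a strong positive correlation
```

A real study wud of course need an operationalization of “discreteness” and a typologically balanced language sample.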

In all languages, the reference of certain words and expressions relies entirely
on the situational context of the utterance, and can only be understood in light
of these circumstances. This aspect of pragmatics is called deixis (pronounced
“dike-sis”). Pronouns are deictic. Their reference (or lack of same) is ultimately
context dependent.
Expressions such as

this person
that man
these women
those children

are also deictic, because they require situational information for the listener to
make a referential connection and understand what is meant. These examples
illustrate person deixis. They also show that the demonstrative articles like this
and that are deictic.
We also have  time deixis and place deixis. The following examples are all
deictic expressions of time:

now then tomorrow
this time that time seven days ago
two weeks from now last week next April

In filosofy, these are called indexicals. Or so i thought; apparently there is some difference according to Wikipedia, deixis being a bit broader.

Implicatures are different than entailments. An entailment cannot be can-
celled; it is logically necessary. Implicatures are also different than presupposi-
tions. They are the possible consequences of utterances in their context, whereas
presuppositions are situations that must exist for utterances to be appropriate in
context, in other words, to obey Grice’s Maxims. Further world knowledge may
cancel an implicature, but the utterances that led to it remain sensible and well-
formed, whereas further world knowledge that negates a presupposition—oh,
the team didn’t lose after all—renders the entire utterance inappropriate and in
violation of Grice’s Maxims.

To be fair, they only talked about deductive inference, i.e. entailment, before. But some entailments may be ‘cancelled’ by further information (premises, as they are called in logic). Logics where new information can make an inference weaker or stronger are called non-monotonic.
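A minimal sketch of non-monotonic (defeasible) inference, using the classic birds example of my own choosing rather than the authors’:

```python
# Minimal sketch of non-monotonic inference: a default conclusion that
# further information can cancel.

def conclusions(facts):
    """Default rule: birds fly, unless known to be penguins."""
    out = set(facts)
    birds = {f.split(":")[1] for f in facts if f.startswith("bird:")}
    for x in birds:
        if f"penguin:{x}" not in facts:
            out.add(f"flies:{x}")
    return out

# With only "bird:tweety", we defeasibly infer "flies:tweety".
assert "flies:tweety" in conclusions({"bird:tweety"})
# Adding information cancels the inference: that is non-monotonicity.
assert "flies:tweety" not in conclusions({"bird:tweety", "penguin:tweety"})
```

Classical entailment never behaves this way: adding premises can only preserve conclusions, never remove them.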

Chapter 6

Throughout several centuries English scholars have advocated spelling
reform. George Bernard Shaw complained that spelling was so inconsistent that
fish could be spelled ghoti—gh as in tough, o as in women, and ti as in nation.
Nonetheless, spelling reformers failed to change our spelling habits, and it took
phoneticians to invent an alphabet that absolutely guaranteed a one sound–one
symbol correspondence. There could be no other way to study the sounds of all
human languages scientifically.

It’s not their fault tho. Blame the politicians. As i have repeatedly shown, there are various good ways to reform english spelling. In fact, i’ve begun working on my own ultra minimalistic reform proposal. More on that later. :)

The sounds of all languages fall into two classes: consonants and vowels. Con-
sonants are produced with some restriction or closure in the vocal tract that
impedes the flow of air from the lungs. In phonetics, the terms consonant and
vowel refer to types of sounds, not to the letters that represent them. In speaking
of the alphabet, we may call “a” a vowel and “c” a consonant, but that means
only that we use the letter “a” to represent vowel sounds and the letter “c” to
represent consonant sounds.

Indeed. I recall that when i invented Lyddansk (my danish reform proposal) i had to make this distinction. I called them vowel-letters and consonant-letters (translated).

5.  The following are all English words written in a broad phonetic transcrip-
tion (thus omitting details such as nasalization and aspiration). Write the
words using normal English orthography.
a. [hit]
b. [strok]
c. [fez]
d. [ton]
e. [boni]
f. [skrim]
g. [frut]
h. [pritʃər]
i. [krak]
j. [baks]
k. [θæŋks]
l. [wɛnzde]
m. [krɔld]
n. [kantʃiɛntʃəs]
o. [parləmɛntæriən]
p. [kwəbɛk]
q. [pitsə]
r. [bərak obamə]
s. [dʒɔn məken]
t. [tu θaʊzənd ænd et]

I really, really dislike their strange choice of fonetic symbols. They correspond neither to the major online dictionaries nor to the OED. Especially confusing is their use of /e/ for both /e/ and /eɪ/, as in eight, which they write as /et/ instead of the /eɪt/ found in pretty much all dictionaries (example: 1, 2, and the OED gives the same pronunciation).
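One can at least mechanize the translation from their broad symbols to dictionary-style IPA. The table below contains my own guesses at their intended correspondences, not an official key from the book:

```python
# Mapping the book's broad symbols to dictionary-style IPA (my own
# guesses at the intended correspondences, not an official table).
BOOK_TO_IPA = {
    "e": "eɪ",   # their [et] = eight /eɪt/
    "o": "əʊ",   # their [strok] = stroke /strəʊk/
    "i": "iː",   # unmarked long vowel, cf. their [pritʃər] = preacher
    "u": "uː",   # unmarked long vowel, cf. their [frut] = fruit
}

def normalize(broad):
    """Replace each book symbol with its dictionary IPA equivalent."""
    return "".join(BOOK_TO_IPA.get(ch, ch) for ch in broad)

assert normalize("et") == "eɪt"        # eight
assert normalize("strok") == "strəʊk"  # stroke
assert normalize("frut") == "fruːt"    # fruit
```

A character-by-character map is of course too crude for a real transcription system, but it already resolves most of the confusing cases in their exercise.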

To those that are wondering, here is what i think the correct answers are:

a. [hit] heat, since their /i/ seems to be the long vowel (cf. [pritʃər] preacher); hit wud be [hɪt]
b. [strok] stroke but their symbolism is confusing, they use /o/ to mean IPA /əʊ/
c. [fez] phase /feɪz/, note the voiced /z/; face wud end in /s/
d. [ton] it is tempting to guess ton until one thinks of their strange use of /o/ to mean /əʊ/, the correct word must be tone /təʊn/
e. [boni] bunny is tempting, but it seems to be bony /bəʊni/
f. [skrim] scream, they fail to indicate that the vowel is long i.e. /skriːm/
g. [frut] fruit, again they fail to indicate that the vowel is long i.e. /fruːt/
h. [pritʃər] preacher
i. [krak] crack
j. [baks] barks is tempting for a non-rhotic speaker, but their transcription writes /r/ elsewhere (cf. [parləmɛntæriən]), so it must be box with /ɑ/ written as [a]
k. [θæŋks] thanks
l. [wɛnzde] another strange one, i think it is wednesday i.e. /ˈwɛnzdeɪ/
m. [krɔld] crawled
n. [kantʃiɛntʃəs] conscientious? i.e. /kɒnʃɪˈɛnʃəs/
o. [parləmɛntæriən] parliamentarian
p. [kwəbɛk] Quebec
q. [pitsə] pizza
r. [bərak obamə] Barack Obama
s. [dʒɔn məken] John McCain
t. [tu θaʊzənd ænd et] two thousand and eight, with eight which shud be /eɪt/.

In general, their introduction to fonetics is bad where it disagrees with pretty much all dictionaries. Learn fonetics somewhere else; i learned it from Wikipedia and lots of dictionaries.

Chapter 7

Nothing interesting to note here.

Chapter 8

Some time after the age of one, the child begins to repeatedly use the same string
of sounds to mean the same thing. At this stage children realize that sounds are
related to meanings. They have produced their first true words. The age of the
child when this occurs varies and has nothing to do with the child’s intelligence.
(It is reported that Einstein did not start to speak until he was three or four
years old.)

It saddens me to see a textbook with a chapter about children and learning spread this myth! It is not that hard to google it and discover that it is an urban myth. See: www.learninginfo.org/einstein-learning-disability.htm

[bərt]  “(Big) Bird”

Another annoying detail of their chosen fonetic symbols is that they fail to distinguish between schwa /ə/, which is an unstressed vowel, and the similar-sounding but potentially stressed vowel /ɜ/. Again, they don’t use the same standards as the dictionaries, which is annoying! But see: en.wikipedia.org/wiki/Schwa and en.wikipedia.org/wiki/Mid-central_vowel

1.  Hans hat ein Buch gekauft. “Hans has a book bought.”
2.  Hans kauft ein Buch. “Hans is buying a book.”

I don’t get it. How can a linguistics textbook get a translation wrong? The correct translation of (2) is “Hans buys a book.”

Another experimental technique, called the naming task, asks the subject to
read aloud a printed word. (A variant of the naming task is also used in stud-
ies of people with aphasia, who are asked to name the object shown in a pic-
ture.) Subjects read irregularly spelled words like dough and steak just slightly
more slowly than regularly spelled words like doe and stake, but still faster than
invented strings like cluff. This suggests that people can do two different things
in the naming task. They can look for the string in their mental lexicon, and if
they find it (i.e., if it is a real word), they can pronounce the stored phonologi-
cal representation for it. They can also “sound it out,” using their knowledge
of how certain letters or letter sequences (e.g., “gh,” “oe”) are most commonly
pronounced. The latter is obviously the only way to come up with a pronuncia-
tion for a nonexisting word.
The fact that irregularly spelled words are read more slowly than regularly
spelled real words suggests that the mind “notices” the irregularity. This may be
because the brain is trying to do two tasks—lexical look-up and sounding out
the word—in parallel in order to perform naming as fast as possible. When the
two approaches yield inconsistent results, a conflict arises that takes some time
to resolve.

This is very interesting! I didn’t know that irregularly spelled words are read more slowly. That’s good news, or bad news, depending. :P It is good in that i may now have another argument for spelling reform: it makes people more efficient readers. It is also testable across populations and languages: everything else equal, are people who read a well-spelled language faster readers than people who read a horribly spelled language (like english or danish)? That’s an interesting question, actually; it sounds sufficiently simple and obvius that someone must have done the study. As for the bad news: if they are right, it means i’m reading inefficiently becus i’m reading a badly spelled language. Worse, the entire world is being inefficient becus of its ‘choice’ of world language (i.e. english).
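The parallel-race explanation they give can be sketched as a toy model. The latencies below are invented; only the ordering of the three cases matters:

```python
# Toy race model of the naming task (my own sketch, invented latencies):
# lexical look-up and sounding-out run in parallel, and a mismatch
# between their outputs costs extra resolution time.
LOOKUP_MS, RULES_MS, CONFLICT_MS = 300, 360, 40

def naming_time(in_lexicon, regular_spelling):
    if not in_lexicon:                 # nonwords: only the rule route works
        return RULES_MS
    base = min(LOOKUP_MS, RULES_MS)   # the faster route wins the race
    return base if regular_spelling else base + CONFLICT_MS

regular   = naming_time(True, True)    # e.g. "stake"
irregular = naming_time(True, False)   # e.g. "steak"
nonword   = naming_time(False, True)   # e.g. "cluff"
assert regular < irregular < nonword   # matches the reported ordering
```

The conflict cost is exactly the “irregularity penalty” the quoted passage describes.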

Chapter 9

Some systems draw on formal logic for semantic representations. You put up
the switch would be represented in a function/argument form, which is its logi-
cal form:

PUT UP (YOU, THE SWITCH)

where PUT UP is a “two-place predicate,” in the jargon of logicians, and the
arguments are YOU and THE SWITCH. The lexicon indicates the appropriate
relationships between the arguments of the predicate PUT UP.

I really, really dislike the term argument when used to mean the thing one puts into a function or predicate. It is a very bad choice of words for the context (logic), where argument already has a rather precise meaning. I prefer the term variable, but there is another and better term that i prefer more, and i can’t seem to recall it right now.
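For what it’s worth, the function/argument logical form can be written directly as code, where this use of argument is standard (a minimal sketch of my own, not the book’s system):

```python
# The function/argument logical form as code (my own sketch):
# a two-place predicate is just a function of two arguments.
def put_up(agent, theme):
    """Return the logical form PUT_UP(agent, theme) as a tuple."""
    return ("PUT_UP", agent, theme)

logical_form = put_up("YOU", "THE SWITCH")
assert logical_form == ("PUT_UP", "YOU", "THE SWITCH")
```

Whatever one calls the things fed to the predicate, the structure itself is just function application.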

 A keyword as general as bird may return far more information than could be
read in ten lifetimes if a thorough search of the Web occurs. (A search on the
day of this writing produced 200 million hits, compared to 122 million four
years prior.) [...]

I re-did the search. 1,100 million hits.

Chapter 10

It is not always easy to decide whether the differences between two speech
communities reflect two dialects or two languages. Sometimes this rule-of-
thumb definition is used: When dialects become mutually unintelligible—when
the speakers of one dialect group can no longer understand the speakers of
another dialect group—these dialects become different languages.
However, this rule of thumb does not always jibe with how languages are
officially recognized, which is determined by political and social considerations.
For example, Danes speaking Danish and Norwegians speaking Norwegian and
Swedes speaking Swedish can converse with each other. Nevertheless, Danish
and Norwegian and Swedish are considered separate languages because they are
spoken in separate countries and because there are regular differences in their
grammars. Similarly, Hindi and Urdu are mutually intelligible “languages” spo-
ken in Pakistan and India, although the differences between them are not much
greater than those between the English spoken in America and the English spo-
ken in Australia.

Not citing any sources for such claims is bad. The mutual intelligibility between the spoken scandinavian languages is not that high; it is much higher for written text between norwegian (bokmål) and danish. See the Wikipedia link.

English is the most widely spoken language in the world (as a first or second
language). It is the national language of several countries, including the United
States, large parts of Canada, the British Isles, Australia, and New Zealand. For
many years it was the official language in countries that were once colonies of
Britain, including India, Nigeria, Ghana, Kenya, and the other “anglophone”
countries of Africa. There are many other phonological differences in the vari-
ous dialects of English used around the globe.

This is certainly false. Look at Wikipedia. Mandarin is the most spoken native language. English is probably the most spoken non-native language.
ETA: But then later they write

The Sino-Tibetan family includes Mandarin, the most populous language in
the world, spoken by more than one billion Chinese. This family also includes
all of the Chinese languages, as well as Burmese and Tibetan.

So, i don’t know what they think.

Even though every language is a composite of dialects, many people talk and
think about a language as if it were a well-defined fixed system with various
dialects diverging from this norm. This is false, although it is a falsehood that is
widespread. One writer of books on language accused the editors of Webster’s
Third New International Dictionary, published in 1961, of confusing “to the
point of obliteration the older distinction between standard, substandard, collo-
quial, vulgar, and slang,” attributing to them the view that “good and bad, right
and wrong, correct and incorrect no longer exist.” In the next section we argue
that such criticisms are ill founded.

It’s time for the authors to again say negative things about language standardization, and promote a very relativistic view of languages and dialects. I will defend my views against their criticisms of such views.

I don’t know about a ‘fixed’ system; if they mean an unchanging system, then i ofc don’t agree that there is any unchanging system of standard english (or standard danish etc.). However, there is a kind of danish that is the most standard. It may be a good idea to speak as normal a version of a language as possible, becus this makes it easiest for listeners to understand what one is saying. The general idea is to avoid things that are peculiar to a small minority of the speakers of the relevant language. This includes everything: syntax, grammar, word choice, pronunciation, etc. Speaking a language in the most common way is speaking the standard version of that language, nothing else. It is actually possible that no regional dialect speaks that way, but that doesn’t matter: a standard version of a language need not be a regional dialect.

A standard version of a language is also a necessity if one wants a relatively fonetic spelling system without lots of alternative forms. The idea is that one spells after the sound of the standard version of the language.

No dialect, however, is more expressive, less corrupt, more logical, more
complex, or more regular than any other dialect or language. They are sim-
ply different. More precisely, dialects represent different set of rules or lexical
items represented in the minds of its speakers. Any judgments, therefore, as to
the superiority or inferiority of a particular dialect or language are social judg-
ments, which have no linguistic or scientific basis.
To illustrate the arbitrariness of “standard usage,” consider the English r-drop
rule discussed earlier. Britain’s prestigious RP accent omits the r in words such
as “car,” “far,” and “barn.” Thus an r-less pronunciation is thought to be better
than the less prestigious rural dialects that maintain the r. However, r-drop in the
northeast United States is generally considered substandard, and the more pres-
tigious dialects preserve the r, though this was not true in the past when r-drop
was considered more prestigious. This shows that there is nothing inherently bet-
ter or worse about one pronunciation over another, but simply that one variant is
perceived of as better or worse depending on a variety of social factors.

I don’t care about the typical purist stuff like ‘corruption’, but they are certainly wrong that no dialect is more complex or regular than another. I really don’t know what makes people make these claims when they are so obviously false. A very brief example: consider a verb that is irregular in one dialect and regular in another. If everything else is equal, then clearly the second dialect is more regular (and less complex) than the first, and indeed better.
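The regularity claim can be made concrete by counting, in two invented toy ‘dialect’ lexicons, the share of verbs whose past tense does not follow the regular -ed rule:

```python
# Comparing regularity of two toy "dialects" (invented data): the share
# of verbs whose past tense is not formed by the regular -ed rule.
def irregular_share(lexicon):
    irregular = sum(1 for verb, past in lexicon.items()
                    if past != verb + "ed")
    return irregular / len(lexicon)

dialect_a = {"walk": "walked", "help": "helped", "jump": "jumped"}
dialect_b = {"walk": "walked", "help": "holp",   "jump": "jumped"}

# dialect_a is strictly more regular than dialect_b, everything else equal
assert irregular_share(dialect_a) < irregular_share(dialect_b)
```

Real dialect comparisons are of course messier, but the measure itself is perfectly well-defined, which is all the argument needs.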

Their illustration is strange. First they say that they want to illustrate the arbitrariness of “standard usage”, but then they conclude that their example “shows that there is nothing inherently better or worse about one pronunciation over another, but simply that one variant is perceived of as better or worse depending on a variety of social factors”. This is either trivially true becus of the clause about “social factors” (such clauses are almost never explained, in typical sociology fashion), or false becus these differences matter. If a difference makes speakers of other dialects fail to understand one, then it is indeed worse, since the purpose of language is generally communication. Obviously, if one is not trying to communicate with everyone using the language, this point is irrelevant.

Constructions with multiple negatives akin to AAE He don’t know nothing are
commonly found in languages of the world, including French, Italian, and the
English of Chaucer, as illustrated in the epigraph from The Canterbury Tales. The
multiple negatives of AAE are governed by rules of syntax and are not illogical.

While perhaps not ‘illogical’, multiple negatives are redundant and so increase the complexity of a language without adding any expressive power. This is a bad thing.

The authors spend some time discussing various differences between african american english (AAE) and standard american english (SAE). Some of these differences have relevance to complexity and expressive power, but i’m not knowledgeable enuf to comment on all of their points.

The first—the whole-word approach—teaches children to recognize a vocab-
ulary of some fifty to one hundred words by rote learning, often by seeing the
words used repeatedly in a story, for example, Run, Spot, Run from the Dick
and Jane series well-known to people who learned to read in the 1950s. Other
words are acquired gradually. This approach does not teach children to “sound
out” words according to the individual sounds that make up the words. Rather,
it treats the written language as though it were a logographic system, such as
Chinese, in which a single written character corresponds to a whole word or
word root. In other words, the whole-word approach fails to take advantage
of the fact that English (and the writing systems of most literate societies) is
based on an alphabet, in which the symbols correspond to the individual sounds
(roughly phonemes) of the language. This is ironic because alphabetic writing
systems are the easiest to learn and are maximally efficient for transcribing any
human language. (my bolding)

So much for their language relativism.

Chapter 12

Another simplification is that the “dead ends”—languages that evolved and
died leaving no offspring—are not included. We have already mentioned Hittite
and Tocharian as two such Indo-European languages. The family tree also fails
to show several intermediate stages that must have existed in the evolution of
modern languages. Languages do not evolve abruptly, which is why comparisons
with the genealogical trees of biology have limited usefulness. Finally, the dia-
gram fails to show some Indo-European languages because of lack of space.

The authors give the impression that in biology, species do somehow evolve abruptly. But they do no such thing, so the analogy works fine in that respect. The main problem with the analogy is that languages can share ‘genes’ (words, etc.) between ‘species’, which does not generally happen in biology (except perhaps in bacteria, via horizontal gene transfer?).

 The term sound writing is sometimes used in place of alphabetic writing, but
it does not truly represent the principle involved in the use of alphabets. One-
sound ↔ one-letter is inefficient and unintuitive, because we do not need to
represent the [pʰ] in pit and the [p] in spit by two different letters. It is confusing
to represent nonphonemic differences in writing because the sounds are seldom
perceptible to speakers. Except for the phonetic alphabets, whose function is
to record the sounds of all languages for descriptive purposes, most, if not all,
alphabets have been devised on the phonemic principle.

This is a good observation. I hadn’t thought of that. I shud update my Lyddansk to change the fonetic principle to the fonemic principle (in danish ofc). Another way of putting it in ordinary language is: one sound ↔ one symbol, but include only the differences in sound that are relevant.
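The fonemic principle is easy to demonstrate in code. Below is a toy sketch of my own (the symbols and the mapping are illustrative examples, not from the book): allophones that speakers don’t perceive as different are collapsed into a single phoneme symbol.

```python
# Toy illustration of the phonemic principle: one symbol per *phoneme*,
# not per phonetic sound. The aspirated [pʰ] of "pit" and the plain [p]
# of "spit" are different sounds but the same English phoneme /p/.
ALLOPHONE_TO_PHONEME = {
    "pʰ": "p",  # aspirated p, as in "pit"
    "p": "p",   # plain p, as in "spit"
    "tʰ": "t",  # aspirated t, as in "top"
    "t": "t",   # plain t, as in "stop"
}

def phonemic(phonetic_transcription: list[str]) -> str:
    """Map a narrow (phonetic) transcription onto a phonemic spelling."""
    return "".join(ALLOPHONE_TO_PHONEME.get(seg, seg)
                   for seg in phonetic_transcription)

print(phonemic(["pʰ", "ɪ", "t"]))      # pit  -> "pɪt"
print(phonemic(["s", "p", "ɪ", "t"]))  # spit -> "spɪt"
```

Both words come out with the same letter for /p/, which is exactly what a fonemic orthography wants: no letters wasted on differences the speakers can’t hear.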

If writing represented the spoken language perfectly, spelling reforms would
never have arisen. In chapter 6 we discussed some of the problems in the English
orthographic system. These problems prompted George Bernard Shaw to observe
that:

[I]t was as a reading and writing animal that Man achieved his human
eminence above those who are called beasts. Well, it is I and my like who
have to do the writing. I have done it professionally for the last sixty
years as well as it can be done with a hopelessly inadequate alphabet
devised centuries before the English language existed to record another
and very different language. Even this alphabet is reduced to absurdity
by a foolish orthography based on the notion that the business of spelling
is to represent the origin and history of a word instead of its sound and
meaning. Thus an intelligent child who is bidden to spell debt, and very
properly spells it d-e-t, is caned for not spelling it with a b because Julius
Caesar spelt the Latin word for it with a b.

The source of the quote is given as: Shaw, G. B. 1948. Preface to R. A. Wilson, The miraculous birth of language.

Anyway, this particular etymology is actually wrong too! There are many such false etymologies that people have based their spelling on. Utterly foolish. Quoting Wikipedia:

From the 16th century onward, English writers who were scholars of Greek and Latin literature tried to link English words to their Graeco-Latin counterparts. They did this by adding silent letters to make the real or imagined links more obvious. Thus det became debt (to link it to Latin debitum), dout became doubt (to link it to Latin dubitare), sissors became scissors and sithe became scythe (as they were wrongly thought to come from Latin scindere), iland became island (as it was wrongly thought to come from Latin insula), ake became ache (as it was wrongly thought to come from Greek akhos), and so forth.[5][6]

 

Minimum viable human population size?

Question from /sci/:
what is the smaller number of people necessary (male / female) required to create a stable genictic base for a civilization to grow from?

planned breeding programs considered

 

en.wikipedia.org/wiki/Population_bottleneck

As can be seen from the examples, extremely small bottlenecks are possible without the population going extinct. However, this says nothing about the probability of extinction given a small bottleneck. I’m not a geneticist, but I’d guess that around 1000 individuals would be fine for humans, especially with eugenic forces in mind. In fact, with those to help, it should be possible with rather few individuals, perhaps 100.

 

en.wikipedia.org/wiki/Minimum_viable_population

Since humans are very much K-strategists, perhaps the estimate has to be increased. It depends upon how much eugenics we use and what kind.
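To put a rough number on how fast a small founder population loses genetic diversity, here is a toy calculation of my own using the standard Wright–Fisher drift formula, H_t = H_0 · (1 − 1/(2N))^t. It is a sketch under idealized assumptions (random mating, constant size, no selection or eugenics), not from the linked Wikipedia articles:

```python
# Expected fraction of initial heterozygosity that remains after t
# generations of genetic drift in an idealized population of size N.
# Standard Wright-Fisher result: H_t / H_0 = (1 - 1/(2N))^t.

def heterozygosity_retained(n_individuals: int, generations: int) -> float:
    """Fraction of initial heterozygosity expected to survive drift."""
    return (1 - 1 / (2 * n_individuals)) ** generations

for n in (100, 1000):
    retained = heterozygosity_retained(n, generations=100)
    print(f"N = {n:4}: ~{retained:.0%} of diversity left after 100 generations")
```

Even with only 100 founders, most of the diversity survives for quite a while (roughly 60% after 100 generations, versus ~95% with 1000 founders), which fits the guess above. Tho note that real extinction risk also depends on demography and environment, not just genetics.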

Sweden’s crazy social experiment with gender

www.slate.com/articles/double_x/doublex/2012/04/hen_sweden_s_new_gender_neutral_pronoun_causes_controversy_.html

Read if u dare. I also predict that this will not change sex-based stereotypes. Men and women are simply different due to genetics (and epigenetics etc.).

Altho, as a linguist, i like the idea of gender-free pronouns, even tho the reasons they want to introduce them are insane. The linguist mentioned in the article is right that the EN he/she, DA han/hun, SV han/hon clutter up sentences.

The free lifestyle magazine, Nöjesguiden, which is distributed in major Swedish cities and is similar to the Village Voice, recently released an issue using hen throughout. In his column, writer Kawa Zolfagari says, “It can be hard to handle the male ego sometimes. I myself tend to get a stinging feeling when a female friend has had it with sexism or has got hurt because of some guy and desperately blurts out some generalisation about men. Sometimes I think ‘Hen knows me, hen knows I am not an idiot, why does hen speak that way of all men?’” Nöjesguiden‘s editor, Margret Atladottir, said hen ought to be included in the dictionary of the Swedish Academy, the body that awards the Nobel Prize in literature.

Why is he not complaining that she is being sexist? Oh, the double standards: it is fair to criticize men, but not women. Even men think like this; after all, ‘women and children first’ is pretty much a built-in feature of men due to men’s disposability.


Generally, this woman has alot of good videos about feminism.

Thoughts about bogus and/or pseudoscientific educational/intelligence theories

en.wikipedia.org/wiki/Theory_of_multiple_intelligences

Multiple Intelligences, the Mozart Effect, and Emotional Intelligence: A Critical Review

Inadequate evidence for Multiple Intelligences, Mozart Effect, and Emotional Intelligence Theories

The entire issue of that journal seems to be about this topic, but my university does not have online access to the journal, so i can’t post the other papers. Too bad. To be fair, it is best to read the papers of both sides, but the issue seems to be so one-sided that in this case there is no need.

 

Multiple intelligences (MI) theory (Gardner, 1983), the Mozart effect (ME) theory (Rauscher, Shaw, & Ky, 1993), and emotional intelligence (EI) theory (Salovey & Mayer, 1990) have had widespread circulation in education. All three theories have been recommended for improving classroom learning (Armstrong, 1994; Campbell, 2000; Gardner, 2004; Glennon, 2000; Rettig, 2005), and all three theories have been applied in classroom activities (Elksnin & Elksnin, 2003; Graziano, Peterson, & Shaw, 1999; Hoerr, 2003).

Although MI theory (Gardner, 1983) and EI theory (Salovey & Mayer, 1990) were proposed before the emergence of public Internet use and the ME was postulated just as Internet use began to flourish (Rauscher et al., 1993), education (.edu) Web sites representing these theories have increased at 10 times the rate of increase of professional journal articles on these theories. Table 1 reports a 3-year, six time point snapshot of the increase in both professional journal articles and Web sites. Between June 1, 2003 and December 1, 2005 Google™-accessed MI .edu Web sites increased from 25,200 to 258,000, ME .edu Web sites increased from 1,082 to 12,700, and EI .edu Web sites increased from 14,700 to 220,000. By contrast, between these same two dates, Pubmed database accessed professional journal articles did not even double: MI articles increased from 12 to 17, ME articles increased from 33 to 41, and articles on EI increased from 464 to 801.

In addition to the increase in Web sites and articles outlined on Table 1, there has also been an increase in the number of education workshops on these three theories. In the 6-month period between June 1, 2005 and December 1, 2005, Google™ site:edu workshops identified for MI increased from 10,600 to 48,300, ME workshops increased from 124 to 192, and EI workshops increased from 9,180 to 45,100.

Because these three theories have wide currency in education they should be soundly supported by empirical evidence. However, unfortunately, each theory has serious problems in empirical support. This article reviews evidence for each theory and concludes that MI theory has no validating data, that the ME theory has more negative than positive findings, and that EI theory lacks a unitary empirically supported construct. Each theory is compared to theory counterparts in cognitive psychology and cognitive neuroscience that have better empirical support. The article considers possible reasons for the appeal of these three theories and closes with a brief rationale for examining theories of cognition in the light of cognitive neuroscience research findings.

 

From this alone, it is very clear that there is something fishy going on.
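The disparity can be checked directly from the numbers quoted in the abstract. A quick sketch of the arithmetic:

```python
# Growth ratios, June 1 2003 -> December 1 2005, using the figures
# quoted in the abstract above: .edu Web site counts vs. Pubmed
# journal article counts for each theory.
web_sites = {"MI": (25200, 258000), "ME": (1082, 12700), "EI": (14700, 220000)}
articles = {"MI": (12, 17), "ME": (33, 41), "EI": (464, 801)}

for theory in ("MI", "ME", "EI"):
    w0, w1 = web_sites[theory]
    a0, a1 = articles[theory]
    print(f"{theory}: .edu sites grew {w1 / w0:.1f}x, "
          f"journal articles {a1 / a0:.2f}x")
```

So the popular uptake grew roughly 10–15x while the research literature grew well under 2x, which is exactly the pattern one expects of a fad rather than of a finding.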

Thoughts about SEP’s Naturalism

plato.stanford.edu/entries/naturalism/

The term ‘naturalism’ has no very precise meaning in contemporary philosophy. Its current usage derives from debates in America in the first half of the last century. The self-proclaimed ‘naturalists’ from that period included John Dewey, Ernest Nagel, Sidney Hook and Roy Wood Sellars. These philosophers aimed to ally philosophy more closely with science. They urged that reality is exhausted by nature, containing nothing ‘supernatural’, and that the scientific method should be used to investigate all areas of reality, including the ‘human spirit’ (Krikorian 1944, Kim 2003).

So understood, ‘naturalism’ is not a particularly informative term as applied to contemporary philosophers. The great majority of contemporary philosophers would happily accept naturalism as just characterized—that is, they would both reject ‘supernatural’ entities, and allow that science is a possible route (if not necessarily the only one) to important truths about the ‘human spirit’.

Even so, this entry will not aim to pin down any more informative definition of ‘naturalism’. It would be fruitless to try to adjudicate some official way of understanding the term. Different contemporary philosophers interpret ‘naturalism’ differently. This disagreement about usage is no accident. For better or worse, ‘naturalism’ is widely viewed as a positive term in philosophical circles—few active philosophers nowadays are happy to announce themselves as ‘non-naturalists’.[1] This inevitably leads to a divergence in understanding the requirements of ‘naturalism’. Those philosophers with relatively weak naturalist commitments are inclined to understand ‘naturalism’ in an unrestrictive way, in order not to disqualify themselves as ‘naturalists’, while those who uphold stronger naturalist doctrines are happy to set the bar for ‘naturalism’ higher.[2]

Right about this.

Some extreme naturalists deny that a priori conceptual knowledge is so much as possible (Devitt 2005). They take Quine’s case against an analytic-synthetic distinction to show that all claims are answerable to empirical data and so not purely analytic. This is not the place to assess Quine’s arguments, but it seems unlikely that they can establish so strong a conclusion. (Suppose that a certain group agrees, say, that they are going to use ‘Eve’ to refer to the most recent common matrilineal ancestor of all extant humans. Then surely they know a priori that, given general evolutionary assumptions, all contemporary humans are descended from Eve.)

Not quite. It is quite possible that there is no such most recent common matrilineal ancestor. One wud need (empirical) data to know that.

/lit/ on poetry

OP:

Why do authors say things through symbols? Especially people who are good at using the English language, I mean, if they want to say something, if they actually want to convey a specific message, then why don’t they just say it plainly? When you use a symbol to convey a message then people are going to interpret it in all kinds of ways. People aren’t going to be sure of what you mean. Seems rather impractical to me. I guess it would be understandable if they thought the message would take to long to say and people would get bored before hearing all of it and it’s possible to sum the message up with a symbol. But when you’ve got a whole book to say it, an entire fucking book, and you’re assuming people will read all of it, then the length of a message becomes irrelevant.

Anon:

Heres how it works OP

Authors don’t really care about the symbols and they don’t set out to put the symbols in from the beginning they just throw them in to give their work the illusion of depth that isn’t really there.

Most authors who are not also philosophers don’t really say anything deep or important or profound with their symbols at all because they are much too dumb. Basically they want to pretend they are smart.

If they had anything real to say, they would just say it.

Me:

This is the reason I hate poetry. If they wanted to tell me something, they should use understandable language.

Of course, if they just want to play around with language etc., then poetry is fine. Just keep me out of it, don’t force me to read it.