Archive for the ‘Language’ Category

Cambridge.University.Press.Analyzing.Grammar.An.Introduction.Jun.2005 free pdf download

 

Overall, there is not much to say about this book. It covers most of the basics. Neither particularly good or interesting, nor particularly bad or uninteresting, IMO.

For example, what is the meaning of the word hello? What information does it convey? It is a very difficult word to define, but every speaker of English knows how to use it: for greeting an acquaintance, answering the telephone, etc. We might say that hello conveys the information that the speaker wishes to acknowledge the presence of, or initiate a conversation with, the hearer. But it would be very strange to answer the phone or greet your best friend by saying “I wish to acknowledge your presence” or “I wish to initiate a conversation with you.” What is important about the word hello is not its information content (if any) but its use in social interaction.

In the Teochew language (a “dialect” of Chinese), there is no word for ‘hello’. The normal way for one friend to greet another is to ask: “Have you already eaten or not?” The expected reply is: “I have eaten,” even if this is not in fact true.

-

In our comparison of English with Teochew, we saw that both languages employ a special form of sentence for expressing Yes–No questions. In fact, most, if not all, languages have a special sentence pattern which is used for asking such questions. This shows that the linguistic form of an utterance is often closely related to its meaning and its function. On the other hand, we noted that the grammatical features of a Yes–No question in English are not the same as in Teochew. Different languages may use very different grammatical devices to express the same basic concept. So understanding the meaning and function of an utterance will not tell us everything we need to know about its form.

interesting for me becus of my work on a logic of questions and answers.

-

Both of the hypotheses we have reached so far about Lotuko words are based on the assumption that the meaning of a sentence is composed in some regular way from the meanings of the individual words. That is, we have been assuming that sentence meanings are compositional. Of course, every language includes numerous expressions where this is not the case. Idioms are one common example. The English phrase kick the bucket can mean ‘die,’ even though none of the individual words has this meaning. Nevertheless, the compositionality of meaning is an important aspect of the structure of all human languages.

for more on compositionality see: plato.stanford.edu/entries/compositionality/

emilkirkegaard.dk/en/?p=3233
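to make the idea concrete, here is a minimal sketch (my own toy example, not from the book) of a compositional semantics for a tiny arithmetic fragment of English: the meaning of the whole sentence is computed from the stored meanings of the words plus a combination rule, while an idiom would have to be listed as a whole.

```python
# Toy compositional semantics for a tiny fragment of English.
# Word meanings live in a lexicon; the meaning of "X plus Y" is
# computed from the meanings of its parts. Illustrative sketch only;
# the fragment and lexicon are made up.

lexicon = {
    "two": 2,
    "three": 3,
    "five": 5,
    "plus": lambda a, b: a + b,
    "times": lambda a, b: a * b,
}

def meaning(sentence: str):
    x, op, y = sentence.split()          # e.g. "two plus three"
    return lexicon[op](lexicon[x], lexicon[y])

print(meaning("two plus three"))   # 5
print(meaning("two times five"))   # 10

# An idiom like "kick the bucket" = 'die' would need its own lexicon
# entry, since its meaning is not computed from its parts.
```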

-

We have discussed three types of reasoning that can be used to identify the meaningful elements of an utterance (whether parts of a word or words in a sentence): minimal contrast, recurring partials, and pattern-matching. In practice, when working on a new body of data, we often use all three at once, without stopping to think which method we use for which element. Sometimes, however, it is important to be able to state explicitly the pattern of reasoning which we use to arrive at certain conclusions. For example, suppose that one of our early hypotheses about the language is contradicted by further data. We need to be able to go back and determine what evidence that hypothesis was based on so that we can re-evaluate that evidence in the light of additional information. This will help us to decide whether the hypothesis can be modified to account for all the facts, or whether it needs to be abandoned entirely. Grammatical analysis involves an endless process of “guess and check” – forming hypotheses, testing them against further data, and modifying or abandoning those which do not work.

quite a lot of science works like that. conjecture and refutation, pretty much (Popper)
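the “recurring partials” step can even be mimicked mechanically: look for a chunk of phonological form that recurs in exactly those words whose glosses share a meaning component. a rough sketch (my own, with invented data loosely modeled on the book’s Zapotec examples, not the book’s method):

```python
# Rough sketch of the "recurring partials" heuristic: a candidate morpheme
# is a substring of form shared by all words whose glosses contain a given
# meaning component. Data are invented, purely for illustration.

data = [
    ("nee",   "foot"),
    ("kanee", "plural foot"),   # i.e. 'feet'
    ("doo",   "rope"),
    ("kadoo", "plural rope"),   # i.e. 'ropes'
]

def common_substrings(a: str, b: str):
    """All substrings of a (length >= 2) that also occur in b."""
    subs = {a[i:j] for i in range(len(a)) for j in range(i + 2, len(a) + 1)}
    return {s for s in subs if s in b}

def candidates(concept: str):
    """Substrings shared by every form whose gloss contains the concept."""
    forms = [form for form, gloss in data if concept in gloss.split()]
    if len(forms) < 2:
        return set()
    shared = common_substrings(forms[0], forms[1])
    for form in forms[2:]:
        shared = {s for s in shared if s in form}
    return shared

print(candidates("foot"))    # e.g. {'nee', 'ne', 'ee'} -> 'nee' is the best guess
print(candidates("plural"))  # {'ka'} -> candidate plural prefix
```

a hypothesis produced this way is exactly the kind of thing the “guess and check” loop then tests against further data.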

-

What do we mean when we say that a certain form, such as Zapotec ka–, is a “morpheme?” Charles Hockett (1958) gave a definition of this term which is often quoted:

Morphemes are the smallest individually meaningful elements in the utterances of a language.

There are two crucial aspects of this definition. First, a morpheme is meaningful. A morpheme normally involves a consistent association of phonological form with some aspect of meaning, as seen in (7) where the form ñee was consistently associated with the concept ‘foot.’ However, this association of form with meaning can be somewhat flexible. We will see various ways in which the phonological shape of a morpheme may be altered to some extent in particular environments, and there are some morphemes whose meaning may depend partly on context.

obviously does not work for en.wikipedia.org/wiki/Cranberry_morpheme

what is the solution to this inconsistency in terminology?

-

In point (c) above we noted that a word which contains no plural marker is always singular. The chart in (17) shows that the plural prefix is optional, and that when it is present it indicates plurality; but it doesn’t say anything about the significance of the lack of a prefix. One way to tidy up this loose end is to assume that the grammar of the language includes a default rule which says something like the following: “a countable noun which contains no plural prefix is interpreted as being singular.”

Another possible way to account for the same fact is to assume that singular nouns carry an “invisible” (or null) prefix which indicates singular number. That would mean that the number prefix is actually obligatory for this class of noun. Under this approach, our chart would look something like (18):

the default theory with en.wikipedia.org/wiki/Markedness is more plausible than positing invisible morphemes.

-

since the book continues to use Malay as an ex., including the word <orang>, i’m compelled to mention that it is not a coincidence that it is similar to <orangutan>. en.wikipedia.org/wiki/Orangutan#Etymology

The name “orangutan” (also written orang-utan, orang utan, orangutang, and ourang-outang) is derived from the Malay and Indonesian words orang meaning “person” and hutan meaning “forest”,[1] thus “person of the forest”.[2] Orang Hutan was originally not used to refer to apes, but to forest-dwelling humans. The Malay words used to refer specifically to the ape are maias and mawas, but it is unclear if those words refer to just orangutans, or to all apes in general. The first attestation of the word to name the Asian ape is in Jacobus Bontius‘ 1631 Historiae naturalis et medicae Indiae orientalis – he described that Malaysians had informed him the ape was able to talk, but preferred not to “lest he be compelled to labour”.[3] The word appeared in several German-language descriptions of Indonesian zoology in the 17th century. The likely origin of the word comes specifically from the Banjarese variety of Malay.[4]

The word was first attested in English in 1691 in the form orang-outang, and variants with -ng instead of -n as in the Malay original are found in many languages. This spelling (and pronunciation) has remained in use in English up to the present, but has come to be regarded as incorrect.[5][6][7] The loss of “h” in Utan and the shift from n to -ng has been taken to suggest that the term entered English through Portuguese.[4] In 1869, British naturalist Alfred Russel Wallace, co-creator of modern evolutionary theory, published his account of Malaysia’s wildlife: The Malay Archipelago: The Land of the Orang-Utan and the Bird of Paradise.[3]

-

Traditional definitions for parts of speech are based on “notional” (i.e. semantic) properties such as the following:

(17) A noun is a word that names a person, place, or thing.
A verb is a word that names an action or event.
An adjective is a word that describes a state.

However, these characterizations fail to identify nouns like destruction, theft, beauty, heaviness. They cannot distinguish between the verb love and the adjective fond (of), or between the noun fool and the adjective foolish. Note that there is very little semantic difference between the two sentences in (18).

(18) They are fools.
They are foolish.

it is easy to fix 17a to include abstractions. all his counter-examples are abstractions.

<love> is both a noun and a verb, and the definitions in 17 classify it as both, which is right.

the 18 ex. seems weak too. what about the possibility of interpreting 18b as claiming that they are foolish? this does not mean that they are fools. it may be a temporary situation (drunk perhaps), or isolated to specific areas of reality (ex. religion).

not that i’m especially happy about semantic definitions, it’s just that the argumentation above is not convincing.

-

Third, the head is more likely to be obligatory than the modifiers or other non-head elements. For example, all of the elements of the subject noun phrase in (22a) can be omitted except the head word pigs. If this word is deleted, as in (22e), the result is ungrammatical.

(22) a [The three little pigs] eat truffles.
b [The three pigs] eat truffles.
c [The pigs] eat truffles.
d [Pigs] eat truffles.
e *[The three little] eat truffles.

not so quick. if the context makes it clear that they are speaking about pigs, or children, or whatever, 22e is perfectly understandable, since context ‘fills out’ the missing information, grammatically speaking. but the author is right in that it is incomplete, and without context to fill in, one would be forced to ask ”three little what?”. but still, the fact that one would actually respond like this shows that the utterance was understood, at least in part.

-

Of course, English noun phrases do not always contain a head noun. In certain contexts a previously mentioned head may be omitted because it is “understood,” as in (23a). This process is called ellipsis. Moreover, in English, and in many other languages, adjectives can sometimes be used without any head noun to name classes of people, as in (23b,c). But, aside from a few fairly restricted patterns like these, heads of phrases in English tend to be obligatory.

(23) a [The third little pig] was smarter than [the second ].
b [the good], [the bad] and [the ugly]
c [The rich] get richer and [the poor] get children.

i was going to write that the author doesn’t seem to understand the word ”obligatory”, but then another interpretation dawned on me. i think he means that under most conditions, one cannot leave out the noun in a noun phrase (NP), but sometimes one can. confusing wording.

-

As we can already see from example (5), different predicates require different numbers of arguments: hungry and snores require just one, loves and slapping require two. Some predicates may not require any arguments at all. For example, in many languages comments about the weather (e.g. It is raining, or It is dark, or It is hot) could be expressed by a single word, a bare predicate with no arguments.

it is worth mentioning that there is a name for this: en.wikipedia.org/wiki/Dummy_pronoun

-

It is important to remember that arguments can also be optional. For example, many transitive verbs allow an optional beneficiary argument (18a), and most transitive verbs of the agent–patient type allow an optional instrument argument (18b). The crucial fact is that adjuncts are always optional. So the inference “if obligatory then argument” is valid; but the inference “if optional then adjunct” is not.

strictly speaking, this is using the terminology incorrectly. conditionals are not inferences. the author should have written e.g. ”the inference “obligatory, therefore, argument” is valid”, or alternatively ”the conditional “if obligatory, then argument” is true”.

confusing inferences with conditionals leads to all kinds of confusion in logic.

-

Another way of specifying the transitivity of a verb is to ask, how many term (subject or object) arguments does it take? The number of terms, or direct arguments, is sometimes referred to as the valence of the verb. Since most verbs can be said to have a subject, the valence of a verb is normally one greater than the number of objects it takes: an intransitive verb has a valence of one, a transitive verb has a valence of two, and a ditransitive verb has a valence of three.

the author is just talking about how many operands the expressed predicate has. there are also verbs which can express predicates with four operands. consider <transfer>, ex. ”Peter transfers 5USD from Mike to Jim.”. There Peter is subject, agent; 5USD is object, theme; Jim is recipient, ?; Mike is anti-recipient?, ?.

The distinction between OBJ2 and OBL makes little to no sense to me.

It is important to notice that the valence of the verb (in this sense) is not the same as the number of arguments it takes. For example, the verb donate takes three semantic arguments, as illustrated in (8). However, donate has a valence of two because it takes only two term arguments, SUBJ and OBJ. With this predicate, the recipient is always expressed as an oblique argument.

(8) a Michael Jackson donated his sunglasses to the National Museum.
b donate < agent, theme, recipient >
             SUBJ    OBJ      OBL

Some linguists use the term “semantic valence” to refer to the number of semantic arguments which a predicate takes, and “syntactic valence” to specify the number of terms which a verb requires. In this book we will use the term “valence” primarily in the latter (syntactic) sense.

doesn’t help.

-

We have already seen that some verbs can be used in more than one way. In chapter 4, for example, we saw that the verb give occurs in two different clause patterns, as illustrated in (10). We can now see that these two uses of the verb involve the same semantic roles but a different assignment of Grammatical Relations, i.e. different subcategorization. This difference is represented in (11). The lexical entry for give must allow for both of these configurations.[3]

(10) a John gave Mary his old radio.
b John gave his old radio to Mary.

(11) a give < agent, theme, recipient >
              SUBJ    OBJ2     OBJ
b give < agent, theme, recipient >
              SUBJ    OBJ      OBL

it seems to me that there is something wholly wrong with a theory that treats 10a-b so differently. those two sentences mean the same thing, and their structure is similar, and only one word makes the difference. this word seems to just have the function of allowing for another order of the operands of the verb.
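one way to see what the two entries in (11) amount to is to write them down as data; a minimal sketch in my own representation (not the book’s notation), just to make the two frames explicit:

```python
# Minimal sketch of the two subcategorization frames for "give" in (11):
# the semantic roles are identical, only the mapping to grammatical
# relations (SUBJ, OBJ, OBJ2, OBL) differs. Representation is my own.

give_entries = [
    {   # (11a)  John gave Mary his old radio.
        "roles": ("agent", "theme", "recipient"),
        "grammatical_relations": ("SUBJ", "OBJ2", "OBJ"),
    },
    {   # (11b)  John gave his old radio to Mary.
        "roles": ("agent", "theme", "recipient"),
        "grammatical_relations": ("SUBJ", "OBJ", "OBL"),
    },
]

for entry in give_entries:
    print(dict(zip(entry["roles"], entry["grammatical_relations"])))
# {'agent': 'SUBJ', 'theme': 'OBJ2', 'recipient': 'OBJ'}
# {'agent': 'SUBJ', 'theme': 'OBJ', 'recipient': 'OBL'}
```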

-

A number of languages have grammatical processes which, in effect, “change” an oblique argument into an object. The result is a change in the valence of the verb. This can be illustrated by the sentences in (19). In (19a), the beneficiary argument is expressed as an OBL, but in (19b) the beneficiary is expressed as an OBJ. So (19b) contains one more term than (19a), and the valence of the verb has increased from two to three; but there is no change in the number of semantic arguments. Grammatical operations which increase or decrease the valence of a verb are a topic of great interest to syntacticians. We will discuss a few of these operations in chapter 14.

(19) a John baked a cake for Mary.
b John baked Mary a cake.

IMO, these two have the exact same number of operands, both have 3. the word <for> allows for a different ordering, i.e., it is a syntax-modifier.

at least, that’s one reading. 19a seems to be a less clear case of my alternative theory. one reading of 19a is that Mary was tasked with baking a cake, but John baked it for her. another reading has the same meaning as 19b.

-

(20) a #The young sausage likes the white dog.
b #Mary sings a white cake.
c #A small dog gives Mary to the young tree.

(21) a *John likes.
b *Mary gives the young boy.
c *The girl yawns Mary.

The examples in (20) are grammatical but semantically ill-formed – they don’t make sense.[4]

the footnote is: One reason for saying that examples like (20) and (22) are grammatical, even though they sound so odd, is that it would often be possible to invent a context (e.g. in a fairy tale or a piece of science fiction) in which these sentences would be quite acceptable. This is not possible for ungrammatical sentences like those in (21).

i can think of several contexts where 21b makes sense. think of a situation where everybody is required to give something/someone to someone. after it is mentioned that several other people give this and that, 21b follows. in that context it makes sense just fine. however, it is because the recipient is implicit, since it is unnecessary (economic principle) to mention the recipient in every single sentence or clause.

21c is interpretable if one considers ”Mary” an utterance that the girl utters while yawning.

21a is almost common on Facebook. ”John likes this”, shortened to ”John likes”.

not that i think the author is wrong, i’m just being creative. :)

-

The famous example in (23) was used by Chomsky (1957) to show how a sentence can be grammatical without being meaningful. What makes this sentence so interesting is that it contains so many collocational clashes: something which is green cannot be colorless; ideas cannot be green, or any other color, but we cannot call them colorless either; ideas cannot sleep; sleeping is not the kind of thing one can do furiously; etc.

(23) #Colorless green ideas sleep furiously.

it is writings such as this that result in so much confusion. clearly the different <cannot>’s in the above are not about the same kind of impossibility. let’s consider them:

<something which is green cannot be colorless> this is logical impossibility. these two predicates are logically incompatible, that is, they imply the lack of each other, that is, ∀x(Green(x)→¬Colorless(x)). but actually this predicate has an internal negation. we can make it more explicit like this: ∀x(Green(x)→Colorful(x)), and ∀x(Colorful(x)↔¬Colorless(x)).

<ideas cannot be green, or any other color, but we cannot call them colorless either; ideas cannot sleep; sleeping is not the kind of thing one can do furiously> this is semantic impossibility. it concerns the meaning of the sentence. there is no meaning, and hence nothing expressed that can be true or false. from that it follows that there is nothing that can be impossible, since impossibility implies falsity. hence, if there is something connected with that sentence that is impossible, it has to be something else.

-

This kind of annotated tree diagram allows us to see at once what is wrong with the ungrammatical examples in (21) above: (21b) is incomplete, as demonstrated in (34a), while (21c) is incoherent, as demonstrated in (34b).

a better set of terms is perhaps <undersaturated> and <oversaturated>.

there is nothing inconsistent about the second that isn’t also inconsistent in the first, and hence using that term is misleading. <incomplete> does capture an essential feature, which is that something is missing. the other ex. has too much of something. one could go for <incomplete> and <overcomplete> but it sounds odd. hence my choice of different terms.

-

The pro-form one can be used to refer to the head noun when it is followed by an adjunct PP, as in (6a), but not when it is followed by a complement PP as in (6b).

(6) a The [student] with short hair is dating the one with long hair.
b ∗The [student] of Chemistry was older than the one of Physics.

6b seems fine to me.

-

There is no fixed limit on how many modifiers can appear in such a sequence. But in order to represent an arbitrarily long string of alternating adjectives and intensifiers, it is necessary to treat each such pair as a single unit. The “star” notation used in (15) is one way of representing arbitrarily long sequences of the same category. For any category X, the symbol “X∗” stands for “a sequence of any number (zero or more) of Xs.” So the symbol “AP∗” stands for “a sequence of zero or more APs.” It is easy to modify the rule in (12b) to account for examples like (14b); this analysis is shown in (15b). Under the analysis in (12a), we would need to write a more complex rule something like (15a).[3] Because simplicity tends to be favored in grammatical systems, (12b) and (15b) provide a better analysis for this construction.

(15) a NP → Det ((Adv) A)∗ N (PP)
b NP → Det AP∗ N (PP)

for those that are wondering where this use of asterisk comes from, it is from here: en.wikipedia.org/wiki/Regular_expression
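the analogy can be made literal: if we replace each word by its category symbol, rule (15b) is just a pattern with a Kleene star over AP. a rough sketch (my own encoding, not from the book):

```python
# Rough illustration of the Kleene star in rule (15b): NP -> Det AP* N (PP).
# Words are replaced by their category symbols (space-separated), and the
# rule becomes an ordinary regular expression. Encoding is my own.

import re

np_rule = re.compile(r"^Det (AP )*N( PP)?$")

tests = [
    "Det N",              # the pigs
    "Det AP N",           # the very big pigs
    "Det AP AP AP N PP",  # arbitrarily many APs, plus an optional PP
    "Det AP",             # *the very big  (no head noun)
]

for t in tests:
    print(t, "->", bool(np_rule.match(t)))
# Det N -> True
# Det AP N -> True
# Det AP AP AP N PP -> True
# Det AP -> False
```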

-

In English, a possessor phrase functions as a kind of determiner. We can see this because possessor phrases do not normally occur together with other determiners in the same NP:

(19) a the new motorcycle
b Mary’s new motorcycle
c ∗Mary’s the new motorcycle
d ∗the Mary’s new motorcycle

looks more like it is because they are using proper nouns in their example. if one used a common noun, then it works just fine:

19e: The dog’s new bone.

-

Another kind of evidence comes from the fact that predicate complement NPs cannot appear in certain constructions where direct objects can. For example, an object NP can become the subject of a passive sentence (44b) or of certain adjectives (like hard, easy, etc.) which require a verbal or clausal complement (44c). However, predicate complement NPs never occur in these positions, as illustrated in (45).

(44) a Mary tickled an elephant.
b An elephant was tickled (by Mary).
c An elephant is hard (for Mary) to tickle.

(45) a Mary became an actress.
b *An actress was become (by Mary).
c *An actress is hard (for Mary) to become.

45c is grammatical with the optional element in place: An actress is hard for Mary to become. Altho it is ofc archaic in syntax.

-

mi amamas. ‘I am happy.’
yu amamas. ‘You (sg) are happy.’
em i amamas. ‘He/she is happy.’
yumi amamas. ‘We (incl.) are happy.’
mipela i amamas. ‘We (excl.) are happy.’
yupela i amamas. ‘You (pl) are happy.’
ol i amamas. ‘They are happy.’

it is difficult not to like this system, except for the arbitrary requirement of ”i” in some places and not others. it’s clearly english-inspired. inclusive ”we” is interesting: ”youme” :D

-

This constituent is normally labeled S′ or S̄ (pronounced “S-bar”). It contains two daughters: COMP (for “complementizer”) and S (the complement clause itself). This structure is illustrated in the tree diagram in (15), which represents a sentence containing a finite clausal complement.

how to make this fit perfectly with the other use of N-bar terminology. in the case of noun phrases, we have NP on top, then N’ (with DET and adj) and then N at the bottom. it seems that we need to introduce some analogue to NP with S. the only level left is the entire sentence. SP sounds like a contradiction in terms or oxymoron though, ”sentence phrase”.

-

(from A natural history of negation)

i had been thinking about a similar idea. but these work fine as a beginning. a good hierarchy needs a lvl for approximate truth as well (like Newton’s laws), as well as actual truths. but also perhaps a dimension for the relevance of the information conveyed. a sentence can express a true proposition without that proposition being relevant to the making of just about any real life decision. for instance, the true proposition expressed by “42489054329479823423 is larger than 37828234747″ will in all likelihood never, ever be relevant for any decision. also one can note that the relevance dimension only begins when there is actually some information conveyed, that is, it doesn’t apply before level 2, as those below are meaningless pieces of language.

and things that are inconsistent can also be very useful, so it’s not clear how falseness, approximate truth, and truth relate to usefulness. but i think that the closer it is to the truth, the more likely it is to be useful. naive set theory is fine for working with many proofs, even if it is an inconsistent system.

From here.

-

Kennethamy

Frankly I cannot answer your question about Lacan because I really don’t understand what he is saying. However, let me ask you, in turn, what you think about the following quotation from Wittgenstein’s Philosophical Investigations. I think it is relevant to this discussion.

We are under the illusion that what is peculiar, profound, essential in our investigation, resides in its trying to grasp the incomparable essence of language. That is, the order existing between the concepts of proposition; word, proof, truth, experience, and so on. This order is a super-order between – so to speak – super-concepts. Whereas, of course, if the words “language,” “experience,” “world,” have a use, it must be as humble a one as that of the words “table,” “lamp,” “door.” (p. 44e)

Emil

It is funny that you bring up W. in this, Ken, as he wrote most incomprehensibly! Perhaps he was doing analytic philosophy but it is certainly extremely hard to understand anything he wrote. It’s not like reading Hume which is also hard to understand. H. is hard to understand because the texts he wrote were written 250 years ago or so. W. wrote only some 70-50 years ago and yet I can’t understand it easily. I can understand other persons from the same era just fine (Clifford, W. James, Quine, Russell, etc.).

Kennethamy

W. wrote aphoristically (like Lichtenberg) so you have to get used to his style. But what of the passage? Do you understand that?

Emil

No, I have no clue what it means. I didn’t read PI yet so maybe that is why. I read the Tractatus.

Kennethamy

Well, he says that philosophers should not think that words like, “knowledge” or “reality” have a different kind of meaning than, and need a different kind of understanding from, ordinary words like “lamp” and “table”. “Philosophical” words are not special. Their meanings are to be discovered in how they are ordinarily used. (That does not, I think, suppose you have read, PI).

Emil

Alright. Then why didn’t he just write what you just wrote? I suppose this is the paradigmatic thesis of the ordinary language philosophy.

Kennethamy

First of all it was in German. And second, it wasn’t his style. But I don’t think it was particularly hard to get that out of it. Yes, it is ordinary language philosophy. But, going beyond interpretation (I hope) don’t you think it is true? Why should “knowledge” (say) be treated differently from “lamp”?

Emil

I think it is. Especially for a person that hasn’t read much of W.’s works. You have read a lot more than I have.

I agree with it, yes.

Kennethamy

There are lots of people who think that words like “knowledge” and “information” are superconcepts which have a special philosophical meaning they do not have in ordinary discourse (and which it is beneath philosophy to treat like the word, “lamp”) That’s why they are interested in what some particular philosopher means by, “knowledge”. They think there is some “incomparable essence of language” that philosophers are “trying to grasp”.

Emil

Ok. But some words do have meanings in philosophical contexts that they do not have in other, normal contexts. Think of “valid” as an example.

Kennethamy

Yes, of course. But in that sense, “valid” is a technical term. “Knowledge” is not a technical term in the ordinary sense. It doesn’t have some deep philosophical meaning in addition to its ordinary meaning, nor is its ordinary meaning some deep meaning detached from its usual meaning. What meaning could Lacan find that was the real philosophical meaning? Where would that meaning even come from? Heidegger does the same thing. He ignores what a word means, and then finds (invents) a deep philosophical meaning for it. But he uses etymology to do that. It is wrong-headed from the word “go”. If you read Plato’s Cratylus you find how Socrates makes fun of this view of meaning (although, Plato here is making fun of himself, because he really originates this idea that the meaning of a word is its essence which is hidden).

Wittgenstein’s positive point is, of course, the ordinary language thing. But his negative point (which I think is more important for this discussion) is that terms like “knowledge” or “truth” do not have special meanings to be dug out by philosophers who are supposed to have some special faculty for spying them. Lacan has no particular insight into the essence of knowledge hidden from the rest of us which, if we understand him, will provide us with philosophical enlightenment. Why should he?

Jeeprs

@kennethamy,
There is a risk in all of this that by excluding the idea of the ‘super concept’ in W’s sense, or insisting that it must simply have the same kind of meaning as ‘lamp’ or ‘table’ that you also exclude what is most distinctive about philosophy. Surely we can acknowledge that there is a distinction between abstract and concrete expression. ‘The lamp is on the table’ is a different kind of expression to ‘knowledge has limits’.

When we ‘discuss language’ we are on a different level of explanation to merely ‘using language’. I mean, using language, you can explain many things, especially concrete and specific things, like ‘this is how to fix a lamp’ or ‘this is how to build a table’. But when it comes to discussing language itself, we are up against a different order of problem, not least of which is that we are employing the subject of the analysis to conduct the analysis. (I have a feeling that Wittgenstein said this somewhere.)

So it is important to recognise what language is for and what it can and can’t do. There are some kinds of speculations which can be articulated and might be answerable. But there are others which you can say, but might not really be possible to answer, even though they seem very simple (such as, what is number/meaning/the nature of being). Of which Wittgenstein said, that of which we cannot speak, of that we must remain silent. So knowing what not to say must be part of this whole consideration.

Kennethamy

“Lamp” is a term for a concrete object. “Knowledge” is a term for an abstract object. But the central point is that neither has a hidden meaning that only a philosopher can ferret out. The meanings of both are their use(s) by fluent speakers of the language. It is not necessary to go to Lacan or Nietzsche to discover what “knowledge” really means any more than it is to discover what “lamp” really means. As Wittgenstein wrote, “nothing is hidden”. Philosophy is not science. It is not necessary to go underneath the phenomena to discover what there really is. It is ironic that interpretationists accuse analytic philosophy of “scientism” when it is they who think that philosophy is a kind of science.

Reconstructo

@kennethamy,
I interpret Wittgenstein as saying that the philosophical language-game is not a privileged language game. To say that something isn’t hidden is not to say that everyone finds it. This is just figurative language. Wittgenstein should be read by the light of Wittgenstein. His game is one more game, the game of describing the game. I interpret him as shattering the hope (for himself and those whom he persuades) for some unified authority on meaning.
Also he stressed the relationship of language and social practice. He finally took a more holistic view of language, and dropped his reductive Tractatus views. (This is not to deny the greatness of the Tractatus. Witt is one of my favorites, early and late.)
I associate Wittgenstein with a confession of the impossibility of closure. I don’t think language is capable of tying itself up.

Kennethamy

To say that “nothing is hidden” is to say that words like “truth” or “knowledge” do not have, in addition to their ordinary everyday meanings, some secret meanings that only philosophers are able to discover. There are no secret meanings. There is no, “what the word ‘really means’” that Lacan or Heidegger has discovered.

————————-

Jeeprs

Well my reason is that a lot of what goes on in this life seems perfectly meaningless and in the true sense of the word, irrational. Many things which seem highly valued by a lot of people seem hardly worth the effort of pursuing, we live our three score years and ten, if we’re lucky, and then vanish into the oblivion from whence we came. None of it seems to make much sense to me. I am the outcome, or at least an expression, of a process which started billions of years ago inside some star somewhere. For what? Watch television? Work until I die?

That’s my reason.

Kennethamy

Just what are you questioning? (One sense of the word, “meaningless” may well be something like “irrational”. But that is not the true sense of the word. What about all the other senses of the word, “meaningless”? ). By the way, I think that “non-rational” would be a better term than “irrational”. And, just one more thing: what would it be for what goes on in this world to be rational? If you could tell me that, then I would have a better idea of what it is you are saying when you say it is irrational or it is non-rational. What is it that it is not? What would it be for you to discover that what goes on is rational?

Jeeprs

Have you ever looked out at life and thought ‘boy what does it all mean? Isn’t there more to it than just our little lives and personalities and the things we do and have?’ You know, asked The Big Questions. That’s really what I see philosophy as being. So now I am beginning to understand why we always seem to be arguing at cross purposes.

Dunno. Maybe I shouldn’t say this stuff. Maybe I am being too personal or too earnest.

Kennethamy

In my opinion, it is the belief that philosophers are supposed to ask only the Big Questions that partly fuels the view that philosophy gets nowhere and is a lot of nonsense, and is a big waste of time. And that would be right if that is what philosophy is.

Where would science have got if scientists had not rolled up their sleeves and asked many little questions.

Jeeprs

@kennethamy,
from what I know of Heidegger, I very much admire his philosophy. There are many philosophers I admire, and many of them do deal with profound questions; and I know there are many kindred spirits on the forum. But – each to his own, I don’t want to labour the point.

Kennethamy

How about “deal with seemingly profound questions”? But one of the philosopher’s seminal jobs is to ask whether a seemingly profound question is really all that profound, and what the question means, and supposes is true. Philosophers should have Hume’s “tincture of scepticism” even in regard to questions.

Fashionable Nonsense, Postmodern Intellectuals’ Abuse of Science – Alan Sokal, Jean Bricmont ebook download pdf free

 

The book contains the best single chapter on filosofy of science that iv com across. very much recommended, especially for those that dont like filosofers’ accounts of things. alot of the rest of the book is devoted to long quotes full of nonsens, and som explanations of why it is nonsens (if possible), or just som explanatory remarks about the fields invoked (say, relativity).

 

as such, this book is a must read for ppl who ar interested in the study of seudoscience, and those interested in meaningless language use. basically, it is a collection of case studies of that.

 

 

———-

 

 

[footnote] Bertrand Russell (1948, p. 196) tells the following amusing story: “I once received a letter from an eminent logician, Mrs Christine Ladd Franklin, saying that she was a solipsist, and was surprised that there were not others”. We learned this reference from Devitt (1997, p. 64).

 

LOL!

 

-

 

The answer, of course, is that we have no proof; it is simply a perfectly reasonable hypothesis. The most natural way to explain the persistence of our sensations (in particular, the unpleasant ones) is to suppose that they are caused by agents outside our consciousness. We can almost always change at will the sensations that are pure products of our imagination, but we cannot stop a war, stave off a lion, or start a broken-down car by pure thought alone. Nevertheless—and it is important to emphasize this—this argument does not refute solipsism. If anyone insists that he is a “harpsichord playing solo” (Diderot), there is no way to convince him of his error. However, we have never met a sincere solipsist and we doubt that any exist.[52] This illustrates an important principle that we shall use several times in this chapter: the mere fact that an idea is irrefutable does not imply that there is any reason to believe it is true.

 

i wonder how that epistemological point (that arguments from ignorance ar no good) works with intuitionism in math/logic?

 

-

 

The universality of Humean skepticism is also its weakness. Of course, it is irrefutable. But since no one is systematically skeptical (when he or she is sincere) with respect to ordinary knowledge, one ought to ask why skepticism is rejected in that domain and why it would nevertheless be valid when applied elsewhere, for instance, to scientific knowledge. Now, the reason why we reject systematic skepticism in everyday life is more or less obvious and is similar to the reason we reject solipsism. The best way to account for the coherence of our experience is to suppose that the outside world corresponds, at least approximately, to the image of it provided by our senses.[54]

 

54 This hypothesis receives a deeper explanation with the subsequent development of science, in particular of the biological theory of evolution. Clearly, the possession of sensory organs that reflect more or less faithfully the outside world (or, at least, some important aspects of it) confers an evolutionary advantage. Let us stress that this argument does not refute radical skepticism, but it does increase the coherence of the anti-skeptical worldview.

 

the authors ar surprisingly sofisticated filosofically, and i agree very much with their reasoning.

 

-

 

For my part, I have no doubt that, although progressive changes are to be expected in physics, the present doctrines are likely to be nearer to the truth than any rival doctrines now before the world. Science is at no moment quite right, but it is seldom quite wrong, and has, as a rule, a better chance of being right than the theories of the unscientific. It is, therefore, rational to accept it hypothetically.
—Bertrand Russell, My Philosophical Development (1995 [1959], p. 13)

 

yes, the analogy is that: science is LIKE a limit function that goes towards 1 [approximates closer to truth] over time. at any given x, it is not quite at y=1 yet, but it gets closer. it might not be completely monotonic either (and i dont know if that completely breaks the limit function, probably doesnt).

 

plato.stanford.edu/entries/scientific-progress/#Tru

 

for a quick grafical illustration, try the function f(x)=1-(-1/x) on the interval [1;∞]. The truth line is y=1 on the interval [0;∞]. in reality, the graf wud be mor unsteady and not completely monotonic, corresponding to the varius theories as they com and go in science. it is not only a matter of evidence (which is not an infallible indicator of truth either), but it is primarily a function of that.
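a quick numerical check of the suggested function, using the formula exactly as written above (note that 1-(-1/x) simplifies to 1+1/x, so it approaches the truth line y=1 from above); purely illustrative:

```python
# Quick numerical check of the convergence analogy, using the post's own
# function f(x) = 1 - (-1/x), i.e. 1 + 1/x, which approaches the truth
# line y = 1 as x grows. Illustrative only.

def f(x: float) -> float:
    return 1 - (-1 / x)

for x in [1, 10, 100, 1_000, 1_000_000]:
    print(f"x = {x:>9}   f(x) = {f(x):.6f}   distance to 1 = {abs(f(x) - 1):.6f}")
```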

 

-

 

Once the general problems of solipsism and radical skepticism have been set aside, we can get down to work. Let us suppose that we are able to obtain some more-or-less reliable knowledge of the world, at least in everyday life. We can then ask: To what extent are our senses reliable or not? To answer this question, we can compare sense impressions among themselves and vary certain parameters of our everyday experience. We can map out in this way, step by step, a practiced rationality. When this is done systematically and with sufficient precision, science can begin.

For us, the scientific method is not radically different from the rational attitude in everyday life or in other domains of human knowledge. Historians, detectives, and plumbers—indeed, all human beings—use the same basic methods of induction, deduction, and assessment of evidence as do physicists or biochemists. Modern science tries to carry out these operations in a more careful and systematic way, by using controls and statistical tests, insisting on replication, and so forth. Moreover, scientific measurements are often much more precise than everyday observations; they allow us to discover hitherto unknown phenomena; and they often conflict with “common sense”. But the conflict is at the level of conclusions, not the basic approach.[55][56]

 

55 For example: Water appears to us as a continuous fluid, but chemical and physical experiments teach us that it is made of atoms.

 

56 Throughout this chapter, we stress the methodological continuity between scientific knowledge and everyday knowledge. This is, in our view, the proper way to respond to various skeptical challenges and to dispel the confusions generated by radical interpretations of correct philosophical ideas such as the underdetermination of theories by data. But it would be naive to push this connection too far. Science—particularly fundamental physics—introduces concepts that are hard to grasp intuitively or to connect directly to common-sense notions. (For example: forces acting instantaneously throughout the universe in Newtonian mechanics, electromagnetic fields “vibrating” in vacuum in Maxwell’s theory, curved space-time in Einstein’s general relativity.) And it is in discussions about the meaning of these theoretical concepts that various brands of realists and anti-realists (e.g., instrumentalists, pragmatists) tend to part company. Relativists sometimes tend to fall back on instrumentalist positions when challenged, but there is a profound difference between the two attitudes. Instrumentalists may want to claim either that we have no way of knowing whether “unobservable” theoretical entities really exist, or that their meaning is defined solely through measurable quantities; but this does not imply that they regard such entities as “subjective” in the sense that their meaning would be significantly influenced by extra-scientific factors (such as the personality of the individual scientist or the social characteristics of the group to which she belongs). Indeed, instrumentalists may regard our scientific theories as, quite simply, the most satisfactory way that the human mind, with its inherent biological limitations, is capable of understanding the world.

 

right they ar

 

-

 

Having reached this point in the discussion, the radical skeptic or relativist will ask what distinguishes science from other types of discourse about reality—religions or myths, for example, or pseudo-sciences such as astrology—and, above all, what criteria are used to make such a distinction. Our answer is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definitive way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of scientific rationality, and we seriously doubt that one could ever exist. After all, the future is inherently unpredictable; rationality is always an adaptation to a new situation. Nevertheless—and this is the main difference between us and the radical skeptics—we think that well-developed scientific theories are in general supported by good arguments, but the rationality of those arguments must be analyzed case-by-case.[60]

 

60 It is also by proceeding on a case-by-case basis that one can appreciate the immensity of the gulf separating the sciences from the pseudo-sciences.

 

Sokal and Bricmont might soon becom my new favorit filosofers of science.

 

-

 

Obviously, every induction is an inference from the observed to the unobserved, and no such inference can be justified using solely deductive logic. But, as we have seen, if this argument were to be taken seriously—if rationality were to consist only of deductive logic—it would imply also that there is no good reason to believe that the Sun will rise tomorrow, and yet no one really expects the Sun not to rise.

 

id like to add, like i hav don many times befor, that ther is no reason to think that induction shud be proveable with deduction. why require that? but now coms the interesting part. if one takes induction as the basis instead of deduction, one can inductivly prove deduction. <prove> in the ordinary, non-mathematical/logical sens. the method is enumerativ induction, which i hav discussed befor.

emilkirkegaard.dk/en/?p=3219

 

-

 

But one may go further. It is natural to introduce a hierarchy in the degree of credence accorded to different theories, depending on the quantity and quality of the evidence supporting them.[95] Every scientist—indeed, every human being—proceeds in this way and grants a higher subjective probability to the best-established theories (for instance, the evolution of species or the existence of atoms) and a lower subjective probability to more speculative theories (such as detailed theories of quantum gravity). The same reasoning applies when comparing theories in natural science with those in history or sociology. For example, the evidence of the Earth’s rotation is vastly stronger than anything Kuhn could put forward in support of his historical theories. This does not mean, of course, that physicists are more clever than historians or that they use better methods, but simply that they deal with less complex problems, involving a smaller number of variables which, moreover, are easier to measure and to control. It is impossible to avoid introducing such a hierarchy in our beliefs, and this hierarchy implies that there is no conceivable argument based on the Kuhnian view of history that could give succor to those sociologists or philosophers who wish to challenge, in a blanket way, the reliability of scientific results.

 

Sokal and Bricmont even get the epistemological point about the different fields right. color me very positivly surprised.

 

-

 

Bruno Latour and His Rules of Method

The strong programme in the sociology of science has found an echo in France, particularly around Bruno Latour. His works contain a great number of propositions formulated so ambiguously that they can hardly be taken literally. And when one removes the ambiguity—as we shall do here in a few examples—one reaches the conclusion that the assertion is either true but banal, or else surprising but manifestly false.

 

sound familiar? its the good old two-faced sentences again, those that Swartz and Bradley called Janus-sentences. they yield two different interpretations, one trivial and true, one nontrivial and false. their apparent plausibility is becus of this fact.

 

quoting from Possible Worlds:

 

Janus-faced sentences

The method of possible-worlds testing is not only an invaluable aid towards resolving ambiguity; it is also an effective weapon against a particular form of linguistic sophistry. Thinkers often deceive themselves and others into supposing that they have discovered a profound truth about the universe when all they have done is utter what we shall call a “Janus-faced sentence”. Janus, according to Roman mythology, was a god with two faces who was therefore able to ‘face’ in two directions at once. Thus, by a “Janus-faced sentence” we mean a sentence which, like “In the evolutionary struggle for existence just the fittest species survive”, faces in two directions. It is ambiguous insofar as it may be used to express a noncontingent proposition, e.g., that in the struggle for existence just the surviving species survive, and may also be used to express a contingent proposition, e.g., the generalization that just the physically strongest species survive.

If a token of such a sentence-type is used to express a noncontingently true proposition then, of course, the truth of that proposition is indisputable; but since, in that case, it is true in all possible worlds, it does not tell us anything distinctive about the actual world. If, on the other hand, a token of such a sentence-type is used to express a contingent proposition, then of course that proposition does tell us something quite distinctive about the actual world; but in that case its truth is far from indisputable. The sophistry lies in supposing that the indisputable credentials of the one proposition can be transferred to the other just by virtue of the fact that one sentence-token might be used to express one of these propositions and a different sentence-token of one and the same sentence-type might be used to express the other of these propositions. For by virtue of the necessary truth of one of these propositions, the truth of the other — the contingent one — can be made to seem indisputable, can be made to seem, that is, as if it “stands to reason” that it should be true.

 

-

 

We could be accused here of focusing our attention on an ambiguity of formulation and of not trying to understand what Latour really means. In order to counter this objection, let us go back to the section “Appealing (to) Nature” (pp. 94-100) where the Third Rule is introduced and developed. Latour begins by ridiculing the appeal to Nature as a way of resolving scientific controversies, such as the one concerning solar neutrinos[121]:

A fierce controversy divides the astrophysicists who calculate the number of neutrinos coming out of the sun and Davis, the experimentalist who obtains a much smaller figure. It is easy to distinguish them and put the controversy to rest. Just let us see for ourselves in which camp the sun is really to be found. Somewhere the natural sun with its true number of neutrinos will close the mouths of dissenters and force them to accept the facts no matter how well written these papers were. (Latour 1987, p. 95)

 

 

Why does Latour choose to be ironic? The problem is to know how many neutrinos are emitted by the Sun, and this question is indeed difficult. We can hope that it will be resolved some day, not because “the natural sun will close the mouths of dissenters”, but because sufficiently powerful empirical data will become available. Indeed, in order to fill in the gaps in the currently available data and to discriminate between the currently existing theories, several groups of physicists have recently built detectors of different types, and they are now performing the (difficult) measurements.[122] It is thus reasonable to expect that the controversy will be settled sometime in the next few years, thanks to an accumulation of evidence that, taken together, will indicate clearly the correct solution. However, other scenarios are in principle possible: the controversy could die out because people stop being interested in the issue, or because the problem turns out to be too difficult to solve; and, at this level, sociological factors undoubtedly play a role (if only because of the budgetary constraints on research). Obviously, scientists think, or at least hope, that if the controversy is resolved it will be because of observations and not because of the literary qualities of the scientific papers. Otherwise, they will simply have ceased to do science.

 

the footnode 121 is:

The nuclear reactions that power the Sun are expected to emit copious quantities of the subatomic particle called the neutrino. By combining current theories of solar structure, nuclear physics, and elementary-particle physics, it is possible to obtain quantitative predictions for the flux and energy distribution of the solar neutrinos. Since the late 1960s, experimental physicists, beginning with the pioneering work of Raymond Davis, have been attempting to detect the solar neutrinos and measure their flux. The solar neutrinos have in fact been detected; but their flux appears to be less than one-third of the theoretical prediction. Astrophysicists and elementary-particle physicists are actively trying to determine whether the discrepancy arises from experimental error or theoretical error, and if the latter, whether the failure is in the solar models or in the elementary-particle models. For an introductory overview, see Bahcall (1990).

 

this problem sounded familiar to me.

en.wikipedia.org/wiki/Solar_neutrino_problem:

The solar neutrino problem was a major discrepancy between measurements of the numbers of neutrinos flowing through the Earth and theoretical models of the solar interior, lasting from the mid-1960s to about 2002. The discrepancy has since been resolved by new understanding of neutrino physics, requiring a modification of the Standard Model of particle physics – specifically, neutrino oscillation. Essentially, as neutrinos have mass, they can change from the type that had been expected to be produced in the Sun’s interior into two types that would not be caught by the detectors in use at the time.

 

science seems to be working. Sokal and Bricmont predicted that it wud be resolved ”in the next few years”. this was written in 1997, about 5 years befor the date Wikipedia givs for the resolution. i advise one to read the Wiki article, as it is quite good.

 

-

 

In this quote and the previous one, Latour is playing constantly on the confusion between facts and our knowledge of them.[123] The correct answer to any scientific question, solved or not, depends on the state of Nature (for example, on the number of neutrinos that the Sun really emits). Now, it happens that, for the unsolved problems, nobody knows the right answer, while for the solved ones, we do know it (at least if the accepted solution is correct, which can always be challenged). But there is no reason to adopt a “relativist” attitude in one case and a “realist” one in the other. The difference between these attitudes is a philosophical matter, and is independent of whether the problem is solved or not. For the relativist, there is simply no unique correct answer, independent of all social and cultural circumstances; this holds for the closed questions as well as for the open ones. On the other hand, the scientists who seek the correct solution are not relativist, almost by definition. Of course they do “use Nature as the external referee”: that is, they seek to know what is really happening in Nature, and they design experiments for that purpose.

 

the footnote 123 is:

An even more extreme example of this confusion appears in a recent article by Latour in La Recherche, a French monthly magazine devoted to the popularization of science (Latour 1998). Here Latour discusses what he interprets as the discovery in 1976, by French scientists working on the mummy of the pharaoh Ramses II, that his death (circa 1213 B.C.) was due to tuberculosis. Latour asks: “How could he pass away due to a bacillus discovered by Robert Koch in 1882?” Latour notes, correctly, that it would be an anachronism to assert that Ramses II was killed by machine-gun fire or died from the stress provoked by a stock-market crash. But then, Latour wonders, why isn’t death from tuberculosis likewise an anachronism? He goes so far as to assert that “Before Koch, the bacillus has no real existence.” He dismisses the common-sense notion that Koch discovered a pre-existing bacillus as “having only the appearance of common sense”. Of course, in the rest of the article, Latour gives no argument to justify these radical claims and provides no genuine alternative to the common-sense answer. He simply stresses the obvious fact that, in order to discover the cause of Ramses’ death, a sophisticated analysis in Parisian laboratories was needed. But unless Latour is putting forward the truly radical claim that nothing we discover ever existed prior to its “discovery”—in particular, that no murderer is a murderer, in the sense that he committed a crime before the police “discovered” him to be a murderer—he needs to explain what is special about bacilli, and this he has utterly failed to do. The result is that Latour is saying nothing clear, and the article oscillates between extreme banalities and blatant falsehoods.

 

?!

 

-

 

a quote from one of the crazy ppl:

 

The privileging of solid over fluid mechanics, and indeed the inability of science to deal with turbulent flow at all, she attributes to the association of fluidity with femininity. Whereas men have sex organs that protrude and become rigid, women have openings that leak menstrual blood and vaginal fluids. Although men, too, flow on occasion— when semen is emitted, for example— this aspect of their sexuality is not emphasized. It is the rigidity of the male organ that counts, not its complicity in fluid flow. These idealizations are reinscribed in mathematics, which conceives of fluids as laminated planes and other modified solid forms. In the same way that women are erased within masculinist theories and language, existing only as not-men, so fluids have been erased from science, existing only as not-solids. From this perspective it is no wonder that science has not been able to arrive at a successful model for turbulence. The problem of turbulent flow cannot be solved because the conceptions of fluids (and of women) have been formulated so as necessarily to leave unarticulated remainders. (Hayles 1992, p. 17)

 

u cant make this shit up

 

-

 

Over the past three decades, remarkable progress has been made in the mathematical theory of chaos, but the idea that some physical systems may exhibit a sensitivity to initial conditions is not new. Here is what James Clerk Maxwell said in 1877, after stating the principle of determinism (“the same causes will always produce the same effects”):

 

but thats not what determinism is. their quote seems to be from Hume’s Treatise.

 

en.wikipedia.org/wiki/Causality#After_the_Middle_Ages

 

it is mentioned in his discussion of causality, which is related to but not the same as, determinism.

 

Wikipedia givs a fine definition of <determinism>: ”Determinism is a philosophy stating that for everything that happens there are conditions such that, given those conditions, nothing else could happen.”

 

also SEP: ”Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature.”

 

-

 

[T]he first difference between science and philosophy is their respective attitudes toward chaos. Chaos is defined not so much by its disorder as by the infinite speed with which every form taking shape in it vanishes. It is a void that is not a nothingness but a virtual, containing all possible particles and drawing out all possible forms, which spring up only to disappear immediately, without consistency or reference, without consequence. Chaos is an infinite speed of birth and disappearance. (Deleuze and Guattari 1994, pp. 117-118, italics in the original)

 

???

 

-

 

For what it’s worth, electrons, unlike photons, have a non-zero mass and thus cannot move at the speed of light, precisely because of the theory of relativity of which Virilio seems so fond.

 

i think the authors did not mean what they wrote here. surely, relativity theory is not the reason why electrons cannot move at the speed of light. relativity theory is an explanation of how nature works, in this case, how objects with mass and velocity/speed behave.

 

-

 

We met in Paris a student who, after having brilliantly finished his undergraduate studies in physics, began reading philosophy and in particular Deleuze. He was trying to tackle Difference and Repetition. Having read the mathematical excerpts examined here (pp. 161-164), he admitted he couldn’t see what Deleuze was driving at. Nevertheless, Deleuze’s reputation for profundity was so strong that he hesitated to draw the natural conclusion: that if someone like himself, who had studied calculus for several years, was unable to understand these texts, allegedly about calculus, it was probably because they didn’t make much sense. It seems to us that this example should have encouraged the student to analyze more critically the rest of Deleuze’s writings.

 

i think the epistemological conditions of this kind of inference ar very interesting. under which conditions shud one conclude that a text is meaningless?

 

-

 

7. Ambiguity as subterfuge. We have seen in this book numerous ambiguous texts that can be interpreted in two different ways: as an assertion that is true but relatively banal, or as one that is radical but manifestly false. And we cannot help thinking that, in many cases, these ambiguities are deliberate. Indeed, they offer a great advantage in intellectual battles: the radical interpretation can serve to attract relatively inexperienced listeners or readers; and if the absurdity of this version is exposed, the author can always defend himself by claiming to have been misunderstood, and retreat to the innocuous interpretation.

 

mor on Janus-sentences.

 

-

 

 

Exam paper for Danish and Languages of the world

Negation_in_English_and_Other_Languages pdf download ebook free

This book is actually very advanced for its age. it contains lots of stuff of interest to logicians and linguists, even those reading it today. the thing that annoys me the most is the poor quality of the scan making reading a hazzle. second to that comes the untranslated quotes from other languages (german, french, greek, latin, danish altho DA isnt a problem for me ofc). third but small annoyance is the difficulty of the reference system used.

 

 

About the existence of double negatives

 

My own pet theory is that neither is right; logically one negative suffices, but two or three in the same sentence cannot be termed illogical; they are simply a redundancy, that may be superfluous from a stylistic point of view, just as any repetition in a positive sentence (every and any, always and on all occasions, etc.), but is otherwise unobjectionable. Double negation arises because under the influence of a strong feeling the two tendencies specified above, one to attract the negative to the verb as nexal negative, and the other to prefix it to some other word capable of receiving this element, may both be gratified in the same sentence. But repeated negation seems to become a habitual phenomenon only in those languages in which the ordinary negative element is comparatively small in regard to phonetic bulk, as ne and n- in OE and Russian, en and n- in MHG., ou (sounded u) in Greek, s- or n- in Magyar. The insignificance of these elements makes it desirable to multiply them so as to prevent their being overlooked. Hence also the comparative infrequency of this repetition in English and German, after the fuller negatives not and nicht have been thoroughly established – though, as already stated, the logic of the schools and the influence of Latin has had some share in restricting the tendency to this particular kind of redundancy. It might, however, finally be said that it requires greater mental energy to content oneself with one negative, which has to be remembered during the whole length of the utterance both by the speaker and by the hearer, than to repeat the negative idea (and have it repeated) whenever an occasion offers itself.

 

seems legit

 

-

 

Jespersen came close to one of the gricean maxims

 

If we say, according to the general rule, that “not four” means “different from four”, this should be taken with a certain qualification, for in practice it generally means, not whatever is above or below 4 in the scale, but only what is below 4, thus less than 4, something between 4 and 0, just as “not everything” means something between everything and nothing (and as “not good” means ‘inferior’, but does not comprise ‘excellent’). Thus in “He does not read three books in a year” | “the hill is not two hundred feet high” | “his income is not 200 a year” | “he does not see her once a week”.

 

This explains how ‘not one’ comes to be the natural expression in many languages for ‘none, no’, and ‘not one thing’ for ‘nothing’, as in OE nan = ne-an, whence none and no, OE nanthing, whence nothing, ON eingi, whence Dan. ingen, G. k-ein etc. Cf. also Tennyson 261 That not one life shall be destroy’d . . . That not a worm is cloven in vain; see also p. 49. In French similarly: Pas un bruit n’interrompit le silence, etc.

 

When not + a numeral is exceptionally to be taken as ‘more than’, the numeral has to be strongly stressed, and generally to be followed by a more exact indication: “the hill is not ‘two hundred feet high, but three hundred” | “his income is not 200, but at least 300 a year” | Locke S. 321 Not one invention, but fifty – from a corkscrew to a machine-gun | Defoe R. 342 not once, but two or three times | Gissing R. 149 books that well merit to be pored over, not once but many a time | Benson A. 220 he would bend to kiss her, not once, not once only.

 

But not once or twice always means ‘several times’, as in Tennyson 220 Not once or twice in our rough island-story The path of duty was the way to glory.

 

In Russian, on the other hand, ne raz ‘not (a) time’, thus really without a numeral, means ‘several times, sometimes’ and in the same way ne odin ‘not one’ means ‘more than one’; corresponding phenomena are found in other languages as well, see a valuable little article by Schuchardt, An Aug. Leskien zum 4. Juli 1894 (privately printed). He rightly connects this with the use in Russian of the stronger negative ni with a numeral to signify ‘less than’: ni odin ‘not even one’.

 

What the exact import is of a negative quantitative indication may in some instances depend on what is expected, or what is the direction of thought in each case. While the two sentences “he spends 200 a year” and “he lives on 200 a year” are practically synonymous, everything is changed if we add not: “he doesn’t spend 200 a year” means ‘less than’; “he doesn’t live on 200 a year” means ‘more than’; because in the former case we expect an indication of a maximum, and in the latter of a minimum.

 

and actually the discussion continues from here. it is worth reading.

 

also normal formulations of the maxim dont take account of the fenomenon pointed out in the last paragraf.

 

-

 

Negative words or formulas may in some combinations be used in such a way that the negative force is almost vanishing. There is scarcely any difference between questions like “Will you have a glass of beer?” and “Won’t you have a glass of beer?”, because the real question is “Will you, or will you not, have....”; therefore in offering one a glass both formulas may be employed indifferently, though a marked tone of surprise can make the two sentences into distinct contrasts: “Will you have a glass of beer?” then coming to mean ‘I am surprised at your wanting it’, and “Won’t you have a glass of beer?” the reverse. (In this case really is often added.)

 

In the same way in Dan. “Vil De ha et glas øl?” and “Vil De ikke ha et glas øl?” A Dutch lady once told me how surprised she was at first in Denmark at having questions like “Vil De ikke række mig saltet?” asked her at table in a boarding-house; she took the ikke literally and did not pass the salt. Ikke is also used in indirect (reported) questions, as in Faber Stegek. 28 saa har madammen bedt Giovanni, om han ikke vil passe lidt paa barnet.

 

true, it dosent make a lot of sense. the <ikke> / <not> almost has no meaning. it seems to create a kind of ”please” meaning in the utterance.

 

-

 

In writing the forms in n’t make their appearance about 1660 and are already frequent in Dryden’s, Congreve’s, and Farquhar’s comedies. Addison in the Spectator nr. 135 speaks of mayn’t, can’t, sha’n’t, won’t, and the like as having “very much untuned our language, and clogged it with consonants”. Swift also (in the Tatler nr. 230) brands as examples of “the continual corruption of our English tongue” such forms as cou’dn’t, ha’n’t, can’t, shan’t; but nevertheless he uses some of them very often in his Journal to Stella.

 

#theyoungpeoplearedestroyingenglish

 

-

 

 

Towards a better quantitative logic

docs.google.com/document/d/1vN7pFML8N_s8HUMVmpai1rXiP1dLzeB04A9OufssEkI/edit

This is another of those ideas that ive had independently, and that it turned out that others had thought of before me, by thousands of years in this case. The idea is that longer expressions of language are made out of smaller parts of language, and that the meaning of the whole is determined by the parts and their structure. This is rather close to the formulation used on SEP. Heres the introduction on SEP:

 

Anything that deserves to be called a language must contain meaningful expressions built up from other meaningful expressions. How are their complexity and meaning related? The traditional view is that the relationship is fairly tight: the meaning of a complex expression is fully determined by its structure and the meanings of its constituents—once we fix what the parts mean and how they are put together we have no more leeway regarding the meaning of the whole. This is the principle of compositionality, a fundamental presupposition of most contemporary work in semantics.

Proponents of compositionality typically emphasize the productivity and systematicity of our linguistic understanding. We can understand a large—perhaps infinitely large—collection of complex expressions the first time we encounter them, and if we understand some complex expressions we tend to understand others that can be obtained by recombining their constituents. Compositionality is supposed to feature in the best explanation of these phenomena. Opponents of compositionality typically point to cases when meanings of larger expressions seem to depend on the intentions of the speaker, on the linguistic environment, or on the setting in which the utterance takes place without their parts displaying a similar dependence. They try to respond to the arguments from productivity and systematicity by insisting that the phenomena are limited, and by suggesting alternative explanations.

 

SEP goes on to discuss some more formal versions of the general idea:

 

(C) The meaning of a complex expression is determined by its structure and the meanings of its constituents.

 

and

(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.

 

SEP goes on to distinguish between a lot of different versions of this. See the article for details.
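For concreteness, the rule-by-rule version that most formal treatments work with can be stated as a homomorphism condition. The formulation below is my own gloss on that idea, not text quoted from SEP; μ is the meaning function and r_σ is a semantic operation paired with the syntactic rule σ:

```latex
% Rule-by-rule compositionality as a homomorphism condition (my gloss, not SEP's wording):
% if a complex expression is built by applying a syntactic rule \sigma to parts
% e_1, ..., e_n, then its meaning is obtained by applying a fixed semantic
% operation r_\sigma to the meanings of those parts.
\[
  \mu\bigl(\sigma(e_1, \ldots, e_n)\bigr) \;=\; r_\sigma\bigl(\mu(e_1), \ldots, \mu(e_n)\bigr)
\]
```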

The thing i wanted to discuss was the counterexamples offered. I found none of them particularly compelling. They are based mostly on intuition pumps as far as i can tell, and im rather wary of such (cf. Every Thing Must Go, amazon).

 

Heres SEP’s first example, using chess notation (many other game notations wud also work, e.g. Taifho):

 

Consider the Algebraic notation for chess.[15] Here are the basics. The rows of the chessboard are represented by the numerals 1, 2, … , 8; the columns are represented by the lower case letters a, b, … , h. The squares are identified by column and row; for example b5 is at the intersection of the second column and the fifth row. Upper case letters represent the pieces: K stands for king, Q for queen, R for rook, B for bishop, and N for knight. Moves are typically represented by a triplet consisting of an upper case letter standing for the piece that makes the move and a sign standing for the square where the piece moves. There are five exceptions to this: (i) moves made by pawns lack the upper case letter from the beginning, (ii) when more than one piece of the same type could reach the same square, the sign for the square of departure is placed immediately in front of the sign for the square of arrival, (iii) when a move results in a capture an x is placed immediately in front of the sign for the square of arrival, (iv) the symbol 0-0 represents castling on the king’s side, (v) the symbol 0-0-0 represents castling on the queen’s side. + stands for check, and ++ for mate. The rest of the notation serves to make commentaries about the moves and is inessential for understanding it.

Someone who understands the Algebraic notation must be able to follow descriptions of particular chess games in it and someone who can do that must be able to tell which move is represented by particular lines within such a description. Nonetheless, it is clear that when someone sees the line Bb5 in the middle of such a description, knowing what B, b, and 5 mean will not be enough to figure out what this move is supposed to be. It must be a move to b5 made by a bishop, but we don’t know which bishop (not even whether it is white or black) and we don’t know which square it is coming from. All this can be determined by following the description of the game from the beginning, assuming that one knows what the initial configurations of figures are on the chessboard, that white moves first, and that afterwards black and white move one after the other. But staring at Bb5 itself will not help.

 

It is exactly the bold lines i dont accept. Why must one be able to know that from the meaning alone? Knowing the meaning of expressions does not always make it easy to know what a given noun (or NP) refers to. In this case “B” is a noun referring to a bishop, which one? Well, who knows. There are lots of examples of words referring to different things (people usually) when used in different contexts. For instance, the word “me” refers to the source of the expression, but when an expression is used by different speakers, then “me” refers to different people, cf. indexicals (SEP and Wiki).
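A minimal sketch of that picture, treating the meaning of an indexical as a function from a context of utterance to a referent (roughly Kaplan-style character). The Context class and the example speakers are made up for illustration; nothing here is from SEP or from the chess-notation example:

```python
# Minimal sketch (not from SEP): the meaning of an indexical modeled as a
# function from a context of utterance to a referent. The same expression,
# with the same context-invariant meaning, picks out different referents in
# different contexts -- the sense in which "B" in "Bb5" can have a fixed
# meaning while referring to different bishops in different games.

from dataclasses import dataclass

@dataclass
class Context:
    speaker: str
    addressee: str

# The "character" of an indexical: a function from context to referent.
def char_me(ctx: Context) -> str:
    return ctx.speaker

def char_you(ctx: Context) -> str:
    return ctx.addressee

sentence = [char_me]  # toy "sentence" consisting of the single word "me"

for ctx in (Context(speaker="Alice", addressee="Bob"),
            Context(speaker="Bob", addressee="Alice")):
    referents = [word(ctx) for word in sentence]
    print(ctx, "->", referents)
# Same word, same meaning (the function char_me), different referents per context.
```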

 

Ofc, my thoughts about this are not particularly unique, and SEP mentions the defense that i also thought of:

 

The second moral is that—given certain assumptions about meaning in chess notation—we can have productive and systematic understanding of representations even if the system itself is not compositional. The assumptions in question are that (i) the description I gave in the first paragraph of this section fully determines what the simple expressions of chess notation mean and also how they can be combined to form complex expressions, and that (ii) the meaning of a line within a chess notation determines a move. One can reject (i) and argue, for example, that the meaning of B in Bb5 contains an indexical component and within the context of a description, it picks out a particular bishop moving from a particular square. One can also reject (ii) and argue, for example, that the meaning of Bb5 is nothing more than the meaning of ‘some bishop moves from somewhere to square b5’—utterances of Bb5 might carry extra information but that is of no concern for the semantics of the notation. Both moves would save compositionality at a price. The first complicates considerably what we have to say about lexical meanings; the second widens the gap between meanings of expressions and meanings of their utterances. Whether saving compositionality is worth either of these costs (or whether there is some other story to be told about our understanding of the Algebraic notation) is by no means clear. For all we know, Algebraic notation might be non-compositional.

 

I also dont agree that it widens the gap between meanings of expressions and meanings of utterances. It has to do with referring to stuff, not meaning in itself.

-

4.2.1 Conditionals

Consider the following minimal pair:

(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.

A good translation of (1) into a first-order language is (1′). But the analogous translation of (2) would yield (2′), which is inadequate. A good translation for (2) would be (2″) but it is unclear why. We might convert ‘¬∃’ to the equivalent ‘∀¬’ but then we must also inexplicably push the negation into the consequent of the embedded conditional.

(1′) ∀x (x works hard → x will succeed)
(2′) ¬∃x (x goofs off → x will succeed)
(2″) ∀x (x goofs off → ¬(x will succeed))

This gives rise to a problem for the compositionality of English, since it seems rather plausible that the syntactic structure of (1) and (2) is the same and that ‘if’ contributes some sort of conditional connective—not necessarily a material conditional!—to the meaning of (1). But it seems that it cannot contribute just that to the meaning of (2). More precisely, the interpretation of an embedded conditional clause appears to be sensitive to the nature of the quantifier in the embedding sentence—a violation of compositionality.[16]

One response might be to claim that ‘if’ does not contribute a conditional connective to the meaning of either (1) or (2)—rather, it marks a restriction on the domain of the quantifier, as the paraphrases under (1″) and (2″) suggest:[17]

(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.

But this simple proposal (however it may be implemented) runs into trouble when it comes to quantifiers like ‘most’. Unlike (3′), (3) says that those students (in the contextually given domain) who succeed if they work hard are most of the students (in the contextually relevant domain):

(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.

The debate whether a good semantic analysis of if-clauses under quantifiers can obey compositionality is lively and open.[18]

 

Doesnt seem particularly difficult to me. When i look at an “if-then” clause, the first thing i do before formalizing is turning it around so that “if” is first, and i also insert any missing “then”. With their example:

 

(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.

 

this results in:

 

(1)* If he works hard, then everyone will succeed.
(2)* If he goofs off, then no one will succeed.

 

Both “everyone” and “no one” express a universal quantifier, ∀. The second one has a negation as well. We can translate this to something like “all”, and “no” to “not”. Then we might get:

 

(1)** If he works hard, then all will succeed.
(2)** If he goofs off, then all will not succeed.

 

Then, we move the quantifier to the beginning and insert a pronoun, “he”, to match. Then we get something like:

 

(1)*** For any person, if he works hard, then he will succeed.
(2)*** For any person, if he goofs off, then he will not succeed.

 

These are equivalent with SEP’s

 

(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.
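To double check, here is a quick brute-force sketch (my own, not SEP’s): it enumerates every model over a two-element domain, with G for “goofs off” and S for “will succeed” as toy predicates, and confirms that the paraphrase ∀x(Gx → ¬Sx) and the naive translation ¬∃x(Gx → Sx) really do come apart, which is why SEP calls (2′) inadequate:

```python
# Brute-force check (my own illustration, not from SEP): compare the naive
# translation of (2), i.e. ¬∃x(Gx → Sx), with the paraphrase ∀x(Gx → ¬Sx),
# over all models with a two-element domain.
from itertools import product

domain = [0, 1]

def implies(p, q):  # material conditional
    return (not p) or q

mismatches = []
for G_vals, S_vals in product(product([False, True], repeat=2), repeat=2):
    G = dict(zip(domain, G_vals))
    S = dict(zip(domain, S_vals))
    naive = not any(implies(G[x], S[x]) for x in domain)       # ¬∃x(Gx → Sx)
    paraphrase = all(implies(G[x], not S[x]) for x in domain)  # ∀x(Gx → ¬Sx)
    if naive != paraphrase:
        mismatches.append((G, S))

print(f"{len(mismatches)} models separate the two formulas")
print("example:", mismatches[0])
# e.g. a model where nobody goofs off: the paraphrase is (vacuously) true,
# while ¬∃x(Gx → Sx) comes out false, since Gx → Sx is vacuously true for everyone.
```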

 

The difference between (3) and (3′) is interesting, not becus of its relevance to my method above (i think), but since it deals with something beyond first-order logic. Generalized quantifiers, i suppose? I did a brief Google and Wiki search, but didnt find quite what i was looking for. I also tried Graham Priest’s Introduction to non-classical logic, also without luck.
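For reference, generalized quantifier theory does have a standard way of writing this that sidesteps first-order logic: “most” is taken as a binary quantifier with a restrictor and a scope. The notation below is that convention as i understand it, not part of the system invented further down:

```latex
% "Most" as a binary (restricted) quantifier over a restrictor A and a scope B,
% in the style of generalized quantifier theory (my sketch of the convention):
\[
  \mathrm{Most}\,x\,[A(x)]\,[B(x)]
  \quad\text{is true iff}\quad
  \bigl|\{x : A(x) \wedge B(x)\}\bigr| \;>\; \tfrac{1}{2}\,\bigl|\{x : A(x)\}\bigr| .
\]
% (3')  Most students who work hard will succeed:
\[
  \mathrm{Most}\,x\,[S(x) \wedge W(x)]\,[U(x)]
\]
% (3) on the (inadequate) material reading of "if":
\[
  \mathrm{Most}\,x\,[S(x)]\,[W(x) \rightarrow U(x)]
\]
```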

 

So here goes some system i just invented to formalize the sentences:

 

(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.

 

Capital greek letters are set variables. # is a function that returns the cardinality of a set.

 

(3)* (∃Γ)(∃Δ)(∀x)(∀y)(Sx↔x∈Γ∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ)→(Wy→Uy))

 

In english: There is a set, gamma, and there is another set, delta, and for any x, and for any y, x is a student iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then (if y works hard, then y will succeed).

 

Quite complicated in writing, but the idea is not that complicated. It shud be possible to find some simplified writing convention for easier expression of this way of formalizing it.

 

(3′)* (∃Γ)(∃Δ)(∀x)(∀y)(((Sx∧Wx)↔x∈Γ)∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ→Uy))

 

In english: there is a set, gamma, and there is another set, delta, and for any x, and for any y, (x is a student and x works hard) iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then y will succeed.

 

To my logician intuition, these are not equivalent, but proving this is left as an exercise to the reader if he can figure out a way to do so in this set theory+predicate logic system (i might try later).
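Short of a proof, a toy model check supports the intuition. The sketch below is mine, not part of the set-theoretic system above; it reads the “if” in (3) as a material conditional under “most” (the reading SEP calls inadequate) and shows that on a small model it comes apart from the restricted reading (3′):

```python
# Sketch (mine, not the set-theoretic formalization above): compare
#   (3)  "Most students x are such that (x works hard -> x succeeds)"  [material 'if' under 'most']
#   (3') "Most students who work hard succeed"                          [restricted 'most']
# on a toy model, to show the two readings can come apart.

students = range(10)
works_hard = {0, 1}          # only two students work hard...
succeeds = set()             # ...and nobody succeeds

def most(xs, pred):
    xs = list(xs)
    return sum(pred(x) for x in xs) > len(xs) / 2

reading_3 = most(students, lambda x: (x not in works_hard) or (x in succeeds))
reading_3p = most([x for x in students if x in works_hard], lambda x: x in succeeds)

print("(3), material reading:", reading_3)    # True: 8 of 10 satisfy the conditional vacuously
print("(3'), restricted most:", reading_3p)   # False: 0 of the 2 hard workers succeed
```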

 

-

4.2.2 Cross-sentential anaphora

Consider the following minimal pair from Barbara Partee:

 

(4) I dropped ten marbles and found all but one of them. It is probably under the sofa.

(5) I dropped ten marbles and found nine of them. It is probably under the sofa.

 

There is a clear difference between (4) and (5)—the first one is unproblematic, the second markedly odd. This difference is plausibly a matter of meaning, and so (4) and (5) cannot be synonyms. Nonetheless, the first sentences are at least truth-conditionally equivalent. If we adopt a conception of meaning where truth-conditional equivalence is sufficient for synonymy, we have an apparent counterexample to compositionality.

 

I dont accept that premise either. I havent done so since i read Swartz and Bradley years ago. Sentences like

 

“Canada is north of Mexico”

“Mexico is south of Canada”

 

are logically equivalent, but are not synonymous. The concept of being north of, and the concept of being south of are not the same, even tho they stand in a kind of reverse relation. That is to say, xR1y↔yR2x. Not sure what to call such relations (the usual term seems to be converse relations). It’s symmetry+substitution of relations.

 

Sentences like

 

“Everything that is round, has a shape.”

“Nothing is not identical to itself.”

 

are logically equivalent but dont mean the same. And so on, cf. Swartz and Bradley 1979, and SEP on theories of meaning.

 

Interesting though these cases might be, it is not at all clear that we are faced with a genuine challenge to compositionality, even if we want to stick with the idea that meanings are just truth-conditions. For it is not clear that (5) lacks the normal reading of (4)—on reflection it seems better to say that the reading is available even though it is considerably harder to get. (Contrast this with an example due to—I think—Irene Heim: ‘They got married. She is beautiful.’ This is like (5) because the first sentence lacks an explicit antecedent for the pronoun in the second. Nonetheless, it is clear that the bride is said to be beautiful.) If the difference between (4) and (5) is only this, it is no longer clear that we must accept the idea that they must differ in meaning.

 

I agree that (4) and (5) mean the same, even if (5) is a rather bad way to express the thing one normally wud express with something like (4).

 

In their bride example, one can also consider homosexual weddings, where “he” and “she” similarly fail to refer to a specific person out of the two newlyweds.

-

4.2.3 Adjectives

Suppose a Japanese maple leaf, turned brown, has been painted green. Consider someone pointing at this leaf uttering (6):

 

(6) This leaf is green.

 

The utterance could be true on one occasion (say, when the speaker is sorting leaves for decoration) and false on another (say, when the speaker is trying to identify the species of tree the leaf belongs to). The meanings of the words are the same on both occasions and so is their syntactic composition. But the meaning of (6) on these two occasions—what (6) says when uttered in these occasions—is different. As Charles Travis, the inventor of this example puts it: “…words may have all the stipulated features while saying something true, but also while saying something false.”[20]

 

At least three responses offer themselves. One is to deny the relevant intuition. Perhaps the leaf really is green if it is painted green and (6) is uttered truly in both situations. Nonetheless, we might be sometimes reluctant to make such a true utterance for fear of being misleading. We might be taken to falsely suggest that the leaf is green under the paint or that it is not painted at all.[21] The second option is to point out that the fact that a sentence can say one thing on one occasion and something else on another is not in conflict with its meaning remaining the same. Do we have then a challenge to compositionality of reference, or perhaps to compositionality of content? Not clear, for the reference or content of ‘green’ may also change between the two situations. This could happen, for example, if the lexical representation of this word contains an indexical element.[22] If this seems ad hoc, we can say instead that although (6) can be used to make both true and false assertions, the truth-value of the sentence itself is determined compositionally.[23]

 

Im going to bite the bullet again, and just say that the sentence means the same on both occasions. What is different is that in different contexts, one might interpret the same sentence to express different propositions. This is not something new as it was already featured before as well, altho this time it is without indexicals. The reason is that altho the sentence means the same, one is guessing at which proposition the utterer meant to express with his sentence. Context helps with that.

-

4.2.4 Propositional attitudes

Perhaps the most widely known objection to compositionality comes from the observation that even if e and e′ are synonyms, the truth-values of sentences where they occur embedded within the clausal complement of a mental attitude verb may well differ. So, despite the fact that ‘eye-doctor’ and ‘ophthalmologist’ are synonyms (7) may be true and (8) false if Carla is ignorant of this fact:

 

(7) Carla believes that eye doctors are rich.
(8) Carla believes that ophthalmologists are rich.

 

So, we have a case of apparent violation of compositionality; cf. Pelletier (1994).

There is a sizable literature on the semantics of propositional attitude reports. Some think that considerations like this show that there are no genuine synonyms in natural languages. If so, compositionality (at least the language-bound version) is of course vacuously true. Some deny the intuition that (7) and (8) may differ in truth-conditions and seek explanations for the contrary appearance in terms of implicature.[24] Some give up the letter of compositionality but still provide recursive semantic clauses.[25] And some preserve compositionality by postulating a hidden indexical associated with ‘believe’.[26]

 

Im not entirely sure what to do about these propositional attitude reports, but im inclined to bite the bullet. Perhaps i will change my mind after i have read the two SEP articles about the matter.

 

Idiomatic language

The SEP article really didnt have a proper discussion of idiomatic language use. Say, frases like “dont mention it” which can either mean what it literally (i.e., by composition) means, or its idiomatic meaning: This is used as a response to being thanked, suggesting that the help given was no trouble (same source).

Depending on what one takes “complex expression” to mean. Recall the principle:

 

(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.

 

What is a complex expression? Is any given complex expression made up of either complex expressions themselves or simple expressions? Idiomatic expressions really just are expressions whose meaning is not determined by their parts. One might thus actually take them to be simple expressions themselves. If one does, then the composition principle is pretty close to trivially true.

 

If one does not take idiomatic expressions to be complex expressions or simple expressions, then the principle of composition is trivially false. I dont consider that a huge problem: it generally holds, and explains the things it is required to explain just fine even when it isnt universally true.

 

One can also note that idiomatic expressions can be used as parts of larger expressions. Depending on how one thinks about idiomatic expressions, and about constituents, larger expressions which have idiomatic expressions as parts of them might be trivially non-compositional. This is the case if one takes constituents to mean smallest parts. If one does, then since the idiomatic expressions’ meanings cannot be determined from syntax+smallest parts, then neither can the larger expression. If one on the other hand takes constituents to mean smallest decompositional parts, then idiomatic expressions do not trivially make the larger expressions they are part of non-compositional. Consider the sentence:

 

“He is pulling your leg”

 

the sentence is compositional since its meaning is determinable from “he”, “is”, “pulling your leg”, the syntax, and the meaning function.
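As a toy illustration of this move (mine, not SEP’s), one can put the idiom into the lexicon as a single multi-word entry and still compute the meaning of the whole from constituents plus structure. The lexicon entries and the “meanings” below are obviously made up:

```python
# Toy illustration (my own, not from SEP): a lexicon in which the idiom
# "pulling your leg" is a single entry, so sentence meaning can still be
# computed compositionally from constituents + structure.

LEXICON = {
    "he": "the male person picked out by context",
    "is": "PROG",                       # toy marker for the progressive auxiliary
    "pulling your leg": "teasing you",  # idiom stored as one unit
    "pulling": "tugging",
    "your": "belonging to the addressee",
    "leg": "leg",
}

def segment(sentence: str):
    """Greedily match the longest lexicon entries, so idioms win over their parts."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):          # try the longest span first
            span = " ".join(words[i:j])
            if span in LEXICON:
                out.append(span)
                i = j
                break
        else:
            raise ValueError(f"unknown word: {words[i]}")
    return out

def meaning(sentence: str) -> str:
    parts = segment(sentence)
    return " + ".join(LEXICON[p] for p in parts)    # crude stand-in for real composition

print(segment("He is pulling your leg"))
# ['he', 'is', 'pulling your leg']
print(meaning("He is pulling your leg"))
```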

 

There is a reason i bring up this detail, and that is that there is another kind of idiomatic use of language that apparently hasnt been mentioned so much in the literature, judging from SEP not mentioning it. It is the use of prepositions. Surely, many prepositions are used in perfectly compositional ways with other words, like in

 

“the cat is on the mat”

 

where “on” has the usual meaning of being on top of (something), or being above and resting upon or somesuch (difficult to avoid circular definitions of prepositions).

 

However, consider the use of “on” in

 

“he spent all his time on the internet”

 

clearly “on” does not mean the same as above here, it doesnt seem to mean much, it is a kind of indefinite relationship. Apparently aware of this fact (and becus languages differ in which prepositions are used in such cases), the designer of esperanto added a preposition for any indefinite relation to the language (“je”). Some languages have lots of such idiomatic preposition+noun frases, and they have to be learned by heart exactly the same way as the idiomatic expressions mentioned earlier, exactly becus they are idiomatic expressions.

 

As an illustration, in danish if one is at an island, one is “på Fyn”, but if one is at the mainland, then one is “i Jylland”. I think such usage of prepositions shud be considered idiomatic.

 

 

I just wanted to look up some stuff on the questions that a teacher had posed. Since i dont actually have the book, and since one cant search properly in paper books, i googled around instead, and ofc ended up at Wikipedia…

 

and it took off as usual. Here are the tabs i ended up with (36 tabs):

 

en.wikipedia.org/wiki/Charles_F._Hockett

en.wikipedia.org/wiki/Functional_theories_of_grammar

en.wikipedia.org/wiki/Linguistic_typology

en.wikipedia.org/wiki/Ergative%E2%80%93absolutive_language

en.wikipedia.org/wiki/Ergative_verb#In_English

en.wikipedia.org/wiki/Morphosyntactic_alignment

en.wikipedia.org/wiki/Nominative%E2%80%93accusative_language

en.wikipedia.org/wiki/V2_word_order

en.wikipedia.org/wiki/Copenhagen_school_%28linguistics%29

en.wikipedia.org/wiki/Formal_grammar

en.wikipedia.org/wiki/Deep_structure

en.wikipedia.org/wiki/Linguistics_Wars

en.wikipedia.org/wiki/Logical_form_%28linguistics%29

en.wikipedia.org/wiki/Logical_form

en.wikipedia.org/wiki/Formal_science

en.wikipedia.org/wiki/Parse_tree

en.wikipedia.org/wiki/X-bar_theory

en.wikipedia.org/wiki/Compositional_semantics

en.wikipedia.org/wiki/Parsing

en.wikipedia.org/wiki/Automata_theory

en.wikipedia.org/wiki/N-tuple

en.wikipedia.org/wiki/Ordered_set

en.wikipedia.org/wiki/Ordered_pair

en.wikipedia.org/wiki/Formal_language_theory

en.wikipedia.org/wiki/History_of_linguistics

en.wikipedia.org/wiki/When_a_White_Horse_is_Not_a_Horse

en.wikipedia.org/wiki/List_of_unsolved_problems_in_linguistics

en.wikipedia.org/wiki/Translation#Fidelity_vs._transparency

en.wikipedia.org/wiki/Prosody_%28linguistics%29

en.wikipedia.org/wiki/Sign_%28linguistics%29

en.wikipedia.org/wiki/Ferdinand_de_Saussure

en.wikipedia.org/wiki/Laryngeal_theory

en.wikipedia.org/wiki/Hittite_texts

 

 

and with three more longer texts to consume over the next day or so:

plato.stanford.edu/entries/logical-form/

plato.stanford.edu/entries/compositionality/ (which i had discovered independently)

plato.stanford.edu/entries/meaning/ (long overdue)

 

And quite a few other longer texts in pdf form also to be read in the next few days.

plato.stanford.edu/entries/vienna-circle/

plato.stanford.edu/entries/logical-empiricism/

Vienna Circle

Despite its prominent position in the rich, if fragile, intellectual culture of inter-war Vienna and most likely due to its radical doctrines, the Vienna Circle found itself virtually isolated in most of German speaking philosophy. The one exception was its contact and cooperation with the Berlin Society for Empirical (later: Scientific) Philosophy (the other point of origin of logical empiricism). The members of the Berlin Society sported a broadly similar outlook and included, besides the philosopher Hans Reichenbach, the logicians Kurt Grelling and Walter Dubislav, the psychologist Kurt Lewin, the surgeon Friedrich Kraus and the mathematician Richard von Mises. (Its leading members Reichenbach, Grelling and Dubislav were listed in the Circle’s manifesto as sympathisers.) At the same time, members of the Vienna Circle also engaged directly, if selectively, with the Warsaw logicians (Tarski visited Vienna in 1930, Carnap later that year visited Warsaw and Tarski returned to Vienna in 1935). Probably partly because of its firebrand reputation, the Circle attracted also a series of visiting younger researchers and students including Carl Gustav Hempel from Berlin, Hasso Härlen from Stuttgart, Ludovico Geymonat from Italy, Jørgen Jørgensen, Eino Kaila, Arne Naess and Ake Petzall from Scandinavia, A.J. Ayer from the UK, Albert Blumberg, Charles Morris, Ernest Nagel and W.V.O. Quine from the USA, H.A. Lindemann from Argentina and Tscha Hung from China. (The reports and recollections of these former visitors—e.g. Nagel 1936—are of interest in complementing the Circle’s in-house histories and recollections which start with the unofficial manifesto—Carnap, Hahn and Neurath 1929—and extend through Neurath 1936, Frank 1941, 1949a and Feigl 1943 to the memoirs by Carnap 1963, Feigl 1969a, 1969b, Bergmann 1987, Menger 1994.)

Never heard of that danish guy. A Google search revealed this: www.denstoredanske.dk/Samfund,_jura_og_politik/Filosofi/Filosofi_og_filosoffer_-_1900-t./Filosoffer_1900-t._-_Norden_-_biografier/J%C3%B8rgen_J%C3%B8rgensen. He is somewhat cool. I dislike his communist ideas, obviously, but at least he is more interesting than Kierkegaard.

-

The synthetic statements of the empirical sciences meanwhile were held to be cognitively meaningful if and only if they were empirically testable in some sense. They derived their justification as knowledge claims from successful tests. Here the Circle appealed to a meaning criterion the correct formulation of which was problematical and much debated (and will be discussed in greater detail in section 3.1 below). Roughly, if synthetic statements failed testability in principle they were considered to be cognitively meaningless and to give rise only to pseudo-problems. No third category of significance besides that of a priori analytical and a posteriori synthetic statements was admitted: in particular, Kant’s synthetic a priori was banned as having been refuted by the progress of science itself. (The theory of relativity showed what had been held to be an example of the synthetic a priori, namely Euclidean geometry, to be false as the geometry of physical space.) Thus the Circle rejected the knowledge claims of metaphysics as being neither analytic and a priori nor empirical and synthetic. (On related but different grounds, they also rejected the knowledge claims of normative ethics: whereas conditional norms could be grounded in means-ends relations, unconditional norms remained unprovable in empirical terms and so depended crucially on the disputed substantive a priori intuition.)

I like this idea. I generally prefer to talk about cost/benefit analyses with stated goals instead of using moral language. See also Joshua D. Greene’s dissertation about this.

-

Given their empiricism, all of the members of the Vienna Circle also called into question the principled separation of the natural and the human sciences. They were happy enough to admit to differences in their object domains, but denied the categorical difference in both their overarching methodologies and ultimate goals in inquiry, which the historicist tradition in the still only emerging social sciences and the idealist tradition in philosophy insisted on. The Circle’s own methodologically monist position was sometimes represented under the heading of “unified science”. Precisely how such a unification of the sciences was to be effected or understood remained a matter for further discussion (see section 3.3 below).

I agree with this. There is no principled distinction between natural and social sciences. Only matters of degree and areas of study, and even those overlap.

-

As noted, the Vienna Circle did not last long: its philosophical revolution came at a cost. Yet what was so socially, indeed politically, explosive about what appears on first sight to be a particularly arid, if not astringent, doctrine of specialist scientific knowledge? To a large part, precisely what made it so controversial philosophically: its claim to refute opponents not by proving their statements to be false but by showing them to be (cognitively) meaningless. Whatever the niceties of their philosophical argument here, the socio-political impact of the Vienna Circle’s philosophies of science was obvious and profound. All of them opposed the increasing groundswell of radically mistaken, indeed irrational, ways of thinking about thought and its place in the world. In their time and place, the mere demand that public discourse be perspicuous, in particular, that reasoning be valid and premises true—a demand implicit in their general ideal of reason—placed them in the middle of crucial socio-political struggles. Some members and sympathisers of the Circle also actively opposed the then increasingly popular völkisch supra-individual holism in social science as a dangerous intellectual aberration. Not only did such ideas support racism and fascism in politics, but such ideas themselves were supported only by radically mistaken arguments concerning the nature and explanation of organic and unorganic matter. So the first thing that made all of the Vienna Circle philosophies politically relevant was the contingent fact that in their day much political discourse exhibited striking epistemic deficits. That some of the members of the Circle went, without logical blunders, still further by arguing that socio-political considerations can play a legitimate role in some instances of theory choice due to underdetermination is yet another matter. Here this particular issue (see references at the end of section 2.1 above), as well as the general topic of the Circle’s embedding in modernism and the discourse of modernity (see Putnam 1981b for a reductionist, Galison 1990 for a foundationalist, Uebel 1996 for a constructivist reading of their modernism), will not be pursued further.

VERY INTERESTING.

This also reminds me of the good book The March of Unreason. Written by a politician!

-

In the first place, this liberalization meant the accommodation of universally quantified statements and the return, as it were, to salient aspects of Carnap’s 1928 conception. Everybody had noted that the Wittgensteinian verificationist criterion rendered universally quantified statements meaningless. Schlick (1931) thus followed Wittgenstein’s own suggestion to treat them instead as representing rules for the formation of verifiable singular statements. (His abandonment of conclusive verifiability is indicated only in Schlick 1936a.) By contrast, Hahn (1933, drawn from lectures in 1932) pointed out that hypotheses should be counted as properly meaningful as well and that the criterion be weakened to allow for less than conclusive verifiability. But other elements played into this liberalization as well. One that began to do so soon was the recognition of the problem of the irreducibility of disposition terms to observation terms (more on this presently). A third element was that disagreement arose as to whether the in-principle verifiability or support turned on what was merely logically possible or on what was nomologically possible, as a matter of physical law etc. A fourth element, finally, was that differences emerged as to whether the criterion of significance was to apply to all languages or whether it was to apply primarily to constructed, formal languages. Schlick retained the focus on logical possibility and natural languages throughout, but Carnap had firmly settled his focus on nomological possibility and constructed languages by the mid-thirties. Concerned with natural language, Schlick (1932, 1936a) deemed all statements meaningful for which it was logically possible to conceive of a procedure of verification; concerned with constructed languages only, Carnap (1936–37) deemed meaningful only statements for which it was nomologically possible to conceive of a procedure of confirmation or disconfirmation.

This distinction between logical and nomological possibility inre. verificationism i have encountered before. I know a fysicist who endorses verificationism. We have been discussing various problems for this view. His view has implications regarding quantum mechanics that i don’t like.

First, black holes have only 3 independent fysical properties according to standard theory: mass, charge, and angular momentum. However, how does one measure a black hole’s charge? Is it fysically possible? My idea was that it wasn’t, and thus his verificationist ideas imply that a specific part of standard theory about black holes is not just wrong, but meaningless. However, it seems that my proposed counter-example doesn’t work.

Second, another area of trouble is the future and the past. Sentences about the future and the past, are they fysically possible to verify? It seems not. If so, then it follows that all such sentences are meaningless. My fysicist friend sort of wants to bite the bullet here and go with that. I consider it a strong reason to not accept this particular kind of verificationism. The discussion then becomes complicated due to the possible truth of causal indeterminism. Future discussions await! (or maybe that sentence is just meaningless gibberish!)

Also, i consider the traditional view of laws of nature as confused, and agree with Norman Swartz about this.

-

Logical Empiricism

Richard von Mises (1883–1953)
Born in what is now the Ukraine, Richard von Mises is the brother of the economic and political theorist Ludwig von Mises. Richard was a polymath who ranged over fields as diverse as mathematics, aerodynamics, philosophy, and Rilke’s poetry. He finished his doctorate in Vienna. He was simultaneously active in Berlin, where he was one of the developers of the frequency theory of probability along with Reichenbach, and in Vienna, where he participated in various discussion groups that constituted the Vienna Circle. Eventually it was necessary to escape, first to Turkey, and eventually to MIT and Harvard.

Another polymath that i hadn’t heard about before.

-

Hilary Putnam (1926–)
This American philosopher of science, mathematics, mind and language earned his doctorate under Reichenbach at UCLA and subsequently taught at Princeton, MIT, and Harvard. He was originally a metaphysical realist, but then argued forcefully against it. He has continued the pragmatist tradition and been politically active, especially in the 1960s and 70s.

I keep thinking this is a woman. Apparently, however, the female version of this name is spelled with 2 L’s according to Wiki:

Hilary or Hillary is a given and family name, derived from the Latin hilarius meaning “cheerful”, from hilaris, “cheerful, merry”[1] which comes from the Greek ἱλαρός (hilaros), “cheerful, merry”,[2] which in turn comes from ἵλαος (hilaos), “propitious, gracious”.[3] Historically (in America), the spelling Hilary has generally been used for men and Hillary for women, though there are exceptions, some of which are noted below. In modern times it has drastically declined in popularity as a name for men. Ilaria is the popular Italian and Spanish form. Ilariana and Ylariana (/aɪˌlɑːriˈɑːnə/ eye-LAH–ree-AH-nə) are two very rare feminine variants of the name.

It also reminds me that i really shud get around to reading his famous paper: en.wikipedia.org/wiki/Is_logic_empirical%3F