Archive for the ‘Meaning’ Category

(from A natural history of negation)

i had been thinking about a similar idea, but these work fine as a beginning. a good hierarchy needs a level for approximate truth as well (like Newton’s laws), as well as actual truths, but also perhaps a dimension for the relevance of the information conveyed. a sentence can express a true proposition without that proposition being relevant to the making of just about any real life decision. for instance, the true proposition expressed by “42489054329479823423 is larger than 37828234747” will in all likelihood never, ever be relevant for any decision. one can also note that the relevance dimension only begins when there is actually some information conveyed, that is, it doesnt apply before level 2 and beyond, as the levels below ar meaningless pieces of language.

and things that are inconsistent can also be very useful, so its not clear how falseness, approximate truth, and truth relate to usefulness. but i think that the closer something is to the truth, the more likely it is to be useful. naive set theory is fine for working with many proofs, even if it is an inconsistent system.

Fashionable Nonsense, Postmodern Intellectuals’ Abuse of Science – Alan Sokal, Jean Bricmont ebook download pdf free


The book contains the best single chapter on filosofy of science that iv com across. very much recommended, especially for those that dont like filosofers’ accounts of things. a lot of the rest of the book is devoted to long quotes full of nonsens, and som explanations of why it is nonsens (when possible), or just som explanatory remarks about the fields invoked (say, relativity).


as such, this book is a must-read for ppl who ar interested in the study of seudoscience, and for those interested in meaningless language use. basically, it is a collection of case studies of both.






[footnote] Bertrand Russell (1948, p. 196) tells the following amusing story: “I once received a letter from an eminent logician, Mrs Christine Ladd Franklin, saying that she was a solipsist, and was surprised that there were not others”. We learned this reference from Devitt (1997, p. 64).






The answer, of course, is that we have no proof; it is simply a perfectly reasonable hypothesis. The most natural way to explain the persistence of our sensations (in particular, the unpleasant ones) is to suppose that they are caused by agents outside our consciousness. We can almost always change at will the sensations that are pure products of our imagination, but we cannot stop a war, stave off a lion, or start a broken-down car by pure thought alone. Nevertheless—and it is important to emphasize this—this argument does not refute solipsism. If anyone insists that he is a “harpsichord playing solo” (Diderot), there is no way to convince him of his error. However, we have never met a sincere solipsist and we doubt that any exist.52 This illustrates an important principle that we shall use several times in this chapter: the mere fact that an idea is irrefutable does not imply that there is any reason to believe it is true.


i wonder how that epistemological point (that arguments from ignorance ar no good) works with intuitionism in math/logic?




The universality of Humean skepticism is also its weakness. Of course, it is irrefutable. But since no one is systematically skeptical (when he or she is sincere) with respect to ordinary knowledge, one ought to ask why skepticism is rejected in that domain and why it would nevertheless be valid when applied elsewhere, for instance, to scientific knowledge. Now, the reason why we reject systematic skepticism in everyday life is more or less obvious and is similar to the reason we reject solipsism. The best way to account for the coherence of our experience is to suppose that the outside world corresponds, at least approximately, to the image of it provided by our senses.54


54 This hypothesis receives a deeper explanation with the subsequent development of science, in particular of the biological theory of evolution. Clearly, the possession of sensory organs that reflect more or less faithfully the outside world (or, at least, some important aspects of it) confers an evolutionary advantage. Let us stress that this argument does not refute radical skepticism, but it does increase the coherence of the anti-skeptical worldview.


the authors ar surprisingly sofisticated filosofically, and i agree very much with their reasoning.




For my part, I have no doubt that, although progressive changes are to be expected in physics, the present doctrines are likely to be nearer to the truth than any rival doctrines now before the world. Science is at no moment quite right, but it is seldom quite wrong, and has, as a rule, a better chance of being right than the theories of the unscientific. It is, therefore, rational to accept it.

—Bertrand Russell, My Philosophical Development (1995 [1959], p. 13)


yes, the analogy is that science is LIKE a function that tends towards a limit of 1 [approximates the truth ever mor closely] over time. at any given x, it is not quite at y=1 yet, but it gets closer. it need not be completely monotonic either (that doesnt break the limit: a function can converge to 1 without being monotonic).


for a quick grafical illustration, try the function f(x) = 1 - (-1/x), i.e. 1 + 1/x, on the interval [1; ∞). the truth line is y = 1 on the interval [0; ∞). in reality, the graf wud be mor unsteady and not completely monotonic, corresponding to the varius theories as they com and go in science. closeness to truth is not determined by evidence alone (evidence is not an infallible indicator of truth either), but it is primarily a function of it.
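to make the analogy concrete, here is a small Python sketch (mine, not from the text) of the function f(x) = 1 - (-1/x), i.e. 1 + 1/x, plus a noisy variant standing in for the unsteady history of actual theories. the jitter term is purely illustrative.

```python
# Sketch of the "science approaches truth" analogy:
# f(x) = 1 - (-1/x) = 1 + 1/x tends to the truth line y = 1
# as x grows, without ever reaching it.
import random

def f(x):
    """Idealized closeness-to-truth at time x (x >= 1)."""
    return 1 + 1 / x

def noisy_f(x, rng, jitter=0.05):
    """Same trend plus random wobble, standing in for theories
    coming and going: not monotonic, still convergent."""
    return f(x) + rng.uniform(-jitter, jitter)

rng = random.Random(0)
xs = [1, 10, 100, 1000]
trend = [f(x) for x in xs]        # 2.0, 1.1, 1.01, 1.001
wobbly = [noisy_f(x, rng) for x in xs]
```

the noisy values stay within the jitter band of the trend, so the wobbly curve still converges to the truth line even though it is not monotonic.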




Once the general problems of solipsism and radical skepticism have been set aside, we can get down to work. Let us suppose that we are able to obtain some more-or-less reliable knowledge of the world, at least in everyday life. We can then ask: To what extent are our senses reliable or not? To answer this question, we can compare sense impressions among themselves and vary certain parameters of our everyday experience. We can map out in this way, step by step, a practiced rationality. When this is done systematically and with sufficient precision, science can begin.

For us, the scientific method is not radically different from the rational attitude in everyday life or in other domains of human knowledge. Historians, detectives, and plumbers—indeed, all human beings—use the same basic methods of induction, deduction, and assessment of evidence as do physicists or biochemists. Modern science tries to carry out these operations in a more careful and systematic way, by using controls and statistical tests, insisting on replication, and so forth. Moreover, scientific measurements are often much more precise than everyday observations; they allow us to discover hitherto unknown phenomena; and they often conflict with “common sense”. But the conflict is at the level of conclusions, not the basic approach.55 56


55 For example: Water appears to us as a continuous fluid, but chemical and physical experiments teach us that it is made of atoms.


56 Throughout this chapter, we stress the methodological continuity between scientific knowledge and everyday knowledge. This is, in our view, the proper way to respond to various skeptical challenges and to dispel the confusions generated by radical interpretations of correct philosophical ideas such as the underdetermination of theories by data. But it would be naive to push this connection too far. Science—particularly fundamental physics—introduces concepts that are hard to grasp intuitively or to connect directly to common-sense notions. (For example: forces acting instantaneously throughout the universe in Newtonian mechanics, electromagnetic fields “vibrating” in vacuum in Maxwell’s theory, curved space-time in Einstein’s general relativity.) And it is in discussions about the meaning of these theoretical concepts that various brands of realists and anti-realists (e.g., instrumentalists, pragmatists) tend to part company. Relativists sometimes tend to fall back on instrumentalist positions when challenged, but there is a profound difference between the two attitudes. Instrumentalists may want to claim either that we have no way of knowing whether “unobservable” theoretical entities really exist, or that their meaning is defined solely through measurable quantities; but this does not imply that they regard such entities as “subjective” in the sense that their meaning would be significantly influenced by extra-scientific factors (such as the personality of the individual scientist or the social characteristics of the group to which she belongs). Indeed, instrumentalists may regard our scientific theories as, quite simply, the most satisfactory way that the human mind, with its inherent biological limitations, is capable of understanding the world.


right they ar




Having reached this point in the discussion, the radical skeptic or relativist will ask what distinguishes science from other types of discourse about reality—religions or myths, for example, or pseudo-sciences such as astrology—and, above all, what criteria are used to make such a distinction. Our answer is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definitive way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of scientific rationality, and we seriously doubt that one could ever exist. After all, the future is inherently unpredictable; rationality is always an adaptation to a new situation. Nevertheless—and this is the main difference between us and the radical skeptics—we think that well-developed scientific theories are in general supported by good arguments, but the rationality of those arguments must be analyzed case-by-case.60


60 It is also by proceeding on a case-by-case basis that one can appreciate the immensity of the gulf separating the sciences from the pseudo-sciences.


Sokal and Bricmont might soon becom my new favorit filosofers of science.




Obviously, every induction is an inference from the observed to the unobserved, and no such inference can be justified using solely deductive logic. But, as we have seen, if this argument were to be taken seriously—if rationality were to consist only of deductive logic—it would imply also that there is no good reason to believe that the Sun will rise tomorrow, and yet no one really expects the Sun not to rise.


id like to add, like i hav don many times befor, that ther is no reason to think that induction shud be provable with deduction. why require that? but now coms the interesting part: if one takes induction as the basis instead of deduction, one can inductivly prove deduction. <prove> in the ordinary, non-mathematical/logical sens. the method is enumerativ induction, which i hav discussed befor.
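as an illustration of confirming a deductive rule case by case (my sketch, not the method discussed in the post), here is a Python check of modus ponens over every truth assignment. with only two variables the enumeration happens to be exhaustive, but the spirit is confirmation by instances:

```python
# Check modus ponens, ((p -> q) and p) -> q, by enumerating
# all truth assignments to p and q. Each case is one confirming
# instance; together the cases cover the whole truth table.
from itertools import product

def implies(a, b):
    """Material implication."""
    return (not a) or b

def modus_ponens(p, q):
    return implies(implies(p, q) and p, q)

cases = list(product([False, True], repeat=2))
results = [modus_ponens(p, q) for p, q in cases]
```

every case comes out True, which is what makes the rule a tautology rather than a contingent generalization.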




But one may go further. It is natural to introduce a hierarchy in the degree of credence accorded to different theories, depending on the quantity and quality of the evidence supporting them.95 Every scientist—indeed, every human being—proceeds in this way and grants a higher subjective probability to the best-established theories (for instance, the evolution of species or the existence of atoms) and a lower subjective probability to more speculative theories (such as detailed theories of quantum gravity). The same reasoning applies when comparing theories in natural science with those in history or sociology. For example, the evidence of the Earth’s rotation is vastly stronger than anything Kuhn could put forward in support of his historical theories. This does not mean, of course, that physicists are more clever than historians or that they use better methods, but simply that they deal with less complex problems, involving a smaller number of variables which, moreover, are easier to measure and to control. It is impossible to avoid introducing such a hierarchy in our beliefs, and this hierarchy implies that there is no conceivable argument based on the Kuhnian view of history that could give succor to those sociologists or philosophers who wish to challenge, in a blanket way, the reliability of scientific knowledge.



Sokal and Bricmont even get the epistemological point about the different fields right. color me very positivly surprised.




Bruno Latour and His Rules of Method

The strong programme in the sociology of science has found an echo in France, particularly around Bruno Latour. His works contain a great number of propositions formulated so ambiguously that they can hardly be taken literally. And when one removes the ambiguity—as we shall do here in a few examples—one reaches the conclusion that the assertion is either true but banal, or else surprising but manifestly false.


sound familiar? its the good old two-faced sentences again, those that Swartz and Bradley called Janus-sentences. they yield two different interpretations: one trivial and true, one nontrivial and false. their apparent plausibility is becus of this fact.


quoting from Possible Worlds:


Janus-faced sentences

The method of possible-worlds testing is not only an invaluable aid towards resolving ambiguity; it is also an effective weapon against a particular form of linguistic sophistry. Thinkers often deceive themselves and others into supposing that they have discovered a profound truth about the universe when all they have done is utter what we shall call a “Janus-faced sentence”. Janus, according to Roman mythology, was a god with two faces who was therefore able to ‘face’ in two directions at once. Thus, by a “Janus-faced sentence” we mean a sentence which, like “In the evolutionary struggle for existence just the fittest species survive”, faces in two directions. It is ambiguous insofar as it may be used to express a noncontingent proposition, e.g., that in the struggle for existence just the surviving species survive, and may also be used to express a contingent proposition, e.g., the generalization that just the physically strongest species survive.

If a token of such a sentence-type is used to express a noncontingently true proposition then, of course, the truth of that proposition is indisputable; but since, in that case, it is true in all possible worlds, it does not tell us anything distinctive about the actual world. If, on the other hand, a token of such a sentence-type is used to express a contingent proposition, then of course that proposition does tell us something quite distinctive about the actual world; but in that case its truth is far from indisputable. The sophistry lies in supposing that the indisputable credentials of the one proposition can be transferred to the other just by virtue of the fact that one sentence-token might be used to express one of these propositions and a different sentence-token of one and the same sentence-type might be used to express the other of these propositions. For by virtue of the necessary truth of one of these propositions, the truth of the other — the contingent one — can be made to seem indisputable, can be made to seem, that is, as if it “stands to reason” that it should be true.




We could be accused here of focusing our attention on an ambiguity of formulation and of not trying to understand what Latour really means. In order to counter this objection, let us go back to the section “Appealing (to) Nature” (pp. 94-100) where the Third Rule is introduced and developed. Latour begins by ridiculing the appeal to Nature as a way of resolving scientific controversies, such as the one concerning solar neutrinos[121]:

A fierce controversy divides the astrophysicists who calculate the number of neutrinos coming out of the sun and Davis, the experimentalist who obtains a much smaller figure. It is easy to distinguish them and put the controversy to rest. Just let us see for ourselves in which camp the sun is really to be found. Somewhere the natural sun with its true number of neutrinos will close the mouths of dissenters and force them to accept the facts no matter how well written these papers were. (Latour 1987, p. 95)



Why does Latour choose to be ironic? The problem is to know how many neutrinos are emitted by the Sun, and this question is indeed difficult. We can hope that it will be resolved some day, not because “the natural sun will close the mouths of dissenters”, but because sufficiently powerful empirical data will become available. Indeed, in order to fill in the gaps in the currently available data and to discriminate between the currently existing theories, several groups of physicists have recently built detectors of different types, and they are now performing the (difficult) measurements.122 It is thus reasonable to expect that the controversy will be settled sometime in the next few years, thanks to an accumulation of evidence that, taken together, will indicate clearly the correct solution. However, other scenarios are in principle possible: the controversy could die out because people stop being interested in the issue, or because the problem turns out to be too difficult to solve; and, at this level, sociological factors undoubtedly play a role (if only because of the budgetary constraints on research). Obviously, scientists think, or at least hope, that if the controversy is resolved it will be because of observations and not because of the literary qualities of the scientific papers. Otherwise, they will simply have ceased to do science.


the footnote 121 is:

The nuclear reactions that power the Sun are expected to emit copious quantities of the subatomic particle called the neutrino. By combining current theories of solar structure, nuclear physics, and elementary-particle physics, it is possible to obtain quantitative predictions for the flux and energy distribution of the solar neutrinos. Since the late 1960s, experimental physicists, beginning with the pioneering work of Raymond Davis, have been attempting to detect the solar neutrinos and measure their flux. The solar neutrinos have in fact been detected; but their flux appears to be less than one-third of the theoretical prediction. Astrophysicists and elementary-particle physicists are actively trying to determine whether the discrepancy arises from experimental error or theoretical error, and if the latter, whether the failure is in the solar models or in the elementary-particle models. For an introductory overview, see Bahcall (1990).


this problem sounded familiar to me.

The solar neutrino problem was a major discrepancy between measurements of the numbers of neutrinos flowing through the Earth and theoretical models of the solar interior, lasting from the mid-1960s to about 2002. The discrepancy has since been resolved by new understanding of neutrino physics, requiring a modification of the Standard Model of particle physics – specifically, neutrino oscillation. Essentially, as neutrinos have mass, they can change from the type that had been expected to be produced in the Sun’s interior into two types that would not be caught by the detectors in use at the time.


science seems to be working. Sokal and Bricmont predicted that it wud be resolved ”in the next few years”. this was written in 1997, about 5 years befor the date Wikipedia givs for the resolution. i advise reading the Wiki article, as it is quite good.
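for the curious, here is a toy two-flavor vacuum-oscillation sketch in Python (my illustration; the real solar resolution also involves matter effects, so this is only a cartoon of the mechanism, and the mixing-angle and mass-splitting values are illustrative assumptions):

```python
# Two-flavor vacuum oscillation: survival probability
#   P(nu_e -> nu_e) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with L in km, E in GeV, dm2 in eV^2 (standard convention).
# Averaged over many baselines, survival drops below 1: part of
# the original flux arrives as flavors the early detectors missed.
import math

def survival(theta, dm2, L, E):
    return 1 - math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L / E) ** 2

theta = math.radians(33)   # roughly the solar mixing angle (assumed)
dm2 = 7.5e-5               # eV^2, illustrative mass splitting
E = 0.001                  # ~1 MeV neutrinos, expressed in GeV

probs = [survival(theta, dm2, L, E) for L in range(100, 100001, 100)]
average = sum(probs) / len(probs)   # noticeably below 1
```

the averaged survival probability coming out well below 1 is the qualitative point: a detector sensitive only to electron neutrinos sees a deficit even though nothing is "missing".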




In this quote and the previous one, Latour is playing constantly on the confusion between facts and our knowledge of them.123 The correct answer to any scientific question, solved or not, depends on the state of Nature (for example, on the number of neutrinos that the Sun really emits). Now, it happens that, for the unsolved problems, nobody knows the right answer, while for the solved ones, we do know it (at least if the accepted solution is correct, which can always be challenged). But there is no reason to adopt a “relativist” attitude in one case and a “realist” one in the other. The difference between these attitudes is a philosophical matter, and is independent of whether the problem is solved or not. For the relativist, there is simply no unique correct answer, independent of all social and cultural circumstances; this holds for the closed questions as well as for the open ones. On the other hand, the scientists who seek the correct solution are not relativist, almost by definition. Of course they do “use Nature as the external referee”: that is, they seek to know what is really happening in Nature, and they design experiments for that purpose.


the footnote 123 is:

An even more extreme example of this confusion appears in a recent article by Latour in La Recherche, a French monthly magazine devoted to the popularization of science (Latour 1998). Here Latour discusses what he interprets as the discovery in 1976, by French scientists working on the mummy of the pharaoh Ramses II, that his death (circa 1213 B.C.) was due to tuberculosis. Latour asks: “How could he pass away due to a bacillus discovered by Robert Koch in 1882?” Latour notes, correctly, that it would be an anachronism to assert that Ramses II was killed by machine-gun fire or died from the stress provoked by a stock-market crash. But then, Latour wonders, why isn’t death from tuberculosis likewise an anachronism? He goes so far as to assert that “Before Koch, the bacillus has no real existence.” He dismisses the common-sense notion that Koch discovered a pre-existing bacillus as “having only the appearance of common sense”. Of course, in the rest of the article, Latour gives no argument to justify these radical claims and provides no genuine alternative to the common-sense answer. He simply stresses the obvious fact that, in order to discover the cause of Ramses’ death, a sophisticated analysis in Parisian laboratories was needed. But unless Latour is putting forward the truly radical claim that nothing we discover ever existed prior to its “discovery”—in particular, that no murderer is a murderer, in the sense that he committed a crime before the police “discovered” him to be a murderer—he needs to explain what is special about bacilli, and this he has utterly failed to do. The result is that Latour is saying nothing clear, and the article oscillates between extreme banalities and blatant falsehoods.






a quote from one of the crazy ppl:


The privileging of solid over fluid mechanics, and indeed the inability of science to deal with turbulent flow at all, she attributes to the association of fluidity with femininity. Whereas men have sex organs that protrude and become rigid, women have openings that leak menstrual blood and vaginal fluids. Although men, too, flow on occasion—when semen is emitted, for example—this aspect of their sexuality is not emphasized. It is the rigidity of the male organ that counts, not its complicity in fluid flow. These idealizations are reinscribed in mathematics, which conceives of fluids as laminated planes and other modified solid forms. In the same way that women are erased within masculinist theories and language, existing only as not-men, so fluids have been erased from science, existing only as not-solids. From this perspective it is no wonder that science has not been able to arrive at a successful model for turbulence. The problem of turbulent flow cannot be solved because the conceptions of fluids (and of women) have been formulated so as necessarily to leave unarticulated remainders. (Hayles 1992, p. 17)


u cant make this shit up




Over the past three decades, remarkable progress has been made in the mathematical theory of chaos, but the idea that some physical systems may exhibit a sensitivity to initial conditions is not new. Here is what James Clerk Maxwell said in 1877, after stating the principle of determinism (“the same causes will always produce the same effects”):


but thats not what determinism is. their quote seems to be from Hume’s Treatise.


it is mentioned in his discussion of causality, which is related to but not the same as, determinism.


Wikipedia givs a fine definition of <determinism>: ”Determinism is a philosophy stating that for everything that happens there are conditions such that, given those conditions, nothing else could happen.”


also SEP: ”Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature.”




[T]he first difference between science and philosophy is their respective attitudes toward chaos. Chaos is defined not so much by its disorder as by the infinite speed with which every form taking shape in it vanishes. It is a void that is not a nothingness but a virtual, containing all possible particles and drawing out all possible forms, which spring up only to disappear immediately, without consistency or reference, without consequence. Chaos is an infinite speed of birth and disappearance. (Deleuze and Guattari 1994, pp. 117-118, italics in the original)






For what it’s worth, electrons, unlike photons, have a non-zero mass and thus cannot move at the speed of light, precisely because of the theory of relativity of which Virilio seems so […]


i think the authors did not mean quite what they wrote here. surely, relativity theory is not itself the reason why electrons cannot move at the speed of light. relativity theory is an explanation of how nature works, in this case, of how objects with mass and velocity/speed behave.




We met in Paris a student who, after having brilliantly finished his undergraduate studies in physics, began reading philosophy and in particular Deleuze. He was trying to tackle Difference and Repetition. Having read the mathematical excerpts examined here (pp. 161-164), he admitted he couldn’t see what Deleuze was driving at. Nevertheless, Deleuze’s reputation for profundity was so strong that he hesitated to draw the natural conclusion: that if someone like himself, who had studied calculus for several years, was unable to understand these texts, allegedly about calculus, it was probably because they didn’t make much sense. It seems to us that this example should have encouraged the student to analyze more critically the rest of Deleuze’s writings.


i think the epistemological conditions of this kind of inference ar very interesting. under which conditions shud one conclude that a text is meaningless?




7. Ambiguity as subterfuge. We have seen in this book numerous ambiguous texts that can be interpreted in two different ways: as an assertion that is true but relatively banal, or as one that is radical but manifestly false. And we cannot help thinking that, in many cases, these ambiguities are deliberate. Indeed, they offer a great advantage in intellectual battles: the radical interpretation can serve to attract relatively inexperienced listeners or readers; and if the absurdity of this version is exposed, the author can always defend himself by claiming to have been misunderstood, and retreat to the innocuous interpretation.



mor on Janus-sentences.





Towards a better quantitative logic

This is another of those ideas that ive had independently, and that it turned out others had thought of befor me, by thousands of years in this case. The idea is that longer expressions of language ar made out of smaller parts of language, and that the meaning of the whole is determined by the parts and their structure. This is rather close to the formulation used on SEP. Heres the introduction from SEP:


Anything that deserves to be called a language must contain meaningful expressions built up from other meaningful expressions. How are their complexity and meaning related? The traditional view is that the relationship is fairly tight: the meaning of a complex expression is fully determined by its structure and the meanings of its constituents—once we fix what the parts mean and how they are put together we have no more leeway regarding the meaning of the whole. This is the principle of compositionality, a fundamental presupposition of most contemporary work in semantics.

Proponents of compositionality typically emphasize the productivity and systematicity of our linguistic understanding. We can understand a large—perhaps infinitely large—collection of complex expressions the first time we encounter them, and if we understand some complex expressions we tend to understand others that can be obtained by recombining their constituents. Compositionality is supposed to feature in the best explanation of these phenomena. Opponents of compositionality typically point to cases when meanings of larger expressions seem to depend on the intentions of the speaker, on the linguistic environment, or on the setting in which the utterance takes place without their parts displaying a similar dependence. They try to respond to the arguments from productivity and systematicity by insisting that the phenomena are limited, and by suggesting alternative explanations.


SEP goes on to discuss some more formal versions of the general idea:


(C) The meaning of a complex expression is determined by its structure and the meanings of its constituents.



(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.


SEP goes on to distinguish between a lot of different versions of this. See the article for details.

The thing i wanted to discuss was the counterexamples offered. I found none of them compelling. They ar based mostly on intuition pumps as far as i can tell, and im rather wary of such (cf. Every Thing Must Go, amazon).


Heres SEP’s first example, using chess notation (many other game notations wud also work, e.g. Taifho):


Consider the Algebraic notation for chess.[15] Here are the basics. The rows of the chessboard are represented by the numerals 1, 2, … , 8; the columns are represented by the lower case letters a, b, … , h. The squares are identified by column and row; for example b5 is at the intersection of the second column and the fifth row. Upper case letters represent the pieces: K stands for king, Q for queen, R for rook, B for bishop, and N for knight. Moves are typically represented by a triplet consisting of an upper case letter standing for the piece that makes the move and a sign standing for the square where the piece moves. There are five exceptions to this: (i) moves made by pawns lack the upper case letter from the beginning, (ii) when more than one piece of the same type could reach the same square, the sign for the square of departure is placed immediately in front of the sign for the square of arrival, (iii) when a move results in a capture an x is placed immediately in front of the sign for the square of arrival, (iv) the symbol 0-0 represents castling on the king’s side, (v) the symbol 0-0-0 represents castling on the queen’s side. + stands for check, and ++ for mate. The rest of the notation serves to make commentaries about the moves and is inessential for understanding it.

Someone who understands the Algebraic notation must be able to follow descriptions of particular chess games in it and someone who can do that must be able to tell which move is represented by particular lines within such a description. Nonetheless, it is clear that when someone sees the line Bb5 in the middle of such a description, knowing what B, b, and 5 mean will not be enough to figure out what this move is supposed to be. It must be a move to b5 made by a bishop, but we don’t know which bishop (not even whether it is white or black) and we don’t know which square it is coming from. All this can be determined by following the description of the game from the beginning, assuming that one knows what the initial configurations of figures are on the chessboard, that white moves first, and that afterwards black and white move one after the other. But staring at Bb5 itself will not help.


It is exactly the bold lines i dont accept. Why must one be able to know that from the meaning alone? Knowing the meaning of expressions does not always make it easy to know what a given noun (or NP) refers to. In this case “B” is a noun referring to a bishop, but which one? Well, who knows. There are lots of examples of words referring to different things (people usually) when used in different contexts. For instance, the word “me” refers to the source of the expression, but when an expression is used by different speakers, then “me” refers to different people, cf. indexicals (SEP and Wiki).
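The indexical analogy can be made concrete with a toy sketch (entirely my own; the data format and function names are made up, not from SEP). The compositional part of “Bb5” fixes a piece type and a destination square; which concrete bishop is meant can only be resolved against a board state, much as an indexical is resolved against a context of utterance:

```python
PIECES = {"K": "king", "Q": "queen", "R": "rook", "B": "bishop", "N": "knight"}

def parse_move(move):
    """The compositional part: what a move means from its constituents alone."""
    if move[0] in PIECES:
        return {"piece": PIECES[move[0]], "to": move[1:]}
    return {"piece": "pawn", "to": move}  # exception (i): pawn moves drop the letter

def candidates(parsed, board):
    """The context-dependent part: which concrete pieces the move could refer to.
    board maps squares to pieces, e.g. {"f1": "white bishop"} (a made-up format)."""
    return sorted(sq for sq, piece in board.items()
                  if piece.endswith(parsed["piece"]))

move = parse_move("Bb5")
board = {"f1": "white bishop", "c8": "black bishop", "e1": "white king"}
print(move)                     # {'piece': 'bishop', 'to': 'b5'}
print(candidates(move, board))  # ['c8', 'f1']: the referent is underdetermined
```

Staring at the parsed move alone never settles the referent; only the board (the context) does.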


Ofc, my thoughts about this are not particularly original, and SEP mentions the defense that i also thought of:


The second moral is that—given certain assumptions about meaning in chess notation—we can have productive and systematic understanding of representations even if the system itself is not compositional. The assumptions in question are that (i) the description I gave in the first paragraph of this section fully determines what the simple expressions of chess notation mean and also how they can be combined to form complex expressions, and that (ii) the meaning of a line within a chess notation determines a move. One can reject (i) and argue, for example, that the meaning of B in Bb5 contains an indexical component and within the context of a description, it picks out a particular bishop moving from a particular square. One can also reject (ii) and argue, for example, that the meaning of Bb5 is nothing more than the meaning of ‘some bishop moves from somewhere to square b5’—utterances of Bb5 might carry extra information but that is of no concern for the semantics of the notation. Both moves would save compositionality at a price. The first complicates considerably what we have to say about lexical meanings; the second widens the gap between meanings of expressions and meanings of their utterances. Whether saving compositionality is worth either of these costs (or whether there is some other story to be told about our understanding of the Algebraic notation) is by no means clear. For all we know, Algebraic notation might be non-compositional.


I also dont agree that it widens the gap between meanings of expressions and meanings of utterances. It has to do with referring to stuff, not with meaning in itself.


4.2.1 Conditionals

Consider the following minimal pair:

(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.

A good translation of (1) into a first-order language is (1′). But the analogous translation of (2) would yield (2′), which is inadequate. A good translation for (2) would be (2″) but it is unclear why. We might convert ‘¬∃’ to the equivalent ‘∀¬’ but then we must also inexplicably push the negation into the consequent of the embedded conditional.

(1′) ∀x(x works hard → x will succeed)
(2′) ¬∃x(x goofs off → x will succeed)
(2″) ∀x(x goofs off → ¬(x will succeed))

This gives rise to a problem for the compositionality of English, since it seems rather plausible that the syntactic structure of (1) and (2) is the same and that ‘if’ contributes some sort of conditional connective—not necessarily a material conditional!—to the meaning of (1). But it seems that it cannot contribute just that to the meaning of (2). More precisely, the interpretation of an embedded conditional clause appears to be sensitive to the nature of the quantifier in the embedding sentence—a violation of compositionality.[16]

One response might be to claim that ‘if’ does not contribute a conditional connective to the meaning of either (1) or (2)—rather, it marks a restriction on the domain of the quantifier, as the paraphrases under (1″) and (2″) suggest:[17]

(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.

But this simple proposal (however it may be implemented) runs into trouble when it comes to quantifiers like ‘most’. Unlike (3′), (3) says that those students (in the contextually given domain) who succeed if they work hard are most of the students (in the contextually relevant domain):

(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.

The debate whether a good semantic analysis of if-clauses under quantifiers can obey compositionality is lively and open.[18]


Doesnt seem particularly difficult to me. When i look at an “if-then” clause, the first thing i do before formalizing is turning it around so that “if” is first, and i also insert any missing “then”. With their example:


(1) Everyone will succeed if he works hard.
(2) No one will succeed if he goofs off.


this results in:


(1)* If he works hard, then everyone will succeed.
(2)* If he goofs off, then no one will succeed.


Both “everyone” and “no one” express a universal quantifier, ∀. The second one has a negation as well. We can translate “everyone” to something like “all”, and “no one” to “all … not”. Then we might get:


(1)** If he works hard, then all will succeed.
(2)** If he goofs off, then all will not succeed.


Then, we move the quantifier to the beginning and insert a pronoun, “he”, to match. Then we get something like:


(1)*** For any person, if he works hard, then he will succeed.
(2)*** For any person, if he goofs off, then he will not succeed.


These are equivalent with SEP’s


(1″) Everyone who works hard will succeed.
(2″) No one who goofs off will succeed.
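The claimed (non-)equivalences can be checked mechanically. Here is a brute-force sketch (my own, not from SEP) that enumerates every interpretation of the two predicates over a two-element domain: it confirms that (2″), read as ∀x(Gx → ¬Sx), always agrees with “no one who goofs off will succeed”, i.e. ¬∃x(Gx ∧ Sx), while the naive (2′), ¬∃x(Gx → Sx), comes apart from both:

```python
from itertools import product

domain = [0, 1]

# all four ways of interpreting a one-place predicate over the domain
extensions = [{x for x, b in zip(domain, bits) if b}
              for bits in product([False, True], repeat=len(domain))]

def implies(p, q):
    return (not p) or q  # material conditional

mismatches = []
for G, S in product(extensions, repeat=2):
    p2pp   = all(implies(x in G, x not in S) for x in domain)  # (2″) ∀x(Gx → ¬Sx)
    no_one = not any(x in G and x in S for x in domain)        # ¬∃x(Gx ∧ Sx)
    p2p    = not any(implies(x in G, x in S) for x in domain)  # (2′) ¬∃x(Gx → Sx)
    assert p2pp == no_one      # (2″) and the restricted reading never come apart
    if p2p != p2pp:
        mismatches.append((G, S))

print(len(mismatches) > 0)  # True: (2′) is not equivalent to (2″)
```

For instance, with both predicates empty, (2″) is vacuously true but (2′) comes out false, which is exactly why SEP calls (2′) inadequate.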


The difference between (3) and (3′) is interesting, not becus of any relevance to my method (i think), but becus it deals with something beyond first-order logic. Generalized quantifiers, i suppose? I did a brief Google and Wiki search, but didnt find what i was looking for. I also tried Graham Priest’s Introduction to non-classical logic, also without luck.


So here goes some system i just invented to formalize the sentences:


(3) Most students will succeed if they work hard.
(3′) Most students who work hard will succeed.


Capital greek letters are set variables. # is a function that returns the cardinality of a set.


(3)* (∃Γ)(∃Δ)(∀x)(∀y)((Sx↔x∈Γ)∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ→(Wy→Uy)))


In english: There is a set, gamma, and there is another set, delta, and for any x, and for any y, x is a student iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then (if y works hard, then y will succeed).


Quite complicated in writing, but the idea is not that complicated. It shud be possible to find some simplified writing convention for easier expression of this way of formalizing it.


(3′)* (∃Γ)(∃Δ)(∀x)(∀y)(((Sx∧Wx)↔x∈Γ)∧Δ⊆Γ∧#Δ>(#Γ/2)∧(y∈Δ→Uy))


In english: there is a set, gamma, and there is another set, delta, and for any x, and for any y, (x is a student and x works hard) iff x is in gamma, and delta is a subset of gamma, and the cardinality of delta is larger than half the cardinality of gamma, and if y is in delta, then y will succeed.


To my logician intuition, these are not equivalent, but proving this is left as an exercise to the reader if he can figure out a way to do so in this set theory+predicate logic system (i might try later).
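Short of a proof in the set theory+predicate logic system, the non-equivalence can at least be confirmed by a finite countermodel search. This sketch (my own; it reads “most of A” as “more than half of A”, vacuously false when A is empty, which is a modeling choice) finds a model where (3) and (3′) disagree:

```python
from itertools import product

students = [0, 1, 2]
subsets = [{x for x, b in zip(students, bits) if b}
           for bits in product([False, True], repeat=len(students))]

def most(A, holds):
    """|{x in A : holds(x)}| > |A|/2; vacuously false for empty A."""
    return sum(1 for x in A if holds(x)) > len(A) / 2

found = None
for W, U in product(subsets, repeat=2):  # W = works hard, U = will succeed
    p3  = most(students, lambda x: (x not in W) or (x in U))       # (3)
    p3p = most([x for x in students if x in W], lambda x: x in U)  # (3′)
    if p3 != p3p:
        found = (W, U, p3, p3p)
        break

print(found)  # a model where (3) holds and (3′) fails, or vice versa
```

The very first model it finds is one where nobody works hard: then every student vacuously satisfies “works hard → succeeds”, so (3) holds, while (3′) quantifies over an empty restricted domain and fails.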



4.2.2 Cross-sentential anaphora

Consider the following minimal pair from Barbara Partee:


(4) I dropped ten marbles and found all but one of them. It is probably under the sofa.

(5) I dropped ten marbles and found nine of them. It is probably under the sofa.


There is a clear difference between (4) and (5)—the first one is unproblematic, the second markedly odd. This difference is plausibly a matter of meaning, and so (4) and (5) cannot be synonyms. Nonetheless, the first sentences are at least truth-conditionally equivalent. If we adopt a conception of meaning where truth-conditional equivalence is sufficient for synonymy, we have an apparent counterexample to compositionality.


I dont accept that premise either. I havent done so since i read Swartz and Bradley years ago. Sentences like


“Canada is north of Mexico”

“Mexico is south of Canada”


are logically equivalent, but are not synonymous. The concept of being north of, and the concept of being south of, are not the same, even tho they stand in a kind of reverse relation. That is to say, xR1y↔yR2x. Not sure what to call such relations. It’s symmetry+substitution of relations.
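The pattern xR1y↔yR2x says that R2 is what set theory calls the converse of R1, and it can be checked directly when relations are modelled as sets of ordered pairs (a tiny example of my own):

```python
# R1 = "is north of", R2 = "is south of", as sets of ordered pairs
north_of = {("Canada", "Mexico"), ("Canada", "Guatemala")}
south_of = {(y, x) for (x, y) in north_of}  # R2 built as the converse of R1

# the biconditional xR1y ↔ yR2x holds in both directions by construction
print(all((y, x) in south_of for (x, y) in north_of) and
      all((y, x) in north_of for (x, y) in south_of))  # True
```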


Sentences like


“Everything that is round, has a shape.”

“Nothing is not identical to itself.”


are logically equivalent but dont mean the same. And so on, cf. Swartz and Bradley 1979, and SEP on theories of meaning.


Interesting though these cases might be, it is not at all clear that we are faced with a genuine challenge to compositionality, even if we want to stick with the idea that meanings are just truth-conditions. For it is not clear that (5) lacks the normal reading of (4)—on reflection it seems better to say that the reading is available even though it is considerably harder to get. (Contrast this with an example due to—I think—Irene Heim: ‘They got married. She is beautiful.’ This is like (5) because the first sentence lacks an explicit antecedent for the pronoun in the second. Nonetheless, it is clear that the bride is said to be beautiful.) If the difference between (4) and (5) is only this, it is no longer clear that we must accept the idea that they must differ in meaning.


I agree that (4) and (5) mean the same, even if (5) is a rather bad way to express the thing one normally wud express with something like (4).


In their bride example, one can also consider homosexual weddings, where “he” and “she” similarly fail to refer to a specific person out of the two newlyweds.


4.2.3 Adjectives

Suppose a Japanese maple leaf, turned brown, has been painted green. Consider someone pointing at this leaf uttering (6):


(6) This leaf is green.


The utterance could be true on one occasion (say, when the speaker is sorting leaves for decoration) and false on another (say, when the speaker is trying to identify the species of tree the leaf belongs to). The meanings of the words are the same on both occasions and so is their syntactic composition. But the meaning of (6) on these two occasions—what (6) says when uttered in these occasions—is different. As Charles Travis, the inventor of this example puts it: “…words may have all the stipulated features while saying something true, but also while saying something false.”[20]


At least three responses offer themselves. One is to deny the relevant intuition. Perhaps the leaf really is green if it is painted green and (6) is uttered truly in both situations. Nonetheless, we might be sometimes reluctant to make such a true utterance for fear of being misleading. We might be taken to falsely suggest that the leaf is green under the paint or that it is not painted at all.[21] The second option is to point out that the fact that a sentence can say one thing on one occasion and something else on another is not in conflict with its meaning remaining the same. Do we have then a challenge to compositionality of reference, or perhaps to compositionality of content? Not clear, for the reference or content of ‘green’ may also change between the two situations. This could happen, for example, if the lexical representation of this word contains an indexical element.[22] If this seems ad hoc, we can say instead that although (6) can be used to make both true and false assertions, the truth-value of the sentence itself is determined compositionally.[23]


Im going to bite the bullet again, and just say that the sentence means the same on both occasions. What differs is that in different contexts, one might interpret the same sentence as expressing different propositions. This is not something new; it was already featured before as well, altho this time without indexicals. The reason is that altho the sentence means the same, one is guessing at which proposition the utterer meant to express with his sentence. Context helps with that.


4.2.4 Propositional attitudes

Perhaps the most widely known objection to compositionality comes from the observation that even if e and e′ are synonyms, the truth-values of sentences where they occur embedded within the clausal complement of a mental attitude verb may well differ. So, despite the fact that ‘eye-doctor’ and ‘ophthalmologist’ are synonyms (7) may be true and (8) false if Carla is ignorant of this fact:


(7) Carla believes that eye doctors are rich.
(8) Carla believes that ophthalmologists are rich.


So, we have a case of apparent violation of compositionality; cf. Pelletier (1994).

There is a sizable literature on the semantics of propositional attitude reports. Some think that considerations like this show that there are no genuine synonyms in natural languages. If so, compositionality (at least the language-bound version) is of course vacuously true. Some deny the intuition that (7) and (8) may differ in truth-conditions and seek explanations for the contrary appearance in terms of implicature.[24] Some give up the letter of compositionality but still provide recursive semantic clauses.[25] And some preserve compositionality by postulating a hidden indexical associated with ‘believe’.[26]


Im not entirely sure what to do about these propositional attitude reports, but im inclined to bite the bullet. Perhaps i will change my mind after i have read the two SEP articles about the matter.


Idiomatic language

The SEP article really didnt have a proper discussion of idiomatic language use. Say, frases like “dont mention it”, which can either mean what it literally (i.e., by composition) means, or have its idiomatic meaning: a response to being thanked, suggesting that the help given was no trouble (same source).

It depends on what one takes “complex expression” to mean. Recall the principle:


(C′) For every complex expression e in L, the meaning of e in L is determined by the structure of e in L and the meanings of the constituents of e in L.


What is a complex expression? Is any given complex expression made up of either complex expressions themselves or simple expressions? Idiomatic expressions really just are expressions whose meaning is not determined by their parts. One might thus actually take them to be simple expressions themselves. If one does, then the composition principle is pretty close to trivially true.


If one instead takes idiomatic expressions to be complex expressions, not simple ones, then the principle of composition is trivially false. I dont consider that a huge problem; the principle generally holds, and it explains the things it is required to explain just fine even if it isnt universally true.


One can also note that idiomatic expressions can be used as parts of larger expressions. Depending on how one thinks about idiomatic expressions, and about constituents, larger expressions which have idiomatic expressions as parts might be trivially non-compositional. This is the case if one takes constituents to mean smallest parts: since the idiomatic expressions’ meanings cannot be determined from syntax+smallest parts, neither can the meanings of the larger expressions. If one on the other hand takes constituents to mean smallest decompositional parts, then idiomatic expressions do not trivially make the larger expressions they are part of non-compositional. Consider the sentence:


“He is pulling your leg”


the sentence is compositional since its meaning is determinable from “he”, “is”, “pulling your leg”, the syntax, and the meaning function.
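The idea of treating idioms as simple (atomic) lexical entries can be sketched with a toy meaning function (everything here, lexicon values included, is made up for illustration): the lexicon allows multi-word entries, segmentation is longest-match, and composition is just concatenation of the constituent meanings:

```python
LEXICON = {
    "he": "HE",
    "is": "BE-PRES",
    "pulling your leg": "TEASE",  # idiom stored as one atomic ("simple") entry
}

def meaning(expr):
    """Greedy longest-match segmentation, then composition by concatenation."""
    words = expr.split()
    parts, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):  # try the longest span first
            chunk = " ".join(words[i:j])
            if chunk in LEXICON:
                parts.append(LEXICON[chunk])
                i = j
                break
        else:
            raise ValueError(f"no lexical entry covers {words[i]!r}")
    return " + ".join(parts)

print(meaning("he is pulling your leg"))  # HE + BE-PRES + TEASE
```

The sentence comes out compositional because the idiom contributes a single constituent meaning; nothing inside “pulling your leg” is ever consulted separately.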


There is a reason i bring up this detail, and that is that there is another kind of idiomatic use of language that apparently hasnt been mentioned so much in the literature, judging from SEP not mentioning it. It is the use of prepositions. Surely, many prepositions are used in perfectly compositional ways with other words, like in


“the cat is on the mat”


where “on” has the usual meaning of being on top of (something), or being above and resting upon or somesuch (difficult to avoid circular definitions of prepositions).


However, consider the use of “on” in


“he spent all his time on the internet”


clearly “on” does not mean the same as above here, it doesnt seem to mean much, it is a kind of indefinite relationship. Apparently aware of this fact (and becus languages differ in which prepositions are used in such cases), the designer of esperanto added a preposition for any indefinite relation to the language (“je”). Some languages have lots of such idiomatic preposition+noun frases, and they have to be learned by heart exactly the same way as the idiomatic expressions mentioned earlier, exactly becus they are idiomatic expressions.


As an illustration, in danish if one is on the island of Fyn, one is “på Fyn”, but if one is in mainland Jylland, then one is “i Jylland”. I think such usage of prepositions shud be considered idiomatic.

Vienna Circle

Despite its prominent position in the rich, if fragile, intellectual culture of inter-war Vienna and most likely due to its radical doctrines, the Vienna Circle found itself virtually isolated in most of German speaking philosophy. The one exception was its contact and cooperation with the Berlin Society for Empirical (later: Scientific) Philosophy (the other point of origin of logical empiricism). The members of the Berlin Society sported a broadly similar outlook and included, besides the philosopher Hans Reichenbach, the logicians Kurt Grelling and Walter Dubislav, the psychologist Kurt Lewin, the surgeon Friedrich Kraus and the mathematician Richard von Mises. (Its leading members Reichenbach, Grelling and Dubislav were listed in the Circle’s manifesto as sympathisers.) At the same time, members of the Vienna Circle also engaged directly, if selectively, with the Warsaw logicians (Tarski visited Vienna in 1930, Carnap later that year visited Warsaw and Tarski returned to Vienna in 1935). Probably partly because of its firebrand reputation, the Circle attracted also a series of visiting younger researchers and students including Carl Gustav Hempel from Berlin, Hasso Härlen from Stuttgart, Ludovico Geymonat from Italy, Jørgen Jørgensen, Eino Kaila, Arne Naess and Ake Petzall from Scandinavia, A.J. Ayer from the UK, Albert Blumberg, Charles Morris, Ernest Nagel and W.V.O. Quine from the USA, H.A. Lindemann from Argentina and Tscha Hung from China. (The reports and recollections of these former visitors—e.g. Nagel 1936—are of interest in complementing the Circle’s in-house histories and recollections which start with the unofficial manifesto—Carnap, Hahn and Neurath 1929—and extend through Neurath 1936, Frank 1941, 1949a and Feigl 1943 to the memoirs by Carnap 1963, Feigl 1969a, 1969b, Bergmann 1987, Menger 1994.)

Never heard of that danish guy. A Google search turned up a Danish encyclopedia biography of him. He is somewhat cool. I dislike his communist ideas, obviously, but at least he is more interesting than Kierkegaard.


The synthetic statements of the empirical sciences meanwhile were held to be cognitively meaningful if and only if they were empirically testable in some sense. They derived their justification as knowledge claims from successful tests. Here the Circle appealed to a meaning criterion the correct formulation of which was problematical and much debated (and will be discussed in greater detail in section 3.1 below). Roughly, if synthetic statements failed testability in principle they were considered to be cognitively meaningless and to give rise only to pseudo-problems. No third category of significance besides that of a priori analytical and a posteriori synthetic statements was admitted: in particular, Kant’s synthetic a priori was banned as having been refuted by the progress of science itself. (The theory of relativity showed what had been held to be an example of the synthetic a priori, namely Euclidean geometry, to be false as the geometry of physical space.) Thus the Circle rejected the knowledge claims of metaphysics as being neither analytic and a priori nor empirical and synthetic. (On related but different grounds, they also rejected the knowledge claims of normative ethics: whereas conditional norms could be grounded in means-ends relations, unconditional norms remained unprovable in empirical terms and so depended crucially on the disputed substantive a priori intuition.)

I like this idea. I generally prefer to talk about cost/benefit analyses with stated goals instead of using moral language. See also Joshua D. Greene’s dissertation about this.


Given their empiricism, all of the members of the Vienna Circle also called into question the principled separation of the natural and the human sciences. They were happy enough to admit to differences in their object domains, but denied the categorical difference in both their overarching methodologies and ultimate goals in inquiry, which the historicist tradition in the still only emerging social sciences and the idealist tradition in philosophy insisted on. The Circle’s own methodologically monist position was sometimes represented under the heading of “unified science”. Precisely how such a unification of the sciences was to be effected or understood remained a matter for further discussion (see section 3.3 below).

I agree with this. There is no principled distinction between natural and social sciences. Only matters of degree and areas of study, and even those overlap.


As noted, the Vienna Circle did not last long: its philosophical revolution came at a cost. Yet what was so socially, indeed politically, explosive about what appears on first sight to be a particularly arid, if not astringent, doctrine of specialist scientific knowledge? To a large part, precisely what made it so controversial philosophically: its claim to refute opponents not by proving their statements to be false but by showing them to be (cognitively) meaningless. Whatever the niceties of their philosophical argument here, the socio-political impact of the Vienna Circle’s philosophies of science was obvious and profound. All of them opposed the increasing groundswell of radically mistaken, indeed irrational, ways of thinking about thought and its place in the world. In their time and place, the mere demand that public discourse be perspicuous, in particular, that reasoning be valid and premises true—a demand implicit in their general ideal of reason—placed them in the middle of crucial socio-political struggles. Some members and sympathisers of the Circle also actively opposed the then increasingly popular völkisch supra-individual holism in social science as a dangerous intellectual aberration. Not only did such ideas support racism and fascism in politics, but such ideas themselves were supported only by radically mistaken arguments concerning the nature and explanation of organic and inorganic matter. So the first thing that made all of the Vienna Circle philosophies politically relevant was the contingent fact that in their day much political discourse exhibited striking epistemic deficits. That some of the members of the Circle went, without logical blunders, still further by arguing that socio-political considerations can play a legitimate role in some instances of theory choice due to underdetermination is yet another matter. Here this particular issue (see references at the end of section 2.1 above), as well as the general topic of the Circle’s embedding in modernism and the discourse of modernity (see Putnam 1981b for a reductionist, Galison 1990 for a foundationalist, Uebel 1996 for a constructivist reading of their modernism), will not be pursued further.


This also reminds me of the good book The March of Unreason. Written by a politician!


In the first place, this liberalization meant the accommodation of universally quantified statements and the return, as it were, to salient aspects of Carnap’s 1928 conception. Everybody had noted that the Wittgensteinian verificationist criterion rendered universally quantified statements meaningless. Schlick (1931) thus followed Wittgenstein’s own suggestion to treat them instead as representing rules for the formation of verifiable singular statements. (His abandonment of conclusive verifiability is indicated only in Schlick 1936a.) By contrast, Hahn (1933, drawn from lectures in 1932) pointed out that hypotheses should be counted as properly meaningful as well and that the criterion be weakened to allow for less than conclusive verifiability. But other elements played into this liberalization as well. One that began to do so soon was the recognition of the problem of the irreducibility of disposition terms to observation terms (more on this presently). A third element was that disagreement arose as to whether the in-principle verifiability or support turned on what was merely logically possible or on what was nomologically possible, as a matter of physical law etc. A fourth element, finally, was that differences emerged as to whether the criterion of significance was to apply to all languages or whether it was to apply primarily to constructed, formal languages. Schlick retained the focus on logical possibility and natural languages throughout, but Carnap had firmly settled his focus on nomological possibility and constructed languages by the mid-thirties. Concerned with natural language, Schlick (1932, 1936a) deemed all statements meaningful for which it was logically possible to conceive of a procedure of verification; concerned with constructed languages only, Carnap (1936–37) deemed meaningful only statements for which it was nomologically possible to conceive of a procedure of confirmation or disconfirmation.

This distinction between logical and nomological possibility in relation to verificationism i have encountered before. I know a fysicist who endorses verificationism. We have been discussing various problems for this view. His view has implications regarding quantum mechanics that i don’t like.

First, black holes have only 3 independent fysical properties according to standard theory: mass, charge, and angular momentum. However, how does one measure a black hole’s charge? Is it fysically possible? My idea was that it wasn’t, and thus his verificationist ideas imply that a specific part of standard theory about black holes is not just wrong, but meaningless. However, it seems that my proposed counter-example doesn’t work.

Second, another area of trouble is the future and the past. Sentences about the future and the past, are they fysically possible to verify? It seems not. If so, then it follows that all such sentences are meaningless. My fysicist friend sort of wants to bite the bullet here and go with that. I consider it a strong reason to not accept this particular kind of verificationism. The discussion then becomes complicated due to the possible truth of causal indeterminism. Future discussions await! (or maybe that sentence is just meaningless gibberish!)

Also, i consider the traditional view of laws of nature as confused, and agree with Norman Swartz about this.


Logical Empiricism

Richard von Mises (1883–1953)
Born in what is now the Ukraine, Richard von Mises is the brother of the economic and political theorist Ludwig von Mises. Richard was a polymath who ranged over fields as diverse as mathematics, aerodynamics, philosophy, and Rilke’s poetry. He finished his doctorate in Vienna. He was simultaneously active in Berlin, where he was one of the developers of the frequency theory of probability along with Reichenbach, and in Vienna, where he participated in various discussion groups that constituted the Vienna Circle. Eventually it was necessary to escape, first to Turkey, and eventually to MIT and Harvard.

Another polymath that i hadn’t heard about before.


Hilary Putnam (1926–)
This American philosopher of science, mathematics, mind and language earned his doctorate under Reichenbach at UCLA and subsequently taught at Princeton, MIT, and Harvard. He was originally a metaphysical realist, but then argued forcefully against it. He has continued the pragmatist tradition and been politically active, especially in the 1960s and 70s.

I keep thinking this is a woman. Apparently, however, the female version of this name is spelled with 2 L’s according to Wiki:

Hilary or Hillary is a given and family name, derived from the Latin hilarius meaning “cheerful”, from hilaris, “cheerful, merry”[1] which comes from the Greek ἱλαρός (hilaros), “cheerful, merry”,[2] which in turn comes from ἵλαος (hilaos), “propitious, gracious”.[3] Historically (in America), the spelling Hilary has generally been used for men and Hillary for women, though there are exceptions, some of which are noted below. In modern times it has drastically declined in popularity as a name for men. Ilaria is the popular Italian and Spanish form. Ilariana and Ylariana (/aɪˌlɑːriˈɑːnə/ eye-LAH–ree-AH-nə) are two very rare feminine variants of the name.

It also reminds me that i really shud get around to reading his famous paper:


Whenever I talk with continentals they keep getting angry at me. Because I continually claim not to understand what they say. An example. Some days ago I was at a party where a lot of phil. students attended. I talked with some of them that I don’t normally talk with (and now I have even better reason not to talk with them). I don’t recall why but we got into a discussion of scientism, and one of them advanced an argument against some kind of very strong scientism which he phrased like this (translated)
“Science has all the answers.”

And I asked him what he meant because, clearly, he was using some kind of metaphor. What would it even mean to say that science has an answer? I gave them an example of how “having an answer” is used literally. An example with a classroom and the teacher asking a specific student if he has the answer to a specific question. That is an instance of literal use of the phrase. The student has an answer iff he knows what the correct answer is to the question. I asked the person if he meant that scientists have all the answers (to all questions presumably). But he insisted that it made sense to say what he did. I asked him what it would mean to say that some other field of inquiry had all the answers, like mathematics. What would that mean? But I didn’t get any useful reply. After some minutes or maybe just seconds he gave up and stopped talking with me. So much for actually saying something meaningful.

I prefer not to use the phrase “has all the answers” at all since it’s pretty unclear. Presumably it’s about having (that is, knowing or at least believing) that something is a correct answer to some question. If I was to discuss scientism, I would phrase it something like: Are there things which if true cannot be discovered to be so by doing science? Something like that.

I think I recall why we talked of scientism. He thinks that analytic phil. ‘makes’ the claim that we talked about. Whatever that means.

Now, today I saw a relatively analytical person write something similar.

“PSR says: “For every fact F, there must be an explanation why F is the case.”

An atom of plutonium sits there in the canister of radioactive waste. It sits there and sits there and sits there … and then POW! … it decays.

Q: What is the explanation for why it decayed THEN? And not some other time?

A: Modern science says there is no reason. It is random. Which does not comport with the PSR.” (Smullyan-esque, post)

“PSR” =df “Principle of Sufficient Reason”

The interesting sentence in this case is “Modern science says there is no reason.”. It is some kind of non-literal language. It does not mean anything to say of a field of inquiry that it says something. But it seems to me that what he meant is that theories or findings in modern science imply that there isn’t a reason (i.e. quantum theory). But it isn’t entirely clear. I prefer literal language.

Give it a read. It is divided into 4 parts:

My Take on the Liar Paradox (Part I of IV)
My Take on the Liar Paradox (Part II of IV)
My Take on the Liar Paradox (Part III of IV)
My Take on the Liar Paradox (Part IV of IV)
All four articles combine to a total of about 8,000 words, so it will not take long for a dedicated reader to get through them.