
## Book review: The Book of Why: The New Science of Cause and Effect (Judea Pearl, Dana Mackenzie)

This is an interesting but annoying book. The basic pattern of the book goes like this:

1. Scientists used to totally not know anything proper about how to do X or think about X.
2. But no worries, then I, the Great Pearl (+ students), came along and invented Causal Diagrams and some equations, and now the field has been revolutionized.

The book contains basically no real life applications of these methods, and any working scientist should scoff at this. A particularly concise example toward the end of the book:

Elias Bareinboim [i.e. Pearl’s student] has managed to do the same thing for the problem of transportability [Pearl’s term for validity generalization] that Ilya Shpitser did for the problem of interventions. He has developed an algorithm that can automatically determine for you whether the effect you are seeking is transportable, using graphical criteria alone. In other words, it can tell you whether the required separation of S from the do-operators can be accomplished or not.

Bareinboim’s results are exciting because they change what was formerly seen as a threat to validity into an opportunity to leverage the many studies in which participation cannot be mandated and where we therefore cannot guarantee that the study population would be the same as the population of interest. Instead of seeing the difference between populations as a threat to the “external validity” of a study, we now have a methodology for establishing validity in situations that would have appeared hopeless before. It is precisely because we live in the era of Big Data that we have access to information on many studies and on many of the auxiliary variables (like Z and W) that will allow us to transport results from one population to another.

I will mention in passing that Bareinboim has also proved analogous results for another problem that has long bedeviled statisticians: selection bias. This kind of bias occurs when the sample group being studied differs from the target population in some relevant way. This sounds a lot like the transportability problem—and it is, except for one very important modification: instead of drawing an arrow from the indicator variable S to the affected variable, we draw the arrow toward S. We can think of S as standing for “selection” (into the study). For example, if our study observes only hospitalized patients, as in the Berkson bias example, we would draw an arrow from Hospitalization to S, indicating that hospitalization is a cause of selection for our study. In Chapter 6 we saw this situation only as a threat to the validity of our study. But now, we can look at it as an opportunity. If we understand the mechanism by which we recruit subjects for the study, we can recover from bias by collecting data on the right set of deconfounders and using an appropriate reweighting or adjustment formula. Bareinboim’s work allows us to exploit causal logic and Big Data to perform miracles that were previously inconceivable.
Words like “miracles” and “inconceivable” are rare in scientific discourse, and the reader may wonder if I am being a little too enthusiastic. But I use them for a good reason. The concept of external validity as a threat to experimental science has been around for at least half a century, ever since Donald Campbell and Julian Stanley recognized and defined the term in 1963. I have talked to dozens of experts and prominent authors who have written about this topic. To my amazement, not one of them was able to tackle any of the toy problems presented in Figure 10.2. I call them “toy problems” because they are easy to describe, easy to solve, and easy to verify if a given solution is correct.

At present, the culture of “external validity” is totally preoccupied with listing and categorizing the threats to validity rather than fighting them. It is in fact so paralyzed by threats that it looks with suspicion and disbelief on the very idea that threats can be disarmed. The experts, who are novices to graphical models, find it easier to configure additional threats than to attempt to remedy any one of them. Language like “miracles,” so I hope, should jolt my colleagues into looking at such problems as intellectual challenges rather than reasons for despair.

I wish that I could present the reader with successful case studies of a complex transportability task and recovery from selection bias, but the techniques are still too new to have penetrated into general usage. I am very confident, though, that researchers will discover the power of Bareinboim’s algorithms before long, and then external validity, like confounding before it, will cease to have its mystical and terrifying power.

Alright then! For those who are wondering what he is talking about: he is merely talking about the conditions under which we can extrapolate the validity of some effect in one population to another that might differ in some characteristics. In Pearl’s world, this means that we just draw up a simple Causal Diagram for it, and then we figure out how to modify the effect size given changes to this diagram. What his student did was come up with an algorithm that can handle arbitrary changes to this diagram. Of course, in the real world, we don’t have simple causal diagrams that everybody agrees on, and neither do we know how they might differ between one population and another. So these methods, though cool, are inapplicable to the real world.
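For concreteness, the flavor of these results can be conveyed with the simplest transport formula from Pearl and Bareinboim’s work. If the two populations differ only in the distribution of some effect-modifier Z (marked in the diagram by an S node pointing into Z), the causal effect in the target population P∗ is obtained by reweighting the Z-specific effects measured in the study population P:

P∗(y | do(x)) = Σ_z P(y | do(x), z) · P∗(z)

In general, all the algorithm does is decide from the diagram whether some such reweighting formula exists and, if so, derive it.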

Pearl himself takes some pride in this otherworldly approach:

In Chapter 3 I wrote about some of the reasons for this slow progress. In the 1970s and early 1980s, artificial intelligence research was hampered by its focus on rule-based systems. But rule-based systems proved to be on the wrong track. They were very brittle. Any slight change to their working assumptions required that they be rewritten. They could not cope well with uncertainty or with contradictory data. Finally, they were not scientifically transparent; you could not prove mathematically that they would behave in a certain way, and you could not pinpoint exactly what needed repair when they didn’t. Not all AI researchers objected to the lack of transparency. The field at the time was divided into “neats” (who wanted transparent systems with guarantees of behavior) and “scruffies” (who just wanted something that worked). I was always a “neat.”

I was lucky to come along at a time when the field was ready for a new approach. Bayesian networks were probabilistic; they could cope with a world full of conflicting and uncertain data. Unlike the rule-based systems, they were modular and easily implemented on a distributed computing platform, which made them fast. Finally, as was important to me (and other “neats”), Bayesian networks dealt with probabilities in a mathematically sound way. This guaranteed that if anything went wrong, the bug was in the program, not in our thinking.

Even with all these advantages, Bayesian networks still could not understand causes and effects. By design, in a Bayesian network, information flows in both directions, causal and diagnostic: smoke increases the likelihood of fire, and fire increases the likelihood of smoke. In fact, a Bayesian network can’t even tell what the “causal direction” is. The pursuit of this anomaly—this wonderful anomaly, as it turned out—drew me away from the field of machine learning and toward the study of causation. I could not reconcile myself to the idea that future robots would not be able to communicate with us in our native language of cause and effect. Once in causality land, I was naturally drawn toward the vast spectrum of other sciences where causal asymmetry is of the utmost importance.
So, for the past twenty-five years, I have been somewhat of an expatriate from the land of automated reasoning and machine learning. Nevertheless, from my distant vantage point I can still see the current trends and fashions.

In recent years, the most remarkable progress in AI has taken place in an area called “deep learning,” which uses methods like convolutional neural networks. These networks do not follow the rules of probability; they do not deal with uncertainty in a rigorous or transparent way. Still less do they incorporate any explicit representation of the environment in which they operate. Instead, the architecture of the network is left free to evolve on its own. When finished training a new network, the programmer has no idea what computations it is performing or why they work. If the network fails, she has no idea how to fix it.

Perhaps the prototypical example is AlphaGo, a convolutional neural-network-based program that plays the ancient Asian game of Go, developed by DeepMind, a subsidiary of Google. Among human games of perfect information, Go had always been considered the toughest nut for AI. Though computers conquered humans in chess in 1997, they were not considered a match even for the lowest-level professional Go players as recently as 2015. The Go community thought that computers were still a decade or more away from giving humans a real battle.

That changed almost overnight with the advent of AlphaGo. Most Go players first heard about the program in late 2015, when it trounced a human professional 5–0. In March 2016, AlphaGo defeated Lee Sedol, for years considered the strongest human player, 4–1. A few months later it played sixty online games against top human players without losing a single one, and in 2017 it was officially retired after beating the current world champion, Ke Jie. The one game it lost to Sedol is the only one it will ever lose to a human.

All of this is exciting, and the results leave no doubt: deep learning works for certain tasks. But it is the antithesis of transparency. Even AlphaGo’s programmers cannot tell you why the program plays so well. They knew from experience that deep networks have been successful at tasks in computer vision and speech recognition. Nevertheless, our understanding of deep learning is completely empirical and comes with no guarantees. The AlphaGo team could not have predicted at the outset that the program would beat the best human in a year, or two, or five. They simply experimented, and it did.

Some people will argue that transparency is not really needed. We do not understand in detail how the human brain works, and yet it runs well, and we forgive our meager understanding. So, they argue, why not unleash deep-learning systems and create a new kind of intelligence without understanding how it works? I cannot say they are wrong. The “scruffies,” at this moment in time, have taken the lead. Nevertheless, I can say that I personally don’t like opaque systems, and that is why I do not choose to do research on them.

I still want to give this book 4/5 stars because it certainly is informative about how Pearl works, and Pearl is an interesting guy and no doubt has had a big impact. The book should thus be seen as an inadvertent semi-autobiography of Pearl. And of course, it presents a lot of interesting information about how one can think in simplified causal pathways, even though these aren’t usually so useful in practice.


## Empirical math, or how experiments can increase our confidence in mathematical conclusions despite access to formal proofs

Earlier posts: Something about certainty, proofs in math, induction/abduction, Is the summed cubes equal to the squared sum of counting integer series?

The things I’m going to say about math are equally true for logic, so whenever I write “math”, just mentally substitute “math and logic”.

Here I’m going to be talking about something different than the normal empiricist approach to philosophy of math. Briefly speaking, that approach states that mathematical truths are just like truths in empirical science because they really are knowable in roughly the same way: there is no special mathematical way of knowing things. The knowability can be either through inductive generalizations (enumerative induction) or through a coherentist view of epistemic justification (which I adhere to and which is closely related to consilience).

### When a sound proof isn’t enough

I asked a mathy friend of mine to find out whether all numbers of the series:

S. 9, 99, 999, 9999, …

are divisible by 3. Let’s call this proposition P. A moment’s reflection should reveal the answer to be yes, but it is not so easy to give a formal proof of why this is the case*.

My mathy friend came up with this:

What he is calling cross sum is a translation of the Danish tværsum, but it looks like it refers to something unrelated. However, I managed to find the English term: digit sum. The recursive version is digital root.

I’m not sure it is quite formatted correctly (e.g. a_i should just be a, I think), but the idea is something like this:

1. Each of the numbers in S can be constructed with the summation equation given, with a = 9 and k → ∞. E.g. for k = 2, the sum is n = 9·10^0 + 9·10^1 + 9·10^2 = 9 + 90 + 900 = 999.
2. 10 modulo 9 is 1, which is just to say that dividing 10 by 9 gives a remainder of 1.
3. Some equations with digital roots and modular arithmetic which aim to show that the digital root of each member of S is the same (?).
4. Finally, because having a digital root of 9 means a number is divisible by 9 (and hence by 3), all members of S are divisible by 3.

I’m not sure why he is using modular arithmetic, but presumably there is some strong connection between it and digital roots that I’m not familiar with. A reader of this post can probably come up with a better proof. :)
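There is indeed such a connection, and it is exact: for positive integers, the digital root is just the residue modulo 9, with 9 standing in for 0. In symbols:

digital_root(n) = 1 + ((n − 1) mod 9)

This holds because 10 ≡ 1 (mod 9), so every digit contributes its face value modulo 9, and summing the digits therefore preserves the residue. In particular, digital_root(n) = 9 exactly when 9 divides n, which in turn implies that 3 divides n.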

However, let’s assume that we have a proof that we think is sound (valid+has true premises). How certain should we be in our belief that P is true (by which I just mean, what probability should we assign to P being true), given that we think we have a sound proof? The answer isn’t 100%. To see why, we need to consider that uncertainty in a conclusion can come from (at least) three sources: 1) uncertainty inherent in the inference, 2) uncertainty in the premises, and 3) uncertainty about the inference type.

The first source is the one most familiar to scientists: since we (usually) can’t measure the entire population of some class of interest (e.g. humans, some of which are dead!), we need to rely on a subset of the population, which we call a sample. To put it very briefly, statistical testing is concerned with the question of how to infer and how certain we can be about the properties of the population. This source is absent when dealing with deductive arguments about math.

The second source is our certainty in the premises. Assuming a roughly linear transfer of justification from premises (in conjoint form) to conclusion in deductive arguments means that the certainty in a conclusion we derive from a set of premises cannot be stronger than our certainty in our premises. In empirical domains, we are never 100% certain about our premises since these themselves are infested with these sources of uncertainty. However, for math, this brings us back to the question of how certain we are about the assumptions we use in proofs. Generally, these can be deduced from simpler principles until we in the end get down to the foundations of math. This again brings us to the question of which epistemology is right: foundationalism or coherentism (or exotic infinitism, but few take that seriously)? Can we really be 100% certain about the foundations of math? Weren’t some very smart people wrong in the past about this question (some of them must be, since they disagree)? What does this show about how certain we should be?

The third source is our human fallibility in even knowing which structure the argument has. Who has not made countless errors during their lifetime in getting mathematical proofs right? Thus, we have ample empirical evidence of our own failing to correctly identify which arguments are valid and which are not. The same applies to inductive arguments.

### The place for experiments in math

If the above is convincing, it means that we cannot be certain about mathematical conclusions, even when we have formal proofs.

Often, we use a suitable counterexample to show that a mathematical claim is false. For instance, if someone claimed that naive set theory is consistent (i.e. has no inconsistencies), we would point the person to Russell’s paradox. However, coming up with this counterexample wasn’t easy; it remained undiscovered for many years. Counterexamples in math are, as far as I know, usually found through expert intuition. While this sometimes works, it isn’t a good method. A better method is to systematically search for counterexamples.

How could we do that? Well, we could simply try a lot of numbers. This would be impractical for humans, but not necessarily for computers. Thus, when we have a mathematical claim of the type “all Xs are Ys”, we can try to disprove it by generating lots of cases and checking whether any of them has the property X&~Y (which the claim says none should have). Sometimes we can even do this in an exhaustive or locally exhaustive (e.g. try all numbers between 1 and 1000) way. Other times the search space is too large for modern computers, so we would need to use sampling. This of course means that we introduce the type 1 uncertainty discussed above.

Still, if continued search as described above fails to disprove a mathematical claim, my contention is that this increases our certainty about the claim, just as it does in regular empirical science. In my conversations with mathy people, they were surprisingly unwilling to accept this conclusion, which is why I wrote up this longer blogpost.
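A minimal sketch of such a search loop (in Python for illustration; the claims and bounds tested here are just examples):

```python
# Systematic counterexample search: for a claim "all Xs are Ys",
# generate cases and look for an X that is not a Y.
def find_counterexample(cases, has_property):
    """Return the first case violating the property, or None if none found."""
    for c in cases:
        if not has_property(c):
            return c
    return None

# The claim from this post: every member of S = 9, 99, 999, ... is divisible by 3.
# Locally exhaustive check of the first 1000 members.
members_of_S = (10**k - 1 for k in range(1, 1001))
print(find_counterexample(members_of_S, lambda n: n % 3 == 0))  # None found

# A false claim for contrast: "all odd numbers greater than 1 are prime".
odds = range(3, 1000, 2)
is_prime = lambda n: all(n % d for d in range(2, int(n**0.5) + 1))
print(find_counterexample(odds, is_prime))  # finds 9
```

Each failed search raises our confidence in the claim; each success refutes it outright.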

### Experimental code for members of S

So, are all members of S divisible by 3?

```
# divisible by 3 ------------------------------------------------------------
library(stringr)
library(magrittr)  # stringr does not export the %>% pipe itself

depth = 20
results = numeric()
for (i in 1:depth) {
  # repeat 9 i times
  x_rep = rep(9, i)
  # place the digits next to each other, then convert to numeric
  # (note: doubles only hold ~15 significant digits, so for i > 15
  # this actually tests a rounded value)
  x = str_c(x_rep, collapse = "") %>% as.numeric
  # save the remainder after division by 3
  results[i] = x %% 3
}
results
#> [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```

So, the first 20 members of S are divisible by 3. This increases the strength of my belief in P, even if the proof above (or something similar) turns out to work.

### My intuitive proof

The reason I said that a moment’s reflection should be sufficient is that the following informal proof springs immediately to my mind:

1. All members of S are multiples of the first member of S. Specifically, the vector of factors is F: 1, 11, 111, 1111, …
2. If n is divisible by k, then n·i is also divisible by k, where n, k, and i are all non-zero integers.
3. 9 is divisible by 3.
4. Thus, all members of S are divisible by 3.

Maybe someone can find a way to prove (1) and (2).
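Both turn out to be easy. A sketch:

(1) The k-th member of S is 10^k − 1 = 9 + 90 + … + 9·10^(k−1) = 9·(1 + 10 + … + 10^(k−1)), i.e. 9 times the k-th member of F.

(2) If k divides n, then n = k·m for some integer m, so n·i = k·(m·i), and hence k divides n·i.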

### Other R code

Some other R code I wrote for this post, but didn’t end up discussing in the post.

```
# digital root ------------------------------------------------
library(stringr)
library(magrittr)  # for the %>% pipe

digital_root = function(x) {
  # convert to character
  x_str = as.character(x)
  # split into digits, unlist, convert back to numeric
  x_split = str_split(x_str, "") %>% unlist %>% as.numeric
  # get the sum of the digits
  x_sum = sum(x_split)
  # if the sum has more than 1 digit, call digital_root on it again;
  # otherwise return it
  if (x_sum > 9) {
    digital_root(x_sum)
  } else {
    return(x_sum)
  }
}

# tests
digital_root(9) == 9
digital_root(99) == 9
digital_root(999) == 9
digital_root(12) == 3
digital_root(123) == 6
digital_root(1234) == 1
```

```
# distributive? -----------------------------------------------------------
depth = 100
results = matrix(nrow = depth, ncol = depth)
for (i in 1:depth) {
  for (j in 1:depth) {
    # naive check; this is mostly FALSE because the right-hand side can
    # exceed 9 -- the identity that does hold is
    # digital_root(i + j) == digital_root(digital_root(i) + digital_root(j))
    results[i, j] = digital_root(i + j) == digital_root(i) + digital_root(j)
  }
}
```

## Review of Expert Political Judgement (Philip E. Tetlock)


Very interesting book!

---

Game Theorists. The rivalry between Sherlock Holmes and the evil genius Professor Moriarty illustrates how indeterminacy can arise as a natural by-product of rational agents second-guessing each other. When the two first met, Moriarty was eager, too eager, to display his capacity for interactive thinking by announcing: “All I have to say has already crossed yours.” As the plot unfolds, Holmes uses his superior “interactive knowledge” to outmaneuver Moriarty by unexpectedly getting off the train at Canterbury, thwarting Moriarty who had calculated that Paris was Holmes’s rational destination. Convoluted though it is, Moriarty would deduce what a rational Holmes would do under the circumstances, and the odds now favored Holmes getting off the train earlier than once planned.

Indeterminacy problems of this sort are the bread and butter of behavioral game theory. In the “guess the number” game, for example, contestants pick a number between 0 and 100, with the goal of making their guess come as close as possible to two-thirds of the average guess of all the contestants. In a world of only rational players—who base their guesses on the maximum number of levels of deduction—the equilibrium is 0. However, in a contest run at Richard Thaler’s prompting by the Financial Times, the most popular guesses were 33 (the right guess if everyone else chooses a number at random, producing an average guess of 50) and 22 (the right guess if everyone thinks through the preceding argument and picks 33). Dwindling numbers of respondents carried the deductive logic to the third stage (picking two-thirds of 22) or higher, with a tiny hypereducated group recognizing the logically correct answer to be 0. The average guess was 18.91 and the winning guess, 13, which suggests that, for this newspaper’s readership, a third order of sophistication was roughly optimal.

interesting
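The ladder of reasoning in the quoted passage is just repeated multiplication by two-thirds, starting from the random-play average of 50 (a quick sketch, in Python):

```python
# Level-k reasoning in the two-thirds-of-the-average game:
# level 0 guesses at random (average 50); each higher level
# best-responds with two-thirds of the previous level's guess.
guesses = [50.0]
for level in range(1, 6):
    guesses.append(guesses[-1] * 2 / 3)

print([round(g, 1) for g in guesses])
# roughly 50, 33.3, 22.2, 14.8, 9.9, 6.6 -- shrinking toward the
# fully rational equilibrium of 0
```

The winning Financial Times guess of 13 sits between levels 3 and 4, matching the book’s “third order of sophistication” remark.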

Our reluctance to acknowledge unpredictability keeps us looking for predictive cues well beyond the point of diminishing returns. I witnessed a demonstration thirty years ago that pitted the predictive abilities of a classroom of Yale undergraduates against those of a single Norwegian rat. The task was predicting on which side of a T-maze food would appear, with appearances determined—unbeknownst to both the humans and the rat—by a random binomial process (60 percent left and 40 percent right). The demonstration replicated the classic studies by Edwards and by Estes: the rat went for the more frequently rewarded side (getting it right roughly 60 percent of the time), whereas the humans looked hard for patterns and wound up choosing the left or the right side in roughly the proportion they were rewarded (getting it right roughly 52 percent of the time). Human performance suffers because we are, deep down, deterministic thinkers with an aversion to probabilistic strategies that accept the inevitability of error. We insist on looking for order in random sequences. Confronted by the T-maze, we look for subtle patterns like “food appears in alternating two left/one right sequences, except after the third cycle when food pops up on the right.” This determination to ferret out order from chaos has served our species well. We are all beneficiaries of our great collective successes in the pursuit of deterministic regularities in messy phenomena: agriculture, antibiotics, and countless other inventions that make our comfortable lives possible. But there are occasions when the refusal to accept the inevitability of error—to acknowledge that some phenomena are irreducibly probabilistic—can be harmful.

Indeed, but generally it is wise not to accept the unpredictability hypothesis about some phenomena. Many things that were thought unpredictable for centuries turned out to be predictable after all, or at least to some degree. I have confidence we will see the same for earthquakes, weather systems and the like in the future as well.

The predictability (and the related determinism) hypotheses are good working hypotheses, even if they turn out to be wrong sometimes.

This is what I wrote about years ago on my Danish blog. Basically, it’s a 2×2 table:

| What we think \ what is true | Determinism | Indeterminism |
| --- | --- | --- |
| Determinism | We keep looking for explanations for phenomena, and over time we find regularities and explanations. | We waste time looking for patterns that aren’t there. |
| Indeterminism | We don’t spend time looking for patterns, but there actually are patterns we could use to predict the future, and hence we lose out on possible advances in science. | We don’t waste time looking for patterns that aren’t there. |

The above assumes that indeterminism implies total unpredictability. This isn’t true, but in the simplified case where we’re dealing with completely random phenomena versus completely predictable phenomena, this is a reasonable way of looking at it. IMO, it is much better to waste time looking for explanations for things that are not orderly (after all) than to risk not spotting real patterns in nature.
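The 60 versus 52 percent figures in the T-maze passage drop straight out of the two strategies: always choosing the majority side is right 60 percent of the time, while probability matching is right 0.6·0.6 + 0.4·0.4 = 52 percent of the time. A small simulation (in Python, with the parameters from the quote):

```python
import random

random.seed(0)
p_left = 0.6        # food appears on the left with probability 0.6
trials = 100_000

# "rat" strategy: always pick the more frequently rewarded side (left)
rat_correct = sum(random.random() < p_left for _ in range(trials)) / trials

# "human" strategy (probability matching): pick left 60% of the time,
# independently of where the food actually appears
human_correct = sum(
    (random.random() < p_left) == (random.random() < p_left)
    for _ in range(trials)
) / trials

print(rat_correct)    # close to 0.60
print(human_correct)  # close to 0.52
```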

Finally, regardless of whether it is rash to abandon the meliorist search for the Holy Grail of good judgment, most of us feel it is. When we weigh the perils of Type I errors (seeking correlates of good judgment that will prove ephemeral) against those of Type II errors (failing to discover durable correlates with lasting value), it does not feel like a close call. We would rather risk anointing lucky fools over ignoring wise counsel. Radical skepticism is too bitter a doctrinal pill for most of us to swallow.

exactly

But betting is one thing, paying up another. Focusing just on reactions to losing reputational bets, figure 4.1 shows that neither hedgehogs nor foxes changed their minds as much as Reverend Bayes says they should have. But foxes move more in the Bayesian direction than do hybrids and hedgehogs. And this greater movement is all the more impressive in light of the fact that the Bayesian updating formula demanded less movement from foxes than from other groups. Foxes move 59 percent of the prescribed amount, whereas hedgehogs move only 19 percent of the prescribed amount. Indeed, in two regional forecasting exercises, hedgehogs move their opinions in the opposite direction to that prescribed by Bayes’s theorem, and nudged up their confidence in their prior point of view after the unexpected happens. This latter pattern is not just contra-Bayesian; it is incompatible with all normative theories of belief adjustment.

https://en.wikipedia.org/wiki/Backfire_effect#Backfire_effect


## Review: The Signal and the Noise

The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t, Nate Silver, 544 pp.

It is a pretty interesting book, especially because it covers some areas of science not usually covered in popsci (geology, meteorology), and I learned a lot. It is also clearly written and easy to read, which speeds up reading, making the 450-ish pages rather quick to devour. From a learning perspective this is awesome, as it allows for faster learning. It should also be mentioned that it has a lot of very useful illustrations, which I shared on my social networks while reading it.

“Fortunately, Dustin is really cocky, because if he was the kind of person who was intimidated—if he had listened to those people—it would have ruined him. He didn’t listen to people. He continued to dig in and swing from his heels and eventually things turned around for him.”

Pedroia has what John Sanders calls a “major league memory”—which is to say a short one. He isn’t troubled by a slump, because he is damned sure that he’s playing the game the right way, and in the long run, that’s what matters. Indeed, he has very little tolerance for anything that distracts him from doing his job. This doesn’t make him the most generous human being, but it is exactly what he needs in order to play second base for the Boston Red Sox, and that’s the only thing that Pedroia cares about.

“Our weaknesses and our strengths are always very intimately connected,” James said. “Pedroia made strengths out of things that would be weaknesses for other players.”

This sounds like low agreeableness to me. I wonder if Big Five can predict baseball success?

The statistical reality of accuracy isn’t necessarily the governing paradigm when it comes to commercial weather forecasting. It’s more the perception of accuracy that adds value in the eyes of the consumer.

For instance, the for-profit weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers. Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.

Floehr also uncovered a more flagrant example of fudging the numbers, something that may be the worst-kept secret in the weather industry. Most commercial weather forecasts are biased, and probably deliberately so. In particular, they are biased toward forecasting more precipitation than will actually occur—what meteorologists call a “wet bias.” The further you get from the government’s original data, and the more consumer facing the forecasts, the worse this bias becomes. Forecasts “add value” by subtracting accuracy.

That’s interesting, I never heard of this.

This logic is a little circular. TV weathermen say they aren’t bothering to make accurate forecasts because they figure the public won’t believe them anyway. But the public shouldn’t believe them, because the forecasts aren’t accurate.

This becomes a more serious problem when there is something urgent—something like Hurricane Katrina. Lots of Americans get their weather information from local sources rather than directly from the Hurricane Center, so they will still be relying on the goofball on Channel 7 to provide them with accurate information. If there is a mutual distrust between the weather forecaster and the public, the public may not listen when they need to most.

Nicely illustrating the importance of honesty in reporting data, even on local TV.

In fact, the actual value for GDP fell outside the economists’ prediction interval six times in eighteen years, or fully one-third of the time. Another study, which ran these numbers back to the beginnings of the Survey of Professional Forecasters in 1968, found even worse results: the actual figure for GDP fell outside the prediction interval almost half the time. There is almost no chance that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions.

In reality, when a group of economists give you their GDP forecast, the true 90 percent prediction interval—based on how these forecasts have actually performed and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent).*

When you hear on the news that GDP will grow by 2.5 percent next year, that means it could quite easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving. The old joke about economists’ having called nine out of the last six recessions correctly has some truth to it; one actual statistic is that in the 1990s, economists predicted only 2 of the 60 recessions around the world.

And this is why we can’t have nice things, I mean macroeconomics.
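The interval arithmetic in the quoted passage is worth making explicit: a 90 percent interval 6.4 GDP points wide around a 2.5 percent central forecast runs from a boom to a recession (a trivial check, in Python):

```python
# The 90% prediction interval from the passage: 6.4 GDP points wide,
# centered on the 2.5 percent headline forecast.
forecast = 2.5
margin = 6.4 / 2        # i.e. plus or minus 3.2 points
low, high = forecast - margin, forecast + margin
print(round(low, 1), round(high, 1))  # -0.7 5.7
```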

I have no idea whether I was really a good player at the very outset. But the bar set by the competition was low, and my statistical background gave me an advantage. Poker is sometimes perceived to be a highly psychological game, a battle of wills in which opponents seek to make perfect reads on one another by staring into one another’s souls, looking for “tells” that reliably betray the contents of the other hands. There is a little bit of this in poker, especially at the higher limits, but not nearly as much as you’d think. (The psychological factors in poker come mostly in the form of self-discipline.) Instead, poker is an incredibly mathematical game that depends on making probabilistic judgments amid uncertainty, the same skills that are important in any type of prediction.

The obvious idea is to program computers to play poker for u online. If they play against bad humans, they shud bring in a steady flow of cash for almost free.

“Fortunately, Dustin is really cocky, because if he was the kind of person who was intimidated—if he had listened to those people—it would have ruined him. He didn’t listen to people. He continued to dig in and swing from his heels and eventually things turned around for him.”

Pedroia has what John Sanders calls a “major league memory”—which is to say a short one. He isn’t troubled by a slump, because he is damned sure that he’s playing the game the right way, and in the long run, that’s what matters. Indeed, he has very little tolerance for anything that distracts him from doing his job. This doesn’t make him the most generous human being, but it is exactly what he needs in order to play second base for the Boston Red Sox, and that’s the only thing that Pedroia cares about.

“Our weaknesses and our strengths are always very intimately connected,” James said. “Pedroia made strengths out of things that would be weaknesses for other players.”

This sounds like low agreeableness to me. I wonder if Big Five can predict baseball success?

The statistical reality of accuracy isn’t necessarily the governing paradigm when it comes to commercial weather forecasting. It’s more the perception of accuracy that adds value in the eyes of the consumer.

For instance, the for-profit weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers.41 Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.42

Floehr also uncovered a more flagrant example of fudging the numbers, something that may be the worst-kept secret in the weather industry. Most commercial weather forecasts are biased, and probably deliberately so. In particular, they are biased toward forecasting more precipitation than will actually occur43—what meteorologists call a “wet bias.” The further you get from the government’s original data, and the more consumer facing the forecasts, the worse this bias becomes. Forecasts “add value” by subtracting accuracy.

thats interesting. never heard of this.
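The claim that rounding 50 percent up to 60 (or down to 40) makes a forecast strictly less accurate is easy to check with the Brier score, the standard accuracy measure for probability forecasts. A toy Python sketch (my own illustration, not Floehr’s data):

```python
# Brier score: mean squared error between a probability forecast and the
# binary outcome (1 = rain, 0 = no rain). Lower is better, and the
# expected score is minimized by reporting the true probability.

def expected_brier(p: float, q: float) -> float:
    """Expected Brier score of forecasting probability p when rain
    actually occurs with probability q."""
    return q * (p - 1) ** 2 + (1 - q) * p ** 2

q = 0.50                              # true chance of rain
honest = expected_brier(0.50, q)      # report the honest 50%
round_up = expected_brier(0.60, q)    # "decisive" 60%
round_down = expected_brier(0.40, q)  # "decisive" 40%

print(f"{honest:.2f} {round_up:.2f} {round_down:.2f}")  # → 0.25 0.26 0.26
```

The penalty for reporting p when the truth is q works out to (p − q)², so rounding away from 50 costs 0.01 in expected score in either direction: the fudged forecast is strictly worse.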

This logic is a little circular. TV weathermen say they aren’t bothering to make accurate forecasts because they figure the public won’t believe them anyway. But the public shouldn’t believe them, because the forecasts aren’t accurate.

This becomes a more serious problem when there is something urgent—something like Hurricane Katrina. Lots of Americans get their weather information from local sources49 rather than directly from the Hurricane Center, so they will still be relying on the goofball on Channel 7 to provide them with accurate information. If there is a mutual distrust between the weather forecaster and the public, the public may not listen when they need to most.

Nicely illustrating the importance of honesty in reporting data, even on local TV.

In fact, the actual value for GDP fell outside the economists’ prediction interval six times in eighteen years, or fully one-third of the time. Another study,18 which ran these numbers back to the beginnings of the Survey of Professional Forecasters in 1968, found even worse results: the actual figure for GDP fell outside the prediction interval almost half the time. There is almost no chance19 that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions.

In reality, when a group of economists give you their GDP forecast, the true 90 percent prediction interval—based on how these forecasts have actually performed20 and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent).*

When you hear on the news that GDP will grow by 2.5 percent next year, that means it could quite easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving. The old joke about economists’ having called nine out of the last six recessions correctly has some truth to it; one actual statistic is that in the 1990s, economists predicted only 2 of the 60 recessions around the world

and this is why we cant have nice things, i mean macroeconomics
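The passage’s “almost no chance” is easy to quantify. A minimal Python sketch (my own arithmetic, using the counts the passage gives: 6 misses in 18 years against a nominal 90 percent interval):

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# If the economists' 90% intervals were honestly calibrated, the true GDP
# figure should fall outside them only 10% of the time. Probability of
# seeing 6 or more misses in 18 years anyway, by bad luck alone:
p_unlucky = binom_tail(18, 6, 0.10)
print(f"{p_unlucky:.4f}")  # well under 1%
```

So under honest calibration, a miss rate of one-third over eighteen years would be an extreme fluke; the intervals are simply too narrow.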

I have no idea whether I was really a good player at the very outset. But the bar set by the competition was low, and my statistical background gave me an advantage. Poker is sometimes perceived to be a highly psychological game, a battle of wills in which opponents seek to make perfect reads on one another by staring into one another’s souls, looking for “tells” that reliably betray the contents of the other hands. There is a little bit of this in poker, especially at the higher limits, but not nearly as much as you’d think. (The psychological factors in poker come mostly in the form of self-discipline.) Instead, poker is an incredibly mathematical game that depends on making probabilistic judgments amid uncertainty, the same skills that are important in any type of prediction.

The obvious idea is to program computers to play poker for u online. If they play against bad humans, they shud bring in a steady flow of cash for almost free.


## Review: Beyond the Hoax: Science and Culture (Alan Sokal)


Much of the material is the same as in Sokal and Bricmont’s earlier book. But there is some new material as well. I especially found the stuff on hindu nationalism and pseudoscience interesting, and the stuff on pseudoscience in nursing. Never heard of that before, but it wasnt totally unexpected. All health related fields hav large amounts of pseudoscience. It is unfortunate that the most important fields are those most full of pseudoscience!

—-

Part III goes on to treat weightier social and political topics using the same lens. Chapter 8 analyzes the paradoxical relation between pseudoscience and postmodernism, and investigates how extreme skepticism can abet extreme credulity, using a series of detailed case studies: pseudoscientific therapies in nursing and “alternative medicine”; Hindu nationalist pseudoscience in India21; and radical environmentalism. This investigation is motivated by my suspicion that credulity in minor matters prepares the mind for credulity in matters of greater import — and, conversely, that the kind of critical thinking useful for distinguishing science from pseudoscience might also be of some use in distinguishing truths in affairs of state from lies. Chapter 9 takes on the largest and most powerful pseudoscience of all: organized religion. This chapter focusses on the central philosophical and political issues raised by religion in the contemporary world: it deplores the damage that is done by our culture’s deference toward “faith”, and it asks how nonbelievers and believers can find political common ground based on shared moral ideas. Finally, Chapter 10 draws some of these concerns together, and discusses the relationship between epistemology and ethics as they interact in the public sphere.

surely this is true.

#115 The idea that theories should refer only to observable quantities is called operationalism; far from being postmodernist, it was popular among physicists and philosophers of physics in the first half of the twentieth century. But it has severe flaws: see Chapter 7 below (pp. 240-245) as well as Weinberg (1992, pp. 174-184).

i thought this was a part of logical positivism, and it seems that it was. i knew about operational definitions.

http://plato.stanford.edu/entries/operationalism/

When all is said and done, the fundamental flaw in Merchant and Harding’s metaphor-hermeneutics is not exegetical but logical. Let us grant for the sake of argument that some of the founders of modern science consciously used sexist metaphors to promote their epistemological and methodological views (this much is probably true, even if Merchant and Harding have exaggerated the case). But what would that entail for the philosophy (as opposed to the history) of science? Apparently the critics wish to claim that sexism could have passed from metaphor into the substantive content of scientific methods and/or theories. But if modern science does in fact contain sexist assumptions, then surely the feminist theorists ought to be able to locate and criticize those biased assumptions, independently of any argument from history. Indeed, to do otherwise is to commit the “genetic fallacy”: evaluating an idea on the basis of its origin rather than its content.

Putting aside the florid accusations of rape and torture, the argument of Merchant and Harding boils down to the assertion that the scientific revolution of the seventeenth century displaced a female-centered (spiritual, hermetic, organic, geocentric) universe in favor of a male-centered (rationalist, scientific, mechanical, heliocentric) one.21 How should we evaluate this argument?

To begin with, one might wonder whether the gender associations claimed for these two cosmologies are really as univocal as the feminist critics claim.22 (After all, the main defender of the geocentric worldview — the Catholic Church — was not exactly a female-centered enterprise, its adoration of the Virgin Mary notwithstanding.) But let us put aside this objection and grant these gender associations for the sake of argument; for the principal flaw in the Merchant-Harding thesis is, once again, not historical but logical. Margarita Levin puts it bluntly: Do Merchant and Harding really “think we have a choice about which theory is correct? Masculine or feminine, the solar system is the way it is.”23

The same point applies not only to astronomy but to scientific theories quite generally; and the bottom line is that there is ample evidence, independent of any allegedly sexist imagery, for the epistemic value of modern science. Therefore, as Koertge remarks, “if it really could be shown that patriarchal thinking not only played a crucial role in the Scientific Revolution but is also necessary for carrying out scientific inquiry as we know it, that would constitute the strongest argument for patriarchy that I can think of!”24

true story :D

Of course, the feminist science-critics are not only archaeologists of 300-year-old science; some of their critique is resolutely modern, even postmodern. Here, for instance, is what Donna Haraway, professor of the history of consciousness (!) at the University of California-Santa Cruz and one of the most acclaimed feminist theorists of science, says about her research:

For the complex or boundary objects in which I am interested, the mythic, textual, technical, political, organic, and economic dimensions implode. That is, they collapse into each other in a knot of extraordinary density that constitutes the objects themselves. In my sense, story telling is in no way an ‘art practice’ — it is, rather, a fraught practice for narrating complexity in such a field of knots or black holes. In no way is story telling opposed to materiality. But materiality itself is tropic; it makes us swerve, it trips us; it is a knot of the textual, technical, mythic/oneiric, organic, political, and economic.2

As right-wing critic Roger Kimball acidly comments: “Remember that this woman is not some crank but a professor at a prestigious university and one of the leading lights of contemporary ‘women’s studies.’ ”26 The saddest thing, for us pinkos and feminists, is that Kimball is dead on target.

women’s studies is nearly completely trash. reminds me of the article about black studies in the US: https://chronicle.com/blogs/brainstorm/the-most-persuasive-case-for-eliminating-black-studies-just-read-the-dissertations/46346

This theory is startling, to say the least: Does the author really believe that menstruation makes it more difficult for young women to understand elementary notions of geometry? Evidently we are not far from the Victorian gentlemen who held that women, with their delicate reproductive organs, are unsuited to rational thought and to science. With friends like this, the feminist cause has no need of enemies.

the worst enemy of women: women.

[after quoting Lacan]

Mathematicians and physicists are used to receiving this sort of stuff in typewritten envelopes from unknown correspondents. Lacan’s grammar and spelling are better than in most of these treatises, but his logic isn’t. To put it bluntly, Lacan is a crank — an unusually erudite one, to be sure, but a crank nonetheless.59

interesting. i will ask Sokal to expand on that theme.

So, if we look critically at realism, we may be tempted to turn toward instrumentalism. But if we look critically at instrumentalism, we feel forced to return to a modest form of realism. What, then, should one do? Before coming to a possible solution, let us first consider radical alternatives.

surprisingly true.

[after quoting Plantinga]

Let us stress that we disagree with 90% of Plantinga’s philosophy; but if he is so eloquently on target on this particular point, why not give him credit for it?

i was surprised they quoted him, but then, they make that comment. perfect play!

Let me stress in advance that I will not be concerned here with explaining in detail why astrology, homeopathy and the rest are in fact pseudoscience; that would take me too far afield. Nor will I address, except in passing, the important but difficult problems of understanding the psychological attractions of pseudoscience and the social factors affecting its spread.28 Rather, my principal aim is to investigate the logical and sociological nexus between pseudoscience and postmodernism.

footnote 28:

For a shrewd meditation on the former question, see Levitt (1999, especially pp. 12-22 and chapter 4). The latter question is indirectly addressed by Burnham (1987), in the context of a fascinating history of the popularization of science in the United States in the nineteenth and twentieth centuries.

For my own part, I have been struck by the fact that nearly all the pseudoscientific systems to be examined in this essay are based philosophically on vitalism: that is, the idea that living beings, and especially human beings, are endowed with some special quality (“life energy”, élan vital, prana, qi) that transcends the ordinary laws of physics. Mainstream science has rejected vitalism since at least the 1930s, for a plethora of good reasons that have only become stronger with time (see e.g. Mayr 1982). But these good reasons are understood by only a tiny fraction of the populace, even in the industrialized countries where science is supposedly held in high esteem. Moreover — and perhaps much more importantly — the anti-vitalism characteristic of modern science is deeply unsettling emotionally to most (perhaps all) people, even to those who are not conventionally religious. See again Levitt (1999). Of course, none of these speculations pretend to any scientific rigor; careful empirical investigation by psychologists and sociologists is required.

vitalism -.-

Sokal mentions the https://en.wikipedia.org/wiki/Emily_Rosa experiment.

the proponents must really feel bad… even a child can disprove their beliefs. how stupid are they??? hopefully, it was only a fringe idea, right, right?

When I first heard about Emily’s experiment, I admired her ingenuity but wondered whether anyone really took Therapeutic Touch seriously. How wrong I was! Therapeutic Touch is taught in more than 80 college and university schools of nursing in at least 70 countries, is practiced in at least 80 hospitals across North America, and is promoted by leading American nursing associations.32 Its inventor claims to have trained more than 47,000 practitioners over a 26-year period, who have gone on to train many more.33 At least 245 books or dissertations have been published that include “Therapeutic Touch” […]. Therapeutic Touch appears to have become one of the most widely practiced “holistic” nursing techniques.

sigh!

cited from pseudoscience source:

[O]ur intuitive faculty is nothing other than a source of sound premises about the nature of reality…. [T]here exists within us a source of direct information about reality that can teach us all we need to know.

top #1 reason not to teach Plato’s nonsense.

But of course, those who believe in Genesis or transubstantiation do not consider these ideas to be crazy; quite the contrary, they think that they have good reasons to hold their beliefs. Indeed, Harris argues convincingly that whenever any person P believes any proposition X — at least in the ordinary sense of the English word “believe” — this requires, first of all, that P must believe X to be true, i.e. to be a factually accurate representation of the world; and secondly, that P must think he has good reasons to believe X, in the sense that he envisions his belief as caused, at least in part, by the fact that X is true. As Harris puts it (p. 63), “there must be some causal connection, or an appearance thereof, between the fact in question and my acceptance of it.”

this kind of causal reliabilism will not work. cf. http://plato.stanford.edu/entries/platonism-mathematics/#EpiAcc


## Conversation with Miao about guidance, selection of students, epistemology of getting rid of bad meme complexes

[11:22:42] The Midget – Miao: Who the fuck is Isstsidwmnh?
[11:23:09] The Midget – Miao: [05:52:19] Isstsidwmnh: We have no capacity to falsify the existence of reality. That does not mean we throw away reality. In fact… that means we embrace it!
[05:52:32] Isstsidwmnh: If something is too obviously true, then my only criticism is that it will get boring too soon.
[05:57:04] Isstsidwmnh: But, anyway, all this seems to be scientific values projected onto what can be acceptable described as a non-scientific work.
[11:23:12] The Midget – Miao: WTF.
[11:23:35] Emil – Deleet: its I said stupid things so i dont want my name here
[11:23:37] Emil – Deleet: -guy
[11:24:11] The Midget – Miao: At least he knows he said stupid things
[11:24:17] The Midget – Miao: I stopped reading after that part
[11:24:22] Emil – Deleet: dont do that
[11:24:30] Emil – Deleet: u need to stop stopping reading too early
[11:24:45] The Midget – Miao: You are very patient with him!
[11:25:15] Emil – Deleet: im a very patient person
[11:25:16] Emil – Deleet: err
[11:26:02] The Midget – Miao: In that conversation you actually sounded very nurturing and patient
[11:26:15] The Midget – Miao: which is quite different from my impression of you
[11:28:17] Emil – Deleet: im a guiding light for lesser minds
[11:28:39] Emil – Deleet: if i had zero tolerance like u, then they wudnt develop at all
[11:28:49] The Midget – Miao: Yes, but you are also very selective of who you talk to
You don’t talk to ALL stupid people who happen to cross your path
[11:28:56] Emil – Deleet: ofc not
[11:28:58] Emil – Deleet: waste of time
[11:29:05 | Edited 11:29:09] The Midget – Miao: Yes, so what are the selection requirements?
[11:29:11] Emil – Deleet: it isnt even high iq
[11:29:12] Emil – Deleet: :P
[11:29:26] Emil – Deleet: those two filosofy students that i know, they arent high
[11:29:47] Emil – Deleet: but they show tremendous progress
[11:29:53] The Midget – Miao: Yup, what I meant was:
Why are you more kind towards a Freud supporter than a Christian (for example)
[11:29:54] Emil – Deleet: what they needed was guidance
[11:30:09] Emil – Deleet: and a bit of open mindedness and willingness to read a lot of stuff
[11:30:17] The Midget – Miao: Hm.
[11:30:21] Emil – Deleet: xtians are usually hopeless
[11:30:21] The Midget – Miao: Fair enough
[11:30:28] Emil – Deleet: freudian supporters not always
[11:30:49] Emil – Deleet: the freudian complex (HAH!) is more easy to get rid off
[11:31:23] Emil – Deleet: in terms of web of belief, its becus it doesnt integrate so well with the persons other beliefs
[11:31:45] Emil – Deleet: hence, easier to reject – becus it requires a smaller number of changes to beliefs and their connections
[11:32:07] Emil – Deleet: a smaller overhaul of the web of belief, to say it in a more figurative way
[11:32:20] The Midget – Miao: Understandable. If you look at philosophers of religion like Platinga and an Inwagen you’d realise that their arguments are often very convoluted
[11:32:37] The Midget – Miao: because their religious beliefs are very inconsistent with what they know
[11:33:03] The Midget – Miao: So they come up with a lot of very twisted arguments to try to weave a coherent whole
[11:33:26] Emil – Deleet: i think Plantingas modal ontological argument is rather easy to deal with – it ‘only’ requires some understanding of equivocation, alethic (S5) modal logic !
[11:33:42] Emil – Deleet: thats little compared to some other arguments.
[11:33:54] Emil – Deleet: say, arguments from design require a lot of biology, cosmology etc. to deal with
[11:34:26] The Midget – Miao: I forgot the exact contents of his modal ontological argument
[11:34:48] Emil – Deleet: http://analyticabstraction.blogspot.dk/2007/11/philosophy-of-religion-2-natural_14.html


## Something about certainty, proofs in math, induction/abduction

This conversation followed me posting the post just before, and several people bringing up the same proof.

Aowpwtomsihermng = Afraid of what people will think of me, so i had Emil remove my name-guy

[09:57:00] Emil – Deleet: http://mathbin.net/109013
[09:58:50] Aowpwtomsihermng: Your mates know their algebra.
[10:00:09] Emil – Deleet: this guy is a mathematician
[10:00:27] Emil – Deleet: fysicist ppl have not chimed in yet
[10:00:32] Emil – Deleet: they are having classes i think
[10:08:18] Aowpwtomsihermng: Have you worked out the inductive proof yet?
[10:09:33] Emil – Deleet: no
[10:09:40] Emil – Deleet: i dont know how they work in detail
[10:09:43] Emil – Deleet: and it takes time
[10:09:49] Emil – Deleet: and i already crowdsourced the problem
[10:10:00] Emil – Deleet: so… doesnt pay for me to look for it
[10:10:19] Aowpwtomsihermng: CBA, right?
[10:10:24] Emil – Deleet: i didnt even need any fancy math proof to begin with
[10:10:30] Emil – Deleet: since i already proved it to my satisfaction
[10:10:54] Aowpwtomsihermng: Induction in the logical rather than mathematical sense…
[10:11:00] Emil – Deleet: yes
[10:11:17] Aowpwtomsihermng: Not as rigorous, but useful anyway.
[10:11:23] Emil – Deleet: or abduction
[10:11:46] Emil – Deleet: mathematical certainty is overrated
[10:11:48] Emil – Deleet: ;)
[10:11:59] Emil – Deleet: just look at economics
[10:12:02] Emil – Deleet: :P
[10:12:27] Aowpwtomsihermng: You never know, it might have worked for the first twenty numbers then stopped working. Unlikely, but possible.
[10:12:48] Aowpwtomsihermng: At least now you know that’s not the case.
[10:12:49] Emil – Deleet: astronomically unlikely
[10:12:56] Emil – Deleet: and i also tried other random numbers
[10:13:02] Emil – Deleet: like 3242
[10:13:21] Emil – Deleet: IMO, not much certainty was gained
[10:13:50 | Edited 10:14:04] Emil – Deleet: its approximately as likely that we missed an error in the proof as it is that abduction/induction fails in this case
[10:14:26] Aowpwtomsihermng: But once you have two or three proofs, then that likelihood drops dramatically.
[10:14:46] Emil – Deleet: perhaps
[10:15:00] Aowpwtomsihermng: But I take your point, it’s not a *great* deal of extra certainty.
[10:15:15] Emil – Deleet: for practice, its an irrelevant increase
[10:15:34] Emil – Deleet: if it comes at a great time cost – not worth it
[10:15:41] Emil – Deleet: thats what mathematicians are for ;)
[10:15:50] Emil – Deleet: (with the implication that their time isnt worth much! :D)
[10:16:55 | Edited 10:17:14] Aowpwtomsihermng: Right, right. We programmers and mathematicians are mere cogs in the machinery of your grand device.
[10:17:19] Emil – Deleet: ^^
[10:17:36] Emil – Deleet: at least ure part of something great ^^
[10:17:37] Emil – Deleet: :P


## Some more stuff about KK-principle

The reason to post the other post about the KKp is that i was having a conversation with a friend about it. Turns out he is a supporter of it. I think it is rather obvious that it isnt true. I posted the argument about infinite beliefs already, but there is more to be said about it.

As it is, some years ago i spent some time discussing KKp with a friend (kennethamy, prof. emeritus, RIP). Unfortunately, i googled around and cudnt find the specific threads. It was on freeratio (FRDB) or what used to be called philosophy forums (now able2know). I found a couple of threads about it, but not exactly what i was looking for.

Im of the opinion that the best way to change wrong opinions in people is not to talk to them about it, and give arguments. This usually results in combative behavior (such are humans). Instead i recommend reading a lot about the subject. So, i looked for some writings to send to my friend about KKp. Here’s what i wrote to him on Skype:

[13:12:42] Emil – Deleet: sketch of proof
[13:12:42] Emil – Deleet: http://emilkirkegaard.dk/en/?p=3264
[13:12:48] Emil – Deleet: and no
[13:12:49] Emil – Deleet: ofc not
[13:12:54] Emil – Deleet: and yes, it is obvious
[13:13:11] Emil – Deleet: i went and looked for my conversation with kennethamy about it
[13:13:16] Emil – Deleet: didnt find it
[13:13:21] Emil – Deleet: well “it”
[13:13:29] Emil – Deleet: since i have talked with him about it many times
[13:13:45] Emil – Deleet: (kennethamy is a now dead prof. emeritus that i knew)
[13:16:41] Emil – Deleet: http://www.iep.utm.edu/kk-princ/
[13:16:50] Emil – Deleet: i think i read that art. years ago
[13:21:22] Emil – Deleet: not particularly clear regarding important things (suprise exam argument)
[13:23:06] Emil – Deleet: //
if u like infinite numbers of beliefs, u might like infinitism as a way out of the regression argument of epis.
[13:23:20] Emil – Deleet: https://en.wikipedia.org/wiki/Infinitism
[13:23:37] Emil – Deleet: i read Klein’s papers in high school (by myself, was curious)
[13:23:41] Emil – Deleet: perhaps u will like them
[13:23:49] Emil – Deleet: i found them interesting but not convincing
[13:24:21] Emil – Deleet: if i had time to go into some depth about KKp again, id read http://web.mit.edu/dlgreco/www/KKPaper.pdf
[13:24:33] Emil – Deleet: u might like that as well, as it is a defense of KKp
[13:30:50] Emil – Deleet: also useful
[13:30:50] Emil – Deleet: http://www.unc.edu/~ujanel/Notes%20on%20the%20KK%20Thesis.pdf
[13:30:59] Emil – Deleet: contains some common counterexamples
[13:32:43] Emil – Deleet: Swartz’s remark also comes to mind “Over the years I have found that a great many claims in epistemology are refuted by looking at the actual beliefs of young children and uneducated and unsophisticated adults.”
[13:33:01] Emil – Deleet: cf. http://emilkirkegaard.dk/en/?p=1690


## Incomplete formal proof that the KK-principle is wrong

(KK) If one knows that p, then one knows that one knows that p.

Definitions
A0 is the proposition that 1+1=2.
A1 is the proposition that Emil knows that 1+1=2.
A2 is the proposition that Emil knows that Emil knows that 1+1=2.

An is the proposition that Emil knows that Emil knows that … that 1+1=2.
Where “…” is filled by “Emil knows that” repeated the number of times in the subscript of A.

Argument
1. Assumption for RAA
(∀P)(∀x)(Kx(P) → Kx(Kx(P)))
For any proposition, P, and any person, x, if x knows that P, then x knows that x knows that P.

2. Premise
Ke(A0)
Emil knows that A0.

3. Premise
(∃S1)(A0∈S1 ∧ A1∈S1 ∧ … ∧ An∈S1 ∧ …) ∧ |S1|=∞ ∧ S1=SA
There is a set, S1, such that A0 belongs to S1, and A1 belongs to S1, and so on for every An; the cardinality of S1 is infinite, and S1 is identical to SA.

4. Inference from (1), (2), and (3)
(∀P)(P∈SA → Ke(P))
For any proposition, P, if P belongs to SA, then Emil knows that P.

5. Premise
¬(∀P)(P∈SA → Ke(P))
It is not the case that, for any proposition, P, if P belongs to SA, then Emil knows that P.

6. Inference from (1-5), RAA
¬(∀P)(∀x)(Kx(P) → Kx(Kx(P)))
It is not the case that, for any proposition, P, and any person, x, if x knows that P, then x knows that x knows that P.

Proving it
Proving that this is formally valid is somewhat difficult, as it requires a system with set theory and predicate logic with quantification over propositions. The above sketch should be enough for whoever doubts the formal validity.
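Step 4 is where the real work happens: it compresses a mathematical induction. A sketch of that induction, under the premises above (my own spelling-out, not part of the original argument):

```latex
% Base case: K_e(A_0) holds by premise 2.
% Inductive step: assume K_e(A_n). Instantiating premise 1 (KK) with
% P := A_n and x := e gives K_e(A_n) \to K_e(K_e(A_n)); since A_{n+1}
% is by definition K_e(A_n), we obtain K_e(A_{n+1}).
% Conclusion: by induction,
\forall n \in \mathbb{N} \colon \; K_e(A_n)
% i.e. Emil knows every member of S_A, which is step 4. Premise 5
% (Emil does not know every member of S_A) then yields the reductio.
```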


## Review of and thoughs about “Fooled by Randomness” (Nassim Nicholas Taleb)

In general, this book is full of repetitions, and so it can easily get boring to read. The book shud have been 50-100 pages shorter, then it wud have been much better.

Aside from that, the book is okay. I wudnt particularly recommend reading it, but i wudnt particularly recommend not reading it either. Altho in general one shud not read mediocre books without reason. Time is limited, so one shud read the highest quality material one can find.

Fooled by Randomness – Role of Chance in Markets and Life PROPER

### Pre-chapter 1 – SOLON’S WARNING

Part I is concerned with the degree to which a situation may yet, in the course of time, suffer change. For we can be tricked by situations involving mostly the activities of the Goddess Fortuna – Jupiter’s firstborn daughter. Solon was wise enough to get the following point; that which came with the help of luck could be taken away by luck (and often rapidly and unexpectedly at that). The flipside, which deserves to be considered as well (in fact it is even more of our concern), is that things that come with little help from luck are more resistant to randomness. Solon also had the intuition of a problem that has obsessed science for the past three centuries. It is called the problem of induction. I call it in this book the black swan or the rare event. Solon even understood another linked problem, which I call the skewness issue; it does not matter how frequently something succeeds if failure is too costly to bear.

Wat. Problem of induction is not the same as the black swan fenomenon. Altho they are somewhat related.

### Chapter 1 – IF YOU’RE SO RICH WHY AREN’T YOU SO SMART?

Nero holds an undergraduate degree in ancient literature and mathematics from Cambridge University. He enrolled in a Ph.D. program in statistics at the University of Chicago but, after completing the prerequisite coursework, as well as the bulk of his doctoral research, he switched to the philosophy department. He called the switch “a moment of temporary sanity”, adding to the consternation of his thesis director who warned him against philosophers and predicted his return back to the fold. He finished writing his thesis in philosophy. But not the Derrida continental style of incomprehensible philosophy (that is, incomprehensible to anyone outside of their ranks, like myself). It was quite the opposite; his thesis was on the methodology of statistical inference in its application to the social sciences. In fact, his thesis was indistinguishable from a thesis in mathematical statistics – it was just a bit more thoughtful (and twice as long).

I like where this is going. Except that im inclined to think it is incomprehensible to them as well; they are just deluded into thinking that it isnt.

### Chapter 2 – A BIZARRE ACCOUNTING METHOD

The failure rate of these scientists, though, was better, but only

slightly so than that of MBAs; but it came from another reason, linked

to their being on average (but only on average) devoid of the smallest bit

of practical intelligence. Some successful scientists had the judgment

(and social graces) of a door knob – but by no means all of them. Many

people were capable of the most complex calculations with utmost rigor

when it came to equations, but were totally incapable of solving a

problem with the smallest connection to reality; it was as if they

understood the letter but not the spirit of the math. I am convinced that

X, a likeable Russian man of my acquaintance, has two brains: one for

math and another, considerably inferior one, for everything else (which

included solving problems related to the mathematics of finance). But on

occasion a fast-thinking scientific-minded person with street smarts

would emerge. Whatever the benefits of such population shift, it

improved our chess skills and provided us with quality conversation

during lunchtime – it extended the lunch hour considerably. Consider

that I had in the 1980s to chat with colleagues who had an MBA or tax

accounting background and were capable of the heroic feat of discussing

FASB standards. I have to say that their interests were not too

contagious. The interesting thing about these physicists does not lie in

their ability to discuss fluid dynamics; it is that they were naturally

interested in a variety of intellectual subjects and provided pleasant

conversation.

I cud not agree more about fysisists! Thats one reason why i like them. Generally clever and curious people, even if they read far too little to match up with me, and lack rigour in filosofical discussions. Mostly due to no training at all (no logic, no critical thinking etc.).

### Chapter 3 – A MATHEMATICAL MEDITATION ON HISTORY

Another analogy would be with grammar; mathematics is often

tedious and insightless grammar. There are those who are interested in

grammar for grammar’s sake, and those interested in avoiding solecisms

while writing documents. We are called “quants” – like physicists, we

have more interest in the employment of the mathematical tool than in

the tool itself. Mathematicians are born, never made. Physicists and

quants too. I do not care about the “elegance” and “quality” of the

mathematics I use so long as I can get the point right. I have recourse to

Monte Carlo machines whenever I can. They can get the work done.

They are also far more pedagogical, and I will use them in this book for

the examples.

I agree 100% with this view of math. A tool, somewhat interesting in itself, but MUCH MORE interesting when it is applicable to something that interests me.
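The passage's preference for Monte Carlo machines is easy to illustrate. Below is a minimal sketch of my own (the book contains no code): instead of solving for the distribution of a fair coin-flip bet analytically, generate many sample paths and look at the spread of outcomes.

```python
import random

random.seed(42)  # fixed seed so this toy run is reproducible

def simulate_path(n_rounds: int) -> float:
    """One sample path: cumulative result of n_rounds fair $1 bets."""
    wealth = 0.0
    for _ in range(n_rounds):
        wealth += 1.0 if random.random() < 0.5 else -1.0
    return wealth

# The Monte Carlo step: generate many paths, then inspect the distribution.
paths = [simulate_path(1_000) for _ in range(2_000)]
mean_outcome = sum(paths) / len(paths)
print(round(mean_outcome, 2))  # close to the analytic expectation of 0
print(min(paths), max(paths))  # individual paths stray far from it
```

This is the pedagogical point: the average over paths sits near zero, but any single realized history can end up far from it, which no single observed track record reveals.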

### Chapter 4 – RANDOMNESS, NONSENSE, AND THE SCIENTIFIC INTELLECTUAL

One conceivable way to discriminate between a scientific intellectual

and a literary intellectual is by considering that a scientific intellectual can

usually recognize the writing of another but that the literary intellectual

would not be able to tell the difference between lines jotted down by a

scientist and those by a glib non-scientist. This is even more apparent

when the literary intellectual starts using scientific buzzwords, like

“uncertainty principle”, “Godel’s theorem”, “parallel universe”, or

“relativity” either out of context or, as often, in exact opposition to the

scientific meaning. I suggest reading the hilarious Fashionable Nonsense

by Alan Sokal for an illustration of such practice (I was laughing so loudly

and so frequently while reading it on a plane that other passengers kept

whispering things about me). By dumping the kitchen sink of scientific

references in a paper, one can make another literary intellectual believe

that one’s material has the stamp of science. Clearly, to a scientist, science

lies in the rigor of the inference, not in random references to such

grandiose concepts as general relativity or quantum indeterminacy. Such

rigor can be spelled out in plain English. Science is method and rigor; it

can be identified in the simplest of prose writing. For instance, what

struck me while reading Richard Dawkins’ Selfish Gene3 is that, although

the text does not exhibit a single equation, it seems as if it were translated

from the language of mathematics. Yet it is artistic prose.

I like this! I really want to read Fashionable Nonsense aka. Intellectual Impostures, (review here). But i cant find a fucking ebook version. It is SO annoying to read paper books. They are not easy to quote and discuss!

And paper books are ridiculously overpriced. Especially textbooks.

Randomness can be of considerable help with the matter. For there is

another, far more entertaining way to make the distinction between the

babbler and the thinker. You can sometimes replicate something that can

be mistaken for a literary discourse with a Monte Carlo generator but it

is not possible randomly to construct a scientific one. Rhetoric can be

constructed randomly, but not genuine scientific knowledge. This is the

application of Turing’s test of artificial intelligence, except in reverse.

What is the Turing test? The brilliant British mathematician, eccentric,

and computer pioneer Alan Turing came up with the following test: a

computer can be said to be intelligent if it can (on average) fool a human

into mistaking it for another human. The converse should be true. A

human can be said to be unintelligent if we can replicate his speech by a

computer, which we know is unintelligent, and fool a human into

believing that it was written by a human. Can one produce a piece of

work that can be largely mistaken for Derrida entirely randomly?

The answer seems to be yes. Aside from the hoax by Alan Sokal (the

same of the hilarious book a few lines ago) who managed to produce

Monte Carlo generators designed to structure such texts and write entire

papers. Fed with “postmodernist” texts, they can randomize phrases

under a method called recursive grammar, and produce grammatically

sound but entirely meaningless sentences that sound like Jacques

Derrida, Camille Paglia, and such a crowd. Owing to the fuzziness of his

thought, the literary intellectual can be fooled by randomness.
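The "recursive grammar" trick described above is simple to demonstrate. Here is a minimal sketch of such a generator; the grammar rules and buzzword vocabulary are my own invention, not taken from Sokal's or any actual postmodernism generator:

```python
import random

random.seed(0)

# A toy recursive grammar: nonterminals map to lists of possible
# productions; anything not in the table is a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP", "."]],
    "NP":  [["the", "ADJ", "N"], ["the", "N"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["of", "NP"]],
    "ADJ": [["dialectical"], ["hegemonic"], ["post-structural"]],
    "N":   [["discourse"], ["signifier"], ["narrative"]],
    "V":   [["deconstructs"], ["problematizes"], ["recontextualizes"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:  # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

sentence = " ".join(expand("S")).replace(" .", ".")
print(sentence)  # grammatically sound, entirely meaningless
```

Every run yields a different well-formed but contentless sentence, which is exactly the property that fools the fuzzy reader: grammaticality is cheap to generate at random, meaning is not.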

I agree, except that i dont think Paglia is that bad. Altho she does appear to like The Second Sex according to Wiki, she gave a pretty nice interview along with Summers. I find it difficult to dislike someone who is initially labeled an “antifeminist”. I skimmed her Wikiquote page, and it doesnt appear to have (m)any nonsense quotations like Derrida and others.

Perhaps Taleb has confused her with some other feminist writer? Surely, there are lots of insane ones.

It is hard to resist discussion of artificial history without a comment on

the father of all pseudothinkers, Hegel. Hegel writes a jargon that is

meaningless outside of a chic Left-Bank Parisian cafe or the humanities

department of some university extremely well insulated from the real

world. I suggest this passage from the German “philosopher” (this

passage was detected, translated and reviled by Karl Popper):

Sound is the change in the specific condition of segregation of the

material parts, and in the negation of this condition; merely an

abstract or an ideal ideality, as it were, of that specification. But this

change, accordingly, is itself immediately the negation of the material

specific subsistence; which is, therefore, real ideality of specific

gravity and cohesion, i.e. – heat. The heating up of sounding bodies,

just as of beaten or rubbed ones, is the appearance of heat,

originating conceptually together with sound.

Even a Monte Carlo engine could not sound as random as the great

philosophical master thinker (it would take plenty of sample runs to get

the mixture of heat and sound). People call that philosophy and

frequently finance it with taxpayer subsidies! Now consider that

Hegelian thinking is generally linked to a “scientific” approach to

history; it has produced such results as Marxist regimes and even a

branch called “neo-Hegelian” thinking. These “thinkers” should be

given an undergraduate-level class on statistical sampling theory prior to

their release in the open world.

Good title. Ill refer to him as that from now on.

Think this is obscure filosofy? It isnt. It is common. At the Aarhus University Department of Philosophy, there are multiple mandatory exams where one can pick Hegel.

Yes, they dun goofed.

There are instances where I like to be fooled by randomness. My allergy

to nonsense and verbiage dissipates when it comes to art and poetry. On

the one hand, I try to define myself and behave officially as a no-

nonsense hyper-realist ferreting out the role of chance; on the other,

I have no qualms indulging in all manner of personal superstitions. Where

do I draw the line? The answer is aesthetics. Some aesthetic forms

appeal to something genetic in us, whether or not they originate in

random associations or plain hallucination. Something in our human

genes is deeply moved by the fuzziness and ambiguity of language; then

why fight it?

The poetry and language-lover in me was initially depressed by the

account of the Exquisite Cadavers poetic exercise where interesting and

poetic sentences are randomly constructed. By throwing enough words

together, some unusual and magical-sounding metaphor is bound to

emerge according to the laws of combinatorics. Yet one cannot deny

that some of these poems are of ravishing beauty. Who cares about their

origin if they manage to please our aesthetic senses?

Answer: people who commit the genetic fallacy.

### Chapter 6 – SKEWNESS AND ASYMMETRY

When I was in the employment of the New York office of a large

investment house, I was subjected on occasions to the harrying weekly

“discussion meeting”, which gathered most professionals of the New York

trading room. I do not conceal that I was not fond of such gatherings, and

not only because they cut into my gym time. While the meetings included

traders, that is, people who are judged on their numerical performance, it

was mostly a forum for salespeople (people capable of charming

customers), and the category of entertainers called Wall Street

“economists” or “strategists” who make pronouncements on the fate of

the markets, but do not engage in any form of risk taking, thus having their

success dependent on rhetoric rather than actually testable facts. During

the discussion, people were supposed to present their opinions on the state

of the world. To me, the meeting was pure intellectual pollution. Everyone

had a story, a theory, and insights that they wanted others to share. I resent

the person who, without having done much homework in libraries, thinks

that he is onto something rather original and insightful on a given subject

matter (and respect people with scientific minds like my friend Stan Jonas

who feel compelled to spend their nights reading wholesale on a subject

matter, trying to figure out what was done on the subject by others before

emitting an opinion – would the reader listen to the opinion of a doctor

who does not read medical papers?).

I have to confess that my optimal strategy (to soothe my boredom

and allergy to confident platitudes) was to speak as much as I could,

while totally avoiding listening to other people’s replies by trying to

solve equations in my head. Speaking too much would help me clarify

my mind, and, with a little bit of luck, I would not be “invited” back

(that is, forced to attend) the following week.

Hahaha. What is not to like about this guy? :D He is ofc right about people wanting to speak about stuff they know nothing about. There are a few subjects where this ALWAYS happens: IQ-research, politics, filosofy of religion. I think the most annoying is the first, since the science on the matter is so clear. I have definitely changed my mind from initial skepticism towards wholeheartedly embracing it. Most people are still stuck in the “initial skepticism” fase, and they never get out of it becus they dont read. I keep mocking people. They have not even read the main Wikipedia article about it, but they keep criticizing the research for dumb reasons that have been refuted decades ago, ex. tests are biased (dealing with racial issues), tests dont measure anything useful, one cannot ‘reduce intelligence to one number’ (not even sure what this means, if anything), etc.

I must admit to copying his strategy of talking as much as possible. Altho, keep in mind that “Generally speaking, you aren’t learning much when your lips are moving.” But then again, i dont generally socialize to learn stuff. Learning stuff is best done at home, reading.

Note that the economist Robert Lucas dealt a blow to econometrics

by arguing that if people were rational then their rationality would

cause them to figure out predictable patterns from the past and adapt, so

that past information would be completely useless for predicting the

future (the argument, phrased in a very mathematical form, earned him

a Nobel Memorial Prize in Economics). We are human and act

according to our knowledge, which integrates past data. I can translate

his point with the following analogy. If rational traders detect a pattern

of stocks rising on Mondays, then, immediately such a pattern becomes

detectable, it would be ironed out by people buying on Friday in

anticipation of such an effect. There is no point searching for patterns

that are available to everyone with a brokerage account; once detected,

they would be ironed out.

I hope he did more than that to get the Nobel prize. I have thought of that concept many times, altho never mentioned it to anyone IIRC. Wud be nice with some 2e+6 Swedish kronor.

Somehow, what came to be known as the Lucas critique was not

carried through by the “scientists”. It was confidently believed that the

scientific successes of the industrial revolution could be carried through

into the social sciences, particularly with such movements as Marxism.

Pseudoscience came with a collection of idealistic nerds who tried to

create a tailor-made society, the epitome of which is the central planner.

Economics was the most likely candidate for such use of science; you

can disguise charlatanism under the weight of equations, and nobody

can catch you since there is no such thing as a controlled experiment.

Now the spirit of such methods, called scientism by its detractors (like

myself), continued past Marxism, into the discipline of finance as a few

technicians thought that their mathematical knowledge could lead them

to understand markets. The practice of “financial engineering” came

along with massive doses of pseudoscience. Practitioners of these

methods measure risks, using the tool of past history as an indication of

the future. We will just say at this point that the mere possibility of the

distributions not being stationary makes the entire concept seem like a

costly (perhaps very costly) mistake. This leads us to a more

fundamental question: the problem of induction, to which we will turn

in the next chapter.

Historicism has always bothered me, altho i never studied it in detail. I dont think i have even done the bare minimum of reading the Wikipedia page. Does the Lucasian argument from before show that it is impossible to do historicism? It seems not, altho it goes some of the way. Clearly, there are patterns in history. Perhaps we have just not created some workable general theory (or theories) of history like we have in fysics or biology. Either becus it isnt possible, or becus we havent tried hard enough, or becus we are not clever enough, or becus we have too little data. I dont really know. Altho, if i had to bet, id bet against any such general theory of history.

### Chapter 7 – THE PROBLEM OF INDUCTION

Popper came up with a major answer to the problem of induction (to me

he came up with the answer). No man has influenced the way scientists

do science more than Sir Karl – in spite of the fact that many of his

fellow professional philosophers find him quite naive (to his credit, in

my opinion). Popper’s idea is that science is not to be taken as seriously

as it sounds (Popper when meeting Einstein did not take him as the

demigod he thought he was). There are only two types of theories:

1. Theories that are known to be wrong, as they were tested and

adequately rejected (he calls them falsified).

2. Theories that have not yet been known to be wrong, not falsified yet,

but are exposed to be proved wrong.

Why is a theory never right? Because we will never know if all the

swans are white (Popper borrowed the Kantian idea of the flaws in our

mechanisms of perception). The testing mechanism may be faulty.

However, the statement that there is a black swan is possible to make. A

theory cannot be verified. To paraphrase baseball coach Yogi Berra

again, past data has a lot of good in it, but it is the bad side that is bad. It

can only be provisionally accepted. A theory that falls outside of these

two categories is not a theory. A theory that does not present a set of

conditions under which it would be considered wrong would be termed

charlatanism – they would be impossible to reject otherwise. Why?

Because the astrologist can always find a reason to fit the past event, by

saying that Mars was probably in line but not too much so (likewise to

me a trader who does not have a point that would make him change his

mind is not a trader). Indeed the difference between Newtonian physics,

which was falsified by Einstein’s relativity, and astrology lies in the

following irony. Newtonian physics is scientific because it allowed us to

falsify it, as we know that it is wrong, while astrology is not because it

does not offer conditions under which we could reject it. Astrology

cannot be disproved, owing to the auxiliary hypotheses that come into

play. Such point lies at the basis of the demarcation between science and

nonsense (called “the problem of demarcation”).

I swear one day i will write something about the cliché Popperian writings. Difficult to believe that a phd cud have written this. It is full of the usual dumb stuff like obvious internal inconsistencies in language use like “A theory that falls outside of these two categories is not a theory.” or what about obviously wrong things like the confusion with demarcation principle (dividing things into science and non-science), falsification principle (a proposed demarcation principle) and meaningfulness. He is using falsification as a reverse verificationism of meaning.

Then there are things like “Newtonian physics is scientific because it allowed us to falsify it” giving the reader the idea, that being falsified is a sufficient condition for being science. Eh.

And whats with astrology as nonfalsifiable? There have been lots of studies that falsify various parts of astrology. It is very much falsifiable, and indeed has been falsified, so nonfalsifiability is not the reason it fails to be scientific.

This part leaves me greatly disappointed.

### Chapter 9 – IT IS EASIER TO BUY AND SELL THAN FRY AN EGG

You get an anonymous letter on January 2nd informing you that the

market will go up during the month. It proves to be true, but you

disregard it owing to the well-known January effect (stocks have gone

up historically during January). Then you receive another one on Feb 1st

telling you that the market will go down. Again, it proves to be true.

Then you get another letter on March 1st – same story. By July you are

intrigued by the prescience of the anonymous person and you are asked

to invest in a special offshore fund. You pour all your savings into it.

Two months later, your money is gone. You go spill your tears on your

neighbor’s shoulder and he tells you that he remembers that he received

two such mysterious letters. But the mailings stopped at the second

letter. He recalls that the first one was correct in its prediction, the other

incorrect.

What happened? The trick is as follows. The con operator pulls

10,000 names out of a phone book. He mails a bullish letter to one half

of the sample, and a bearish one to the other half. The following month

he selects the names of the persons to whom he mailed the letter whose

prediction turned out to be right, that is, 5,000 names. The next month

he does the same with the remaining 2,500 names, until the list narrows

down to 500 people. Of these there will be 200 victims. An investment

in a few thousand dollars worth of postage stamps will turn into several

million.
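The arithmetic of the halving is worth spelling out (assuming, as the passage implies, a 50/50 market direction each month, so that exactly half the letters are correct):

```python
# Each month only the half that received the correct prediction stays
# on the mailing list; the market direction is assumed 50/50 per month.
survivors = [10_000 // 2 ** month for month in range(6)]
print(survivors)  # [10000, 5000, 2500, 1250, 625, 312]
```

After four or five halvings the list is in the 300–600 range, which is roughly the “500 people” the passage rounds to; 200 of those then pour in their savings, having each seen an unbroken streak of correct predictions.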

This is a rather clever scam.

The most intuitive way to describe the data mining problem to a non-

statistician is through what is called the birthday paradox, though it is

not really a paradox, simply a perceptional oddity. If you meet

someone randomly, there is a one in 365.25 chance of your sharing

their birthday, and a considerably smaller one of having the exact

birthday of the same year. So, sharing the same birthday would be a

coincidental event that you would discuss at the dinner table. Now let

us look at a situation where there are 23 people in a room. What is the

chance of there being two people with the same birthday? About 50%.

For we are not specifying which people need to share a birthday; any

pair works.

I am familiar with the scenario, but is it really that easy to deal with leap year birthdays? It seems not. To make it easy, let’s say that we are looking at a 4-year period. For illustration purposes, let’s talk about marbles with numbers on them in a pool. They have numbers from 1 to 366. For every number except 60 (31 days in january, 29 days in february) there are 4 marbles with that number. There is only one marble with the number 60. In total there are 4·365+1=1461 marbles. With replacement, is the chance of picking a marble, noting the number, mixing them, picking a marble again and noting the same number really 1 in 365.25? My intuition says that it is 1 in 365+1/8 instead becus of the increased rarity of that marble.

Suppose a person P has birthday on the 1st january. What is his chance of meeting someone with the same birthday? 4 in 1461 = 1 in 365.25. So far so good.

Suppose a person S has birthday on the 29th february. What is his chance of meeting someone with the same birthday? 1 in 1461 ≠ 1 in 365.25.

Im not sure how to add these up to get the average chance. Surely, it very rarely happens that two people born on the 29th february meet each other. This should be reflected in the probability for the average person. It seems to me that his number does not take this into account. It is implicitly ‘assuming’ that the first person is not born on the 29th february.

Im sure a mathematician can solve this and either prove me right or wrong. Another way is just to program a test of it, which might be faster than trying to solve it mathematically.
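The marble model above can in fact be checked exactly in a few lines. This is my own calculation for the model exactly as stated: two independent, uniformly random draws from the 1461 marbles.

```python
from fractions import Fraction

# 365 dates occur 4 times each in a 4-year cycle; Feb 29 occurs once.
counts = [4] * 365 + [1]
total = sum(counts)  # 1461 marbles
assert total == 1461

# Chance that two independent draws show the same date:
# sum over dates of (probability of that date) squared.
p_match = sum(Fraction(c, total) ** 2 for c in counts)
odds = 1 / p_match           # exact: 237169/649
print(float(odds))  # ≈ 365.44
```

So the exact answer is 1 in 237169/649 ≈ 1 in 365.44. The suspicion that 1 in 365.25 is not exact is correct: Feb 29 makes a shared birthday slightly *rarer* than 1 in 365.25, though the guessed correction of 1/8 goes in the wrong direction.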

### Chapter 11 – RANDOMNESS AND OUR BRAIN: WE ARE PROBABILITY BLIND

Who are the most influential economists of the century, in terms of

journal references, their followings, and their influence over the

profession? No, it is not John Maynard Keynes, not Alfred Marshall,

not Paul Samuelson, and certainly not Milton Friedman. They are

Daniel Kahneman and Amos Tversky, psychology researchers whose

specialty was to uncover areas where human beings are not endowed

with rational thinking and optimal economic behavior.

The pair taught us a lot about the way we perceive and handle

uncertainty. Their research, conducted on a population of students and

professors in the early 1970s, showed that we do not correctly

understand contingencies. Furthermore, they showed that in the rare

cases when we understand probability, we do not seem to consider it in

our behavior. Since the Kahneman and Tversky results, an entire

discipline called behavioral finance and economics has flourished. It is in

open contradiction with the orthodox so-called neoclassical economics

taught in business schools under the normative names of efficient

markets, rational expectations, and other such concepts. It is worth

stopping, at this juncture, and discussing the distinction between

normative and positive sciences. A normative science (clearly a self-

contradictory concept) offers prescriptive teachings; it studies how

things should be. Some economists, for example, (those of the efficient

market religion) believe that humans are rational and act rationally

because it is the best thing for them to do (it is mathematically

“optimal”). The opposite is a positive science, which is based on how

people actually are observed to behave. In spite of economists’ envy of

physicists, physics is an inherently positive science while economics,

particularly microeconomics and financial economics, is predominantly

a normative one.

A normative science is ‘clearly’ a contradiction? That reminds me of the Peircian definition of “logic”, which is similar to the one found here: “Briefly speaking, we might define logic as the study of the principles of correct reasoning.”

The soft sciences of psychology and economics have cheated us on

occasions in the past. How? Economics has produced laughable ideas,

ideas that evaporate once one changes the assumptions a little bit. It

seems difficult to take sides with bickering economists trading often-

incomprehensible arguments (even to economists). Biology and

medicine, on the other hand, rank higher in scientific firmness; like

true sciences, they can explain things while at the same time being

subjected to falsification. They are both positive and their theories are

better theories, that is, more easily testable. The good news is that

neurologists are starting to confirm these results, with what is called

environment mapping in the brain, by taking a patient whose brain is

damaged in one single spot (say, by a tumor or an injury deemed to be

local) and deducing by elimination the function performed by such part

of the anatomy. This isolates the parts of the brain that perform the

various functions. The Kahneman and Tversky results thus found a terra

firma with the leaps in our knowledge obtained through behavioral

genetics and, farther, plain medicine. Some of the physiology of our

brain makes us perceive things and behave in a given manner. We are,

whether we like it or not, prisoners of our biology.

Researchers in evolutionary psychology provide convincing reasons

for these biases. We have not had the incentive to develop an ability to

understand probability because we did not have to do so – but the more

profound reason is that we are not designed to understand things. We

are built only to survive and procreate. To survive, we need to overstate

some probabilities, such as those that can affect our survival. For

instance, those whose brain imparted higher odds to dangers of death, in

other words the paranoid, survived and gave us their genes (provided

such paranoia did not come at too high a cost, otherwise it would have

been a drawback). Our brain has been wired with biases that may

hamper us in a more complex environment, one that requires a more

accurate assessment of probabilities.

The story of these biases is thus being corroborated by the various

disciplines; the magnitude of the perceptional distortions makes us less

than rational, in the sense of both having coherent beliefs (i.e. free of

logical contradictions) and acting in a manner compatible with these

beliefs.

What he is talking about is called error management theory. See Handbook of Evolutionary Psychology, ex. p. 241.

### Chapter 13 – CARNEADES COMES TO ROME: ON PROBABILITY AND SKEPTICISM

People confuse science and scientists. Science is great, but

individual scientists are dangerous. They are human; they are marred by

the biases humans have. Perhaps even more. For most scientists are hard

headed, otherwise they would not derive the patience and energy to

perform the Herculean tasks asked of them, like spending 18 hours a

day perfecting their doctoral thesis.

A scientist may be forced to act like a cheap defense lawyer rather

than a pure seeker of the truth. A doctoral thesis is “defended” by the

applicant; it would be a rare situation to see the student change his mind

upon being supplied with a convincing argument. But science is better

than scientists. It was said that science evolves from funeral to funeral.

After the LTCM collapse, a new financial economist will emerge, who

will integrate such knowledge in his science. He will be resisted by the

older ones but, again, they will be much closer to their funeral date than

he.

The saying he is referring to (without source) is this one:

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” – Max Planck (source)