BMI is often used as a proxy for body fat percentage or similar measures. It has a proven track record of predicting many health conditions, yet it receives a lot of criticism because it gives misleading results for some groups, notably bodybuilders. There is a conceptual link here with the criticism of simple IQ tests such as Raven’s, which supposedly ‘only measure the ability to spot figures’. Nonverbal matrix tests such as Raven’s or Cattell’s do indeed measure g less well than more diverse batteries do (Johnson et al., 2008). These visual tests could similarly be criticized for not working well on people with bad eyesight. They are nevertheless useful for a broad sample of the population.

Criticisms like this strike me as an incarnation of the perfect solution/Nirvana fallacy:

The perfect solution fallacy (aka the nirvana fallacy) is a fallacy of assumption: if an action is not a perfect solution to a problem, it is not worth taking. Stated baldly, the assumption is obviously false. The fallacy is usually stated more subtly, however. For example, arguers against specific vaccines, such as the flu vaccine, or vaccines in general often emphasize the imperfect nature of vaccines as a good reason for not getting vaccinated: vaccines aren’t 100% effective or 100% safe. Vaccines are safe and effective; however, they are not 100% safe and effective. It is true that getting vaccinated is not a 100% guarantee against a disease, but it is not valid to infer from that fact that nobody should get vaccinated until every vaccine everywhere prevents anybody anywhere from getting any disease the vaccines are designed to protect us from without harming anyone anywhere.

Any measure with more than zero validity can be useful in the right circumstances. If a measure has some validity and is easy to administer (BMI, or nonverbal pen-and-paper group tests), it can be very useful even though it has less validity than better measures (a body fat percentage test, or a full-battery IQ test).

Anyway, BMI should probably be retired now, because we have found a more effective (though surely not the best either!) measure:

Our aim was to differentiate the screening potential of waist-to-height ratio (WHtR) and waist circumference (WC) for adult cardiometabolic risk in people of different nationalities and to compare both with body mass index (BMI). We undertook a systematic review and meta-analysis of studies that used receiver operating characteristics (ROC) curves for assessing the discriminatory power of anthropometric indices in distinguishing adults with hypertension, type-2 diabetes, dyslipidaemia, metabolic syndrome and general cardiovascular outcomes (CVD). Thirty one papers met the inclusion criteria. Using data on all outcomes, averaged within study group, WHtR had significantly greater discriminatory power compared with BMI. Compared with BMI, WC improved discrimination of adverse outcomes by 3% (P < 0.05) and WHtR improved discrimination by 4–5% over BMI (P < 0.01). Most importantly, statistical analysis of the within-study difference in AUC showed WHtR to be significantly better than WC for diabetes, hypertension, CVD and all outcomes (P < 0.005) in men and women.
For the first time, robust statistical evidence from studies involving more than 300 000 adults in several ethnic groups, shows the superiority of WHtR over WC and BMI for detecting cardiometabolic risk factors in both sexes. Waist-to-height ratio should therefore be considered as a screening tool. (Ashwell et al, 2012)

Ashwell, M., Gunn, P., & Gibson, S. (2012). Waist-to-height ratio is a better screening tool than waist circumference and BMI for adult cardiometabolic risk factors: systematic review and meta-analysis. Obesity Reviews, 13(3), 275-286.

Johnson, W., te Nijenhuis, J., & Bouchard Jr., T. J. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81-95.
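For concreteness, both indices from the Ashwell abstract are trivial to compute. A minimal sketch (the example person is invented, and the 0.5 WHtR cutoff is the commonly cited rule of thumb of keeping your waist below half your height):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def whtr(waist_cm, height_cm):
    """Waist-to-height ratio: unitless, as long as both use the same unit."""
    return waist_cm / height_cm

# Hypothetical person: 85 kg, 1.80 m tall, 95 cm waist.
print(round(bmi(85, 1.80), 1))   # 26.2 -> "overweight" by the usual 25 cutoff
print(round(whtr(95, 180), 2))   # 0.53 -> above the suggested 0.5 cutoff
```

Note that WHtR needs no unit conversion at all, which is part of why it works as a screening tool.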

Some quotes from Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? by Scott O. Lilienfeld, Rachel Ammirati, and Kristin Landfield.

I was happy to learn that Lilienfeld is part of the neorational movement! I have already read one of his books, 50 Great Myths of Popular Psychology: Shattering Widespread Misconceptions about Human Behavior, and plan on reading another of them: Science and Pseudoscience in Clinical Psychology.


ABSTRACT—Despite Miller’s (1969) now-famous clarion call to “give psychology away” to the general public, scientific psychology has done relatively little to combat festering problems of ideological extremism and both inter- and intragroup conflict. After proposing that ideological extremism is a significant contributor to world conflict and that confirmation bias and several related biases are significant contributors to ideological extremism, we raise a crucial scientific question: Can debiasing the general public against such biases promote human welfare by tempering ideological extremism? We review the knowns and unknowns of debiasing techniques against confirmation bias, examine potential barriers to their real-world efficacy, and delineate future directions for research on debiasing. We argue that research on combating extreme confirmation bias should be among psychological science’s most pressing priorities.

Second, the term bias blind spot (Pronin, Gilovich, & Ross, 2004), more informally called the “not me fallacy” (Felson, 2002), refers to the belief that others are biased but that we are not. Research shows that people readily recognize confirmation bias and related biases in others, but not in themselves (Pronin et al., 2004). The bias blind spot, which we can think of as a “meta-bias,” leads us to believe that only others, not ourselves, interpret evidence in a distorted fashion.

Second, many individuals may be unreceptive to debiasing efforts because they do not perceive these efforts as relevant to their personal welfare. Research suggests that at least some cognitive biases may be reduced by enhancing participants’ motivation to examine evidence thoughtfully (e.g., by increasing their accountability to others), thereby promoting less perfunctory processing of information (Arkes, 1991; Tetlock & Kim, 1987). Therefore, some debiasing efforts may succeed only if participants can be persuaded that their biases result in poor decisions of real-world consequence to them.


Surely this is correct.

Fifth, researchers must be cognizant of the possibility that efforts to combat confirmation bias may occasionally backfire (Wilson, Centerbar, & Brekke, 2002). Researchers have observed a backfire effect in the literature on hindsight bias (Sanna, Schwarz, & Stocker, 2002), in which asking participants to generate many alternative outcomes for an event paradoxically increases their certainty that the original outcome was inevitable. This effect may arise because participants asked to think of numerous alternative outcomes find doing so difficult, leading them (by means of the availability heuristic; Tversky & Kahneman, 1973) to conclude that there weren’t so many alternative outcomes after all. Whether similar backfire effects could result from efforts to debias participants against confirmation bias by encouraging them to consider alternative viewpoints is unclear. Moreover, because research on attitude inoculation (McGuire, 1962) suggests that exposure to weak versions of arguments may actually immunize people against these arguments, exposing people to alternative positions may be effective only to the extent that these arguments are presented persuasively.


I recently stumbled upon this profile on OKCupid (her profile, my profile). She is obviously a very bright and well-read person, so prominent that I should have heard of her, given that she has nearly identical interests to mine. So I used my Google-fu, tried “vulcan straw rational talk”, and instantly found her. It turns out I had already seen (a bit of) one of her talks; no wonder she seemed familiar. Here is one of her talks, which is quite good. There was not too much I learned from it, but that’s just because I have already read a lot about cognitive biases, System 1 and 2, rationalism, etc.

Apparently, she has done many videos.

See also:

This reminds me that some years ago I toyed with the idea of making some videos of myself explaining things (I never did it). Yes, this lessens the information density, but it also increases the ease of spreading the information. It has the added benefit of training me for public speaking, which I will probably engage in later anyway, as well as semi-public speaking, like giving a lecture to a class or teaching at a school.

Too bad she lives in the US. She is totally hot, cute, and has a very nice personality. Obviously a chick like this will get a lot of messages, and with me not living remotely close to her, I figure it isn’t worth the time to write a decent message.

This surely sounds like one of those dubious psychology experiments. You know the type: a single experiment, or perhaps two small ones, that came in slightly below p < 0.05 and were therefore publishable. So I decided to take a look.

The Foreign-Language Effect: Thinking in a Foreign Tongue Reduces Decision Biases

In general, there is not much to note about the study, except that, in one small technical detail, their materials actually show the opposite of what they think: they gave the percentages as 33.3% and 66.6% instead of the correct 66.7%. Taken literally, this would make the secure option the better one by a small margin in expected value.
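To see the arithmetic, take the gain frame of the classic problem with hypothetical round numbers (600 lives at stake, the sure option saving exactly 200):

```python
lives = 600
sure_saved = 200  # the secure option saves exactly 200 lives

# Probabilities as printed in the materials: 33.3% save all, 66.6% save none
# (note these sum to only 99.9%).
ev_printed = 0.333 * lives          # expected lives saved by the gamble
# Probabilities as intended: exactly 1/3 and 2/3.
ev_intended = lives / 3

print(round(ev_printed, 1))   # 199.8 -> slightly worse than the sure 200
print(round(ev_intended, 1))  # 200.0 -> exactly equal to the sure option
```

With the rounded 33.3%, a pure expected-value maximizer should no longer be indifferent between the two options, which muddies the interpretation of "risk-averse" choices.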

Anyway, I’d like to see other people reproduce this effect with a larger sample. I’m not keen on these 40-200 sample sizes in psychology.

The implications are interesting. Suppose we now know that one makes more rational decisions (at least with regard to one cognitive bias) when thinking in a foreign language; what do we use this knowledge for? It is perhaps the most interesting reason to learn a foreign language I’ve heard of so far. Promoting rational thinking through… learning a new language. Strange, but it seems legit. One could, for instance, make people consider financial decisions in a foreign language, such as when taking out a loan. I’d like to see more research on this effect with regard to other biases.

The two goals, 1) having true beliefs and 2) not having false beliefs, may seem equivalent, but they in fact result in different optimal strategies.

Not having false beliefs

If one has only goal (2), the optimal strategy is to believe as few things as possible, suspending belief about most things. Even if one meticulously checks the evidence to avoid being wrong, one will make mistakes once in a while, perhaps simply because the current best available evidence about the subject is misleading (i.e., indicates that a particular thing is true which is actually false). So the optimal strategy is to believe nothing at all, but it is hardly possible to live like that. Unfortunately, acquiring beliefs is more or less automatic, in the sense that if one is exposed to evidence, one will automatically form the relevant beliefs without making a choice about it.1 So one needs to avoid exposing oneself to any evidence relevant to things one does not already believe. Perhaps spending one’s time meditating is the optimal strategy here.

Having true beliefs

If one has only goal (1), the optimal strategy is to acquire lots of information and believe all sorts of things about it. I’m not entirely sure about the exact optimal strategy. Perhaps it is to set the evidence requirement for a belief as low as possible, because this makes it possible to form lots of beliefs, even on bad evidence. However, even this takes some time, and since one wants to maximize the number of true beliefs, not just any beliefs, one should probably gather at least some evidence. On the other hand, gathering evidence takes time and effort which could be spent gathering evidence about other subjects and forming beliefs about them instead, so some equilibrium will emerge around the optimal evidence requirement. Gathering evidence about a particular subject is a diminishing-returns strategy for having true beliefs.
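The tradeoff between the two goals can be sketched with a toy simulation (all parameters here are invented for illustration): claims are true half the time, evidence is a noisy signal, and one believes a claim whenever the signal clears a threshold.

```python
import random

random.seed(0)

def run(threshold, n=10000):
    """Toy model: each claim is true with probability 0.5; one observes a
    noisy signal (centered higher for true claims) and believes the claim
    iff the signal clears the threshold. Returns (true, false) belief counts."""
    true_beliefs = false_beliefs = 0
    for _ in range(n):
        is_true = random.random() < 0.5
        # Noisy evidence: centered on 1 for true claims, 0 for false ones.
        signal = (1.0 if is_true else 0.0) + random.gauss(0, 0.7)
        if signal > threshold:
            if is_true:
                true_beliefs += 1
            else:
                false_beliefs += 1
    return true_beliefs, false_beliefs

for thr in (0.0, 0.5, 1.5):
    t, f = run(thr)
    print(f"threshold={thr}: {t} true beliefs, {f} false beliefs")
```

Lowering the threshold maximizes the count of true beliefs but lets in many false ones; raising it keeps false beliefs rare at the price of believing much less. That mirrors the difference between the two strategies above.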

What one gathers evidence about also matters. It is best to stick to areas where there is consensus among the experts, so that one can simply appeal to their authority and be right (most of the time). For subjects where there is much disagreement among experts, one would need to gather better evidence oneself to find out what the truth is. This probably means that one should employ the heuristics mentioned in the previous link and stay away from subjects that fail one or more of them. So, basically, one should avoid all the subjects that I like. :P

However, as the number of one’s beliefs keeps rising, one will run out of consensus subjects to study, in which case one will need to study non-consensus subjects and form beliefs about them too, even the ‘worst’ and less socially acceptable/politically correct subjects. Perhaps one should keep those beliefs to oneself.

It is also a good idea to find other people with the same goal, so that one can share evidence with them to speed up progress.

Both having true beliefs and avoiding false beliefs?

What about someone who has both goals, perhaps with different weights? Here it is more difficult to give advice about the optimal strategy. Perhaps one should set the evidence requirement pretty high, but not so impossibly high that no amount of evidence is enough. This should take care of most false beliefs. However, there are more ways to get rid of false beliefs. There are many debunking sites around, and lists of common misconceptions about all kinds of things. One should read those to get rid of pesky beliefs that got through the ‘evidence defense’, perhaps at some time before one was a critical thinker; beliefs tend to stick around. Here are some good things to read to get rid of misconceptions about various things:

There is plenty more such skepticism material around, which brings me to the next point: one should study critical thinking and logic, with a focus on fallacies, so that one can avoid committing them and thus acquiring wrong beliefs. It is also a good idea to know about cognitive biases so that one can try to compensate for them. A strong command of mathematics, especially statistics, also helps, since it is needed to assess most science, which nowadays relies on statistics.

Then one should spend a lot of time gathering good-quality evidence about subjects. Since it takes time to read, one should read the highest-quality material, in the sense that it offers the best evidence about the particular subject. This probably means starting with science journals and textbooks, to learn the things about the subject on which there is consensus.

Other relevant literature


1 As a side note, this is why pragmatic arguments for belief in something are not very useful. One cannot just will oneself to start believing in something that one thinks there is no evidence for, e.g. the Christian god. See

I found the essay through reading another text (a very common practice):

The above is actually also worth reading. It is a book review of a book with very unpopular ideas, but surely some good pieces among them. Here is the opening:

“SEATTLE, Washington – Now why would I be reviewing this book, which was published by the Neo-Nazi National Alliance and that I acquired via some shady Internet bookseller? After all, I am hardly the Aryan prototype.

For one thing, I’ve followed the Alliance for years, through some combination of curiosity, pity, and entertainment. I was listening to Dr. Pierce’s Internet broadcasts since the late 1990’s, what we now call “podcasts.” Pierce died in 2002 and the Alliance has undergone the usual tumult, power plays, and backbiting that occur whenever a personality-driven organization loses its leader.

Secondly, I have for years put effort into exposing myself to politically incorrect, unconventional, or unfashionable thought, as there is often some truth in there. According to Paul Graham, intelligent people tend to do this and it is overall a good thing.

Third, the author of Which Way Western Man?, William Gayley Simpson, looks like an agreeable chap.

So, I waded into the 1070-page treatise with full enthusiasm. The book consists of modified essays that Simpson originally wrote in the 1940s (and in most cases updated in the 1970s) and the writing has an erudite, early-20th-Century style to it. Simpson was born in 1892 and spent the 1910s and 1920s as a minister (he gave up the frock by 1918,) laborer, and general-purpose liberal, pacifistic Christian. Much of the book gives personal testament to his activities during these years and how they molded his worldview into what it was at the time of the writing(s). “

which is where I got the text I really want to promote from:

The blog, by the way, is also worth a closer look. The author seems to have stopped posting (last post August 2010), but it has a lot of posts to begin with.