Clear Language, Clear Mind

April 7, 2017

On crackpottery (or why I don’t think I’m a pseudoscientist)

Filed under: Science — Emil O. W. Kirkegaard @ 13:54

This post is unusually blunt because the topic concerns some rather serious criticism leveled against me. This necessitates replying with some facts that I’ve used for self-assessment purposes.

In case you missed it, my post on the mental and behavioral problems of kids with parents from different races generated some furor. I have already replied to the first thread, but there is a second one which is more serious in the presented evidence. It is also long and rambling, so reading it is annoying. Instead, I will summarize the case.

The crackpot case

My critic lists a number of reasons to think I’m a crackpot pseudoscientist. For the purpose of this post, the definition of this term is someone who appears to be doing science, but whose methods are so poor that the conclusions cannot be trusted to any degree. He gives a number of arguments, summarized below:

  1. I list a lot of interests and projects on my website. He thinks that science requires specialization and thus someone who has many diverse interests is spread out too thin and unlikely to be an expert on anything. Indeed, crackpots will often claim to be an expert on many things.
  2. Large fraction of solo papers (38/57, 67%). This indicates a lack of working relationship with other researchers.
  3. Co-authors are fairly unknown. He mentions John Fuerst and Julius Bjerrekær. John Fuerst actually has a publication in Intelligence as well. Julius has no other published work.
  4. High rate of self-citations. He bases this on my Google Scholar profile, which does not seem to provide numerical statistics about this. But I’d say he is probably right in his assessment that >50% of citations are self-citations. He takes this to indicate that “almost no one is reading or interested in his papers.”
  5. I don’t have a relevant degree, and no PhD at all. In fact, my degree is only a low-tier one — a bachelor’s — and it’s in an irrelevant field, linguistics.
  6. Most work is published in new or low-tier journals. He mentions the OpenPsych journals and The Winnower. The first set of journals is edited by myself, which is also suspicious. After all, creationists have their own journals too. He takes this to indicate that the work is so poor that no one else will have it.

If you heard these arguments about someone, what probability would you assign to that person being a pseudoscientist? Pretty high, maybe 95% or 99%. Still, that leaves 5% to 1% that it’s a false diagnosis.

The defense

I thought about how I would appear a long time ago when I started publishing science (my first paper is from 2013). I weighed the various benefits and costs of publishing in different journals, and ultimately decided to start a new open science publisher (with Davide Piffer) which I knew would not have high prestige any time soon. The rationale for this is simple: I think it is more important to optimize the scientific process than it is to be well-respected. This is a point that often comes up in discussion with my more traditional colleagues. Here’s an email from one colleague, who is a member of the editorial board of Intelligence:

Dear Emil,
You do good research and you are very dedicated to science,
e.g. founding scientific journals.
I do not know anybody else who is doing this.
You would be the ideal professor and scientist.
However, some clever adaptation to the system would be necessary.
1. Make a master.
2. Make a PhD.
3. Publish also (but not only) in reputable lefty outlets.
4. Choose also one mainstream research topic to promote your reputation.
5. Look for grants from standard money givers.

Solo papers

Solo papers can be a sign of pseudoscience, but they can also just be due to lone-wolfery. A large majority of Arthur Jensen’s papers were solo papers, and yet he was a great scientist.


For some reason, he ignores a number of my co-authors. Here are the others:

These are not conventional top researchers — say, professors at top 100 universities — but they are obviously not incompetent or insane people.

This should not be taken to mean that the above agree with my heterodox views.


Academia moves slowly. My research into HBD matters dates only from 2013 to 2017, giving a maximum of 4 years for people to start citing stuff. Given that most papers were published in fairly unknown outlets, it’s no surprise that most citations are self-citations. The reason for the large number of self-citations is simply that I publish a lot, and mostly publish stuff that builds on my own previous work. For instance, a long list of papers concern the performance of immigrant groups, and these papers naturally cite some of the earlier papers. What’s the alternative here? Ignore the previous research in order to seem less crackpotty? Only publish studies on diverse topics to avoid self-citations?

Social network

Scott Alexander, in a comment on the previous criticism thread, noted that:

Emil is definitely odd, but I notice he’s got some peer-reviewed publications co-authored with respected people in the field (example), his papers get cited in major journals, and he’s always talking to professors and PhD students on Twitter who seem to think he’s okay. I’m not going to say that SSC doesn’t have higher standards than peer-reviewed journals, because goodness knows we do, but I haven’t seen any reason to activate them here.

The social part is the key, and there is a lot of evidence available to those who look. The easiest method to examine researcher networks is to look at social networking sites for researchers, such as ResearchGate (RG). Here one can see everybody who follows a particular person. The clear prediction from the crackpot model is that serious researchers will ignore — not follow — such a person. What do the data show? I have 102 followers. These include a lot of well-respected, mainstream researchers, most of whom work in fields related to my research.

We can go further: we can look up all the editorial board members of Intelligence. This is a reasonable selection of world experts on the topic that I mostly study. There are 37 members of the board. How many of them follow me on RG? 1) Bates, 2) Coyle, 3) Karama, 4) Nijenhuis, 5) Wai. So about 14%. Of the ones on RG, about 50% follow me.

How many of them follow me on Twitter? 1) Bates, 2) Colom, 3) Conway, 4) Coyle, 5) Jung, 6) Karama, 7) Meisenberg, 8) Wai, 9) Wicherts. So about 24%. Of the ones on Twitter, something like 75% follow me.
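As a sanity check, the percentages above follow directly from the counts; a quick sketch (the board size of 37 and the name lists are the ones stated in this post):

```python
# Board-member follower fractions, using the counts from the post.
board_size = 37  # editorial board members of Intelligence, as stated above

rg_followers = ["Bates", "Coyle", "Karama", "Nijenhuis", "Wai"]
twitter_followers = ["Bates", "Colom", "Conway", "Coyle", "Jung",
                     "Karama", "Meisenberg", "Wai", "Wicherts"]

print(round(100 * len(rg_followers) / board_size))       # 14
print(round(100 * len(twitter_followers) / board_size))  # 24
```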

Follower status on Twitter and RG is stochastic. Some people don’t use a given service much and consequently do not follow many people at all (e.g. Gignac follows only 8 persons on Twitter). Their lack of following me is thus ambiguous evidence. Indeed, some people follow me on RG but not on Twitter and vice versa.

This should not be taken to mean that the above agree with my heterodox views.


My critic infers that no one reads my research based on the lack of citations from others. However, this might simply indicate that others don’t publish research on topics where they would need to cite my work. For instance, they might follow my work carefully, but avoid publishing in the area for strategic reasons. RG, however, does publicly display read statistics. My combined publications have 7.2k reads. Is this a lot? One can compare with other researchers on the site, who have similar numbers, so it seems that people do read my work.

Statistical competence

Not all my work concerns HBD. I’ve been making a collection of interactive visualizations of statistical concepts. Many people find these useful. The reception on e.g. /r/statistics is positive. Thus, it seems unlikely that my statistical competence is as low as my critic seems to think.


Crackpots don’t get asked to review papers for scientific journals, but:

I declined to review, and did so publicly. Why? The same reason I don’t publish in Intelligence: I hate Elsevier.

Edited: Someone told me it was rude not to anonymize this. Perhaps. It is too late now. My apologies to McDaniel and the unknown authors who had their abstract exposed (perhaps).

Private information

Some information is not available to others because it consists of email exchanges or private conversations I have with other scientists. For example, regarding the stereotype study, some comments from experts I sent the study to were:

“Thanks, good and important study.”

“Hey, thanks, Emil.  Amazing but not surprising — hell, your findings line up almost exactly with the conclusions we reached repeatedly in reviews pubbed in 2009, 2012, and 2015.”

“wow, this is super interesting. thanks.”

“Great study!”

I have many such comments spread out in various email exchanges with experts, many of whom are personal friends.

Publication in top journals and research quality

My critic writes:

So how’d he get published? I suspect many of you don’t realize how easy it is to produce a paper that looks scholarly enough, and how easy it is to get it published if you aim low enough. Forget about a third-tier journal, the lowest a “real” scientist will go to, what about a sixth or seventh-tier journal? Tenth-tier? Does it even go that low? These virtually never cited journals are literally less than worthless among experts, but impressive to the utterly ignorant.

Intelligence is the top journal for this field. Scott notes:

(also, you’re calling Intelligence a low-impact journal whereas I’ve previously seen it called high-impact (impact factor is 3.425, Wikipedia rates it 10th out of 120, these people rate it 24th out of 118). I mean, it isn’t Nature, but it’s a heck of a lot better than the sort of places I send my case studies to, and I’m proud of those case studies.)

However, it doesn’t matter so much because research quality seems to be either unrelated to journal impact factor or even negatively related:


Self-assessment is hard. When people are asked to estimate their own intelligence, their estimates correlate only about .33 with the measured scores, and most people overestimate their intelligence. Crackpots are essentially people who overestimate their own scientific ability and accomplishments by a very large amount. I don’t recall making any public statement about this matter, so I guess I will have to make a public self-assessment. I consider myself pretty competent with practical statistics, i.e. with such things as cleaning up data for analysis, choosing models to use, and interpreting results. Compared to similar researchers, I’m very productive, partially because I choose outlets where there is less wasted time. I don’t think I’m a genius, and I don’t compare myself favorably to Galileo, Einstein or Galton. My goal is to make a long list of substantial, solid empirical contributions, but I don’t expect to instigate some kind of revolution or paradigm change. So far, my contributions are substantial for two topics: 1) the performance of immigrant groups by country of origin, 2) associations of intelligence/cognitive ability in aggregate data. We will see what the future brings. I expect to do quite a lot of work in behavioral genetics and genomics in the next couple of years.

The conclusion is that researchers do in general think my work is interesting, but that my odd publication habits combined with interests in taboo/sensitive topics make me look like a crackpot pseudoscientist. I could ease this by publishing a few papers in some mainstream journals, getting a relevant degree, getting a relevant job, co-authoring with some big name people etc. I will be sending a few papers to legacy journals, so that I can get a special higher doctorate degree, and to make John Fuerst a little happier (this will let me call myself “Doctor Graveyard” because that is what my last name means!). The combination of a few more papers in standard journals with a degree should get rid of the worst accusations without seriously affecting my publication habits.

Updated: 27th May 2017

In reply to my tweet of this post, Rex Jung replied:

Rex is one of the two famous neuroscientist-intelligence researchers who came up with the widely supported P-FIT model. He’s also a board member of Intelligence.


For the paper count, I included everything listed on my site, so this includes talks at conferences and books.

December 15, 2016

Review: Climatology Versus Pseudoscience: Exposing the Failed Predictions of Global Warming Skeptics

Filed under: Climatology — Emil O. W. Kirkegaard @ 14:16

This book is written by one of the persons behind Skeptical Science, a website debunking pseudoscience in the area of climatology and global warming, much like Talk Origins debunks evolution-related pseudoscience. The book itself has more of a meta style and does not cover all arguments put forward by climate contrarians. After all, that is the purpose of the website. The book concerns itself primarily with exactly what the title says: finding predictions made by contrarians and by mainstream scientists (including the IPCC), and examining how well they held up. I think I can rule out any surprises: the contrarians’ predictions have not generally held up well, but the mainstream ones have.

In general, I have few complaints about the climatology discussed and also learned some new things, such as that standard models predict cooling in the upper atmosphere, which we do observe, while sun-driven models predict the opposite. Such diverging predictions allow for strong inference and are the way science should proceed, when possible, to decide between competing models.

I have two complaints about the book and was wondering whether to give it 3 or 4 stars on Goodreads. The low resolution of ratings on Goodreads is annoying because I often want to give a book a rating of 3.5 or 4.5. On a side note, the Good Judgment Project found that persons who use more granularity in their predictions were better predictors. I conjecture this is a general phenomenon: greater granularity reflects greater ability, both within an area and in general.

My first complaint is that the book is strongly US-centric. This is a common problem with science books, but with climatology there is little excuse, because climate contrarianism is not at all limited to the US. Furthermore, the US political system is quite peculiar compared to every other half-decent country because it has only two parties. The authors would have done well to consult with some non-US colleagues to discuss how the climatology debate goes on in countries like Germany (population 80 million, or 16% of the EU), the UK (64 million, 13%) and France (66 million, 13%). Instead, the book endlessly complains about US conservatives and Republicans. A related complaint is that the book has a very simple-minded conception of politics as being effectively 1-dimensional, whereas our data says “not really”, and especially not among the general population (unpublished large sample results, sorry!).

My second complaint is that because the authors are apparently only familiar with climatology, they use fairly strong wordings with regard to science denial in the Republican party, and the motivations of people who deny mainstream findings in this area. Motive speculation is generally a big no-no. Sure enough, climatology and biology are the usual targets of this political cluster, but that doesn’t mean the other political clusters do not also deny areas of science that conflict with their politics. I am of course very familiar with this topic because of my familiarity with topics that especially conflict with egalitarian ideology, primarily differential psychology and behavioral genetics, but also evolutionary psychology. Libertarians also frequently deny climatology findings because of their opposition to big regulation/government.

A related point is that because they focus on climatology, they don’t seem to realize that matters of science and consensus are not as simple as they seem to think. See discussion by Yudkowsky and Scott Alexander.

Finally, there is no mention of the rather extreme left-wing politics of academics and journalists. Just as conservatives and free-market people often deny climatology findings because they conflict with their politics of little or no regulation, there is also the possibility that left-wing people affirm the findings because they fit their politics. This question is not at all examined.

January 14, 2015

Scott O. Lilienfeld is a great researcher

Filed under: Psychology — Emil O. W. Kirkegaard @ 06:10

Some researchers are just more interesting to you than others. So when I find one who has written something very interesting, I attempt to find their other papers to see if they have produced more interesting stuff. Lilienfeld is another such person. He writes about science and pseudoscience with regard to psychology, especially clinical psychology. He has a number of papers on a variety of dubious ideas in psychology, such as repressed memory. He also writes about the public’s perception of psychology.

Pubmed lists 123 papers under his name, and Scholar lists 381 publications, so he is certainly quite productive. Here’s a collection of interesting material:

Of interest also are his books, of which I’ve already read two:

February 12, 2013

Educational Psychologist 41(4), the issue about multiple intelligence ‘theories’

Filed under: Differential psychology/psychometrics,Psychology — Emil O. W. Kirkegaard @ 07:35

December 17, 2012

Alan Sokal ebooks

Filed under: Metaphilosophy,Sociology — Emil O. W. Kirkegaard @ 19:29

I recently acquired these (yes, bought them). Enjoy!

Fashionable Nonsense, Postmodern Intellectuals’ Abuse of Science – Alan Sokal, Jean Bricmont

beyond the hoax – alan sokal

December 6, 2012

Review: Surely You’re Joking, Mr. Feynman! (Richard Feynman)

Filed under: Humor,Psychology — Emil O. W. Kirkegaard @ 06:50



this is a fun, easy to read book. i was told to read it by a friend. i read it to avoid doing the linguistics tests im supposed to do. useful procrastination ftw!


As usual, comments and quotes below



Another thing I did in high school was to invent problems and theorems. I mean, if I were doing any mathematical thing at all, I would find some practical example for which it would be useful. I invented a set of right-triangle problems. But instead of giving the lengths of two of the sides to find the third, I gave the difference of the two sides. A typical example was: There’s a flagpole, and there’s a rope that comes down from the top. When you hold the rope straight down, it’s three feet longer than the pole, and when you pull the rope out tight, it’s five feet from the base of the pole. How high is the pole?


tricky, but certainly doable for primary school children. the smart of them. im fairly certain that a lot of high school students wud not be able to solve this.
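For the curious, the puzzle has a neat closed form. If the rope is d feet longer than the pole (height h) and its taut end lands x feet from the base, then the rope length L satisfies L = h + d and L² = h² + x², so h = (x² − d²) / (2d). A quick check:

```python
# The flagpole puzzle solved exactly; d and x are the numbers from
# the quote: the rope exceeds the pole by 3 ft, and the taut rope
# end lands 5 ft from the base.
from fractions import Fraction

d, x = 3, 5
# L = h + d and L**2 == h**2 + x**2  =>  h = (x**2 - d**2) / (2*d)
h = Fraction(x**2 - d**2, 2 * d)
print(h)         # 8/3
print(float(h))  # about 2.67 feet -- a surprisingly short flagpole
```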



I tried to explain–it was my own aunt–that there was no reason not to do that, but you can’t say that to anybody who’s smart, who runs a hotel! I learned there that innovation is a very difficult thing in the real world.


truth! this is politics in a nutshell, any kind of politics: national, local, office…



The other guy’s afraid, so he says no. So I take the two girls in a taxi to the hotel, and discover that there’s a dance organized by the deaf and dumb, believe it or not. They all belonged to a club. It turns out many of them can feel the rhythm enough to dance to the music and applaud the band at the end of each number.

It was very, very interesting! I felt as if I was in a foreign country and couldn’t speak the language: I could speak, but nobody could hear me. Everybody was talking with signs to everybody else, and I couldn’t understand anything! I asked my girl to teach me some signs and I learned a few, like you learn a foreign language, just for fun.

Everyone was so happy and relaxed with each other, making jokes and smiling all the time; they didn’t seem to have any real difficulty of any kind communicating with each other. It was the same as with any other language, except for one thing: as they’re making signs to each other, their heads were always turning from one side to the other. I realized what that was. When someone wants to make a side remark or interrupt you, he can’t yell, “Hey, Jack!” He can only make a signal, which you won’t catch unless you’re in the habit of looking around all the time.


never thought of that, but true!



When it came time for me to give my talk on the subject, I started off by drawing an outline of the cat and began to name the various muscles.

The other students in the class interrupt me: “We know all that!”

“Oh,” I say, “you do? Then no wonder I can catch up with you so fast after you’ve had four years of biology.” They had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes.


ive heard this complaint lots of times about biology. i rather like evolutionary biology, which surely cannot be learned in 15 mins, but i dunno about plant cell biology or whatever. is biology mostly just remembering stuff? surely things like genetics, pop. genetics, evolutionary theory are hard.



At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o’clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, “And therefore such-and-such is true.”

“Why is that?” the guy on the couch asks.

“It’s trivial! It’s trivial!” the standing guy says, and he rapidly reels off a series of logical steps: “First you assume thus-and-so, then we have Kerchoff’s this-and-that; then there’s Waffenstoffer’s Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so . . .” The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, “Yeah, yeah. It’s trivial.”

We physicists were laughing, trying to figure them out. We decided that “trivial” means “proved.” So we joked with the mathematicians: “We have a new theorem–that mathematicians can prove only trivial theorems, because every theorem that’s proved is trivial.”


i thought of that befor. it makes certain theories of tautologies rather implausible. if tautologies, or necessary truths are all trivial, and just restating things – why arent they all obvius? …



One thing I never did learn was contour integration. I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me.

One day he told me to stay after class. “Feynman,” he said, “you talk too much and you make too much noise. I know why. You’re bored. So I’m going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that’s in this book, you can talk again.”


i wish my teachers wud hav don that to me! or that i had grown up with Khan academy!



In another experiment, I laid out a lot of glass microscope slides, and got the ants to walk on them, back and forth, to some sugar I put on the windowsill. Then, by replacing an old slide with a new one, or by rearranging the slides, I could demonstrate that the ants had no sense of geometry: they couldn’t figure out where something was. If they went to the sugar one way and there was a shorter way back, they would never figure out the short way.

It was also pretty clear from rearranging the glass slides that the ants left some sort of trail. So then came a lot of easy experiments to find out how long it takes a trail to dry up, whether it can be easily wiped off, and so on. I also found out the trail wasn’t directional. If I’d pick up an ant on a piece of paper, turn him around and around, and then put him back onto the trail, he wouldn’t know that he was going the wrong way until he met another ant. (Later, in Brazil, I noticed some leaf-cutting ants and tried the same experiment on them. They could tell, within a few steps, whether they were going toward the food or away from it–presumably from the trail, which might be a series of smells in a pattern: A, B, space, A, B, space, and so on.)

I tried at one point to make the ants go around in a circle, but I didn’t have enough patience to set it up. I could see no reason, other than lack of patience, why it couldn’t be done.


yes, that DOES happen by accident in nature.



So Frankel figured out a nice program. If we got enough of these machines in a room, we could take the cards and put them through a cycle. Everybody who does numerical calculations now knows exactly what I’m talking about, but this was kind of a new thing then–mass production with machines. We had done things like this on adding machines. Usually you go one step across, doing everything yourself. But this was different–where you go first to the adder, then to the multiplier, then to the adder, and so on. So Frankel designed this system and ordered the machines from the IBM company because we realized it was a good way of solving our problems.

We needed a man to repair the machines, to keep them going and everything. And the army was always going to send this fellow they had, but he was always delayed. Now, we always were in a hurry. Everything we did, we tried to do as quickly as possible. In this particular case, we worked out all the numerical steps that the machines were supposed to do–multiply this, and then do this, and subtract that. Then we worked out the program, but we didn’t have any machine to test it on. So we set up this room with girls in it. Each one had a Marchant: one was the multiplier, another was the adder. This one cubed–all she did was cube a number on an index card and send it to the next

We went through our cycle this way until we got all the bugs out. It turned out that the speed at which we were able to do it was a hell of a lot faster than the other way where every single person did all the steps. We got speed with this system that was the predicted speed for the IBM machine. The only difference is that the IBM machines didn’t get tired and could work three shifts. But the girls got tired after a while.
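What Feynman describes here is pipelining. A toy model (my own sketch of the idea, not his actual numbers) shows where the speedup comes from: with s stages and n cards, one person doing every step costs n·s steps, while the pipelined room, where each stage works on a different card simultaneously, finishes in s + n − 1 steps.

```python
# Toy model of the pipelined card room (my own sketch, not
# Feynman's actual setup).
def serial_steps(n_cards: int, n_stages: int) -> int:
    """One person performs every stage for each card in turn."""
    return n_cards * n_stages

def pipelined_steps(n_cards: int, n_stages: int) -> int:
    """Each stage (adder, multiplier, cuber, ...) works on a
    different card at the same time; after the pipeline fills,
    one card completes per step."""
    return n_stages + n_cards - 1

print(serial_steps(100, 5))     # 500
print(pipelined_steps(100, 5))  # 104
```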





Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches–if it’s an even number you do this, if it’s an odd number you do that–and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.





All during the war, and even after, there were these perpetual rumors: “Somebody’s been trying to get into Building Omega!” You see, during the war they were doing experiments for the bomb in which they wanted to get enough material together for the chain reaction to just get started. They would drop one piece of material through another, and when it went through, the reaction would start and they’d measure how many neutrons they got. The piece would fall through so fast that nothing should build up and explode. Enough of a reaction would begin, however, so they could tell that things were really starting correctly, that the rates were right, and everything was going according to prediction–a very dangerous experiment!


O_o, very dangerus experiment indeed!



That evening I went for a walk in town, and came upon a small crowd of people standing around a great big rectangular hole in the road–it had been dug for sewer pipes, or something–and there, sitting exactly in the hole, was a car. It was marvelous: it fitted absolutely perfectly, with its roof level with the road. The workmen hadn’t bothered to put up any signs at the end of the day, and the guy had simply driven into it. I noticed a difference: When we’d dig a hole, there’d be all kinds of detour signs and flashing lights to protect us. There, they dig the hole, and when they’re finished for the day, they just leave.





The meeting in Japan was in two parts: one was in Tokyo, and the other was in Kyoto. In the bus on the way to Kyoto I told my friend Abraham Pais about the Japanese-style hotel, and he wanted to try it. We stayed at the Hotel Miyako, which had both American-style and Japanese-style rooms, and Pais shared a Japanese-style room with me.

The next morning the young woman taking care of our room fixes the bath, which was right in our room. Sometime later she returns with a tray to deliver breakfast. I’m partly dressed. She turns to me and says, politely, “Ohayo, gozai masu,” which means, “Good morning.”

Pais is just coming out of the bath, sopping wet and completely nude. She turns to him and with equal composure says, “Ohayo, gozai masu,” and puts the tray down for us.

Pais looks at me and says, “God, are we uncivilized!”

We realized that in America if the maid was delivering breakfast and the guy’s standing there, stark naked, there would be little screams and a big fuss. But in Japan they were completely used to it, and we felt that they were much more advanced and civilized about those things than we were.


stupid puritanism and fear of nakedness.



There was a sociologist who had written a paper for us all to read–something he had written ahead of time. I started to read the damn thing, and my eyes were coming out: I couldn’t make head nor tail of it! I figured it was because I hadn’t read any of the books on that list. I had this uneasy feeling of “I’m not adequate,” until finally I said to myself, “I’m gonna stop, and read one sentence slowly, so I can figure out what the hell it means.”

So I stopped–at random–and read the next sentence very carefully. I can’t remember it precisely, but it was very close to this: “The individual member of the social community often receives his information via visual, symbolic channels.” I went back and forth over it, and translated. You know what it means? “People read.”

Then I went over the next sentence, and I realized that I could translate that one also. Then it became a kind of empty business: “Sometimes people read; sometimes people listen to the radio,” and so on, but written in such a fancy way that I couldn’t understand it at first, and when I finally deciphered it, there was nothing to it.


There was only one thing that happened at that meeting that was pleasant or amusing. At this conference, every word that every guy said at the plenary session was so important that they had a stenotypist there, typing every goddamn thing. Somewhere on the second day the stenotypist came up to me and said, “What profession are you? Surely not a professor.”

“I am a professor,” I said.

“Of what?”

“Of physics–science.”

“Oh! That must be the reason,” he said.

“Reason for what?”

He said, “You see, I’m a stenotypist, and I type everything that is said here. Now, when the other fellas talk, I type what they say, but I don’t understand what they’re saying. But every time you get up to ask a question or to say something, I understand exactly what you mean–what the question is, and what you’re saying–so I thought you can’t be a professor!”


yes, it is mor difficult to say somthing clearly than to obscure it.



There was a special dinner at some point, and the head of the theology place, a very nice, very

Jewish man, gave a speech. It was a good speech, and he was a very good speaker, so while it

sounds crazy now, when I’m telling about it, at that time his main idea sounded completely obvious

and true. He talked about the big differences in the welfare of various countries, which cause

jealousy, which leads to conflict, and now that we have atomic weapons, any war and we’re

doomed, so therefore the right way out is to strive for peace by making sure there are no great

differences from place to place, and since we have so much in the United States, we should give up

nearly everything to the other countries until we’re all even. Everybody was listening to this, and

we were all full of sacrificial feeling, and all thinking we ought to do this. But I came back to my

senses on the way home.


The next day one of the guys in our group said, “I think that speech last night was so good that

we should all endorse it, and it should be the summary of our conference.”

I started to say that the idea of distributing everything evenly is based on a theory that there’s

only X amount of stuff in the world, that somehow we took it away from the poorer countries in the

first place, and therefore we should give it back to them. But this theory doesn’t take into account

the real reason for the differences between countries–that is, the development of new techniques

for growing food, the development of machinery to grow food and to do other things, and the fact

that all this machinery requires the concentration of capital. It isn’t the stuff, but the power to make

the stuff, that is important. But I realize now that these people were not in science; they didn’t

understand it. They didn’t understand technology; they didn’t understand their time.


Sounds like sorryaboutcolonialism.


These inequalities are there because people are unequal to begin with. Even if we redistributed wealth, it wouldn’t take long before whites and Asians were superior again.



Once I was asked to serve on a committee which was to evaluate various weapons for the army,

and I wrote a letter back which explained that I was only a theoretical physicist, and I didn’t know

anything about weapons for the army.


The army responded that they had found in their experience that theoretical physicists were very

useful to them in making decisions, so would I please reconsider?

I wrote back again and said I didn’t really know anything, and doubted I could help them.

Finally I got a letter from the Secretary of the Army, which proposed a compromise: I would

come to the first meeting, where I could listen and see whether I could make a contribution or not.

Then I could decide whether I should continue.

I said I would, of course. What else could I do?

I went down to Washington and the first thing that I went to was a cocktail party to meet

everybody. There were generals and other important characters from the army, and everybody

talked. It was pleasant enough.


One guy in a uniform came to me and told me that the army was glad that physicists were

advising the military because it had a lot of problems. One of the problems was that tanks use up

their fuel very quickly and thus can’t go very far. So the question was how to refuel them as they’re

going along. Now this guy had the idea that, since the physicists can get energy out of uranium,

could I work out a way in which we could use silicon dioxide–sand, dirt–as a fuel? If that were

possible, then all this tank would have to do would be to have a little scoop underneath, and as it

goes along, it would pick up the dirt and use it for fuel! He thought that was a great idea, and that

all I had to do was to work out the details. That was the kind of problem I thought we would be

talking about in the meeting the next day.


I wonder… are they still so depressingly dumb?



This question of trying to figure out whether a book is good or bad by looking at it carefully or

by taking the reports of a lot of people who looked at it carelessly is like this famous old problem:

Nobody was permitted to see the Emperor of China, and the question was, What is the length of the

Emperor of China’s nose? To find out, you go all over the country asking people what they think

the length of the Emperor of China’s nose is, and you average it. And that would be very “accurate”

because you averaged so many people. But it’s no way to find anything out; when you have a very

wide range of people who contribute without looking carefully at it, you don’t improve your

knowledge of the situation by averaging.


Feynman seems to be wrong here, but he may have a point about the conditions under which wisdom-of-the-crowds averaging works.
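The condition in question is easy to demonstrate with a quick simulation (a sketch with made-up parameters, not a claim about any real data): averaging many independent, unbiased guesses cancels the noise, but averaging guesses that all share the same bias gives a very precise, very wrong answer — which is exactly the situation when nobody has looked carefully.

```python
import random

random.seed(0)
TRUE_NOSE_LENGTH = 4.2  # cm; the quantity nobody has actually measured

def average(xs):
    return sum(xs) / len(xs)

# Case 1: independent, unbiased guesses -- the noise cancels out.
unbiased = [TRUE_NOSE_LENGTH + random.gauss(0, 2) for _ in range(10_000)]

# Case 2: everyone shares the same prejudice ("emperors have long noses").
shared_bias = 3.0
biased = [TRUE_NOSE_LENGTH + shared_bias + random.gauss(0, 2)
          for _ in range(10_000)]

print(f"truth:            {TRUE_NOSE_LENGTH:.2f}")
print(f"unbiased average: {average(unbiased):.2f}")  # lands near the truth
print(f"biased average:   {average(biased):.2f}")    # precise but systematically off
```

Averaging more people shrinks the spread (the “very accurate” part of the story) but does nothing to a shared bias.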



I thought: “Now where is the ego located? I know everybody thinks the seat of thinking is in the

brain, but how do they know that?” I knew already from reading things that it wasn’t so obvious to

people before a lot of psychological studies were made. The Greeks thought the seat of thinking

was in the liver, for instance. I wondered, “Is it possible that where the ego is located is learned by

children looking at people putting their hand to their head when they say, ‘Let me think’? Therefore

the idea that the ego is located up there, behind the eyes, might be conventional!” I figured that if I

could move my ego an inch to one side, I could move it further. This was the beginning of my


Feynman didn’t do his research properly.

“During the second half of the first millennium BC, the Ancient Greeks developed differing views on the function of the brain. It is said that it was the Pythagorean Alcmaeon of Croton (6th and 5th centuries BC) who first considered the brain to be the place where the mind was located. In the 4th century BC Hippocrates believed the brain to be the seat of intelligence (based, among others before him, on Alcmaeon’s work). During the 4th century BC Aristotle thought that, while the heart was the seat of intelligence, the brain was a cooling mechanism for the blood. He reasoned that humans are more rational than the beasts because, among other reasons, they have a larger brain to cool their hot-bloodedness.”[2]



Other kinds of errors are more characteristic of poor science. When I was at Cornell, I often

talked to the people in the psychology department. One of the students told me she wanted to do an

experiment that went something like this–it had been found by others that under certain

circumstances, X, rats did something, A. She was curious as to whether, if she changed the

circumstances to Y, they would still do A. So her proposal was to do the experiment under

circumstances Y and see if they still did A.


I explained to her that it was necessary first to repeat in her laboratory the experiment of the

other person–to do it under condition X to see if she could also get result A, and then change to Y

and see if A changed. Then she would know that the real difference was the thing she thought she

had under control.


She was very delighted with this new idea, and went to her professor. And his reply was, no, you

cannot do that, because the experiment has already been done and you would be wasting time. This

was in about 1947 or so, and it seems to have been the general policy then to not try to repeat

psychological experiments, but only to change the conditions and see what happens.


Sadly, this is STILL the case!



So I have just one wish for you–the good luck to be somewhere where you are free to maintain

the kind of integrity I have described, and where you do not feel forced by a need to maintain your

position in the organization, or financial support, or so on, to lose your integrity. May you have that

freedom.




Feynman would have been sad to see the state of modern publish-or-perish science: the lack of replications in various fields, the publication bias, the near impossibility of politically incorrect science.

August 16, 2012

Thoughts and comments: Is psychology a science? (Paul Lutus)

Filed under: Psychology,Science — Tags: , — Emil O. W. Kirkegaard @ 12:39


In order to consider whether psychology is a science, we must first define our terms. It is not

an overstatement to say that science is what separates human beings from animals, and, as time goes by

and we learn more about our animal neighbors here on Earth, it becomes increasingly clear that

science is all that separates humans from animals. We are learning that animals have feelings,

passions, and certain rights. What animals do not have is the ability to reason, to rise above feeling.



The point here is that legal evidence is not remotely scientific evidence. Contrary to popular belief,

science doesn’t use sloppy evidentiary standards like “beyond a reasonable doubt,” and scientific

theories never become facts. This is why the oft-heard expression “proven scientific fact” is never

appropriate – it only reflects the scientific ignorance of the speaker. Scientific theories are always

theories, they never become the final and only explanation for a given phenomenon.


Meh. Sure is phil of sci 101 here.

Besides the confusing word usage “become facts” (wat), a scientific fact is just something that is beyond reasonable doubt and enjoys virtually unanimous agreement among the relevant scientists.

Apart from being filtered through all possible explanations, scientific theories have another

important property – they must make predictions that can be tested and possibly falsified. In fact,

and this may surprise you, scientific theories can only be falsified, they can never be proven true

once and for all. That is why they are called “theories,” as certain as some of them are – it is always

possible they may be replaced by better theories, ones that explain more, or are simpler, or that

make more accurate predictions than their forebears.


No, that is not why they are called “theories”; they are called “theories” because that’s the word for “explanation” in science.


Nothing can be “proven true once and for all” with absolute certainty. This is not specific to science.

It’s very simple, really. If a theory doesn’t make testable predictions, or if the tests are not practical,

or if the tests cannot lead to a clear outcome that supports or falsifies the theory, the theory is not

scientific. This may come as another surprise, but very little of the theoretical content of human

psychology meets this scientific criterion. As to the clinical practice of psychology, even less meets

any reasonable definition of “scientific.”


Nonsense. There have been many scientific theories that we could not figure out how to test to begin with, but we later did, and the tests either confirmed or disconfirmed the theories.

Human psychology and the related fields of psychoanalysis and psychotherapy achieved their

greatest acceptance and popularity in the 1950s, at which time they were publicly perceived as

sciences. But this was never true, and it is not true today – human psychology has never risen to the

status of a science, for several reasons


Derp. Conflation of psychoanalysis crap with good psychology.


Although, in his defense, he did somewhat announce this in the beginning:

Since its first appearance in 2003, this article has become required reading in a number of college-

level psychology courses. Because this article is directed toward educated nonspecialist readers

considering psychological treatment, students of psychology are cautioned that terms such as

“psychology,” “clinical psychology” and “psychiatry” are used interchangeably, on the ground that

they rely on the field of human psychology for validation, in the same way that astronomy and

particle physics, though very different, rely on physics for validation.

But as to the study of human beings, there are severe limitations on what kinds of

studies are permitted. As an example, if you want to know whether removing specific

brain tissue results in specific behavioral changes, you cannot perform the study on

humans. You have to perform it on animals and try to extrapolate the result to humans.


Eh. One can just look at case studies of people with brain injuries.


Besides, there are lots of studies that are allowed, and in the past we did some studies that probably would not be allowed today, say, the Milgram experiment or perhaps the Stanford prison experiment.

One of the common work-arounds to this ethical problem is to perform what are called

“retrospective studies,” studies that try to draw conclusions from past events rather than

setting up a formal laboratory experiment with strict experimental protocols and a

control group. If you simply gather information about people who have had a certain

kind of past experience, you are freed from the ethical constraint that prevents you from

exposing experimental subjects to that experience in the present.


But, because of intrinsic problems, retrospective studies produce very poor evidence

and science. For example, a hypothetical retrospective study meant to discover whether

vitamin X makes people more intelligent may only “discover” that the people who took

the vitamin were those intelligent enough to take it in the first place. In general,

retrospective studies cannot reliably distinguish between causes and effects, and any

conclusions drawn from them are suspect.


Think about this for a moment. In order for human psychology to be placed on a

scientific footing, it would have to conduct strictly controlled experiments on humans,

in some cases denying treatments or nutritional elements deemed essential to health (in

order to have a control group), and the researchers would not be able to tell the subjects

whether or not they were receiving proper care (in order not to bias the result). This is

obviously unethical behavior, and it is a key reason why human psychology is not a



He is just wrong. It is possible to distinguish between causes and effects; one has to do more studies of different kinds, and so on. It is difficult but not impossible.
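The vitamin-X example quoted above can be made concrete. In this sketch (every number is invented for illustration) the vitamin has zero effect on intelligence, yet a naive retrospective comparison still finds a “benefit,” because brighter people are more likely to take it in the first place:

```python
import random

random.seed(1)

population = []
for _ in range(100_000):
    iq = random.gauss(100, 15)
    # Selection effect: brighter people are more likely to take the vitamin.
    takes_vitamin = random.random() < (0.2 + 0.004 * (iq - 100))
    # The vitamin itself does NOTHING to iq in this simulation.
    population.append((iq, takes_vitamin))

takers = [iq for iq, t in population if t]
non_takers = [iq for iq, t in population if not t]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean IQ, vitamin takers: {mean(takers):.1f}")
print(f"mean IQ, non-takers:     {mean(non_takers):.1f}")
# The gap is pure selection: a retrospective study would wrongly credit the vitamin.
```

This is the confound Lutus describes, but it is a known, modelable problem — randomization, instrumental variables, and longitudinal designs exist precisely to untangle it, which is why “more studies of different kinds” is the right answer rather than giving up.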

The items listed above inevitably create an atmosphere in which absolutely anything

goes (at least temporarily), judgments about efficacy are utterly subjective, and as a

result, the field of psychology perpetually splinters into cults and fads (examples

below). “Studies” are regularly published that would never pass muster with a self-

respecting peer review committee from some less soft branch of science.


Another dumb conflation of psychology as a whole with some specific subfield, and the most dodgy of them all.

In an effort to answer the question of whether intelligence is primarily governed

by environment or genes, psychologist Cyril Burt (1883-1971) performed a

long-term study of twins that was later shown to be most likely a case of

conscious or unconscious scientific fraud. His work, which purported to show

that IQ is largely inherited, was used as a “scientific” basis by various racists and

others, and, despite having been discredited, still is.


1) The case against him seems rather weak.

2) His conclusions are very consistent with modern studies of the same thing.


See J. Philippe Rushton, “New Evidence on Sir Cyril Burt: His 1964 Speech to the Association of Educational Psychologists”.

In the 1950s, at the height of psychology’s public acceptance, neurologist Walter

Freeman created a surgical procedure known as “prefrontal lobotomy.” As

though on a quest and based solely on his reputation and skills of persuasion,

Freeman singlehandedly popularized lobotomy among U.S. psychologists,

eventually performing about 3500 lobotomies, before the dreadful consequences

of this practice became apparent.


At the height of Freeman’s personal campaign, he drove around the country in a

van he called the “lobotomobile,” performing lobotomies as he traveled. There

was plenty of evidence that prefrontal lobotomy was a catastrophic clinical

practice, but no one noticed the evidence or acted on it. There was — and is —

no reliable mechanism within clinical psychology to prevent this sort of abuse.


Ah yes, lobotomies. He seems to have missed ECT on his example list.


The last claim is clearly wrong.

These examples are part of a long list of people who have tried to use psychology to

give a scientific patina to their personal beliefs, perhaps beginning with Francis Galton

(1822-1911), the founder and namer of eugenics. Galton tried (and failed) to design

psychological tests meant to prove his eugenic beliefs. This practice of using

psychology as a personal soapbox continues to the present, in fact, it seems to have

become more popular.


What these accounts have in common is that no one was able (or willing) to use

scientific standards of evidence to refute the claims at the time of their appearance,

because psychology is only apparently a science. Only through enormous efforts and

patience, including sometimes repeating an entire study using the original materials, can

a rare, specific psychological claim be refuted. Such exceptions aside, there is ordinarily

no recourse to the “testable, falsifiable claims” criterion that sets science apart from

ordinary human behavior.


Galton was a very cool guy, and eugenics is alive and well today; we just call eugenic practices, like prenatal screening, something else (well, most people do).


Intelligence does actually seem to have fallen between Galton’s reaction time measurements and modern ones; cf. this post.

Some may object that the revolution produced by psychoactive drugs has finally placed psychology

on a firm scientific footing, but the application of these drugs is not psychology, it is pharmacology.

The efficacy of drugs in treating conditions once thought to be psychological in origin simply

presents another example where psychology got it wrong, and the errors could only be uncovered

using disciplines outside psychology.


It’s neither. It’s psychopharmacology.

To summarize this section, psychology is the sort of field that can describe things, but as shown

above, it cannot reliably explain what it has described. In science, descriptions are only a first step

— explanations are essential:

• An explanation, a theory, allows one to make a prediction about observations not yet made.

• A prediction would permit a laboratory test that might support or falsify the underlying


• The possibility of falsification is what distinguishes science from cocktail chatter.


A laboratory test? Perhaps geology isn’t a science either? It certainly has a history of crazy theories as well; try the Expanding Earth theory.

As with most professions, scientists have a private language, using terms that seem completely

ordinary but that convey special meaning to other scientists. For example, when a scientist identifies

a field as a “descriptive science,” he is politely saying it is not a science.


No… It means that it isn’t a causal science. Say, grammar is a descriptive science/subfield within linguistics.


Depending on whether we include non-empirical fields in science, there are also logic and math, which are formal, descriptive, noncausal fields.


But in another use of the word, it means something else, namely, descriptive as opposed to applied.

This seems an appropriate time (and context) to comment on psychology’s “bible”: the Diagnostic

and Statistical Manual of Mental Disorders and its companion, the International Classifications of

Diseases, Mental Disorders Section (hereafter jointly referred to as DSM). Now in its fourth edition,

this volume is very revealing because of its significance to the practice of psychology and

psychiatry and because of what it claims are valid mental illnesses.


These comparisons with religion (“bible”) are not very impartial. He would have helped his case if he had been more neutral in his word choice.


That’s not to say that the DSM, psychiatry, and the various diagnoses aren’t dodgy.

Putting aside for the moment the nebulous “phase of life problem,” excuse me? – “Sibling rivalry”

is now a mental illness? Yes, according to the current DSM/ICD. And few are as strict about

spelling as I am, but even I am not ready to brand as mentally ill those who (frequently) cannot

accurately choose from among “site,” “cite” and “sight” when they write to comment on my Web

pages. As to “mathematics disorder” being a mental illness, sorry, that just doesn’t add up.


Eh, they are probably referring to dyslexia, not the inability to distinguish various English homophones.

[table with the number of different diagnoses in the DSM over the years]

Based on this table and extrapolating into the future using appropriate regression methods, in 100

years there will be more than 3600 conditions meriting treatment as mental illnesses. To put it

another way, there will be more mental states identified as abnormal than there are known, distinct

mental states. In short, no behavior will be normal.


This doesn’t follow. It might be that the diagnoses are simply getting more and more specific. For instance, there are now quite a few different eating disorders and quite a few different schizophrenic disorders diagnosed. These just split the diagnoses into more categories without covering more, or much more, behavior.


There is also the possibility that future diagnoses will be more and more niche, covering less and less behavior. In that case, there won’t be any sharp increase.
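The extrapolation being criticized is easy to reproduce. Lutus doesn’t say what curve he fit; this sketch uses a simple linear least-squares fit on the approximate, often-quoted diagnosis counts for DSM-I through DSM-IV (illustrative figures, not his actual numbers or method):

```python
# Approximate, often-quoted diagnosis counts per DSM edition (illustrative only).
years = [1952, 1968, 1980, 1994]
counts = [106, 182, 265, 297]

# Ordinary least-squares fit of counts against year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(counts) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

def predict(year):
    return intercept + slope * year

print(f"fitted slope: {slope:.2f} new diagnoses per year")
print(f"naive forecast for 2100: {predict(2100):.0f} diagnoses")
# Extrapolating a century out assumes the trend never bends or splits --
# exactly the assumption questioned above.
```

Note the fit already hints at the problem: the last data point grows more slowly than the earlier ones, so “appropriate regression methods” is doing a lot of unstated work in the original claim.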

Many conditions have made their way into the DSM and nearly none are later removed.

Homosexuality was until recently listed as a mental illness, one believed to be amenable to

treatment, in spite of the total absence of clinical evidence. Then a combination of research findings

from fields other than psychology, and simple political pressure, resulted in the belated removal of

homosexuality from psychology’s official list of mental illnesses. Imagine a group of activists

demanding that the concept of gravity be removed from physics. Then imagine physicists yielding

to political pressure on a scientific issue. But in psychology, this is the norm, not the exception, and it is nearly always the case that the impetus for change comes from a field other than psychology.


Meh. Extrapolating much.

Does research honor the null hypothesis? The “null hypothesis” is a scientific precept

that says assertions are assumed to be false unless and until there is evidence to support

them. In scientific fields the null hypothesis serves as a threshold-setting device to

prevent the waste of limited resources on speculations and hypotheses that are not

supported by direct evidence or reasonable extrapolations from established theory.


Does psychology meet this criterion? Well, to put it diplomatically, if psychiatrist John

Mack of the Harvard Medical School can conduct a research program that takes alien

abduction stories at face value, if clinical psychologists can appear as expert witnesses

in criminal court to testify about nonexistent “recovered memories,” only to see their

clients vigorously deny and retract those “memories” later, if any imaginable therapeutic

method can be put into practice without any preliminary evaluation or research, then no,

the null hypothesis is not honored, and psychology fails Point B.


That’s not how the null hypothesis works. From Wiki:

The practice of science involves formulating and testing hypotheses, assertions that are capable of being proven false using a test of observed data. The null hypothesis typically corresponds to a general or default position. For example, the null hypothesis might be that there is no relationship between two measured phenomena[1] or that a potential treatment has no effect.[2]
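As the quoted passage says, a null hypothesis is a default position tested against data, not a gatekeeping rule about which research programs may exist. A minimal sketch of the standard usage, with invented data and a stdlib-only permutation test standing in for the usual t-test:

```python
import random
from statistics import mean

random.seed(2)

# Null hypothesis: the treatment has no effect (the default position).
control = [random.gauss(50, 10) for _ in range(200)]
treatment = [random.gauss(55, 10) for _ in range(200)]  # true effect of +5

observed_diff = mean(treatment) - mean(control)

# Permutation test: if the null were true, group labels would be exchangeable,
# so we shuffle the labels and see how often a difference this large appears.
pooled = control + treatment
n = len(control)
trials = 2_000
more_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[n:]) - mean(pooled[:n])
    if abs(diff) >= abs(observed_diff):
        more_extreme += 1

p_value = more_extreme / trials
print(f"observed difference: {observed_diff:.2f}")
print(f"p-value under the null: {p_value:.4f}")
# A small p-value is evidence against "no effect" -- that is all
# "honoring the null hypothesis" means in statistical practice.
```

Nothing in this machinery says anything about which hypotheses are respectable enough to investigate, which is the sense Lutus smuggles in.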

In response to my claim that evidence-based practice is to date an unrealized idea, a

psychologist recently replied that there is “practice-based evidence.” Obviously this

argument was offered in the heat of the moment and my correspondent could not have

considered the implications of his remark.


Practice-based evidence, to the degree that it exists, suffers from serious ethical and

practical issues. It fails an obvious ethical standard — if the “evidence” is coincidental

to therapy, a client will be unable to provide informed consent to be a research subject

on the ground that neither he nor the therapist knows in advance that he will be a

research subject. Let me add that a scenario like this would never be acceptable in

mainstream medicine (not to claim that it never happens), but it is all too common in

clinical psychology for research papers to exploit evidence drawn from therapeutic



What? Practice-based evidence is common in medicine. The reason is that we simply don’t know how well many commonly used treatments work. Cf. Bad Science.


Case studies are also very common, and useful.


Let’s compare the foregoing to physics, a field that perfectly exemplifies the interplay of

scientific research and practice. When I use a GPS receiver to find my way across the

landscape, every aspect of the experience is governed by rigorously tested physical

theory. The semiconductor technology responsible for the receiver’s integrated circuits

obeys quantum theory and materials science. The mathematics used to reduce satellite

radio signals to a terrestrial position honors Einstein’s relativity theories (both of them,

and for different reasons) as well as orbital mechanics. If any of these theories is not

perfectly understood and taken into account, I won’t be where the GPS receiver says I

am, and that could easily have serious consequences.


Yes, let’s compare it to a very dissimilar field. Psychology is a social science; the fields are very different.

I offer this mini-essay and this comparison because most of my psychological

correspondents have no idea what makes a field scientific. Many people believe that any

field where science takes place is ipso facto scientific. But this is not true — there is

more to science than outward appearances.


But physics is not a good field to compare with. The epistemology of physics is EASY compared with that of the social sciences, including psychology.

But this is all hypothetical, because psychology and psychiatry have never been based in science,

and therefore are free of the constraints placed on scientific theories. This means these fields will

prevail far beyond their last shred of credibility, just as religions do, and they will be propelled by

the same energy source — belief. That pure, old-fashioned fervent variety of belief, unsullied by

reason or evidence.



This essay feels like it was written by a physicist or someone like that who is disappointed that the same evidentiary standards are not used in other fields. He chose some kind of mix of psychology and psychiatry to blame, unfairly blaming the entire field of psychology when the problems are mostly within certain subfields.


He also displays a lack of knowledge about many of the things he mentions.


Mix it with a poor understanding of phil of sci, yeah.


So what is he? Well, read for yourself.

February 4, 2011

Review: Dick Taverne – The March of Unreason

Filed under: Politics,Science — Tags: , , , , — Emil O. W. Kirkegaard @ 07:06

317 pages. Can be found here. ETA: Apparently, that link got DMCA’d. Here is the book: The.March.of.Unreason.Science.Democracy.and.the.New.Fundamentalism

I thought it was an interesting read. It is one long rant against unreasonable people in various areas of life. It has chapters on alternative medicine, organic farming, GM crops, fundamentalist environmentalism, globalization, and reason and democracy. It has made me want to reconsider my views on environmentalism, Greenpeace, and the like, so I’ll be doing that in the near future (in a Danish essay). Sometimes it would have helped him if he knew a bit more about philosophy or science. Two examples: 1. He does not seem to know that in science the words “theory” and “fact” are used differently from their usage in ordinary English. He should have read something like this.

“There is a consensus among scientists that Darwin’s theory of natural selection is no longer a theory (whatever the creationists may say) but a true description of the way species evolved. But the scientific method itself involves critical examination and testing of every new hypothesis and many hypotheses will be replaced in time.” (p. 257)

2. He should have learned about Hume, reason and emotion when a critic threw this quote at him

“The National Gallery is a monument to irrationality! Every concert hall is a monument to irrationality!—and so is a nicely kept garden, or a lover’s favour, or a home for stray dogs. You stupid woman, if rationality were the criterion for things being allowed to exist, the world would be one gigantic field of soya beans!” (p. 286)

But overall it’s a good read if you think reason is important and that people are not reasonable enough. People with alternative views will not be convinced by this book because it is pretty one-sided. But then again, not much will convince someone who believes alternative medicine works or some such.

January 29, 2011

Review: Martin Gardner – Fads and fallacies in the name of science

Filed under: Uncategorized — Tags: , — Emil O. W. Kirkegaard @ 15:52

Wikipedia about it.

Download Martin Gardner – Fads and Fallacies in the Name of Science.

Thanks to the person who made this available to me via email. You know who you are if you’re reading this.

The book consists of a series of chapters about different pseudoscientific ideas. It’s a must-read for anyone interested in pseudoscience. It could use more references.
