This post is unusually blunt because the topic concerns some rather serious criticism leveled against me. This necessitates replying with some facts that I’ve used for self-assessment purposes.
In case you missed it, my post on the mental and behavioral problems of kids with parents from different races generated some furor. I replied to the first thread already, but there is a second one which presents more serious evidence. It is also long and rambly, so reading it is annoying. Instead, I will summarize the case.
The crackpot case
My critic lists a number of reasons to think I’m a crackpot pseudoscientist. For the purpose of this post, the definition of this term is someone who appears to be doing science, but whose methods are so poor that the conclusions cannot be trusted to any degree. He gives a number of arguments, summarized below:
- I list a lot of interests and projects on my website. He thinks that science requires specialization, and thus that someone with many diverse interests is spread too thin and unlikely to be an expert on anything. Indeed, crackpots often claim to be experts on many things.
- A large fraction of my papers are solo papers (38/57, 67%). This indicates a lack of working relationships with other researchers.
- My co-authors are fairly unknown. He mentions John Fuerst and Julius Bjerrekær. John Fuerst actually has a publication in Intelligence as well; Julius has no other published work.
- A high rate of self-citations. He bases this on my Google Scholar profile, which does not seem to provide numerical statistics on this. But I’d say he is probably right in his assessment that >50% of my citations are self-citations. He takes this to indicate that “almost no one is reading or interested in his papers.”
- I don’t have a relevant degree, and no PhD at all. In fact, my only degree is a low-tier one, a bachelor’s, and it’s in an irrelevant field, linguistics.
- Most of my work is published in new or low-tier journals. He mentions the OpenPsych journals and Winnower. The first set of journals is edited by me, which is also suspicious; after all, creationists have their own journals too. He takes this to indicate that the work is so poor that no one else will have it.
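On the self-citation point: Google Scholar does not report this share directly, but one can estimate it by listing the citing papers and checking for author overlap. A minimal sketch in Python; the citing papers below are hypothetical examples, not real data:

```python
# Sketch: estimating the self-citation share from a list of citing papers.
# The citing papers below are made-up illustrations; real data would come
# from one's Google Scholar profile.

def self_citation_share(citing_papers, own_name):
    """Fraction of citing papers that list the cited author as an author."""
    self_cites = sum(own_name in p["authors"] for p in citing_papers)
    return self_cites / len(citing_papers)

citing = [
    {"title": "Paper A", "authors": ["Kirkegaard"]},
    {"title": "Paper B", "authors": ["Kirkegaard", "Fuerst"]},
    {"title": "Paper C", "authors": ["Smith"]},
    {"title": "Paper D", "authors": ["Jones"]},
]

print(self_citation_share(citing, "Kirkegaard"))  # 0.5
```

A share above 0.5 on such a count is what the critic is claiming.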
If you heard these arguments about someone, what probability would you assign to that person being a pseudoscientist? Pretty high, maybe 95% or 99%. Still, that leaves a 1% to 5% chance that it’s a false diagnosis.
A long time ago, when I started publishing science (my first paper is from 2013), I thought about how I would appear. I weighed the various benefits and costs of publishing in different journals, and ultimately decided to start a new open science publisher (with Davide Piffer), which I knew would not have high prestige any time soon. The rationale is simple: I think it is more important to optimize the scientific process than to be well respected. This point often comes up in discussions with my more traditional colleagues. Here’s an email from one colleague, a member of the editorial board of Intelligence:
You do good research and you are very dedicated to science,
e.g. founding scientific journals.
I do not know anybody else who is doing this.
You would be the ideal professor and scientist.
However, some clever adaptation to the system would be necessary.
1. Make a master.
2. Make a PhD.
3. Publish also (but not only) in reputable lefty outlets.
4. Choose also one mainstream research topic to promote your reputation.
5. Look for grants from standard money givers.
Solo papers can be a sign of pseudoscience, but they can also just be due to lone-wolfery. A large majority of Arthur Jensen’s papers are solo papers, and yet he was a great scientist.
For some reason, he ignores a number of my co-authors. Here are the others:
- Jan te Nijenhuis. Psychologist, PhD, lecturer. Jan has tons of papers in mainstream journals, lots of citations, and sits on the editorial board of Intelligence.
- Bo Tranberg. Physicist, PhD student in physics/engineering.
- Birthe Jongeneel-Grimen. Epidemiologist, PhD.
- David Becker. Psychology student, BA. Has a few papers in mainstream journals. Only started publishing recently.
- Edward Dutton. Psychologist-anthropologist. Lots of publications and citations in both fringy and mainstream journals.
- Davide Piffer. Psychologist, PhD student. Quite a few papers in both fringy and mainstream journals.
These are not conventional top researchers (say, professors at top-100 universities), but they are obviously not incompetent or insane people.
This should not be taken to mean that the above agree with my heterodox views.
Academia moves slowly. My research into HBD matters dates only from 2013 to 2017, giving people a maximum of 4 years to start citing it. Given that most of the papers were published in fairly unknown outlets, it’s no surprise that most citations are self-citations. The reason for the large number of self-citations is simply that I publish a lot, and mostly publish work that builds on my own previous work. For instance, a long list of my papers concern the performance of immigrant groups, and these naturally cite some of the earlier papers. What’s the alternative? Ignore the previous research in order to seem less crackpotty? Publish only on diverse topics to avoid self-citations?
Scott Alexander, in a comment on the previous criticism thread, noted that:
Emil is definitely odd, but I notice he’s got some peer-reviewed publications co-authored with respected people in the field (example), his papers get cited in major journals, and he’s always talking to professors and PhD students on Twitter who seem to think he’s okay. I’m not going to say that SSC doesn’t have higher standards than peer-reviewed journals, because goodness knows we do, but I haven’t seen any reason to activate them here.
The social part is the key, and there is a lot of evidence available to those who look. The easiest way to examine researcher networks is to look at social networking sites for researchers, such as ResearchGate (RG). There one can see everybody who follows a particular person. The clear prediction from the crackpot model is that serious researchers will ignore, not follow, such a person. What do the data show? I have 102 followers. These include a lot of well-respected, mainstream researchers, most of whom work in fields related to my research.
We can go further: we can look up all the editorial board members of Intelligence. This is a reasonable selection of world experts on the topics I mostly study. There are 37 members of the board; how many of them follow me on RG? 1) Bates, 2) Coyle, 3) Karama, 4) Nijenhuis, 5) Wai. So about 14% (5/37). Of the ones on RG, about 50% follow me.
How many of them follow me on Twitter? 1) Bates, 2) Colom, 3) Conway, 4) Coyle, 5) Jung, 6) Karama, 7) Meisenberg, 8) Wai, 9) Wicherts. So about 24% (9/37). Of the ones on Twitter, something like 75% follow me.
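For concreteness, the percentages in the two paragraphs above can be reproduced with a few lines of Python, using the names and the board size of 37 as stated:

```python
# Editorial board members of Intelligence (37 total, as stated above)
# who follow me on each service.
board_size = 37

rg_followers = ["Bates", "Coyle", "Karama", "Nijenhuis", "Wai"]
twitter_followers = ["Bates", "Colom", "Conway", "Coyle", "Jung",
                     "Karama", "Meisenberg", "Wai", "Wicherts"]

rg_rate = len(rg_followers) / board_size       # 5/37, about 0.14
tw_rate = len(twitter_followers) / board_size  # 9/37, about 0.24

print(f"RG: {rg_rate:.0%}, Twitter: {tw_rate:.0%}")  # RG: 14%, Twitter: 24%
```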
Follower status on Twitter and RG is stochastic. Some people don’t use a given service much and consequently do not follow many people at all (e.g. Gignac follows only 8 people on Twitter). Their not following me is thus ambiguous evidence. Indeed, some people follow me on RG but not on Twitter, and vice versa.
This should not be taken to mean that the above agree with my heterodox views.
My critic infers that no one reads my research from the lack of citations by others. However, this might simply mean that others don’t publish research on topics where they would need to cite my work. For instance, they might follow my work carefully but avoid publishing in the area for strategic reasons. RG, however, does publicly display read statistics. My combined publications have 7.2k reads. Is this a lot? One can compare with other researchers on the site, who have similar numbers, so it seems that people do read my work.
Not all my work concerns HBD. I’ve been making a collection of interactive visualizations of statistical concepts, which many people find useful. The reception on e.g. /r/statistics has been positive. Thus, it seems unlikely that my statistical competence is as low as my critic seems to think.
Crackpots don’t get asked to review papers for scientific journals, but:
I declined to review, and did so publicly. Why? The same reason I don’t publish in Intelligence: I hate Elsevier.
Edited: Someone told me it was rude not to anonymize this. Perhaps. It is too late now. My apologies to McDaniel and the unknown authors who had their abstract exposed (perhaps).
Some information is not available to others because it consists of email exchanges or private conversations I have with other scientists. For example, regarding the stereotype study, some comments from experts I sent the study to were:
“Thanks, good and important study.”
“Hey, thanks, Emil. Amazing but not surprising — hell, your findings line up almost exactly with the conclusions we reached repeatedly in reviews pubbed in 2009, 2012, and 2015.”
“wow, this is super interesting. thanks.”
I have many such comments spread out in various email exchanges with experts, many of whom are personal friends.
Publication in top journals and research quality
My critic writes:
So how’d he get published? I suspect many of you don’t realize how easy it is to produce a paper that looks scholarly enough, and how easy it is to get it published if you aim low enough. Forget about a third-tier journal, the lowest a “real” scientist will go to, what about a sixth or seventh-tier journal? Tenth-tier? Does it even go that low? These virtually never cited journals are literally less than worthless among experts, but impressive to the utterly ignorant.
Intelligence is the top journal for this field. Scott notes:
(also, you’re calling Intelligence a low-impact journal whereas I’ve previously seen it called high-impact (impact factor is 3.425, Wikipedia rates it 10th out of 120, these people rate it 24th out of 118). I mean, it isn’t Nature, but it’s a heck of a lot better than the sort of places I send my case studies to, and I’m proud of those case studies.)
However, it doesn’t matter much, because research quality seems to be either unrelated or even negatively related to journal impact factor.
Self-assessment is hard. When people are asked to estimate their own intelligence, their estimates correlate only about .33 with measured scores, and most people overestimate their intelligence. Crackpots are essentially people who overestimate their own scientific ability and accomplishments by a very large amount. I don’t recall ever making a public statement about this matter, so I guess I will have to make a public self-assessment. I consider myself pretty competent with practical statistics, i.e. with such things as cleaning up data for analysis, choosing models to use, and interpreting results. Compared to similar researchers, I’m very productive, partially because I choose outlets where less time is wasted. I don’t think I’m a genius, and I don’t compare myself favorably to Galileo, Einstein or Galton. My goal is to make a long list of substantial, solid empirical contributions, but I don’t expect to instigate some kind of revolution or paradigm change. So far, my contributions are substantial on two topics: 1) the performance of immigrant groups by country of origin, and 2) associations of intelligence/cognitive ability in aggregate data. We will see what the future brings. I expect to do quite a lot of work in behavioral genetics and genomics in the next couple of years.
The conclusion is that researchers do in general find my work interesting, but that my odd publication habits, combined with interests in taboo/sensitive topics, make me look like a crackpot pseudoscientist. I could ease this by publishing a few papers in mainstream journals, getting a relevant degree, getting a relevant job, co-authoring with some big-name people, etc. I will be sending a few papers to legacy journals, both so that I can get a special higher doctorate degree and to make John Fuerst a little happier (this will let me call myself “Doctor Graveyard”, because that is what my last name means!). A few more papers in standard journals, combined with a degree, should get rid of the worst accusations without seriously affecting my publication habits.
Updated: 27th May 2017
In reply to my tweet of this post, Rex Jung replied:
@KirkegaardEmil Keep doing what you are doing – science winnows out false signals over time -academy (esp. publishing) is broken (but you should get a PhD)
— Rex Jung (@rexjung) April 7, 2017
Rex is one of the two famous neuroscientist-intelligence researchers who came up with the widely supported P-FIT model. He’s also a board member of Intelligence.
For the paper count, I included everything listed on my site, so this includes talks at conferences and books.