Kirkegaard, E. O. W., Bjerrekær, J. D., & Carl, N. (2017). Cognitive ability and political preferences in Denmark. Open Quantitative Sociology & Political Science, 1(1).
Various critics of Noah Carl are being asked to produce something that shows they have read and understood his research, and they appear to be struggling. In an attempt to rectify this, we have Danish researcher Stine Møllegaard, a professor at the University of Copenhagen, making some criticisms of our study on Twitter. (Note that she subsequently deleted all of these comments.) Unfortunately, she does not seem that familiar with research in this area, since her points do not make a lot of sense. Stine is a moderately qualified critic by virtue of her own field, sociology, but does not appear to have published anything on psychometrics and political attitudes (the topic of our paper).
Data
I had work to do. I am also very sceptical about the “danish data” used in multiple papers in OpenPsych, some of which Noah also co-authored. It requires quite a lot of digging to find the actual description of the data – and I find it curious. Have you read any of the papers?
A strange claim, considering that all the data are public, as are the surveys given to participants in our study. The link to the data is given both on the OpenPsych website and in the paper PDF:
https://osf.io/xdpcq/files/
Unknown pollster
“1) It’s collected via a service I’ve never heard of (and I got quite some experience with quantitative data research in Denmark). This is not critical in itself, but strange..”
Not particularly strange. Has Stine heard of every internet pollster? When we were looking for a data collection service, we in fact reached out to multiple Danish pollsters. There was a huge price difference between the options because some proposed phone or face-to-face interviews (expensive! one estimated 100k DKK). We settled on the relatively unknown Survee because it relied on online data collection, similar to widely used English-language platforms like MTurk or Prolific.
Representativeness
2) This service pays participants to participate – probably motivating some types of participants rather than others. How are basic demographic measures such as occupation, income, education, marital status related to the likelihood of being on such a site?
Saying that it is representative is a bit of a stretch, imo. This is further confirmed by the rather large number of participants the co-authors themselves note “… did not comply with instructions and filled out the questions seemingly at random.”
The sample is close to nationally representative on several metrics, as we reported by comparing it with data from the Danish statistics agency:
Because our sample was essentially a self-selected sub-set of another sample, it might be biased. As a check on representativeness, we calculated mean values of relevant variables for responders and non-responders from the original sample. As Table 1 indicates, responders were slightly younger, had slightly lower cognitive ability and were slightly less educated than non-responders. There was, therefore, some selection bias in responding to our survey. However, in all cases the differences were quite small (e.g. d = .23 for cognitive ability) and the subset was thus still fairly representative of the general population.
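For readers who want to run this kind of check themselves, here is a minimal sketch of how such a comparison can be computed. The file and column names are hypothetical (the published analysis was not produced with this exact script); the point is just the standardized mean difference (Cohen's d) between responders and non-responders:

```python
import numpy as np
import pandas as pd

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical data file: one row per person in the original sample,
# with a 'responded' flag marking those who took the follow-up survey.
df = pd.read_csv("original_sample.csv")
for var in ["age", "cognitive_ability", "education"]:
    d = cohens_d(df.loc[df["responded"] == 1, var],
                 df.loc[df["responded"] == 0, var])
    print(f"{var}: d = {d:.2f}")
```

Small absolute values of d, such as the .23 reported for cognitive ability, indicate only modest selection bias.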
She continues:
3) The authors “openly stated the purpose of the study in the introduction of the survey” which might further have contributed to some selection into who would want to participate in such a survey and their answers.
Would she rather we not state the purpose of the study? It's a rather odd criticism, since most studies give a general introduction at the beginning of the survey. She presents no evidence that this would cause meaningful selection bias.
Did responders understand the task?
4) The authors openly admit that their followup survey showed that a group of participants (size unknown) had misunderstood some of the questions, “however they may have been lying.”
🤔
There is ample research (see this random example) that representative surveys often have issues with responders not understanding what they are supposed to do. The standard approach in studies is to ignore this; we decided to investigate it instead. We went to the extra length of checking whether participants understood the tasks and excluding data from those who didn't.
The cognitive test
5) The authors use a measure of cognitive ability they themselves developed and validated – but on a completely different group: namely primary school students. I would be very surprised if primary school students are representative of 30-39 year old Danes in general.
The authors admit that “did not have other criteria variables than age and grade level to validate the test against”. A bit of a stretch to use it as a general measure of cognitive abilities.
Particularly a person (Noah) who has done research on intelligence should be more critical about how to measure cognitive ability.
Stine is not familiar with the test. It was in fact a Danish translation of the ICAR test, which has been validated on tens of thousands of people and used by numerous other researchers. We have used it not once but multiple times in Danish samples (with middle schoolers, high school students, and adults), and there is no evidence that it does not work as intended. Obviously, such a short test (9 items) will have fairly low reliability compared to multi-hour tests given in person, but that is the reality of survey data: we cannot feasibly give everybody a full Wechsler assessment. For comparison, there are hundreds of studies using the 10-item vocabulary test found in the ANES and GSS survey datasets. Thus, her criticism is not on target, and it applies equally well to hundreds of other studies by other researchers.
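To make the point about test length concrete, the Spearman-Brown prophecy formula (a standard psychometric result, not something taken from our paper) predicts how reliability changes when a test is lengthened. The reliability value below is purely illustrative, not the paper's estimate:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened by the given factor
    (Spearman-Brown prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Illustrative only: suppose a 9-item test has reliability around .60.
r_short = 0.60
for factor, n_items in [(2, 18), (4, 36), (8, 72)]:
    print(f"{n_items} items: predicted reliability = {spearman_brown(r_short, factor):.2f}")
```

The same logic explains why a 9-item battery or a 10-item vocabulary test is noisier than a full-length assessment, yet still usable as a cognitive measure in survey research.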
Funding
6) For a group of researchers publishing in their own journal in the spirit of open science, it is really curious that they cannot disclose who is funding their research; “This research was supported by two anonymous research contributions.”
Stine appears oblivious to the political bias of the field and the media. Suppose I got private funding from some person who was interested in immigration but not heavily involved in politics. I could report this, and social justice activists would then seek out that person and try to ruin their reputation by virtue of their association with me or the study. There is no legal mandate to report private funding, and no ethical mandate to do so. The only case where funding sources really need to be reported is research with commercial implications, which ours obviously does not have.
Controversial
I’m just saying… This is really not what I associate with well-conducted research. I would expect more of my students. Maybe I’m harsh? But when studying controversial questions, this is even more important, imo.
And here we get the truth: Stine has higher requirements for research she or other left-wingers don't like, which is what "controversial" means in this context. Selectively applied higher standards are the norm of political bias in science. There are various lines of evidence for this; one can, for example, read the recently edited book by Crawford and Jussim (2017) or the long target article by Duarte et al. (2015).
Real Peer Review
OK. Well then he shouldn’t have a problem publishing these papers in actual peer-reviewed journals? As in _not_ reviewed by his friends, but researchers in the field?
OpenPsych is in fact peer reviewed, so there is no need for the motte and bailey. The reviewers on this particular paper are listed on the website: Robert L. Williams, Peter Frost, and L. J. Zigerell. Zigerell is a professor of political science, Peter Frost is an anthropologist with numerous publications and a long-running interest in IQ research, and Williams is retired but well read on IQ research, as evidenced by a paper he previously published in Intelligence, the highest-impact journal in this field, run by Elsevier.