Some colleagues wrote a paper summarizing intelligence/cognitive ability gaps in the United Kingdom based on large samples. Many of the results were new, derived from government testing data obtained via Freedom of Information Act requests. The aim of such a meta-analysis is, of course, to get a broad overview of which gaps exist. From a blank slate perspective, this is important information to have because it allows one to track progress over time. It is also important for people looking for genetic causes, since those hypotheses predict stability over time given certain assumptions (such as no massive elite immigration into particular groups). The abstract:

We analyzed cognitive test score data for 83,155 British adults across six representative national samples (AMP, 2000; Skills for Life, 2003; AMP, 2007; UKHLS, 2011-13; PIAAC, 2012; MCS, 2015). Grand mean IQs by ethnicity, relative to a White mean set to 100, were: Jews, 107 (N = 77); Chinese, 98 (N = 131); South Asians, 89 (N = 3,712); and Blacks, 89 (N = 2,091). Limited data were available for other sub-ethnic groups. Substantial heterogeneity in scores existed depending on various factors, including the type of test (e.g., verbal, numeric), ethnic subgroup (e.g., Indian, Pakistani), age group, and migrant generation (e.g., foreign born, UK born). As to the latter, Blacks and Asians born in the UK scored about 6-7 points higher than ones born elsewhere. Because human well-being correlates strongly with measured IQ differences, we conclude that these effects warrant more detailed investigation. Keywords: Ethnicity, cognitive ability, UK

So, the findings are not terribly surprising, but we are in a replication crisis, so replication is what we need more of. The sample size is quite massive. The paper was submitted to MDPI’s Journal of Intelligence, which describes itself as:

The Journal of Intelligence (ISSN 2079-3200) is a peer-reviewed scientific journal that publishes original empirical and theoretical articles, state-of-the-art articles and critical reviews, case studies, original short notes, commentaries, and letters. Our aim is to offer an open access journal that moves forward the study of human intelligence: the basis and development of intelligence, its nature in terms of structure and processes, and its correlates and consequences, also including the measurement and modeling of intelligence. Related topics, such as artificial intelligence, and animal intelligence are welcomed as far as they shed light on human intelligence. We encourage authors to document their results in as much detail as possible. The full experimental and data analysis details must be provided so that the results can be reproduced and so that other researchers can better build on earlier work. Detailed descriptions of the procedure, software code and output of analyses must be made available in order to be deposited as supplementary material, unless it is in contradiction with privacy or security regulations or intellectual property rights. The journal will not publish articles that may lead to or enhance political controversies and the editors will judge whether that is the case.

That last line is, of course, explicit political selection of incoming papers, so the submission served as a test case. The reply came quickly:

Comments from our academic editor:

Thank you for submitting your work to the Journal of Intelligence.

Unfortunately, I believe there is an issue with your manuscript based on a principle we announce on the journal website. See the last sentence under Aims: “The journal will not publish articles that may lead to or enhance political controversies and the editors will judge whether that is the case.”

It is an estimate from my part that your article may lead to or enhance political controversies. I believe that the motivation as described in the introduction of your manuscript and the mixed language (performance, skills, cognitive ability, cognitive competency, IQ) used to refer to the findings may lead to interpretations and conclusions which are politically controversial. This is neither an evaluation of the quality of your work (which would require a scientific review process), nor am I saying that your results should not be communicated in some way. However, in my estimation, the manuscript as submitted does not fit with policy of the journal.

So I guess “our academic editor” means Matthias Ziegler (I suggest academics email him at matthias.ziegler@psychologie.hu-berlin.de to politely explain why this behavior is detrimental to science). He appears to have confirmed this himself on Twitter after we publicized this obvious political suppression:

(Archived at http://archive.vn/ovLh1)

Conchobar is an unrelated user. Apparently, Matthias can read minds from a third-party user quoting his own words; this tells him about intentions, and intentions are apparently the deciding factor here (on the side of the angels or not?). Without apparent irony, he describes his own political rejection as being about neutrality. We can work out the logic:

  1. He wants to avoid politics.
  2. Whatever is controversial is political.
  3. This paper would be controversial.
  4. So this paper is political, not neutral.
  5. Political papers should not be published.

The assumption here, of course, is that controversial material is itself political, rather than merely provoking a political reaction. Given the massive left-wing bias of academics, being “controversial” in practice means “disliked by left-wingers,” so by implication “neutral” actually refers to works friendly to left-wing views, since these provoke no controversy among left-wingers.

I post these things because I am tired of the suppression doublespeak. While some academics openly advocate suppression (many examples are given in Cofnas 2019 and Meisenberg 2019), some of their friends simultaneously advocate the view that there is no suppression. Which is it? So here we document cases for posterity, for all to see, and show that suppression is real. Sunlight is the best disinfectant, and this is political corruption of the scientific process.