The role of censorship in science

A new many-author paper makes the case that censorship is rampant in science:

Science is among humanity’s greatest achievements, yet scientific censorship is rarely studied empirically. We explore the social, psychological, and institutional causes and consequences of scientific censorship (defined as actions aimed at obstructing particular scientific ideas from reaching an audience for reasons other than low scientific quality). Popular narratives suggest that scientific censorship is driven by authoritarian officials with dark motives, such as dogmatism and intolerance. Our analysis suggests that scientific censorship is often driven by scientists, who are primarily motivated by self-protection, benevolence toward peer scholars, and prosocial concerns for the well-being of human social groups. This perspective helps explain both recent findings on scientific censorship and recent changes to scientific institutions, such as the use of harm-based criteria to evaluate research. We discuss unknowns surrounding the consequences of censorship and provide recommendations for improving transparency and accountability in scientific decision-making to enable the exploration of these unknowns. The benefits of censorship may sometimes outweigh costs. However, until costs and benefits are examined empirically, scholars on opposing sides of ongoing debates are left to quarrel based on competing values, assumptions, and intuitions.

The title is amusing. Prosocial concerns underlie censorship? That’s surprising! Wait a minute, when was the last time a dictator announced some heinous policy and explicitly justified it using antisocial motives? Well, never? In other words, whether some policy is advocated based on prosocial concerns, whether truly so or merely claimed, doesn’t really tell us anything. Almost all humans have prosocial concerns.

Yet the title is also apt, because censorship in academia has mostly been justified in a quasi-utilitarian framework in which usually unidentified and unproven harms are postulated to result from some research being done or published. For instance, every time a nationalist lunatic goes on a shooting spree, the 4chan archives and his browser history are searched for links to bad influences, which can then become targets for censorship. Michael Woodley, for instance, was the subject of the following headline in the New York Times after the Buffalo shooting:

In reality, it is of course doubtful whether such research has any notable causal effect in encouraging mass shooters. Woodley’s 2010 paper is a fairly obscure (27 citations) technical review about subdivisions (races) of humans, not exactly something that would motivate anyone to do anything drastic (it would probably put most people to sleep). It did, however, provide a convenient attack point. As a result of the censorship campaign, Woodley lost his honorary position at Vrije Universiteit Brussel.

Cases like this are usually used by egalitarian censors to justify banning research on race differences — that is, research that doesn’t toe the party line of blaming Europeans for everything. One can find dozens of such calls for censorship. Amusingly, one such call is by the second author of the paper! Here’s Lee Jussim on Twitter in 2019 calling for a moratorium on race, genetics, and intelligence research:

To put this into perspective, the new paper helpfully provides a taxonomy:

Jussim’s proposed censorship is a kind of self-imposed soft censorship: researchers should simply recognize the harms of this work and not do it. History shows that this approach rarely leads anywhere. On the other hand, we might ask: is there any research we should censor? One could take a die-hard stance and say “no”, but there are many examples of research that should not have been done. For instance, COVID-19 most likely arose from joint US-Chinese research into highly contagious coronaviruses, funded by our own governments. This line of work involved deliberately inducing mutations in viruses (gain of function) to see whether we could make them more infective (answer: yes!). As a result of this research, an engineered virus eventually escaped from a lab in Wuhan, China, and killed some 7 million people around the world. Was it worth it? I don’t think so. Even if you don’t accept the lab leak theory, there are other research topics, such as easy-to-make nuclear weapons, that it is ill-advised for anyone to pursue. Clearly, then, we should agree in principle that some potential research should not be done. The hard question is: who gets to decide what is OK and what is not? What are the criteria? The answer is that the powers that be decide. So if the powers that be in science, and in society at large, are egalitarian/Woke, they naturally think that everything related to race and intelligence research is dangerous and should be banned. For more on the ethics of race science, see:

I don’t want to argue this point at length again. Naturally, I think this research is good and indeed important to do. I want instead to make the more general point that the perception of harms is strongly related to one’s religious, moral, and ideological positions. Religious people are very concerned about increases in godlessness. How could they not be? If you thought that nonbelievers literally go to hell to suffer eternal torment, you too would be wise to advocate banning anything that could lead in that direction, such as the teaching of evolution. Similarly, communists think that capitalism is a great evil, so they advocate censorship of anything that makes it tempting. Various conservative dictatorships have likewise suppressed communist ideas in their countries.

If we remove ourselves from the present situation and think more generally, then, we know that strong belief in this or that religious, moral, or ideological idea leads to an interest in reducing the perceived harms. In the case of religion, we now know that evolution is true, and it was a good thing that it was not censored, at least insofar as enlightenment principles are concerned, given the great advances in biological technology this led to. If we want to pursue an enlightened society, as I do, then we have to be very, very strict about which topics we ban, because we might be wrong, as many have been in the past. Communists were wrong to suppress capitalist ideas. Capitalism is awesome, communism sucks. Their prosocially motivated censorship cost millions of lives and retarded human progress for many decades (and still does in Venezuela, China, Vietnam, North Korea, etc.). Before advocating any censorship, we should be very sure we are not repeating one of these mistakes. The bar must be very, very high. This censorship cautiousness, then, is similar to the standard the U.S. Supreme Court has reached for free speech in general. Namely, speech may be censored only if:

  1. The speech is “directed to inciting or producing imminent lawless action,” AND
  2. The speech is “likely to incite or produce such action.”

Nothing in science reaches this level of danger, so nothing would be censorable under US free speech law. As we saw above, though, there are some things that should be banned, or at least extremely limited. What criteria could be used to determine what is OK and what is not? I don’t think anyone has really thought hard enough about this to come up with any such principles. That’s why almost everybody who advocates censorship talks vaguely about harms. In the case of race science, the purported harms are 1) occasional mass shootings of non-Europeans, and 2) increases in anti-non-European sentiment. As it happens, such shootings are very rare and not an important threat to society, even if they could be partially attributed to race science (which I don’t think they can). Nor is there strong and growing anti-non-European or anti-African sentiment that we have to fear increasing. In fact, Africans already enjoy rather extreme legal favoritism in many Western countries (“affirmative action”). No purported harm of race science comes close to the clear cases of the lab leak (7M deaths) or easy-access nuclear weapons (0 deaths so far!). The same is true of the purported harms of research on transsexuals, homosexuals, the homeless, and so on.

But what is often missing from these debates is the other side of the calculation. We also have to ask: what are the benefits of the research? The stated benefit of gain-of-function research into coronaviruses was to be better prepared for the next epidemic. That’s a good thing, but unfortunately the attempt to prepare ended up causing the epidemic instead. It is harder to see what the benefits of easily available nuclear weapons might be; one would have to appeal to some extreme version of mutually-assured-destruction peace theory. The people who call for censorship of various scientific topics practically never meet the standard of considering the benefits too; they only consider the purported harms. This results in an unduly negative assessment. The 3 papers I mentioned above go into detail about the benefits, and Arthur Jensen also pointed them out 50 years ago. Briefly stated, knowing what causes social inequalities lets us develop better policies to reduce them if we want to, and, more importantly, lets us avoid blaming others for things they aren’t responsible for. The latter is the most important, as European peoples are currently being blamed for just about everything bad in the world, just as men are blamed for women’s relative lack of achievement in certain areas, and normal people for the problems of transsexuals, and so on. This produces a toxic and unfair society. The massive Black Lives Matter riots were based on false conclusions about the causes of racial inequality and caused at least 1-2 billion USD in damages. But that’s peanuts compared to the spending on various government projects to remove the inequalities, most of which seem to have no effect at all. There are also the deaths and looting caused by reductions in policing. Many more things could be listed. How’s that for harms of censorship?

Leaving the ethics discussion, let’s return to the new paper. How common are pro-censorship sentiments among scientists? Unfortunately, very prevalent and growing:

Surveys of US, UK, and Canadian academics have documented support for censorship (98). From 9 to 25% of academics and 43% of PhD students supported dismissal campaigns for scholars who report controversial findings, suggesting that dismissal campaigns may increase as current PhDs replace existing faculty. Many academics report willingness to discriminate against conservatives in hiring, promotions, grants, and publications, with the result that right-leaning academics self-censor more than left-leaning ones (40, 75, 99, 103).
A recent national survey of US faculty at four-year colleges and universities found the following: 1) 4 to 11% had been disciplined or threatened with discipline for teaching or research; 2) 6 to 36% supported soft punishment (condemnation, investigations) for peers who make controversial claims, with higher support among younger, more left-leaning, and female faculty; 3) 34% had been pressured by peers to avoid controversial research; 4) 25% reported being “very” or “extremely” likely to self-censor in academic publications; and 5) 91% reported being at least somewhat likely to self-censor in publications, meetings, presentations, or on social media (48).
A majority of eminent social psychologists reported that if science discovered a major genetic contribution to sex differences, widespread reporting of this finding would be bad (67). In a more recent survey, 468 US psychology professors reported that some empirically supported conclusions cannot be mentioned without punishment (40), especially those that unfavorably portray historically disadvantaged groups. A majority of these psychology professors reported some reluctance to speak openly about their empirical beliefs and feared various consequences if they were to do so. Respondents who believed taboo conclusions were true self-censored more, suggesting that professional discourse is systematically biased toward rejecting taboo conclusions. A minority of psychologists supported various punishments for scholars who reported taboo conclusions, including terminations, retractions, disinvitations, ostracism, refusing to publish their work regardless of its merits, and not hiring or promoting them. Compared to male psychologists, female psychologists were more supportive of punishments and less supportive of academic freedom, findings that have been replicated among female students and faculty (48, 98, 104–106).

Which topics are considered most in need of censorship?

In the bottom-middle panel, we find the topics of the targeted speech. Race is by far the most targeted topic, the strongest taboo. This is amusing, because some egalitarians recently published a hilarious paper claiming that there is no such taboo:

Recent discussions have revived old claims that hereditarian research on race differences in intelligence has been subject to a long and effective taboo. We argue that given the extensive publications, citations, and discussions of such work since 1969, claims of taboo and suppression are a myth. We critically examine claims that (self-described) hereditarians currently and exclusively experience major misrepresentation in the media, regular physical threats, denouncements, and academic job loss. We document substantial exaggeration and distortion in such claims. The repeated assertions that the negative reception of research asserting average Black inferiority is due to total ideological control over the academy by “environmentalists,” leftists, Marxists, or “thugs” are unwarranted character assassinations on those engaged in legitimate and valuable scholarly criticism.

They do get points for writing the most delusional paper in years. One might wonder if they were even being honest.

From the perspective of the ultimate goal of science, finding truth, censorship of course has very negative consequences. The authors illustrate this:

Due to various kinds of censorship on a topic, the academic peer-reviewed (that is, censor-approved) literature may be very misleading as to what research has actually found, and ultimately what the truth is. For instance, with regard to skin color discrimination, an industry exists publishing papers showing how people with darker skin are worse off (“colorism”). But the few critical studies that used causally informative designs are not mentioned, and barely exist. If color discrimination were causing lower income through, for instance, hiring discrimination, we would expect sibling differences in skin color to still predict income differences. But they don’t:

More generally, it is not clear that colorism is actually a potent force, at least in the USA. Consider research based on sibling designs, which can distinguish between discriminatory and intergenerational effects. A number of studies in the economics literature have utilized sibling control designs in this fashion [81,82,83,84,85,86]. Unfortunately, they differ somewhat in design (e.g., raw vs. SES-controlled results for between-family regressions), and do not report standardized effect measures, so we were unable to quantitatively meta-analyze them. However, generally speaking, when family characteristics are controlled for, residual associations between racial appearance and social outcomes are small. In the words of one researcher who studied a large dataset from Brazil: “[T]he estimated coefficients are small in magnitude, implying that individual discrimination is not the primary determinant of interracial disparities. Instead, racial differences are largely explained by the family and community that one is born into” [81]. Mill and Stein [83] make statements to the same effect based on an analysis of a large dataset from the USA.
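The logic of the sibling-control design quoted above can be illustrated with a toy simulation. Everything here is hypothetical data invented for illustration, not from the cited studies: if skin tone and income both track family background but tone has no direct effect, a naive between-person regression shows an association, while a sibling-difference regression does not.

```python
# Toy sketch of a sibling-control design (simulated data, not real studies).
import numpy as np

rng = np.random.default_rng(0)
n_families = 5000

# Family-level background drives both average skin tone and income.
family_ses = rng.normal(0, 1, n_families)

# Two siblings per family: tone = family component + individual variation.
tone = family_ses[:, None] + rng.normal(0, 1, (n_families, 2))

# Income depends on family background only: zero direct effect of tone.
income = 2.0 * family_ses[:, None] + rng.normal(0, 1, (n_families, 2))

# Naive between-person regression: tone appears to "predict" income,
# because both are confounded by family background.
naive_slope = np.polyfit(tone.ravel(), income.ravel(), 1)[0]

# Sibling-difference regression: regress the income gap between siblings
# on their tone gap. Family background cancels out of both differences.
d_tone = tone[:, 0] - tone[:, 1]
d_income = income[:, 0] - income[:, 1]
sib_slope = np.polyfit(d_tone, d_income, 1)[0]

print(f"naive slope:   {naive_slope:.2f}")   # substantial (confounded)
print(f"sibling slope: {sib_slope:.2f}")     # near zero (no direct effect)
```

In this setup the naive slope is large purely through confounding, and the within-family slope is near zero, which is the pattern the quoted studies report.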

If politicians use majority science (“scientific consensus”) as their guide to truth for policy making — what other choice do they have? — then social policies will be based on false conclusions. Numerous programs will be started to remove the non-existent discrimination. The Australian government funded one such project, which included a randomized trial:

This study assessed whether women and minorities are discriminated against in the early stages of the recruitment process for senior positions in the APS, while also testing the impact of implementing a ‘blind’ or de-identified approach to reviewing candidates. Over 2,100 public servants from 14 agencies participated in the trial. They completed an exercise in which they shortlisted applicants for a hypothetical senior role in their agency. Participants were randomly assigned to receive application materials for candidates in standard form or in de-identified form (with information about candidate gender, race and ethnicity removed). We found that the public servants engaged in positive (not negative) discrimination towards female and minority candidates:

• Participants were 2.9% more likely to shortlist female candidates and 3.2% less likely to shortlist male applicants when they were identifiable, compared with when they were de-identified.

• Minority males were 5.8% more likely to be shortlisted and minority females were 8.6% more likely to be shortlisted when identifiable compared to when applications were de-identified.

• The positive discrimination was strongest for Indigenous female candidates who were 22.2% more likely to be shortlisted when identifiable compared to when the applications were de-identified.

Interestingly, male reviewers displayed markedly more positive discrimination in favour of minority candidates than did female counterparts, and reviewers aged 40+ displayed much stronger affirmative action in favour of both women and minorities than did younger ones. Overall, the results indicate the need for caution when moving towards ‘blind’ recruitment processes in the Australian Public Service, as de-identification may frustrate efforts aimed at promoting diversity.
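The trial’s core estimate is simple: for each candidate group, compare the shortlist rate under identifiable review with the rate under de-identified review. A minimal sketch of that comparison (the function name and the counts below are invented for illustration, not the trial’s data):

```python
# Hypothetical sketch of the estimand in a randomized de-identification
# trial: the percentage-point gap in shortlist rates between conditions.
def discrimination_estimate(shortlisted_id, n_id, shortlisted_deid, n_deid):
    """Positive result: reviewers favour the group when identity is visible;
    negative result: reviewers disfavour it."""
    rate_id = shortlisted_id / n_id          # shortlist rate, identifiable
    rate_deid = shortlisted_deid / n_deid    # shortlist rate, de-identified
    return 100 * (rate_id - rate_deid)

# Invented example counts: 300/1000 shortlisted when identifiable,
# 240/1000 when de-identified, i.e. positive discrimination of +6 points.
gap = discrimination_estimate(300, 1000, 240, 1000)
print(f"{gap:+.1f} percentage points")
```

A positive gap, as the trial found for female and minority candidates, means removing identity information *lowers* their shortlisting odds, which is exactly why the report warned against blind recruitment.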

A truly amazing conclusion. The study showed that the purported discrimination against women and non-Europeans not only did not exist, but actually ran in their favor. Yet somehow the policy goal of increasing the numbers of women and non-Europeans must be kept, and reached by other methods. Imagine a study of suspected poisoning in some drinking water reservoirs which found that the water is actually clean, but concluded that we must find other ways to improve the water quality.

Where does this leave us? What can be done to decrease censorship in science? Well, change the demographics. As the authors report “younger, more left-leaning, and female faculty” are more censorious than average, so their numbers in academia should be reduced. Easier said than done, of course. We might hope that younger people grow wiser with age, so that this problem solves itself with time. On the other hand, young people are an important driving force of new ideas, so we cannot simply relegate science to a club of 60 year olds. Increasing meritocracy in science will reduce the proportion of females and non-Europeans — the most left-wing groups — as they are benefiting from government mandated discrimination, so that one is ‘easy’. I don’t know what to do with ideology itself, as leftists are probably higher in academic talent in general, so one cannot reduce their proportion through any strictly meritocratic mechanisms. I would guess that the main thing supporters of the enlightenment can do is to support legal changes that increase meritocracy in science in general. For that, Richard Hanania’s recent book on Wokeness is the go-to for ideas.