Publication bias as normally considered is really positive publication bias: the bias is away from zero, towards finding larger-than-reality results. There is, however, another, rarer form, called reverse publication bias, or negative publication bias, where published results are biased towards the null. Both patterns result from researchers' own preferences for having something noteworthy to report, wanting to publish it, and getting through peer and editorial review. All of these are related to the political ideology of the researchers and of their academic environment (self-censorship). Thus, due to the massive political ideology skew in social science, we expect:
- Positive publication bias for left-wing-friendly results and for results in general
- Negative/reverse publication bias for left-wing-unfriendly results
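To make the mechanism concrete, here is a minimal simulation sketch (the true effect size, sample sizes, and publication probabilities are all assumptions chosen for illustration): every study estimates the same true effect, but studies that find larger effects are less likely to be published, so the published mean is pulled towards the null.

```python
import random
import statistics

random.seed(42)

TRUE_D = 0.4        # assumed true standardized effect size
N_PER_GROUP = 50    # per-group sample size in each simulated study
N_STUDIES = 2000

def simulate_study():
    """Run one two-group study and return the observed Cohen's d."""
    g1 = [random.gauss(TRUE_D, 1) for _ in range(N_PER_GROUP)]
    g2 = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    sd = statistics.stdev(g1 + g2)  # rough pooled SD
    return (statistics.mean(g1) - statistics.mean(g2)) / sd

all_d = [simulate_study() for _ in range(N_STUDIES)]

# Reverse publication bias: small/null estimates are published 90% of
# the time, larger estimates only 30% of the time (assumed rates).
published = [d for d in all_d
             if random.random() < (0.9 if abs(d) < 0.2 else 0.3)]

print(f"mean d, all studies:       {statistics.mean(all_d):.2f}")
print(f"mean d, published studies: {statistics.mean(published):.2f}")
```

The published mean comes out noticeably below the true effect, even though no individual study is biased; the selection alone does the work.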
Below I list the examples I know of.
GPA and intelligence
Roth, B., Becker, N., Romeyke, S., Schäfer, S., Domnick, F., & Spinath, F. M. (2015). Intelligence and school grades: A meta-analysis. Intelligence, 53, 118-137.
— Emil O W Kirkegaard (@KirkegaardEmil) March 20, 2016
Sex difference in spatial ability
This study was still unpublished when I asked Jakob about it on Apr 11, 2019.
Jakob Pietschnig: Reverse publication bias for sex difference in spatial ability. Psychologists publish as if trying to hide the male advantage. #ISIR2018 https://t.co/lxmuUyXV17 pic.twitter.com/n4h2qm5NE6
— Emil O W Kirkegaard (@KirkegaardEmil) July 15, 2018
Race difference in personality (OCEAN)
Tate, B. W., & McDaniel, M. A. (2008). Race differences in personality: An evaluation of moderators and publication bias. Preprint from before preprints were cool.
More reverse publication bias: race differences in personality. Altho these gaps are fairly trivial for US black-white on self-reported OCEAN, there's still evidence of suppression. Seems in general that group difference research often shows reverse bias.https://t.co/0O6IwUPtD3 pic.twitter.com/Vc1DunY3py
— Emil O W Kirkegaard (@KirkegaardEmil) February 17, 2019
One can find recent discussions by searching Twitter for reverse publication bias, or, more strictly, with quotes: "reverse publication bias".
A bit of a side-interest is when people are not trying to suppress some finding, but just want to avoid a statistical test showing that an assumption is violated. This can come from testing means in post-randomization groups (these significant results are hopefully all false positives, which should occur 5% of the time at α = .05), or from testing model assumptions (e.g., normal distribution). There are quite a few of these kinds of meta-papers; two examples:
Chuard, P. J., Vrtílek, M., Head, M. L., & Jennions, M. D. (2019). Evidence that nonsignificant results are sometimes preferred: Reverse P-hacking or selective reporting? PLoS Biology, 17(1), e3000127.
Snyder, C., & Zhuo, R. (2018). Sniff Tests in Economics: Aggregate Distribution of Their Probability Values and Implications for Publication Bias (No. w25058). National Bureau of Economic Research.
These papers concern tests of assumptions within studies, not tests of main findings whose results are undesired.
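The logic of these meta-papers can be sketched in a short simulation (sample sizes and the suppression rate are illustrative assumptions): balance tests on properly randomized groups are true-null tests, so about 5% should come out significant at α = .05. If researchers selectively drop significant balance checks, the published rate falls detectably below 5%.

```python
import random

random.seed(0)

N_TESTS = 10_000   # number of simulated balance tests
N = 100            # per-group sample size

def balance_test_significant():
    """Compare two randomized groups drawn from the same population;
    return True if a two-sided z-test on the means gives p < .05."""
    m1 = sum(random.gauss(0, 1) for _ in range(N)) / N
    m2 = sum(random.gauss(0, 1) for _ in range(N)) / N
    z = (m1 - m2) / (2 / N) ** 0.5   # SE of a mean difference, sigma = 1
    return abs(z) > 1.96

results = [balance_test_significant() for _ in range(N_TESTS)]

# Reverse p-hacking: suppose 80% of significant balance tests are
# quietly dropped before publication (assumed suppression rate).
published = [sig for sig in results
             if not sig or random.random() < 0.2]

print(f"significant rate, all tests:  {sum(results) / len(results):.3f}")
print(f"significant rate, published:  {sum(published) / len(published):.3f}")
```

A published significance rate well below the nominal 5% is exactly the signature these papers look for in aggregated p-value distributions.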
I first heard of this concept in ~2012 from Neuroskeptic:
New post: http://t.co/1k7pCpkJ The dangers of reverse publication bias – when everyone wants to publish negative results
— Neuroskeptic (@Neuro_Skeptic) February 23, 2012
Some have also tried to explain away replication failures as being a kind of reverse p-hacking:
There is indeed evidence of reverse publication bias https://t.co/SvkRmar5Rg
— Dan Quintana (@dsquintana) May 10, 2019
— Lionel Page (@page_eco) January 24, 2018