Publication bias index by the author: a way to keep scientists honest?

Would this work? Authors can reduce their publication bias measure by publishing new studies that are more honest (with regard to reporting, research practices, not making data up, etc.). The primary problem, it seems to me, is that authors who currently have a high publication bias index would try to lower it quickly by publishing papers with artificially bad results, perhaps by attempting to replicate other studies they don't like and only publishing the replications that failed to reach the magic p-level.
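The post does not pin down how the index would be computed, so here is a minimal sketch of one possible operationalization, loosely in the spirit of excess-significance tests: the gap between an author's observed rate of significant results and the rate one would expect given some assumed typical power. The function name, the 0.5 default power, and the toy numbers are all my illustrative assumptions, not a method from this post or the cited papers. It also makes the gaming problem concrete: dumping a few deliberately null papers into the record drags the index straight down.

```python
# Hypothetical sketch of an author-level publication bias index:
# how much more often does the author report significant results
# than an assumed level of statistical power would predict?
# Names, defaults, and example data are illustrative assumptions.

def publication_bias_index(p_values, assumed_power=0.5, alpha=0.05):
    """Excess of significant results over the assumed-power expectation.

    0 means the rate of significant findings matches expectation;
    positive values suggest selective reporting, negative the reverse.
    """
    if not p_values:
        raise ValueError("need at least one p-value")
    observed_sig_rate = sum(p < alpha for p in p_values) / len(p_values)
    return observed_sig_rate - assumed_power

# Example: an author reporting 18 significant results out of 20 tests
ps = [0.01] * 18 + [0.20, 0.35]
print(round(publication_bias_index(ps), 2))  # 0.9 - 0.5 = 0.4
```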

Critics of null hypothesis significance testing will of course say that these kinds of problems are inherent to that kind of statistics (e.g., Harlow et al., 1997, or various Bayesian critics, e.g., Kruschke, 2010). They may be correct, but it seems to me that the focus on publication bias in many recent meta-analyses has improved the situation quite a bit. New methods are being developed to estimate true population effect sizes using only biased results (e.g., van Assen et al., 2014).
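The core idea behind such correction methods is that, under the true effect size, the p-values of the studies that survived the significance filter should look uniformly distributed once you condition on having passed the filter. Below is a rough sketch of that idea only: it picks the effect size at which the conditional p-values of the significant studies average 0.5. It is a simplified illustration, not the exact estimator of van Assen et al. (2014); the one-sided alpha = .05 selection rule and the toy data are assumptions.

```python
# Rough sketch: estimate a "true" effect size from significant studies only,
# by finding the effect size under which their conditional p-values
# (p-values given that the study passed the significance filter) look uniform.
# Simplified illustration, not the published estimator.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def conditional_p(effects, ses, delta, alpha=0.05):
    """P(estimate > observed | estimate significant) under true effect delta."""
    z_crit = norm.isf(alpha)                       # one-sided critical z
    crit = z_crit * ses                            # critical effect per study
    log_num = norm.logsf((effects - delta) / ses)  # P(Y > observed)
    log_den = norm.logsf((crit - delta) / ses)     # P(Y > critical value)
    return np.exp(log_num - log_den)

def estimate_true_effect(effects, ses, alpha=0.05):
    """Find delta at which the conditional p-values average 0.5."""
    f = lambda d: conditional_p(effects, ses, d, alpha).mean() - 0.5
    return brentq(f, -5.0, 5.0)

# Toy example: five "significant" standardized effects with their SEs
effects = np.array([0.45, 0.52, 0.61, 0.48, 0.70])
ses = np.array([0.20, 0.22, 0.25, 0.18, 0.30])
print(round(estimate_true_effect(effects, ses), 2))  # below the naive mean of ~0.55
```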

It would be best if the index could be calculated automatically by data mining authors' papers. This seems a little beyond my computer science abilities right now. Science basically needs Watson to do meta-analyses for it, including coding all the data from papers. However, it is surely possible to create a website where the information can be filled out for authors, or by them (if by them, they will surely cheat again!), so their indexes can be calculated. I could quite possibly set up such a website in 2015.
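The text-mining part does not need Watson to get started, though. Here is a toy sketch of the simplest piece: pulling reported p-values out of a paper's text so an index could be computed automatically. The regex only covers simple APA-style reporting ("p = .03", "p < .05"); handling real papers robustly is the genuinely hard part.

```python
# Toy sketch: extract reported p-values from a paper's text.
# Only handles simple APA-style strings; real-world parsing is much messier.

import re

P_VALUE_RE = re.compile(r"p\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)

def extract_p_values(text):
    """Return (comparator, value) pairs for every reported p-value in text."""
    return [(op, float(val)) for op, val in P_VALUE_RE.findall(text)]

sample = ("The effect was significant, t(48) = 2.31, p = .025, "
          "but the replication was not, t(52) = 1.02, p > .05.")
print(extract_p_values(sample))  # [('=', 0.025), ('>', 0.05)]
```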

In general, from my reading of various kinds of meta-science, the conclusion is clear: humans cannot be trusted to do science properly. I don't even trust myself to do it properly. Presumably, the two best remedies are: 1) making scientists (and humans in general) smarter by genetic methods (by selection or engineering), and 2) developing AI to do it for us or with us (via implants). Both of these are coming along, but the first is met with considerable political opposition. In the meanwhile, the only thing we can do is try to make the scientific process more robust to human follies, which means that we need to open everything up, i.e. Open Science. This was one of the goals of founding OpenPsych: to push psychology, specifically differential psychology, behavior genetics and the like, in that direction. We seem to have made some progress. Hopefully 2015 will outperform 2014. I am cautiously optimistic.

Refs
van Assen, M. A., van Aert, R., & Wicherts, J. M. (2014). Meta-Analysis Using Effect Size Distributions of Only Statistically Significant Studies.

Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (Eds.). (1997). What if there were no significance tests?. Psychology Press.

Kruschke, J. (2010). Doing Bayesian data analysis: A tutorial introduction with R. Academic Press.