While working on a study, I updated all my R packages. To my surprise, this changed the results of my analysis so much that the conclusion changed. I suspected an update to the psych package was responsible. The update notes read:
Bug fixed in scoreIrt where the values for the scores for people who gave all of the highest possible scores was incorrect (reported by Roland Leeuwen ). In addition, the scoreIrt.poly was not properly doing normal scoring but was in fact just doing logistic scoring. Fixed. This bug was affecting those people with max or min responses, and thus was particularly a problem for short scales.
So, if you used a scale with few items (e.g., the ICAR5) and some persons obtained 0 or 5 correct answers, their scores will have been estimated incorrectly, which may heavily distort your results. Bill Revelle (the package author) notes that this was particularly troubling for him because he often uses short scales.
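If you want to check whether your own results were affected, one approach is to re-run the scoring under the updated package and compare. A minimal sketch, assuming a current psych version is installed and using simulated dichotomous items (not real ICAR5 data), might look like this:

```r
# Illustrative sketch: the data here are simulated, not real ICAR5 responses.
library(psych)

set.seed(42)
# Simulate responses to a 5-item dichotomous scale.
items <- matrix(rbinom(100 * 5, 1, 0.5), ncol = 5)
items[1, ] <- 1  # a person with the maximum score (5 of 5 correct)
items[2, ] <- 0  # a person with the minimum score (0 of 5 correct)

# IRT-based scoring; the all-max and all-min response patterns
# (rows 1 and 2 here) are the ones the pre-fix versions mis-scored.
scores <- scoreIrt(items = items)
head(scores)
```

Comparing the scores for the extreme responders before and after the update (e.g., by running the same call under both package versions) shows whether your conclusions were sensitive to the bug.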
So, you should re-do your analyses and update your papers accordingly!
On a meta-science level, I appreciate the open reporting of troubling errors. It makes it possible to identify sources of inconsistent results between studies, or of failures to reproduce results even with the same data. It is troubling to think about how errors in statistical software affect science, and we generally have no idea about the extent of this problem.