{"id":6545,"date":"2017-03-08T06:21:56","date_gmt":"2017-03-08T05:21:56","guid":{"rendered":"http:\/\/emilkirkegaard.dk\/en\/?p=6545"},"modified":"2017-03-08T06:23:28","modified_gmt":"2017-03-08T05:23:28","slug":"the-reverse-time-reversal-heuristic","status":"publish","type":"post","link":"https:\/\/emilkirkegaard.dk\/en\/2017\/03\/the-reverse-time-reversal-heuristic\/","title":{"rendered":"The reverse time reversal heuristic?"},"content":{"rendered":"<p><a href=\"http:\/\/andrewgelman.com\/2016\/01\/26\/more-power-posing\/\">Gelman proposed the time reversal heuristic<\/a> when evaluating discussions about failed replications.<\/p>\n<blockquote><p>One helpful (I think) way to think about this episode is to turn things around. Suppose the Ranehill et al. experiment, with its null finding, had come first. A large study finding no effect. And then Cuddy et al. had run a replication under slightly different conditions with a much smaller sample size and found statistically significance under non-preregistered conditions. Would we be inclined to believe it? I don\u2019t think so. At the very least, we\u2019d have to conclude that any power-pose effect is fragile.<\/p>\n<p>From this point of view, what Cuddy et al.\u2019s research has going for it is that (a) they found statistical significance, (b) their paper was published in a peer-reviewed journal, and (c) their paper came before, rather than after, the Ranehill et al. paper. I don\u2019t find these pieces of evidence very persuasive. (a) Statistical significance doesn\u2019t mean much in the absence of preregistration or something like it, (b) lots of mistakes get published in peer-reviewed journals, to the extent that the phrase \u201cPsychological Science\u201d has become a bit of a punch line, and (c) I don\u2019t see why we should take Cuddy et al. 
as the starting point in our discussion, just because it was published first.<\/p><\/blockquote>\n<p>I came across this:<\/p>\n<blockquote><p>Students should be alerted to a common fallacy in evaluating evidence. It is what I term the temporal order fallacy\u2014 that is, the failure of a later study to replicate the findings of an earlier study. The fallacy consists of according more weight to the second (more recent) study than to the first. This is terribly common in psychology. We often read that Dr. A\u2019s study found such and such and then Dr. B\u2019s study failed to replicate Dr. A\u2019s finding. Dr. A\u2019s finding is dismissed, and often that ends the matter. We can just as logically claim that Dr. A\u2019s study failed to replicate Dr. B\u2019s finding. The temporal order of the studies is irrelevant, other things being equal. If one study is superior in terms of design, statistical power, representativeness of samples, and the like, then of course it should be accorded more weight, regardless of its temporal order in relation to a contradictory study.<\/p><\/blockquote>\n<p>[In Arthur Jensen&#8217;s chapter in <em>Race. Social Class, and Individual Differences in I. Q.<\/em> by Sandra Scarr (1981).]<\/p>\n<p>Interesting how it is opposite to the supposed present day bias of giving extra weight to the first study. The recommendation is still the same: one should not assign any scientific value to the order studies came out when evaluating the evidence base for a claim, all else equal. This is in fact not done using standard meta-analytic tools, so the proper response to conflicting or &#8216;conflicting&#8217; findings is <a href=\"https:\/\/www.researchgate.net\/publication\/303957291_The_crisis_of_confidence_in_research_findings_in_psychology_Is_lack_of_replication_the_real_problem_Or_is_it_something_else?ev=prf_pub\">to collect more and especially larger samples<\/a>. 
If heterogeneity remains high **AND** one has many samples, then look for moderators. *Most moderator analyses have woefully low precision and won't produce anything useful.* Given the prominence of power posing, collecting more data seems worth doing, even if it does constitute [Captain Obvious science](http://emilkirkegaard.dk/en/?p=6489).
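To make the order-irrelevance point concrete, here is a minimal sketch of fixed-effect inverse-variance pooling, the core of standard meta-analytic tools. The effect sizes and standard errors are made up for illustration (a small early positive study vs. a larger later null), and the functions are my own, not from any particular package:

```python
import math

def pool(effects, variances):
    """Fixed-effect inverse-variance pooling: each study is weighted by
    the precision (1/variance) of its estimate, never by its date."""
    w = [1.0 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    # Cochran's Q: weighted squared deviations from the pooled estimate,
    # the usual starting point for assessing heterogeneity.
    q = sum(wi * (e - est) ** 2 for wi, e in zip(w, effects))
    return est, se, q

# Hypothetical studies: a small early study (d = 0.60, SE = 0.25)
# and a larger later null study (d = 0.02, SE = 0.08).
early = (0.60, 0.25 ** 2)
late = (0.02, 0.08 ** 2)

est_ab, se_ab, q_ab = pool([early[0], late[0]], [early[1], late[1]])
est_ba, se_ba, q_ba = pool([late[0], early[0]], [late[1], early[1]])

# The pooled estimate is identical either way: temporal order carries no
# weight, only precision does, so the larger study dominates.
assert (est_ab, se_ab, q_ab) == (est_ba, se_ba, q_ba)
print(round(est_ab, 3), round(se_ab, 3))  # → 0.074 0.076

# I² (Higgins) from Q: with high heterogeneity AND many studies one would
# go looking for moderators; with only two studies it is uninterpretable.
i_squared = max(0.0, (q_ab - 1.0) / q_ab)
```

Note that the large null study pulls the pooled estimate close to zero regardless of which study was published first, which is exactly Jensen's "other things being equal" point expressed in weights.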