Note: snarky, critical. Happy to be proven wrong. Edit: after writing this, I found some more evidence, but it’s not particularly convincing either.
CFAR, the Center for Applied Rationality, is connected to the Less Wrong/rationalist movement. They offer courses described as:
4-day immersive trainings in applied rationality; form accurate beliefs; learn to get these accurate beliefs into the parts of your mind that affect motivation; explore the edges of your comfort zone.
i.e. a kind of rationality camp, akin to all kinds of other (self-)help camps. Naturally, given their focus on rationality, we would expect them to base their methods on the science of what works. So what evidence is there that this kind of thing works in general (the ‘outside view’, i.e. the prior for such interventions), and specifically for teaching about cognitive biases, debiasing, and so on? If we go back to 2009, the great psychologist and expert on pseudoscience Scott Lilienfeld reviewed the evidence:
- Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4(4), 390–398. PDF
In 2009, the evidence was far from convincing. Lilienfeld et al. note:
When examining the literature on debiasing techniques against confirmation bias, one is struck by three glaring facts: the paucity of research on the topic, the lack of theoretical coherence among differing debiasing techniques, and the decidedly mixed research evidence concerning their efficacy (Arkes, 1991). Still, there have been a few promising advances. Despite their surface differences, most or all of these techniques are designed to shift cognitive processing largely from what Stanovich and West (2000; see also Epstein, 1994; Kahneman, 2003) referred to as a System 1 mode of thinking (automatic, heuristic) to a System 2 (controlled, rule-governed) mode of thinking. This shift may permit System 2 processing to ‘‘override’’ more automatic propensities to consider only one’s point of view (Stanovich & West, 2000).
If one reads Stanovich’s newer books, they are essentially more of the same: speculation about what to do, and very little evidence about what works. When rationality tests have been shown to correlate with something nice, it could easily be because they are in part just regular IQ tests. After all, they are tests of a specific ability to get something right, and all such tests are known to measure general intelligence to some degree. Stuart Ritchie has a pretty good review of Stanovich’s latest book.
— Stuart Ritchie (@StuartJRitchie) January 11, 2017
In a small study some years ago (Kirkegaard and Nordbjerg, 2015), I analyzed the CRT against a 16-item IQ test (ICAR16) for incremental validity in predicting GPA, and found no such evidence. Given the small, limited sample (n=72 secondary school students) and the limited outcome measure, we might simply have missed the validity. However, Kuncel did some more rigorous testing and found essentially the same. This can be seen in their ISIR talk from 2015:
The relevant part begins at 24:20. The study seems to have never been published. Others can’t find it either. I emailed all three authors and will update if I get a reply.
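For readers unfamiliar with incremental validity testing, the logic can be sketched with simulated data (hypothetical numbers, not the actual study data): if the CRT is largely a proxy for general intelligence, adding it to a regression that already contains an IQ score should barely improve the prediction of GPA.

```python
import numpy as np

# Hypothetical simulation: CRT and IQ both mostly measure a latent
# general ability g, and GPA is also driven by g. Under this setup,
# the CRT adds little predictive power once IQ is in the model.
rng = np.random.default_rng(0)
n = 72  # same n as the 2015 secondary-school sample

g = rng.normal(size=n)                    # latent general ability
iq = g + rng.normal(scale=0.3, size=n)    # IQ test score (g + noise)
crt = g + rng.normal(scale=0.5, size=n)   # CRT score (also mostly g)
gpa = g + rng.normal(scale=0.8, size=n)   # outcome driven by g

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_iq = r_squared(iq.reshape(-1, 1), gpa)
r2_both = r_squared(np.column_stack([iq, crt]), gpa)
print(f"R2 (IQ only):   {r2_iq:.3f}")
print(f"R2 (IQ + CRT):  {r2_both:.3f}")
print(f"Incremental R2: {r2_both - r2_iq:.3f}")
```

The incremental R² is the quantity at issue: finding it near zero is what "no incremental validity" means in this context.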
Still, maybe CFAR has produced some more convincing evidence since their start in 2012; they have had 5 years to produce something. So, naturally, we look at Google Scholar to see what we can find. Unfortunately, someone else is using the CFAR abbreviation (‘constant false alarm rate’), so we have search pollution. So, we try the full name. Since 2012 there are a total of 21 hits, none of which constitute any kind of evidence, let alone solid evidence, that CFAR-like activities help. Okay, maybe it’s on their website, so we look around. We find nothing except testimonials (the weakest possible form of evidence), and we learn that attending the workshop costs a whopping 3,900 USD, i.e. 975 USD per day.
Bringing it all together, we have:
- Strong claims made about efficacy, e.g. “CFAR content is highly useful to those facing large and complicated choices in their near future, and this certainly includes people in gap years, undergraduate programs, and graduate/PhD/postdoc programs.”
- Use of very specialized language, based on science produced by others.
- No evidence of efficacy aside from testimonials.
- A very expensive 4-day course.
The outside view is that CFAR is essentially like every other self-help guru service, and the evidence at hand only makes this more probable. What evidence is there that such activities work (for anything other than making the leaders rich)? Generally, not much. There’s not much evidence they don’t work either, since no one seems to have actually done any solid research on them. So one would essentially be taking it on faith; very curious for a rationality community!
Edited to add
After writing this, I found some more evidence, not formally published, but evidence nonetheless.
The 2015 non-randomized follow-up study. It essentially asked participants (n=135) whether their lives had improved in various ways one year after the workshop. It’s not very convincing, for a number of reasons they themselves point out: 1) attrition/response bias, 2) self-report/socially desirable responding, 3) no control group, 4) regression toward the mean (people take the course when they are especially in need of help, and then regress back toward their normal level).
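Point (4) is worth making concrete, since it alone can generate apparent improvement. A toy simulation (invented numbers, no relation to CFAR’s actual data) shows that if people enroll when they happen to be doing unusually badly, they will report improvement a year later even when the intervention has exactly zero effect:

```python
import numpy as np

# Regression toward the mean: each person has a stable baseline level
# of well-being plus transient noise. People enroll during a bad spell,
# so on average they bounce back regardless of any treatment.
rng = np.random.default_rng(1)
n = 100_000

baseline = rng.normal(size=n)            # each person's typical level
before = baseline + rng.normal(size=n)   # state at sign-up time
after = baseline + rng.normal(size=n)    # one year later, ZERO treatment effect

# Only people currently having a bad spell sign up
enrolled = before < -1.0
gain = (after[enrolled] - before[enrolled]).mean()
print(f"Mean 'improvement' among enrollees: {gain:.2f}")  # positive despite no effect
```

The selected group improves on average purely because the transient bad luck that drove them to enroll does not persist, which is exactly why an uncontrolled before/after comparison cannot establish efficacy.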
I don’t think this changes much with regards to my conclusions above.