
Sam Harris, not much of a neuroscientist

Sam Harris brands himself as a neuroscientist. We can read on his website that:

Sam Harris is a neuroscientist, philosopher, New York Times best-selling author, host of the Making Sense podcast, and creator of the Waking Up app.

I have no particular beef with him. He brings in interesting guests. Bringing Charles Murray on his show was probably the most significant thing he has done for our field.

Wikipedia says he has a neuroscience PhD:

He received a Ph.D. in cognitive neuroscience in 2009 from the University of California, Los Angeles,[24][27][28] using functional magnetic resonance imaging to conduct research into the neural basis of belief, disbelief, and uncertainty.[24][28] His thesis was titled The Moral Landscape: How Science Could Determine Human Values. His advisor was Mark S. Cohen.[29]

So let’s check out his research. Well, he has no profile on ResearchGate. Or on Google Scholar. Semantic Scholar has an automated profile for him which lists 18 publications. These include his books and some semi-popular writings. Only 6 are about neuroscience.

To make sure I didn’t miss any, I checked the Google Scholar profile of his advisor, Mark Cohen, as well as the references cited in each of the above.

Of the 6 publications, 3 are first-authored by Harris. Of these, 1 is a correction to a prior study (that is, it is 3 sentences about a non-technical error). Of the non-first-author papers, one is a poster done by his successor, P. K. Douglas, who also has a Google Scholar page with lots of published research. The only semi-new paper is the 2016 piece, where Harris is the last author (the senior slot), meaning he may, at most, have played a supervisory role.

Let’s dive into them. The first:

Fourteen adults (18 – 45 years old; 7 men, 7 women) gave written consent to participate in this study.

Reaction time data were acquired on all subjects (mean reaction time for belief trials = 3.26 seconds; disbelief trials = 3.70 seconds; uncertainty trials = 3.66 seconds). The mean differences in reaction time, although small, were significant (t test for belief vs disbelief and belief vs uncertainty [p(belief < disbelief) ≈ 5 x 10^-11; p(belief < uncertainty) ≈ 5 x 10^-9]). The reaction times of disbelief and uncertainty trials did not differ significantly (p[uncertainty < disbelief] ≈ 0.2).

Those are some very impressive p values with n=14! But this finding has been seen many times, so we have no particular reason to be skeptical.

Author contributions for the first paper:

Conceived and designed the experiments: SH JTK MI MSC. Performed the experiments: JTK. Analyzed the data: SH JTK MI MSC. Contributed reagents/materials/analysis tools: MI MSC. Wrote the paper: SH JTK. Performed all subject recruitment, telephone screenings, and psychometric assessments prior to scanning: AC. Supervised our psychological assessment procedures and consulted on subject exclusions: SB. Gave extensive notes on the manuscript: MSC MI.

So it actually says that Jonas T Kaplan did the actual MRI’ing.

Second study:

We enrolled 54 subjects who were (1) between the ages of 18–30, (2) not taking anti-depressants, (3) neurologically healthy, (4) free of obvious psychiatric illness or suicidal ideation, and (5) native speakers of English as their first language.

Response time data were submitted to a repeated-measures ANOVA with belief (true, false) and statement content (religious, nonreligious) as within-subject variables, and group (nonbeliever, Christian) as a between-subject variable. Response times were significantly longer for false (3.95 s) compared to true (3.70 s) responses (F (1,28) = 33.4, p<.001), and also significantly longer for religious (3.99 s) compared with nonreligious (3.66 s) stimuli (F (1,28) = 18, p<.001). The two-way interaction between belief and content type did not reach significance, but there was a three-way interaction between belief, content type, and group (F (1,28) = 6.06, p<.05). While both groups were quicker to respond “true” than “false” on both categories of stimuli, the effect of truth was especially pronounced for nonbelievers when responding to religious statements (see Supplementary Information: Table S1 and Figure S1).

So they tested the 2-way interaction, and, finding p > .05, moved on to the 3-way interaction, which achieved p < .05. What is the p value?

0.02.
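This is the problem with reporting whichever test happens to cross the threshold: every extra test inflates the chance of a fluke. A back-of-the-envelope sketch, treating the tests as independent for illustration (not exactly true for nested ANOVA terms, but close enough to show the inflation):

```python
# If a researcher runs several tests and reports whichever is significant,
# the family-wise error rate grows with each test.
alpha = 0.05
n_tests = 3  # e.g. two main effects plus the interactions tried in turn

# Chance of at least one p < .05 under the null, assuming independence:
fwer = 1 - (1 - alpha) ** n_tests
print(f"Chance of at least one p < .05 by luck alone: {fwer:.3f}")  # ~0.143

# A simple Bonferroni correction would demand p < alpha / n_tests:
bonferroni_threshold = alpha / n_tests
print(f"Bonferroni threshold: {bonferroni_threshold:.4f}")  # ~0.0167
```

Under even this mild correction, the reported p = .02 would not count as significant.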

The rest of the article is the usual colored maps of the brain. I don’t see any correction for multiple testing, so the reported z scores are uninterpretable and could simply reflect chance hits on a fishing expedition. The first article did apply some kind of multiple-comparisons adjustment, but did not specify which.
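To see why uncorrected voxel-wise maps are uninterpretable, consider how many voxels clear a nominal threshold by chance alone. A rough illustration, assuming a whole-brain analysis of about 50,000 voxels (a typical figure; the papers do not report their counts) thresholded at an uncorrected one-sided z > 3.1:

```python
import math

def normal_sf(z: float) -> float:
    # One-sided upper-tail probability of the standard normal distribution.
    return 0.5 * math.erfc(z / math.sqrt(2))

n_voxels = 50_000       # assumed voxel count, typical for whole-brain fMRI
z_threshold = 3.1       # a common uncorrected threshold
p_per_voxel = normal_sf(z_threshold)
expected_false_positives = n_voxels * p_per_voxel

print(f"p per voxel: {p_per_voxel:.2e}")
print(f"Expected false-positive voxels under the null: "
      f"{expected_false_positives:.0f}")  # roughly 48
```

Dozens of voxels light up from pure noise, which is exactly why fMRI analyses are expected to apply family-wise or cluster-level correction before interpreting a map.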

The third article wasn’t done by Harris. He is second author, which in this case probably means he gets credit for the data. It uses the same 14-person sample as the first study. Basically, his successor at the same lab used some machine learning methods to reduce the dimensionality of the data. A technical paper.

The fourth article has Harris as last author (of 3):

Forty healthy participants with no history of psychological or neurological disorders were recruited from the University of Southern California community and the surrounding Los Angeles Area (mean age: 24.30 ± 0.92 years, range: 18–39 years, 20 male).

The results are otherwise unremarkable. Most p values are small; a few that were slightly above .05 are interpreted as null findings, which is unwise. This study is probably the work of the first author, Jonas T. Kaplan, who also has a normal-looking scientific career on Google Scholar.

And that’s it! 2 papers of his own, sort of, plus some minor roles in collaborations. I agree that by the usual standard of having a degree in a field, Sam Harris is a neuroscientist. By the standard of actually doing neuroscience, he was hardly a neuroscientist to begin with and hasn’t been doing much science for years. If one reads other critics, usually religious people offended by his atheist writings, one can also learn that his PhD was funded by… his own non-profit, set up for this purpose. It seems, then, that he almost bought himself a PhD degree: funded it himself, got others to do a lot of the work, and did the absolute minimum to get the degree. He is certainly no expert scientist, and scarcely a scientist at all. He does have a talent for making podcasts, so it is good that he found something he is good at.