Q/A with Antonio Regalado for genomics

Antonio Regalado has recently written a series of arguably the best-informed popular science articles (compared to coverage in the Guardian, Nature news, etc.) on genomics and its relevance to modern eugenics (embryo selection or genetic engineering), as well as the question of what causes group differences.

  • https://www.technologyreview.com/s/609204/eugenics-20-were-at-the-dawn-of-choosing-embryos-by-health-height-and-more/
  • https://www.technologyreview.com/s/610251/forecasts-of-genetic-fate-just-got-a-lot-more-accurate/
  • https://www.technologyreview.com/s/610339/dna-tests-for-iq-are-coming-but-it-might-not-be-smart-to-take-one/

As background for some of these, I had an email conversation with him.

Since I spent so much time preparing answers to his questions, I asked him whether it would be fine to post them here as well, and he agreed. The reason I spent so much time on the answers is that I am dissatisfied with most of the coverage of these genomics topics in popular outlets, and even in the generalist science ones. Antonio requested that I paraphrase his questions rather than quote him directly.


Antonio:

Antonio asks about the recent Plomin piece in Nature and the use of polygenic scores to explore causes of group differences.

Emil:

Hi Antonio,

I take it that you’re thinking of Plomin & von Stumm 2018 and not Plomin & Deary 2015, also in Nature. Of course, it is difficult to assess the impact of a paper that came out only a month and a half ago. The altmetrics provide some guidance, and it seems to be a popular paper, mostly due to Twitter activity. It’s in the 96th centile for papers of its age, so relatively speaking it’s getting a lot of attention, though I don’t know how much of that is simply due to it being in Nature.

For people in the field, the review is probably mostly useful for the impact it has on outsiders’ view of the field. For insiders, it breaks little new ground but provides a useful reference/summary one can point to.

As for PGSs being used to explain the IQ gaps between ancestry clusters, as far as I know there has not been much recent progress, owing to interpretational issues with PGSs when they are used across ancestry groups. This is due to the LD decay problem and the nature of the associations found in GWASs. These are mostly understood to be tag variants, not causal variants: they are variants close on the genome to the causal variant but do nothing themselves, merely happening to be statistically linked (in LD) with it. However, these LD patterns (which variants are statistically linked to which others) depend strongly on random drift and founder effects, and thus differ substantially between ancestry groups, increasingly so as their genetic distance increases. See Zanetti and Weale 2016, and my review from 2017. Basically, this line of research awaits either better statistical methods that somehow account for LD decay, denser arrays that tag variants more closely and thus suffer less LD decay, or the use of ancestrally mixed discovery samples, which should also reduce LD decay (see Traylor and Lewis 2016).
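The tag-variant problem can be made concrete with a toy two-locus Wright-Fisher simulation (all parameters below are illustrative assumptions, not empirical values): two populations drifting and recombining independently from a common ancestor end up with different causal-tag LD, so GWAS weights estimated on the tag SNP in one population no longer transfer cleanly to the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Haplotype frequencies, ordered (causal, tag): 00, 01, 10, 11.
# Start with the tag SNP strongly linked to the causal variant.
ANCESTRAL = np.array([0.45, 0.05, 0.05, 0.45])

def r2(f):
    """Squared LD correlation between the causal and tag loci."""
    p = f[2] + f[3]                   # causal allele frequency
    q = f[1] + f[3]                   # tag allele frequency
    D = f[3] - p * q                  # LD coefficient
    denom = p * (1 - p) * q * (1 - q)
    return D * D / denom if denom > 0 else float("nan")

def evolve(f, n=1000, c=0.002, generations=200):
    """Wright-Fisher drift plus recombination at rate c per generation."""
    f = f.copy()
    for _ in range(generations):
        D = f[3] - (f[2] + f[3]) * (f[1] + f[3])
        f = f + c * D * np.array([-1.0, 1.0, 1.0, -1.0])  # recombination shrinks D
        f = np.clip(f, 0.0, None)
        f = rng.multinomial(2 * n, f / f.sum()) / (2 * n)  # binomial drift
    return f

# Two populations diverge independently from the same ancestral pool
pop_a = evolve(ANCESTRAL)
pop_b = evolve(ANCESTRAL, generations=1000)  # longer separation

print(f"ancestral r2: {r2(ANCESTRAL):.2f}")
print(f"population A r2: {r2(pop_a):.2f}")
print(f"population B r2: {r2(pop_b):.2f}")
```

The printed r² values differ between the two populations, and the gap tends to grow with separation time, which is the between-ancestry transfer problem in miniature.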

The PGSs as currently calculated cannot be used to examine sex differences directly because GWASs exclude the sex chromosomes. Autosomal genetic variation is shuffled between the sexes every generation, so it is not informative here. Any sex differences that may exist are likely due to sex-hormonal influences on normal bodily pathways and would be tricky to work out. I don’t see PGSs being of much use here in the near future.

As for siblings and other within-family variation, GWASs sometimes use these as validation samples. This sets the bar high because between-sibling variation also suffers from some LD decay (due to recombination), and the design effectively controls for any cryptic ancestry that was not controlled earlier (with the usual PCA ancestry approach). For an example, consider the recent GWAS on risk tolerance (Linnér et al 2018). I don’t recall any IQ GWASs using siblings for predictive validation, but there has been at least one such study for education: Domingue et al 2015. Furthermore, in an independent sample, the same approach was used for both IQ and education, though strangely an effect was detected only for IQ (Willoughby and Lee 2017). I am not sure how to interpret that, and suggest waiting for a larger replication before putting much trust in the result.

Antonio:

Antonio asks 1) why GWASs omit sex chromosomes, 2) the impact of the Plomin and von Stumm review on research, 3) Plomin, his work and how he has avoided controversy, 4) the practical utility of polygenic scores (now to near future).

Emil:

The chips (microarrays; see e.g. here) used to generate the data for GWASs do read the sex chromosomes, and also the mitochondria. However, this data is generally disregarded in the GWASs themselves (a 2013 review found that only ~33% of studies included the X). I’m not sure exactly why, but perhaps researchers want to avoid the complications of having no Y data for ~50% of their sample, or of men having only one copy of the X compared to women’s two (in healthy subjects). A large number of human genetic diseases are associated with the X chromosome (X-linked, as doctors call them), so I don’t think there is any theoretical reason to exclude it a priori. The Y and MT genomes are quite small and probably not too useful for finding the basis of polygenic traits.

As for specific reactions to that article, I don’t know of any in particular. You can find some by checking the article’s altmetrics report. For example, I see that it has already been cited twice on Wikipedia (incidentally, one of the citing articles is the one on polygenic scores, which I created, though I did not add this reference), and it was mentioned on Marginal Revolution (a top blog for economists). Aside from that, it seems most of the attention so far consists of tweets.

As for Plomin himself, he was featured in the Norwegian science documentary Hjernevask (Brainwash), which I recommend watching; you can find it online for free with English subtitles. In my view, Plomin is a great researcher at the top of his career who has worked hard to bring the field to where it is today. He has been careful to avoid controversy. A simple rule for doing this is to avoid talking about group differences. Despite his being very careful, one can still find a small number of people attacking him. I also recommend listening to his 2015 interview with the BBC. In that interview, he mentioned that one use of IQ tests for the less genetically fortunate is that they see through class differences and find hidden talent in the working or lower classes, which can then be put to better use through mentoring. I subsequently found a study of exactly this (Card and Giuliano 2015) and emailed it to him. The NYT has recently published an essay pushing essentially the same idea.

The utility of PGSs is a broad question. We are likely to find more uses for them as time goes by, but I’ve already seen a number of research uses and a few practical ones. For research, PGSs are useful for many reasons. One can use them in datasets that have genomic data but lack the family structure that lets pedigree studies control for genetic confounding. This has recently been used to study, e.g., the link between cannabis use and schizophrenia (French et al 2015). A particularly clever proposal involves the unintuitive statistical effect of conditioning on a collider: used this way, PGSs can actually help identify true environmental effects (Balazard et al 2017). There is also a proposal to use them to ascertain genetic causation from correlations among large numbers of traits (O’Connor and Price 2017). As for practical uses, Chatterjee et al 2016 review the use of PGSs for health issues. I was particularly happy to read their proposal of using PGSs to locate people who are especially susceptible to lung cancer from smoking and targeting them for cessation efforts. Plomin himself has co-authored a book with an educationist (Asbury and Plomin 2013) in which they give various proposals for using personal genomics to help guide children’s school and career progression.
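A minimal sketch of the confounding-control use (all effect sizes here are made up for illustration): simulate data where genes influence both an environmental exposure and an outcome, then regress the outcome on the exposure with and without the PGS as a covariate. The naive estimate is inflated by the genetic confound; adding even a noisy PGS moves it toward the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

g = rng.normal(size=n)               # true genetic value (unobserved)
env = 0.5 * g + rng.normal(size=n)   # exposure correlated with genes
y = 0.3 * g + 0.2 * env + rng.normal(size=n)  # true env effect = 0.2
pgs = g + rng.normal(size=n)         # noisy polygenic score

# Naive regression: genetic confounding inflates the env coefficient
X = np.column_stack([np.ones(n), env])
naive = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Controlled regression: the PGS soaks up part of the genetic confound
Xc = np.column_stack([np.ones(n), env, pgs])
controlled = np.linalg.lstsq(Xc, y, rcond=None)[0][1]

print(f"true env effect: 0.20, naive: {naive:.2f}, PGS-controlled: {controlled:.2f}")
```

Because the PGS measures the true genetic value with error, the correction is only partial; the controlled estimate lands between the naive one and the truth, which is the same limitation the real applications face.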

As for predictions, I think that within the next few years the proportion of people who have had their genome read will approach ~100% for microarray data, and a few years after that for full genome sequencing. A number of companies will be set up to provide people with predictions for more or less every heritable trait. Indeed, some of these already exist, e.g. https://dna.land/, though the predictions they provide are quite poor right now. The accuracy of these predictions will increase over time as the predictive models get better, but it is of course limited by the heritability of the trait. Genetic causation for most traits is inherently stochastic (i.e., few traits are 100% heritable), so the best one can do is end up with predictions like “this person has a 500% higher likelihood of getting psoriasis” (a disease that runs in my family and which I have). This information is already partially available from family history (i.e., genetic relatives), which is of course why doctors routinely take a family history. However, because each person’s genome is unique (monozygotic twins excepted), polygenic scores will provide some additional information, sometimes a lot, sometimes not so much. Parents will certainly look at such information for their children, but since they already know their children from first-hand experience, I don’t think this will change much. Incidentally, the genetic searches for personality traits are not going very well right now (e.g. a UK Biobank analysis found a SNP heritability of only 15%; Luciano et al 2017), so there will be some delay for these traits, giving society at large some time to discuss the matter.
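As a sketch of where such relative-risk statements come from, here is a toy liability-threshold calculation (the prevalence, percentile, and variance-explained figures are assumptions for illustration, not estimates for any real disease): a disease occurs when an underlying normally distributed liability exceeds a threshold, and the PGS shifts a person's expected liability.

```python
from statistics import NormalDist

nd = NormalDist()
prevalence = 0.02                       # assumed population prevalence
threshold = nd.inv_cdf(1 - prevalence)  # liability threshold
r2_pgs = 0.10                           # liability variance explained (assumed)

def risk_at_pgs_percentile(pct):
    """Disease risk for someone at a given PGS percentile,
    under a liability-threshold model."""
    z = nd.inv_cdf(pct)                     # PGS z-score
    mean_liability = (r2_pgs ** 0.5) * z    # expected liability given the PGS
    resid_sd = (1 - r2_pgs) ** 0.5          # remaining liability spread
    return 1 - nd.cdf((threshold - mean_liability) / resid_sd)

top = risk_at_pgs_percentile(0.99)
print(f"risk at 99th PGS percentile: {top:.3f}")
print(f"relative risk vs baseline: {top / prevalence:.1f}x")
```

Even a modest r² concentrates risk noticeably in the tails of the score distribution, which is why probabilistic statements like the psoriasis example above are the natural output format.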

Antonio:

Antonio asks about the prospects for increases in validity of the polygenic scores in the near future.

Emil:

I think the IQ model building is proceeding nicely, with ~10% of variance explained now, i.e., a correlation of about .32. I expect new studies in 2018 to boost this number to, say, 15%. It should be noted that the real validities are higher than the observed ones, because the validation samples used have high measurement error (poor IQ tests), obscuring the true validity. One problem is that the UK Biobank, the single largest dataset, has a quite poor IQ measure. However, more IQ data is being collected for it, which should improve model training. This underscores a point Plomin made at ISIR 2016 (no video of the talk seems to exist): even in the age of genomics, it is still important to have good phenotypic data. This is not just an issue for psychology; it is also true for many disease measures, which are often based on possibly faulty self-report data.
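The measurement-error point can be made concrete with Spearman's classic correction for attenuation (the reliability figure below is an assumed illustration for a brief IQ test, not a value taken from any of the studies mentioned):

```python
import math

r2_observed = 0.10              # variance explained by the PGS
r_obs = math.sqrt(r2_observed)  # observed PGS-IQ correlation, ~.32
reliability = 0.60              # assumed reliability of a brief IQ measure

# Spearman's correction, adjusting for criterion unreliability only
r_true = r_obs / math.sqrt(reliability)

print(f"observed r = {r_obs:.2f}")
print(f"disattenuated r = {r_true:.2f}")
print(f"variance explained vs a well-measured criterion: {r_true ** 2:.1%}")
```

Under these assumed numbers, the same PGS that explains 10% of variance against a noisy test would explain roughly 17% against a well-measured one, which is why poor phenotyping in the validation sample understates the score's real validity.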
