Clear Language, Clear Mind

August 9, 2017

Health dysgenics: a very brief review

Filed under: Genetics / behavioral genetics, Medicine, Reproductive genetics — Emil O. W. Kirkegaard @ 08:42

Woodley reminded me of dysgenics for health outcomes by linking me to a study on the increasing rates of cancer. I had first reached this conclusion back in 2005, when I realized what it means for evolution that we now keep almost everyone alive despite their genetic defects. The problem is quite simple: mutations accumulate, and mutations have a net negative impact on the functioning of the body. Most of the genome appears not to be relevant for anything (‘junk’ / non-coding), so mutations in these areas don’t do anything. Of the mutations that hit other areas, many are synonymous and thus usually have no effect. Mutations in areas that matter generally have negative effects. Why? The human body is an intricate machine, and it’s easier to fuck it up by making random changes to the blueprint/recipe than to improve upon it. So, basically, one has to get rid of the harmful mutations as they occur, and this is done via death and mate choice (preference for healthy partners), collectively: purifying selection. Humans still have mate choice and some natural selection, but natural selection has been starkly reduced in strength since the arrival of medicine that actually works (i.e. not bloodletting etc.), and thus, by mutation-selection balance, the rates of genetic disorders and genetic dispositions for disease should increase. In other words, mutation load for disease in general should increase. Does it?

It’s not quite so simple to answer because of various confounders. The most important ones are improved diagnosis (we have better equipment for spotting disorders now) and population aging (older people are sicker). Population aging can be handled by comparing same-aged samples measured at different times. Changes in diagnosis are much harder to deal with, and one has to look for data where the diagnostic criteria either did not change over the period in question or changed in a way we can adjust for.
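To make the age-adjustment point concrete, here is a minimal sketch of direct age standardization in R. The age bands, the rates at the two time points, and the reference weights are all made-up illustrative numbers, not data from any of the studies below.

```r
# Direct age standardization: apply the same reference age weights to age-specific
# rates from two time points, so differences are not driven by population aging.
ref_weights <- c("0-19" = 0.25, "20-39" = 0.25, "40-59" = 0.25, "60+" = 0.25)
rates_1980  <- c("0-19" = 0.001, "20-39" = 0.004, "40-59" = 0.010, "60+" = 0.030)
rates_2000  <- c("0-19" = 0.002, "20-39" = 0.006, "40-59" = 0.014, "60+" = 0.040)

std_rate <- function(rates, weights) sum(rates * weights)
c(std_1980 = std_rate(rates_1980, ref_weights),
  std_2000 = std_rate(rates_2000, ref_weights))
```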

There’s also another issue. For a number of decades, we have been using a clever form of selection: prenatal screening (and preconception screening in some groups), which obviously selects against mutational load for the screened diseases. However, most of this testing is for aneuploidies (mostly Down’s), which usually result in sterile offspring and are thus irrelevant for mutational load for disease (because they are not contributed to the gene pool). However, some of the testing is for specific diseases, usually ones that happen to be quite prevalent in some racial group: Tay-Sachs etc. in Ashkenazi Jews, Charlevoix-Saguenay etc. in Québécois, aspartylglucosaminuria etc. in Finns, and so on. One obviously cannot look for evidence of dysgenics for these diseases, as the selection against them distorts the picture.

The studies

I didn’t do a thorough search. In fact, these were the first two studies I found plus the one Michael found. The point of this review is to bring the idea to your mind, not prove it conclusively with an exhaustive review.

Cancer

Cancer incidence increasing globally: The role of relaxed natural selection

Cancer incidence increase has multiple aetiologies. Mutant alleles accumulation in populations may be one of them due to strong heritability of many cancers. The opportunity for the operation of natural selection has decreased in the past ~150 years because of reduction of mortality and fertility. Mutation-selection balance may have been disturbed in this process and genes providing background for some cancers may have been accumulating in human gene pools. Worldwide, based on the WHO statistics for 173 countries the index of the opportunity for selection is strongly inversely correlated with cancer incidence in peoples aged 0-49 and in people of all ages. This relationship remains significant when GDP, life expectancy of older people (e50), obesity, physical inactivity, smoking and urbanization are kept statistically constant for fifteen (15) out of twenty-seven (27) individual cancers incidence rates. Twelve (12) cancers which are not correlated to relaxed natural selection after considering the six potential confounders are largely attributable to external causes like viruses and toxins. Ratios of the average cancer incidence rates of the 10 countries with highest opportunities for selection to the average cancer incidence rates of the 10 countries with lowest opportunities for selection are 2.3 (all cancers at all ages), 2.4 (all cancers in 0-49 years age group), 5.7 (average ratios of strongly genetically based cancers) and 2.1 (average ratios of cancers with less genetic background).

Coeliac disease

Increasing prevalence of coeliac disease over time

Background  The number of coeliac disease diagnoses has increased in the recent past and according to screening studies, the total prevalence of the disorder is around 1%.
Aim  To establish whether the increased number of coeliac disease cases reflects a true rise in disease frequency.
Methods  The total prevalence of coeliac disease was determined in two population-based samples representing the Finnish adult population in 1978–80 and 2000–01 and comprising 8000 and 8028 individuals, respectively. Both clinically–diagnosed coeliac disease patients and previously unrecognized cases identified by serum endomysial antibodies were taken into account.
Results  Only two (clinical prevalence of 0.03%) patients had been diagnosed on clinical grounds in 1978–80, in contrast to 32 (0.52%) in 2000–01. The prevalence of earlier unrecognized cases increased statistically significantly from 1.03% to 1.47% during the same period. This yields a total prevalence of coeliac disease of 1.05% in 1978–80 and 1.99% in 2000–01.
Conclusions  The total prevalence of coeliac disease seems to have doubled in Finland during the last two decades, and the increase cannot be attributed to the better detection rate. The environmental factors responsible for the increasing prevalence of the disorder are issues for further studies.

Arthritis and other rheumatic conditions

Estimates of the prevalence of arthritis and other rheumatic conditions in the United States: Part II

Objective
To provide a single source for the best available estimates of the US prevalence of and number of individuals affected by osteoarthritis, polymyalgia rheumatica and giant cell arteritis, gout, fibromyalgia, and carpal tunnel syndrome, as well as the symptoms of neck and back pain. A companion article (part I) addresses additional conditions.
Methods
The National Arthritis Data Workgroup reviewed published analyses from available national surveys, such as the National Health and Nutrition Examination Survey and the National Health Interview Survey. Because data based on national population samples are unavailable for most specific rheumatic conditions, we derived estimates from published studies of smaller, defined populations. For specific conditions, the best available prevalence estimates were applied to the corresponding 2005 US population estimates from the Census Bureau, to estimate the number affected with each condition.
Results
We estimated that among US adults, nearly 27 million have clinical osteoarthritis (up from the estimate of 21 million for 1995), 711,000 have polymyalgia rheumatica, 228,000 have giant cell arteritis, up to 3.0 million have had self-reported gout in the past year (up from the estimate of 2.1 million for 1995), 5.0 million have fibromyalgia, 4–10 million have carpal tunnel syndrome, 59 million have had low back pain in the past 3 months, and 30.1 million have had neck pain in the past 3 months.
Conclusion
Estimates for many specific rheumatic conditions rely on a few, small studies of uncertain generalizability to the US population. This report provides the best available prevalence estimates for the US, but for most specific conditions more studies generalizable to the US or addressing understudied populations are needed.

Does it matter?

Yes. Treating diseases, especially rare diseases, is extremely expensive. As such, for countries with public health-care, there’s a very strong economic argument in favor of health eugenics via editing or embryo/gamete selection.

Socio-economic burden of rare diseases: A systematic review of cost of illness evidence

Cost-of-illness studies, the systematic quantification of the economic burden of diseases on the individual and on society, help illustrate direct budgetary consequences of diseases in the health system and indirect costs associated with patient or carer productivity losses. In the context of the BURQOL-RD project (“Social Economic Burden and Health-Related Quality of Life in patients with Rare Diseases in Europe”) we studied the evidence on direct and indirect costs for 10 rare diseases (Cystic Fibrosis [CF], Duchenne Muscular Dystrophy [DMD], Fragile X Syndrome [FXS], Haemophilia, Juvenile Idiopathic Arthritis [JIA], Mucopolysaccharidosis [MPS], Scleroderma [SCL], Prader-Willi Syndrome [PWS], Histiocytosis [HIS] and Epidermolysis Bullosa [EB]). A systematic literature review of cost of illness studies was conducted using a keyword strategy in combination with the names of the 10 selected rare diseases. Available disease prevalence in Europe was found to range between 1 and 2 per 100,000 population (PWS, a sub-type of Histiocytosis, and EB) up to 42 per 100,000 population (Scleroderma). Overall, cost evidence on rare diseases appears to be very scarce (a total of 77 studies were identified across all diseases), with CF (n=29) and Haemophilia (n=22) being relatively well studied, compared to the other conditions, where very limited cost of illness information was available. In terms of data availability, total lifetime cost figures were found only across four diseases, and total annual costs (including indirect costs) across five diseases. Overall, data availability was found to correlate with the existence of a pharmaceutical treatment and indirect costs tended to account for a significant proportion of total costs. Although methodological variations prevent any detailed comparison between conditions and based on the evidence available, most of the rare diseases examined are associated with significant economic burden, both direct and indirect.

Economic burden of common variable immunodeficiency: annual cost of disease

Objectives: In the context of the unknown economic burden imposed by primary immunodeficiency diseases, in this study, we sought to calculate the costs associated with the most prevalent symptomatic disease, common variable immunodeficiency (CVID). Methods: Direct, indirect and intangible costs were recorded for diagnosed CVID patients. Hidden Markov model was used to evaluate different disease-related factors and Monte Carlo method for estimation of uncertainty intervals. Results: The total estimated cost of diagnosed CVID is US$274,200/patient annually and early diagnosis of the disease can save US$6500. Hospital admission cost (US$25,000/patient) accounts for the most important expenditure parameter before diagnosis, but medication cost (US$40,600/patients) was the main factor after diagnosis primarily due to monthly administration of immunoglobulin. Conclusion: The greatest cost-determining factor in our study was the cost of treatment, spent mostly on immunoglobulin replacement therapy of the patients. It was also observed that CVID patients’ costs are reduced after diagnosis due to appropriate management.

There are also lots of these kinds of studies; the second paper summarizes a number of them for this cluster of diseases:

A Spanish study reported that mean annual treatment costs for children and adult PID patients were €6,520 and €17,427, respectively. Total treatment costs spent on IVIg therapy procedures in Spain were approximately €91.8 million annually, of which 94% consisted of drug cost [27]. Another study conducted in Belgium estimated the annual costs for IVIg therapy on an average to be €12,550 [28].

Galli et al. [29] assessed the economic impact associated with method of treatment of PID patients in Italy. Regarding the monthly treatment costs associated with the treatment of a typical 20 kg child, the study reported antibiotic therapy cost of €58,000, Ig cost of €468,000 and patients’ hospitalizations cost of €300,000 for the IVIg method.

Haddad et al. [26] conducted a cost analysis study in the French setting and reported the total monthly treatment cost for a patient using hospital-based 20 g IVIg to be €1,192.19, in which approximately 57% of the total treatment cost was spent on Ig preparation and 39% on hospital admission charges. Another investigation on French PID patients demonstrated the yearly cost of hospital-based IVIg to be €26,880 per patient [30].

Other cost analysis studies comparing the direct cost impacts of Ig replacement methods reported annual per patient costs for hospital-based IVIg were US$14,124 in Sweden [31], €31,027 and €17,329 for adults and children in Germany, respectively [32], and €18,600 in the UK [33]. On the basis of one Canadian study, we found that total annual base case expenditure for hospital-based IVIg therapy of children and adults were $14,721 and $23,037 (in Canadian dollars), respectively. The annual per patient cost of Ig was 75%, the cost of physician and nurse care and hospital admission was 16% and the cost of time lost because of treatment was 8% [34].

The Genomic Health Of Ancient Hominins?

Davide Piffer reminded me that there is a study of ancient genomes’ health, which finds that:

The genomes of ancient humans, Neandertals, and Denisovans contain many alleles that influence disease risks. Using genotypes at 3180 disease-associated loci, we estimated the disease burden of 147 ancient genomes. After correcting for missing data, genetic risk scores were generated for nine disease categories and the set of all combined diseases. These genetic risk scores were used to examine the effects of different types of subsistence, geography, and sample age on the number of risk alleles in each ancient genome. On a broad scale, hereditary disease risks are similar for ancient hominins and modern-day humans, and the GRS percentiles of ancient individuals span the full range of what is observed in present day individuals. In addition, there is evidence that ancient pastoralists may have had healthier genomes than hunter-gatherers and agriculturalists. We also observed a temporal trend whereby genomes from the recent past are more likely to be healthier than genomes from the deep past. This calls into question the idea that modern lifestyles have caused genetic load to increase over time. Focusing on individual genomes, we find that the overall genomic health of the Altai Neandertal is worse than 97% of present day humans and that Otzi the Tyrolean Iceman had a genetic predisposition to gastrointestinal and cardiovascular diseases. As demonstrated by this work, ancient genomes afford us new opportunities to diagnose past human health, which has previously been limited by the quality and completeness of remains.

The authors themselves note the connection to the proposed recent dysgenic selection:

The genomic health of ancient individuals appears to have improved over time (Figure 3B). This calls into question the idea that genetic load has been increasing in human populations (Lynch 2016). However, there exists a perplexing pattern: ancient individuals who lived within the last few thousand years have healthier genomes, on average, than present day humans. This deviation from the observed temporal trend of improved genomic health opens up the possibility that deleterious mutations have accumulated in human genomes in the recent past. The data presented here do not provide adequate information to address this hypothesis, which we leave for future follow-up studies.

In other words, we expect the recent pattern to look something like this: long-run improvement in genomic health, followed by a downturn in the most recent generations as purifying selection relaxed.

June 24, 2017

There’s a lot more to the Mendelians

Filed under: Genomics, Reproductive genetics — Emil O. W. Kirkegaard @ 19:41

There’s a highly interesting new paper out:

(Mendelians are also known as monogenic disorders because they are inherited in patterns that follow Mendel’s laws.)

Abstract:

Discovering the genetic basis of a Mendelian phenotype establishes a causal link between genotype and phenotype, making possible carrier and population screening and direct diagnosis. Such discoveries also contribute to our knowledge of gene function, gene regulation, development, and biological mechanisms that can be used for developing new therapeutics. As of February 2015, 2,937 genes underlying 4,163 Mendelian phenotypes have been discovered, but the genes underlying ∼50% (i.e., 3,152) of all known Mendelian phenotypes are still unknown, and many more Mendelian conditions have yet to be recognized. This is a formidable gap in biomedical knowledge. Accordingly, in December 2011, the NIH established the Centers for Mendelian Genomics (CMGs) to provide the collaborative framework and infrastructure necessary for undertaking large-scale whole-exome sequencing and discovery of the genetic variants responsible for Mendelian phenotypes. In partnership with 529 investigators from 261 institutions in 36 countries, the CMGs assessed 18,863 samples from 8,838 families representing 579 known and 470 novel Mendelian phenotypes as of January 2015. This collaborative effort has identified 956 genes, including 375 not previously associated with human health, that underlie a Mendelian phenotype. These results provide insight into study design and analytical strategies, identify novel mechanisms of disease, and reveal the extensive clinical variability of Mendelian phenotypes. Discovering the gene underlying every Mendelian phenotype will require tackling challenges such as worldwide ascertainment and phenotypic characterization of families affected by Mendelian conditions, improvement in sequencing and analytical techniques, and pervasive sharing of phenotypic and genomic data among researchers, clinicians, and families.

The issue is worth discussing in a bit of detail, because there are implications for multiple important areas. First, the sheer scale:

Much remains to be learned. The HGP and subsequent annotation efforts have established that there are ∼19,000 predicted protein-coding genes in humans.9, 10 Nearly all are conserved across the vertebrate lineage and are highly conserved since the origin of mammals ∼150–200 million years ago,11, 12, 13 suggesting that certain mutations in every non-redundant gene will have phenotypic consequences, either constitutively or in response to specific environmental challenges. The continuing pace of discovery of new Mendelian phenotypes and the variants and genes underlying them supports this contention.

Humans have about 19k protein-coding regions (which I will call genes here), and these seem to be highly conserved over time, meaning that they are very likely to be functional in some important way. It’s possible that a large number of them are critical to life, meaning that any major disruption of them will inevitably cause spontaneous abortion (a common outcome for serious disorders; e.g. about 99% of Turner syndrome embryos are spontaneously aborted) or stillbirth. In a sense, then, these genetic disorders all share a few deadly phenotypes, and it will not be possible to find any phenotypes for these genes among the living. This is based on the strong assumption of total loss of function for the mutations, which is not realistic. The authors note that we have animal evidence:

Studies in mice with engineered loss-of-function (LOF) mutations suggest that the majority of the gene knockouts compatible with survival to birth are associated with a recognizable phenotype, whereas ∼30% of gene knockouts lead to in utero or perinatal lethality.50 Of the latter, it remains to be determined whether partial LOF mutations (i.e., hypomorphic alleles) or other classes of mutations (e.g., gain of function, dosage differences due to gene amplification,51 etc.) in the same genes might result in viable phenotypes. Nevertheless, given the high degree of evolutionary conservation between humans and mice, mutations in the majority of non-redundant human protein-coding genes are likely to result in Mendelian phenotypes, most of which remain to be characterized (Figure 2).

While many mutations are large deletions that cause total loss of function, many others are smaller deletions, repeat expansions (e.g. Huntington’s), duplications, or alternative versions (SNPs etc.; e.g. thalassemia). There can be multiple disorders per gene because different mutations can cause different problems, e.g. causing a protein to fold in two different, but both problematic, ways.

Overall, my takeaway is that 1) there is likely a very large number of Mendelians yet to be discovered among the living; 2) a substantial fraction of the genes currently not associated with any disorder are critical for life, i.e. mutations in them cause fetal death, so there is no phenotype to find among the living (perhaps 30%); 3) it will be hard to find many of the disorders that have relatively mild effects, both because of the inherent needle-in-a-haystack situation and because of measurement problems, especially with regard to copy number variants (different numbers of copies of a given sequence of DNA). Proper measurement of complex genetic variation (i.e. non-SNP) will probably require both improved mathematical modeling (I’m looking at you, sparsity/penalization) and sequencing. Sequencing is still terrible in a cost-benefit sense, but array data can get us a lot further until sequencing becomes price competitive.
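As a toy illustration of the sparsity/penalization modeling alluded to above, here is a sketch using the glmnet package in R. The genotype matrix, the number of causal variants, and the effect sizes are all simulated, purely to show the mechanics; real polygenic prediction is much harder than this.

```r
library(glmnet)
set.seed(1)

n <- 500; p <- 2000
X <- matrix(rbinom(n * p, 2, 0.3), n, p)        # toy genotypes coded 0/1/2
beta <- c(rnorm(20, 0, 0.3), rep(0, p - 20))    # only the first 20 variants are causal
y <- as.vector(X %*% beta + rnorm(n))           # simulated phenotype

fit <- cv.glmnet(X, y, alpha = 1)               # lasso with cross-validated penalty
sum(coef(fit, s = "lambda.min") != 0)           # variants (plus intercept) retained
```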

The burden of disease

In aggregate, clinically recognized Mendelian phenotypes compose a substantial fraction (∼0.4% of live births) of known human diseases, and if all congenital anomalies are included, ∼8% of live births have a genetic disorder recognizable by early adulthood.27 This translates to approximately eight million children born worldwide each year with a “serious” genetic condition, defined as a condition that is life threatening or has the potential to result in disability.28 In the US alone, Mendelian disorders collectively affect more than 25 million people and are associated with high morbidity, mortality, and economic burden in both pediatric and adult populations.28,29 Birth defects, of which Mendelian phenotypes compose an unknown but most likely substantial proportion, are the most common cause of death in the first year of life, and each year, more than three million children under the age of 5 years die from a birth defect, and a similar number survive with substantial morbidity. Beyond the emotional burden, each child with a genetic disorder has been estimated to cost the healthcare system a total of $5,000,000 during their lifetime.28,29

It remains a challenge to diagnose many Mendelian phenotypes by phenotypic features and conventional diagnostic testing. In a general clinical genetics setting, the diagnostic rate is ∼50%.30 Across a broader range of rare diseases, diagnostic rates are even lower. For example, in the NIH Undiagnosed Disease Program, the diagnostic rate was, despite state-of-the-art evaluations, 34% in adults and 11% in children.31 Moreover, the time to diagnosis is often prolonged (the “diagnostic odyssey”); in a European survey of the time to diagnosis of eight rare diseases, including cystic fibrosis (MIM: 602421) and fragile X syndrome (MIM: 309550), 25% of families waited between 5 and 30 years for a diagnosis, and the initial diagnosis was incorrect in 40% of these families.32

There are substantial gains to be had from cleaning these disorders out of our common human gene pool. I see no reason to expect rigid Mendelian causation (complete dominance), meaning that heterozygous copies of poor variants likely also cause some milder problems, which are then often missed due to the usual statistical and clinical problems. If we are unlucky, there will be many variants that show heterozygote superiority (such as the sickle cell variant for malaria resistance). I say unlucky because heterozygosity is brittle and would need constant management of reproduction to sustain. If both parents are heterozygous (AB × AB), 50% of their children will be heterozygous too (AB), but the other 50% will be split evenly between the two homozygous genotypes (AA and BB).
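A minimal sketch of that segregation in R, simply enumerating the four equally likely allele combinations from an AB x AB cross:

```r
# Offspring genotype frequencies from two heterozygous parents (AB x AB).
parent1 <- c("A", "B")
parent2 <- c("A", "B")
crosses <- outer(parent1, parent2, paste0)       # "AA" "BA" "AB" "BB"
genotypes <- vapply(crosses,
                    function(g) paste(sort(strsplit(g, "")[[1]]), collapse = ""),
                    character(1))
prop.table(table(genotypes))
#   AA   AB   BB
# 0.25 0.50 0.25
```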

Intervention relevance

Development of new therapeutics to address common diseases that constitute major public-health problems is limited by the ignorance regarding the fundamental biology underlying disease pathogenesis.60 As a consequence, 90% of drugs entering human clinical trials fail, commonly because of a lack of efficacy and/or unanticipated mechanism-based adverse effects.61 Studies of families affected by rare Mendelian phenotypes segregating with large-effect mutations that increase or decrease risk for common disease can directly establish the causal relationship between genes and pathways and common diseases and identify targets likely to have large beneficial effects and fewer mechanism-based adverse effects when manipulated. For example, certain Mendelian forms of high and low blood pressure are due to mutations that cause increases and decreases, respectively, in renal salt reabsorption and net salt balance; these discoveries identified promising new therapeutic targets, such as KCNJ1 (potassium channel, inwardly rectifying, subfamily J, member 1 [MIM: 600359]), for which drugs are now in clinical trials. Understanding the role of salt balance in blood pressure has provided the scientific basis for public-health efforts in more than 30 countries to reduce heart attacks, strokes, and mortality by modest reduction in dietary salt intake.62 Similarly, understanding the physiological effects of CFTR (cystic fibrosis transmembrane conductance regulator [MIM: 602421]) mutations responsible for cystic fibrosis has led to allele-specific therapies that significantly improve pulmonary function in affected individuals.63 Other common-disease drugs based on gene discoveries for Mendelian phenotypes (e.g., orexin antagonists for sleep,64 beta-site APP-cleaving enzyme 1 [BACE1] inhibitors for Alzheimer dementia,65 proprotein convertase, subtilisin/kexin type 9 [PCSK9] monoclonal antibodies to lower low-density lipoprotein levels66) are undergoing advanced clinical trials. Discoveries such as these will directly facilitate the goals of the Precision Medicine Initiative.67

This is an instance of a more general pattern: very complex systems are really hard to understand from first principles. However, by carefully observing existing variation in them, it is possible to make (more) plausible and testable hypotheses about the causal pathways that can be used for interventions. It seems very likely that improved knowledge of Mendelian genetics would allow us to propose and test hypotheses with higher prior probabilities, which in the end leads us to identify working interventions more easily. To put it metaphorically, there are too many places to dig for gold (= a very low, uniform prior landscape), but by using existing genetic variation, we can figure out where it is promising to dig (= a more spiky landscape).

If we want to get serious

If we want to get serious about finding important genetic variation, we should genotype all humans for whom we have reasonable phenotype data. The Nordic countries are the perfect place to begin because they are extremely rich (despite recent dysfunctional immigration) and, importantly, they have very comprehensive ‘social security number’-like systems that cover everybody in the country and are linked to virtually all other public databases, including medical ones. This means that every diagnosis and every (prescription) treatment is already in the same linkable database. Researchers are beginning to use these data, but they are unfortunately taking a candidate-gene-like approach by only testing for specific relationships, instead of taking a phenome-/‘treatome’-wide approach akin to GWASs. Still, they produce interesting studies such as:

Antipsychotics, mood stabilisers, and risk of violent crime

Methods
We used linked Swedish national registers to study 82 647 patients who were prescribed antipsychotics or mood stabilisers, their psychiatric diagnoses, and subsequent criminal convictions in 2006–09. We did within-individual analyses to compare the rate of violent criminality during the time that patients were prescribed these medications versus the rate for the same patients while they were not receiving the drugs to adjust for all confounders that remained constant within each participant during follow-up. The primary outcome was the occurrence of violent crime, according to Sweden’s national crime register.
Findings
In 2006–09, 40 937 men in Sweden were prescribed antipsychotics or mood stabilisers, of whom 2657 (6·5%) were convicted of a violent crime during the study period. In the same period, 41 710 women were prescribed these drugs, of whom 604 (1·4 %) had convictions for violent crime. Compared with periods when participants were not on medication, violent crime fell by 45% in patients receiving antipsychotics (hazard ratio [HR] 0·55, 95% CI 0·47–0·64) and by 24% in patients prescribed mood stabilisers (0·76, 0·62–0·93). However, we identified potentially important differences by diagnosis—mood stabilisers were associated with a reduced rate of violent crime only in patients with bipolar disorder. The rate of violence reduction for antipsychotics remained between 22% and 29% in sensitivity analyses that used different outcomes (any crime, drug-related crime, less severe crime, and violent arrest), and was stronger in patients who were prescribed higher drug doses than in those prescribed low doses. Notable reductions in violent crime were also recorded for depot medication (HR adjusted for concomitant oral medications 0·60, 95% CI 0·39–0·92).

Medication for Attention Deficit–Hyperactivity Disorder and Criminality

Methods
Using Swedish national registers, we gathered information on 25,656 patients with a diagnosis of ADHD, their pharmacologic treatment, and subsequent criminal convictions in Sweden from 2006 through 2009. We used stratified Cox regression analyses to compare the rate of criminality while the patients were receiving ADHD medication, as compared with the rate for the same patients while not receiving medication.

Results
As compared with nonmedication periods, among patients receiving ADHD medication, there was a significant reduction of 32% in the criminality rate for men (adjusted hazard ratio, 0.68; 95% confidence interval [CI], 0.63 to 0.73) and 41% for women (hazard ratio, 0.59; 95% CI, 0.50 to 0.70). The rate reduction remained between 17% and 46% in sensitivity analyses among men, with factors that included different types of drugs (e.g., stimulant vs. nonstimulant) and outcomes (e.g., type of crime).

These studies are not as good as RCTs because, while they control for any confounder that is stable across a person’s lifespan (including genetics and upbringing), they do not control for time-variant confounders. The most obvious one here is reverse causality from fluctuations between good and bad periods/moods, which cause people both to act foolishly and to stop taking their drugs. I know some schizophrenics and this seems plausible to me to some extent.

If we really, really want to get serious, we should start randomizing treatment across the entire health system. This can be done more easily when one has publicly paid health care (take that, capitalism!). Here I don’t mean randomizing between placebo and actual treatments (placebo doesn’t work for non-subjective problems); I mean randomizing patients between different treatments for the same problem. In fact, treatment comparison studies are rarely conducted, so we often don’t actually know which treatment is better (see e.g. the discussion of methylphenidate vs. amphetamine for ADHD). Essentially, the way doctors currently work is that when they have a specific condition they want to treat, they semi-randomly, maybe in conjunction with the patient, choose a treatment plan. Instead, they could ask the patient whether they want to take part in the nationwide experiment for the good of all (maybe we can give such patients a bumper sticker to use for social signaling, just like we do for blood and organ donors now). Many will consent to this, and the doctor will then ask the computer which treatment the patient should get. The computer picks one at random, saves this info in the database, and things move on as usual. This way, we can automatically collect very large datasets of randomized data for any condition where we have multiple approved treatments and we’re unsure which is better. With the added genomic data, we can do pharmacogenomics too (i.e. look for genotype x treatment interactions).
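As a minimal sketch of the allocation step, assuming a hypothetical pair of approved ADHD treatments and a plain data frame standing in for the linked national register:

```r
set.seed(1)

# Randomly allocate a patient to one of the approved treatments and log the choice.
assign_treatment <- function(patient_id, condition, treatments, register) {
  chosen <- sample(treatments, 1)   # uniform randomization among approved options
  rbind(register,
        data.frame(patient_id = patient_id,
                   condition  = condition,
                   treatment  = chosen,
                   date       = Sys.Date()))
}

register <- data.frame()            # stands in for the linked national database
register <- assign_treatment(1001, "ADHD", c("methylphenidate", "amphetamine"), register)
register <- assign_treatment(1002, "ADHD", c("methylphenidate", "amphetamine"), register)
register
```

In a real system the rows would of course be written to the national database rather than a local data frame, and the analysis would then compare outcomes across the randomized arms.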

April 12, 2017

Hollywood eugenics and actual eugenics

Filed under: History, Reproductive genetics — Emil O. W. Kirkegaard @ 14:45

I oppose coercive eugenic policies, so don’t try to quote mine my blunt discussion as support for such policies.

Hollywood has a new anti-eugenics movie out, called The Thinning. There’s a trailer, which will likely spoil the entire movie for you, but I reckon the movie will be crappy anyway.

For those too lazy to watch, it goes like this. To avoid dysgenics/Idiocracy, every generation has to take some standardized IQ/SAT-like test, and those who do very poorly on it are not merely sterilized but executed. The movie is set in a near-future society that has used up a lot of its natural resources, so there’s also a need to keep the population size down (apparently the makers don’t realize this happens automatically due to below-replacement fertility in every European country). There’s also some corruption going on with the testing. Of course, the protagonists are among those deemed unfit, and naturally they don’t want to get executed, so they go on the run. The movie is essentially a new Gattaca.

In the light of this, it seems in order to actually read what the actual eugenicists said. Why not begin with Galton, the originator? Here’s Galton writing on eugenics in 1904 in his paper Eugenics: Its Definition, Scope, and Aims:

5. [the 5th aim] Persistence in setting forth the national importance of eugenics. There are three stages to be passed through: (1) It must be made familiar as an academic question, until its exact importance has been understood and accepted as a fact. (2) It must be recognized as a subject whose practical development deserves serious consideration. (3) It must be introduced into the national conscience, like a new religion. It has, indeed, strong claims to become an orthodox religious tenet of the future, for eugenics co-operate with the workings of nature by securing that humanity shall be represented by the fittest races. What nature does blindly, slowly, and ruthlessly, man may do providently, quickly, and kindly. As it lies within his power, so it becomes his duty to work in that direction. The improvement of our stock seems to me one of the highest objects that we can reasonably attempt. We are ignorant of the ultimate destinies of humanity, but feel perfectly sure that it is as noble a work to raise its level, in the sense already explained, as it would be disgraceful to abase it. I see no impossibility in eugenics becoming a religious dogma among mankind, but its details must first be worked out sedulously in the study. Overzeal leading to hasty action would do harm, by holding out expectations of a near golden age, which will certainly be falsified and cause the science to be discredited. The first and main point is to secure the general intellectual acceptance of eugenics as a hopeful and most important study. Then let its principles work into the heart of the nation, which will gradually give practical effect to them in ways that we may not wholly foresee. [my emphasis]

So we can note a few things:

First, Galton is essentially arguing for a humane eugenics, not executions etc. In fact, Galton warned against “overzeal” that “would do harm” and “cause the science to be discredited”! He probably did not imagine the horrors of Nazi extermination policies, but it’s a fitting warning given what actually happened. The related science still suffers from the discredit that followed the horrors of Nazism. For the purposes of improving the gene pool, it is not necessary to kill anyone.

Before reproductive genetics, the only option was to alter the rates of breeding. This can be done in many ways and does not necessarily have to involve coercive sterilization policies. One could pay people with better genes to have children or pay people with bad genes not to have children. Payment need not be direct, but could be done via tax exemptions and so on. Luckily for us, the speed of dysgenics has been slow, so we probably don’t need to do anything authoritarian or even economic; we can just promote the use of embryo selection and genetic engineering.

The Nazi program to kill genetically defective Germans seems to have been aimed, in part, at opening up hospital space for the injured their warring would create. For a history of eugenics, I recommend reading Eugenics and the Welfare State: Sterilization Policy in Denmark, Sweden, Norway, and Finland. Curiously, given the modern political climate, eugenic laws in Scandinavia were generally implemented by progressive social democrats (left-wing) and opposed by religious conservatives. After all, eugenics is about improving the gene pool, so it’s a collective matter. Eugenic policies go hand in hand with welfare states. If one has too many people who have to be taken care of due to genetic defects (e.g. the blind), then welfare policies are not possible.

Second, the part about races is not literally about Europeans/Caucasians, Asians, Africans etc., but uses the word in the looser historical sense of “the race of man”. I’ve read a number of Galton’s works and it’s not my impression that he was advocating for the Earth to be inherited by essentially one racial group, which would be an extreme version of racial supremacy politics. At least, that’s my understanding of his views.

Third, the text reflects Galton’s idea of imbuing religions with eugenic aims so as to have people follow the principles. To most people today this seems weird, but in the early 1900s, when the world was much more religious, it seemed more reasonable. We can also note that many religions and cultures did ban marriages between closely related people, especially within the nuclear family (siblings, parent-offspring). This prevents inbreeding, so it is eugenic. Of course, other religions and cultures promoted (first-)cousin marriages, which are quite bad in terms of inbreeding problems.

Here’s a poster/background to use/spread with a direct quote.

March 8, 2017

Comments on Gwern’s Counteracting Dysgenics

Filed under: Differential psychology/psychometrics, Genomics, Reproductive genetics — Emil O. W. Kirkegaard @ 19:26

https://www.gwern.net/Embryo%20selection#counteracting-dysgenics

Gwern has a very long and detailed post/page on embryo selection and related matters. You should definitely read it. He has just added a new section on counteracting dysgenics — the genetic evolution of undesirable traits — and asked for my comments/thoughts.

The background for the discussion is the recent genomic findings supporting the dysgenic claims made by Woodley, Lynn, Meisenberg etc. To me these findings are not surprising at all, as I considered the case proven using the non-genomic data. The nice thing about genomic data is that it allows more precise and direct estimates of the magnitude of dysgenics, as well as, of course, easy estimation of the selection across all traits in a sample, provided that one has genomic models for them. The evolution of one trait does not depend solely on selection on that trait, but also on selection on genetically correlated traits (analogous to indirect range restriction in employment testing).

However, given the early stage of our understanding of the genetic architecture of general cognitive ability (GCA), our ability to predict trait levels from DNA is not very good yet. So far, the best we have achieved in terms of genomic prediction is a correlation of about .30 with educational achievement (a trait that correlates ~.80 with GCA). In examining the magnitude of dysgenics for GCA, one has to take this imperfect predictive validity into account, as well as confounders such as survival bias in the datasets used. Accounting for survival bias is not easy, I think.

The point of Gwern’s new comments is to consider how strongly we would need to select for GCA merely to counteract the dysgenic selection on this trait. For embryo selection, this problem can be split into two parts: how prevalent embryo selection must be and how effective it must be. The latter can be estimated from available data, but the former is a task for the Good Judgment Project. Gattaca was in many ways prescient.

Estimating effectiveness

Gwern goes on to do some quick estimates of how effective embryo selection is, assuming various parameters for uptake, eggs per extraction, and genomic prediction validity. Uptake of IVF — an essential part of embryo selection — is currently around 1% in the USA. As for genomic prediction validity, the hardest traits to select for are those that are difficult to predict from genomic data: those with low heritability, strongly non-additive heritability, or heritability based mainly on rare variants, for which it is difficult to build good predictive models.
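For a rough sense of scale, the population-level shift per generation is approximately uptake times the average gain among users. The per-user gain below is a made-up illustrative figure, not a number taken from Gwern's page:

```r
uptake        <- 0.01    # ~1% of births via IVF, as mentioned above
gain_per_user <- 3       # hypothetical IQ-point gain per selected embryo
uptake * gain_per_user   # ~0.03 IQ points per generation at the population level
```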

If one has to rely on phenotype selection, the trait distribution also matters a lot. A trait like schizophrenia, which is highly heritable (~80%) but very rare, is hard to select against effectively. Even if we prevented the entire affected population from breeding, this would only reduce the prevalence by about 5%. This reminds me of trying to weed out recessive disorders in a population, which is also very ineffective when based on phenotypes. For continuous traits such as GCA or semi-continuous traits such as educational attainment, the phenotype method will work alright.
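The ~5% figure can be checked with a liability-threshold calculation. The sketch below assumes a prevalence of 1% and treats the ~80% heritability as additive heritability of liability; both are round illustrative values.

```r
K  <- 0.01                      # assumed prevalence of schizophrenia
h2 <- 0.80                      # assumed additive heritability of liability
t  <- qnorm(1 - K)              # liability threshold
mean_affected <- dnorm(t) / K   # mean liability of affected individuals

S <- -K * mean_affected / (1 - K)   # selection differential if no affected person breeds
R <- h2 * S                         # shift of mean liability in the next generation

K_next <- 1 - pnorm(t - R)          # new prevalence (threshold fixed, mean shifted by R)
c(old = K, new = K_next, relative_drop = 1 - K_next / K)   # drop of roughly 5-6%
```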

Gwern estimates that if only the top 33% were allowed to breed, GCA would increase by about 12.93 IQ points per generation. However, he seems to have made two errors. First, the use of .66 instead of 2/3 (.666…) in the simulation for estimating the impact of selection for GCA. Second, the use of a heritability of .80 in the same equation. The breeder’s equation takes the additive heritability of a trait, and this value is unlikely to be .80 for GCA; values of .50-.71 are usually assumed for this trait. If we use .60, we get a predicted gain of 9.82 IQ points. Still very large.
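For reference, here is the calculation in R with the numbers used above (top third selected, additive heritability .60). This is just the textbook breeder's equation R = h²S, not a reproduction of Gwern's simulation:

```r
p     <- 1/3                          # proportion allowed to breed (top 33%)
sd_iq <- 15
i <- dnorm(qnorm(1 - p)) / p          # selection intensity: mean of the selected, in SD units
S <- i * sd_iq                        # selection differential, ~16.4 IQ points

h2_additive <- 0.60                   # assumed additive heritability
R <- h2_additive * S
round(R, 2)                           # ~9.82 IQ points per generation
```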

Politics

Regular phenotype-based selection is definitely politically infeasible to implement insofar as it involves putting restrictions on who breeds. This is essentially because the past use of sterilization programs is very, very unpopular, presumably because mumble mumble Nazi. If we are to be realistic, one will have to think of non-coercive approaches. Some such approaches involve creating economic incentives to increase the relative fertility of persons with better genomes. This could involve money transfers for children based on indicators of genomic quality, tax cuts, and so on. Any such proposal would presumably be attacked immediately on the grounds that it increases economic dispersion by giving benefits — directly or indirectly — to richer people (who have higher-quality genomes). As such, this does not seem to be a very promising policy route to pursue.

The most promising route to pursue — it seems to me — is to try to establish government funding for reproductive genomics science and, once companies offer such services, government funding for having them done. Optimally, make them free or even pay poor people to use them. History shows that uptake of new technologies is slowest among the poorer and lower-GCA population, and research shows that this population is also the most costly in terms of social resources. As such, making it a prime target for genomic interventions seems a good idea. Paying prospective parents with poor-quality genomes — low GCA, bad temper, laziness, obesity, diabetes, etc. — to use such interventions might save a lot of money in the future.

October 1, 2016

Medical geneticists on PGD for non-medical purposes

Filed under: Ethics, Reproductive genetics — Emil O. W. Kirkegaard @ 16:16

Quoted from Textbook of Human Reproductive Genetics, chapter 13:

PGD within the “autonomy model”

According to what may be called the “autonomy model,” prospective parents are free to use PGD in order to select embryos on the basis of any characteristic they prefer, whether health related or not. Opponents argue that selecting for non-medical characteristics violates the autonomy of the future child as the child is reduced to an object of parental ambitions and ideals. But would embryo selection on the basis of characteristics that do not limit the possible life plans of the future child or that are useful in carrying out almost any life plan (“general purpose means”) really violate the future child’s autonomy? Should one not say that prospective parents undermine the ethical standard only when they deliberately try to direct the child toward a predetermined life? Anyway, the technical possibilities to use embryo selection for “superbabies,” whatever that may be, are regularly widely exaggerated in the mass media.

A paradigm case for the autonomy model in the current context is PGD/sex-selection for non-medical reasons. Sex selection for non-medical reasons is prohibited in many countries. From an ethical point of view, however, this is not evident [17]. Even though individual requests may stem from discriminatory attitudes or stereotyping views regarding the difference between boys and girls, it does not follow that sex selection for non-medical reasons is inherently sexist [18]. The fear that allowing it will result in a distortion of the sex ratio does not seem convincing either, at least not in Western countries, where a preference for boys is weak or absent. Moreover, the suggestion that sex selection for non-medical reasons will reinforce gender stereotypes to the detriment of children’s development and women’s position in society, are speculative at best. Since the conclusion must be that arguments against allowing sex-selection for non-medical reasons are weak, banning the practice may amount to an unjustified infringement of reproductive freedom. However, even if sex selection (limited perhaps to “family balancing”) may be acceptable in itself, a further question still concerns the proportionality of the means. Clearly, the use of preconception sperm selection technologies for this purpose (if safe and effective) is more easily justified than PGD [17].

A second case is PGD for “dysgenic” reasons. The paradigm case regards a deaf couple’s request of PGD in order to selectively transfer embryos affected with (non-syndromic) deafness. The couple may point to psychosocial and developmental risks of hearing children growing up with (two) deaf parents. Concerns include that (young) hearing children will have difficulties in understanding the implications of their parents’ disability and related behavior, that deaf parents will have only limited access to the experiences of hearing children, and that there is a risk of role inversion. Furthermore, applicants may argue that “deafness is not a handicap, but just a variant on the spectre of normalcy.” After all, deaf people have their own rich culture and their own (non-verbal) language. One can reasonably doubt, however, whether the “just a variant” view is tenable; after all, outside the microcosmos of the deaf subculture, deafness is a disability which causes a variety of serious and lifetime challenges. Though deaf people still can (and usually do) live a reasonably happy life, selection for deafness is at odds with the professional responsibility of the reproductive doctor [11]. The couple’s relational concerns should be tackled by educational support and advice, not by “dysgenic” PGD. Interestingly, ongoing technology development may contribute to solving the current moral puzzle. Until now, cochlear implants are controversial, amongst other things, because their success is patchy. However, when the perfect version of the cochlear implant would become available in the future, parents will clearly harm a child they leave deaf. To select for a deaf child, then, becomes self-defeating.

People often use double standards for genetic vs. non-genetic interventions. For instance, in the above, there is talk of a violation of “the autonomy of the future child as the child is reduced to an object of parental ambitions and ideals”. Curiously, no one complains about parenting violating the same, despite the obvious attempt of parents to mold their children (not with too much success). Funny how parents are allowed to try to mold their children but only if it involves non-genetic means!

See also Nick Bostrom’s The Reversal Test: Eliminating Status Quo Bias in Applied Ethics.

September 5, 2016

Embryo selection and genetic correlated traits: A reanalysis

Filed under: Genetics / behavioral genetics, Math/Statistics, Reproductive genetics — Emil O. W. Kirkegaard @ 23:12

There seem to be ways to post knitr documents to WordPress blogs, but until that’s set up, I will be publishing them over at RPubs and posting a link here. The post begins like this:

In a post published on his website, Gwern investigates the efficiency of embryo selection. It’s impressive work. In a later revision, he added a simulation that examines the effects of selecting for genetically correlated traits.

He relies on real data about genetic correlations and has selected a set of 35 traits of interest. He finds that when one considers selection on a composite of the 35 traits, one can make astonishing gains (mean gain 3.64 Z), which become even larger when one takes into account the genetic correlations between traits (5.20 Z), which tend to be favorable (i.e. positive correlations between traits we want). Unfortunately, this conclusion is based on a couple of mistakes. One can see this intuitively by considering that if one has 10 embryos to select from, one cannot, on average, do better than the best of 10 draws (roughly the top tenth of the embryo distribution), no matter whether one selects on a single trait or a composite, so there must be a mistake somewhere.
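A quick illustration of that ceiling in R: the expected maximum of 10 standard normal draws is only about 1.54 SD, and in practice the relevant spread among sibling embryos is much smaller than a full population SD, so gains of several Z are not attainable.

```r
set.seed(42)
n_embryos <- 10
sims <- replicate(1e5, max(rnorm(n_embryos)))   # best of 10 draws, repeated many times
mean(sims)                                      # ~1.54 SD, nowhere near 3.6-5.2 SD
```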
