Clear Language, Clear Mind

August 8, 2017

WHO on genomics and health, 2002

Filed under: Medicine, Politics, Sociology — Emil O. W. Kirkegaard @ 20:48

I have been tweeting annotated snippets from a WHO report I’m reading. Like this:

Basically, the report does a decent job of summarizing the state of the art in 2002 and has some interesting notes for the future. It also contains a shit ton of socialist politics about reducing inequalities in health, both within and between countries, especially between. Perhaps this is a reflection of their ‘medical’ approach to health (fix problems so that everybody attains healthy status) instead of the ‘optimizing’ approach, where the goal is just to generally improve health without any particular focus on reducing inequality (which might even increase it).

I was asked to upload my annotated copy. Here goes:

June 13, 2017

New paper out: Admixture in Argentina (with John Fuerst)

We have a new big paper out:

  • Kirkegaard, E. O. W., & Fuerst, J. (2017). Admixture in Argentina. Mankind Quarterly, 57(4). Retrieved from http://mankindquarterly.org/archive/issue/57-4/4

Abstract

Analyses of the relationships between cognitive ability, socioeconomic outcomes, and European ancestry were carried out at multiple levels in Argentina: individual (max. n = 5,920), district (n = 437), municipal (n = 299), and provincial (n = 24). Socioeconomic outcomes correlated in expected ways such that there was a general socioeconomic factor (S factor). The structure of this factor replicated across four levels of analysis, with a mean congruence coefficient of .96. Cognitive ability and S were moderately to strongly correlated at the four levels of analyses: individual r=.55 (.44 before disattenuation), district r=.52, municipal r=.66, and provincial r=.88. European biogeographic ancestry (BGA) for the provinces was estimated from 25 genomics papers. These estimates were validated against European ancestry estimated from self-identified race/ethnicity (SIRE; r=.67) and interviewer-rated skin brightness (r=.33). On the provincial level, European BGA correlated strongly with scholastic achievement-based cognitive ability and composite S-factor scores (r’s .48 and .54, respectively). These relationships were not due to confounding with latitude or mean temperature when analyzed in multivariate analyses. There were no BGA data for the other levels, so we relied on %White, skin brightness, and SIRE-based ancestry estimates instead, all of which were related to cognitive ability and S at all levels of analysis. At the individual level, skin brightness was related to both cognitive ability and S. Regression analyses showed that SIRE had little detectable predictive validity when skin brightness was included in models. Similarly, the correlations between skin brightness, cognitive ability, and S were also found inside SIRE groups. The results were similar when analyzed within provinces. In general, results were congruent with a familial model of individual and regional outcome differences.

Afterthoughts

We carried out our usual thorough analysis of the predictions of genetic models of cognitive ability/social inequality with regard to admixture. We combined a variety of data sources to estimate mean racial admixture by subnational unit and related these to estimates of cognitive ability (CA) and S factor scores. In this case, we were also able to find individual-level skin tone/color data, as well as really crude cognitive ability data (2-5 items) and a decent number of social measures (>10). All in all, everything was more or less as expected: substantial correlations between European ancestry, CA, and S, and some relationships to skin tone as well. The most outlying results were those for the smaller subnational units (districts, municipalities), for which our estimates of European ancestry based on SIRE were not strongly related to CA/S. Presumably this was due to a variety of factors, including sampling error, SIRE x location interactions for predicting ancestry (as seen in Brazil), and ancestry x location interactions for predicting CA/S.
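For those curious about the mechanics, below is a minimal R sketch (not the paper’s actual code) of the kind of analysis described above. It assumes hypothetical data frames district_vars and municipal_vars holding the same socioeconomic indicators aggregated at two levels, plus hypothetical vectors district_ca and district_ancestry with cognitive ability and European ancestry estimates per district.

library(psych)

# Extract a general socioeconomic (S) factor at each level of analysis
s_district  = fa(district_vars,  nfactors = 1)
s_municipal = fa(municipal_vars, nfactors = 1)

# Does the S factor structure replicate across levels? (congruence coefficient)
factor.congruence(s_district$loadings, s_municipal$loadings)

# Relate district-level S factor scores to cognitive ability and European ancestry
cor(s_district$scores, district_ca,       use = "pairwise.complete.obs")
cor(s_district$scores, district_ancestry, use = "pairwise.complete.obs")

# Correction for attenuation, as used for the individual-level CA x S correlation
# (.44 observed, .55 after disattenuation): r_corrected = r / sqrt(rel_x * rel_y)
disattenuate = function(r, rel_x, rel_y) r / sqrt(rel_x * rel_y)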

The paper is thus another replication of the general patterns we have seen in most of the other places we have looked. There are still some large American countries left to cover (e.g. Bolivia, Venezuela), but it is hard to get decent data for them. We will probably have to rely on the LAPOP survey to estimate many of them.

Some figures of interest

Maps for those who like them.

Main regressions.

April 16, 2017

AI stereotype accuracy too

Filed under: Differential psychology/psychometrics, Immigration — Emil O. W. Kirkegaard @ 05:54

Humans have stereotypes, that is, they hold beliefs about groups of people. When examined, these beliefs tend to reflect reality quite well, sometimes very well (for a review, see Jussim’s book). Below is a figure from our own study (Kirkegaard and Bjerrekær, 2016). We recruited a sample of 500 nationally representative Danes and filtered away those who failed control questions. We had everybody try to estimate how large a percentage of persons in each of 70 national groups were receiving welfare benefits in Denmark. We then compared their estimates with the true values from Statistics Denmark. The plot shows the average estimates against the true values.

As can be seen, the accuracy was overwhelming, at r = .70. Such large relationships are very rarely seen in social science.
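The aggregate accuracy figure is simple to compute. Here is a rough sketch in R, assuming hypothetical vectors estimated_pct (the mean estimated welfare rate for each of the 70 national groups) and true_pct (the registry values); this is not the original analysis code.

cor(estimated_pct, true_pct)   # aggregate stereotype accuracy (about .70 in our data)

# Dispersion check: a ratio below 1 means the estimates vary less than the
# true values, i.e. the stereotypes are too moderate rather than exaggerated
sd(estimated_pct) / sd(true_pct)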

Recently there has been talk of how algorithms used to make predictions are ‘racist’ and ‘sexist’ because they take race and sex into account when making the predictions. This is to say that these social categories have non-zero incremental validity, so ignoring them means deliberately making worse predictions than one could, which is prima facie irrational. The situation is not even simple from an ethical perspective. Most (US) black crime is targeted against other blacks, so by e.g. granting blacks higher rates of probation, one risks releasing them only to have them commit more crimes against other blacks. Which policy is best for the black community at large?
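To make the incremental validity point concrete, here is a toy R illustration, using a hypothetical data frame df with a recidivism outcome and made-up predictors; none of this refers to a real dataset or algorithm.

# Model without the demographic variable vs. model with it
m_base = lm(recidivism ~ prior_record + age,         data = df)
m_full = lm(recidivism ~ prior_record + age + group, data = df)

anova(m_base, m_full)                                    # does group add predictive validity?
summary(m_full)$r.squared - summary(m_base)$r.squared    # incremental R^2 from group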

The fact that non-humans also find that these social groups have non-zero validity is hard to square with certain models of stereotypes. Some people believe that stereotypes are massively affected by allegedly biased representation of the same social groups in e.g. TV shows. The theory is that the stereotypes on TV cause people to adopt stereotypes, which then cause social inequality in various ways, usually through discrimination. This view is hard to square with the fact that AIs and statistical algorithms also find these cues to be useful despite not watching TV shows and not having any understanding of what they are doing. This finding fits better with the accuracy model of stereotypes, which is that humans are able to identify useful statistical cues from their own experiences and from the culture more broadly speaking. Culture, on this view, reflects reality rather than creates it. Of course, there may also be some very inaccurate stereotypes, but in general they tend to have nontrivial accuracy because they are caused by real differences. We may note that many studies of stereotypes in culture find that these are too moderate relative to reality. For instance, Dalliard (2014) notes:

A better test of racial bias in television — and also more pertinent to Kaplan’s concerns about children’s television viewing — is the portrayal of race in fictional crime shows. Unlike news programs, such shows are not constrained by verisimilitude, which means that gross racial biases in the portrayal of crime are possible. However, studies have consistently found that compared to real-life crime statistics, blacks are underrepresented among criminal offenders in crime dramas, while whites are greatly overrepresented (Potter et al., 1995; Eschholz, 2002; Eschholz et al., 2004; Deutsch & Cavendar, 2008; Case, 2013). For example, 75 percent of the violent offenders and suspects in the 2000-01 season of Law & Order were white, whereas in the late 1990s only 13 percent of real-life violent crime suspects were white in New York City where the show was set. For black offenders and suspects, the proportions were 14 percent in the fictional world of Law & Order versus 51 percent in real life. (Eschholz et al., 2004, Table 1.)

I note that stereotype underestimation of real differences is 1) commonly found, including in our study above, where stereotypes varied 38% less than the real differences (i.e. were not extreme enough), 2) predicted by the accuracy model, because if stereotypes are imperfect reflections of reality they should be closer to the mean than reality, and 3) not predicted by the stereotypes-cause-inequality model, because by the same logic the cause should show larger dispersion than the effect.

Note, though, that one possible reply from the usual stereotypes-as-inaccurate-causes-of-social-inequality position is that the reason computers pick up these cues is that the cues are accurate, but only because of the human stereotypes which cause the inequality. This hypothesis is harder to test, but one can note that it is hard to explain the consistency of stereotypes across societies on this model. How come stereotypes just happen to be very similar — e.g. for sex — more or less everywhere, while at the same time we find more or less the same sex differences everywhere too? If these were caused by contingent cultural causes, why are they so… universal? Universality suggests a single universal cause — genetics — not many independent causes that happen to align.

March 23, 2016

New papers out! Admixture in the Americas

I forgot to post this blog post at the time of publication as I usually do. However, here it is.

As explored in some previous posts, John Fuerst and I have spent about 1.25 years (!) producing a massive article: the published version runs 119 pages, 25k words without the references, and 159k characters incl. spaces. We received 6 critical comments from other scholars, to which we produced a 57-page reply with new analyses. The first article was chosen as a target article in Mankind Quarterly, and I recommend reading all the papers. Unfortunately, they are behind a paywall, except for ours:

  • Target paper: https://www.researchgate.net/publication/298214364_Admixture_in_the_Americas_Regional_and_National_Differences
  • Reply paper: https://www.researchgate.net/publication/298214289_The_Genealogy_of_Differences_in_the_Americas

June 16, 2015

The general socioeconomic factor among Colombian departments

Abstract

A dataset was compiled with 17 diverse socioeconomic variables for 32 departments of Colombia and the capital district. Factor analysis revealed an S factor. Results were robust to data imputation and removal of a redundant variable. 14 of 17 variables loaded in the expected direction. Extracted S factors correlated about .50 with the cognitive ability estimate. The Jensen coefficient for the S factor for this relationship was .60.

Keywords: Colombia, departments, social inequality, S factor, general socioeconomic factor, IQ, intelligence, cognitive ability, PISA, cognitive sociology

Files

https://osf.io/92vqd/files/
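For reference, the Jensen coefficient reported in the abstract comes from the method of correlated vectors. Below is a minimal R sketch, assuming a hypothetical data frame dept_vars of socioeconomic indicators per department and a hypothetical vector dept_ca of cognitive ability estimates; this is not the paper’s code.

library(psych)

s_fit    = fa(dept_vars, nfactors = 1)   # extract the S factor
loadings = as.vector(s_fit$loadings)     # indicator loadings on S
ca_cors  = apply(dept_vars, 2, cor, y = dept_ca, use = "pairwise.complete.obs")  # indicator x CA correlations

cor(loadings, ca_cors)   # the Jensen coefficient (.60 in the paper)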

January 10, 2015

Intelligence, income inequality and prison rates: It’s complicated

There was some talk on Twitter around prison rates and inequality:

And IQ and inequality:

But then what about prison data beyond those given above? I have downloaded the newest data from the ICPS (rate data, not totals).

Now, what about all three variables?

# Load mega20d as the data file
library(Hmisc)  # for rcorr()
ineqprisoniq = subset(mega20d, select = c("Fact1_inequality", "LV2012estimatedIQ", "PrisonRatePer100000ICPS2015"))
rcorr(as.matrix(ineqprisoniq), type = "spearman")  # Spearman correlations with pairwise sample sizes
                            Fact1_inequality LV2012estimatedIQ PrisonRatePer100000ICPS2015
Fact1_inequality                        1.00             -0.51                        0.22
LV2012estimatedIQ                      -0.51              1.00                        0.16
PrisonRatePer100000ICPS2015             0.22              0.16                        1.00

n
                            Fact1_inequality LV2012estimatedIQ PrisonRatePer100000ICPS2015
Fact1_inequality                         275               119                         117
LV2012estimatedIQ                        119               275                         193
PrisonRatePer100000ICPS2015              117               193                         275

So IQ is slightly positively related to prison rates (.16), and so is inequality (.22). Positive? Isn’t it bad having people in prison? Not necessarily, if the alternative is having them dead, i.e. a system where the punishment for most crimes is death. Although one need not be as excessive as the US is. Somewhere in the middle is perhaps best?

What if we combine them into a model?

library(QuantPsyc)  # for lm.beta()
library(car)        # for scatterplot()

# Regress prison rates on inequality and IQ
model = lm(PrisonRatePer100000ICPS2015 ~ Fact1_inequality + LV2012estimatedIQ, ineqprisoniq)
summary = summary(model)
lm.beta(model)  # standardized betas

# Add model predictions to the data frame and plot predicted vs. observed
# (merge.datasets is a custom helper, not a base R or CRAN function)
prediction = as.data.frame(predict(model))
colnames(prediction) = "Predicted"
ineqprisoniq = merge.datasets(ineqprisoniq, prediction, 1)
scatterplot(PrisonRatePer100000ICPS2015 ~ Predicted, ineqprisoniq,
            smoother = FALSE, id.n = nrow(ineqprisoniq))
> summary

Call:
lm(formula = PrisonRatePer100000ICPS2015 ~ Fact1_inequality + 
    LV2012estimatedIQ, data = ineqprisoniq)

Residuals:
    Min      1Q  Median      3Q     Max 
-153.61  -75.05  -31.53   44.62  507.34 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)   
(Intercept)       -116.451     88.464  -1.316  0.19069   
Fact1_inequality    31.348     11.872   2.640  0.00944 **
LV2012estimatedIQ    3.227      1.027   3.142  0.00214 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 113.6 on 114 degrees of freedom
  (158 observations deleted due to missingness)
Multiple R-squared:  0.09434,	Adjusted R-squared:  0.07845 
F-statistic: 5.938 on 2 and 114 DF,  p-value: 0.003523

> lm.beta(model)
Fact1_inequality LV2012estimatedIQ 
        0.2613563         0.3110241

This is a pretty bad model (var% = 8), but the directions held from before and the relationships were stronger: standardized betas of .26 and .31. The R2 seems oddly low to me given the betas.

More importantly, the residuals are clearly not normal as can be seen above. The QQ-plot is:

[Figure: QQ plot of the model residuals]

It is concave, so the data distribution isn’t normal. To get diagnostic plots, simply use plot(model).
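For a more formal check than eyeballing the QQ plot, one could test the residuals of the fitted model directly (a quick sketch, not part of the original analysis):

shapiro.test(residuals(model))   # Shapiro-Wilk test; a small p-value indicates non-normal residuals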

Perhaps try using rank-order data:

ineqprisoniq = as.data.frame(apply(ineqprisoniq,2,rank,na.last="keep")) #rank order the data

And then rerunning the model gives:

> summary

Call:
lm(formula = PrisonRatePer100000ICPS2015 ~ Fact1_inequality + 
    LV2012estimatedIQ, data = ineqprisoniq)

Residuals:
     Min       1Q   Median       3Q      Max 
-100.236  -46.753   -8.507   46.986  125.211 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)        1.08557   18.32052   0.059    0.953    
Fact1_inequality   0.84766    0.16822   5.039 1.78e-06 ***
LV2012estimatedIQ  0.50094    0.09494   5.276 6.35e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 54.36 on 114 degrees of freedom
  (158 observations deleted due to missingness)
Multiple R-squared:  0.2376,	Adjusted R-squared:  0.2242 
F-statistic: 17.76 on 2 and 114 DF,  p-value: 1.924e-07

> lm.beta(model)
 Fact1_inequality LV2012estimatedIQ 
        0.4757562         0.4981808

Much better R2, the directions are the same but the betas are stronger, and the residuals look normalish from the above. The QQ plot, however, shows that they still are not normal:

[Figure: QQ plot of the rank-order model residuals]

Prediction plots based on the models:

[Figures: predicted vs. observed prison rates for the raw and rank-order models]

So is something strange going on with IQ, inequality, and prison rates? Perhaps something nonlinear. Let’s plot prison rates by IQ bins:

bins = cut(unlist(ineqprisoniq["LV2012estimatedIQ"]),5) #divide IQs into 5 bins
ineqprisoniq["IQ.bins"] = bins
describeBy(ineqprisoniq["PrisonRatePer100000ICPS2015"],bins)
library(gplots)
plotmeans(PrisonRatePer100000ICPS2015 ~ IQ.bins, ineqprisoniq,
          main = "Prison rate by national IQ bins",
          xlab = "IQ bins (2012 data)", ylab = "Prison rate per 100000 (2014 data)")

[Figure: mean prison rate by national IQ bins]

That looks like “bingo!” to me. We found the pattern.

What about inequality? The trouble is that the inequality data is horribly skewed, with almost all countries having low and nearly identical inequality compared with the extremes, so the binning approach above does not (and will not) work well. I tried different numbers of bins too. The results look something like this:

bins = cut(unlist(ineqprisoniq["Fact1_inequality"]),5) #divide IQs into 5 bins
ineqprisoniq["inequality.bins"] = bins
plotmeans(PrisonRatePer100000ICPS2015 ~ inequality.bins, ineqprisoniq,
          main = "Prison rate by national inequality bins",
          xlab = "inequality bins", ylab = "Prison rate per 100000 (2014 data)")

[Figure: mean prison rate by national inequality bins]

So basically, the most equal countries to the left have low rates, the unequal countries within the main group have somewhat higher rates, and the very unequal countries (African countries without much infrastructure?) have varying and on average lowish rates.

Perhaps this is why the Equality Institute limited their analyses to the group on the left; otherwise they wouldn’t get the nice clear pattern they want. One can see it a little bit if one uses a high number of bins and ignores the groups to the right. E.g. with 10 bins:

[Figure: mean prison rate by national inequality bins, 10 bins]

Among the first 3 groups, there is a slight upward trend.
