Abstract

Two datasets of socioeconomic data were obtained from different sources. Both were factor analyzed and revealed a general factor (S factor). These factors were highly correlated with each other (.79 to .95), with HDI (.68 to .93) and with cognitive ability (PISA; .70 to .78). The federal district was a strong outlier, and excluding it improved results.

The method of correlated vectors was strongly positive for all 4 analyses (r’s .78 to .92 with reversing).

Introduction

In a number of recent articles (Kirkegaard 2015a,b,c,d,e), I have analyzed within-country regional data to examine the general socioeconomic factor, if it exists in the dataset (for the origin of the term, see e.g. Kirkegaard 2014). This work was inspired by Lynn (2010), whose datasets I have also reanalyzed. While doing work on another project (Fuerst and Kirkegaard, 2015*), I needed an S factor for Mexican states, if such exists. Since I was not aware of any prior analysis of this country in this fashion, I decided to do it myself.

The first problem was obtaining data for the analysis. For this, one needs a number of diverse indicators that measure important economic and social matters for each Mexican state. Mexico has 31 states and a federal district, so one can use a decent number of indicators to examine the S factor. Mexico is a Spanish-speaking country and English comprehension is fairly poor: according to Wikipedia, only 13% of people there speak English, compared with 86% for Denmark, 64% for Germany and 35% for Egypt.

S factor analysis 1 – Wikipedian data

Data source and treatment

Unlike for the previous countries, I could not easily find good data available in English. As a substitute, I used data from Wikipedia:

These data come from various years, are sometimes not given per capita, and often have no useful source listed. They are thus of unknown veracity, but they are probably fine for a first look. The HDI is best thought of as a proxy for the S factor, so we can use it to examine construct validity.

Some variables had data for multiple time-points; these were averaged.

Some data were given in raw numbers. For these, I calculated per capita versions using the population data also given.
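These two treatment steps can be sketched as follows (Python/pandas for illustration only; the actual analysis was done in R, and the variable names here are hypothetical):

```python
import pandas as pd

# Hypothetical toy data: two years of a raw-count variable plus population.
df = pd.DataFrame({
    "homicides_2010": [100, 50],
    "homicides_2012": [120, 60],
    "population":     [1_000_000, 500_000],
}, index=["StateA", "StateB"])

# Average the available years for the variable...
df["homicides"] = df[["homicides_2010", "homicides_2012"]].mean(axis=1)

# ...and convert the raw count to a per capita rate.
df["homicides_per_capita"] = df["homicides"] / df["population"]
```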

Results

The variables above minus HDI and population size were factor analyzed using minimum residuals to extract 1 factor. The loadings plot is shown below.

[Figure: S_wiki – loadings plot]

The literacy variables had near-perfect loadings on S (.99). Unemployment unexpectedly loaded positively, and so did homicides per capita, altho only slightly. The unemployment loading could be because unemployment benefits exist only in the higher-S states, such that being unemployed in the lower-S states would mean starvation. The homicide loading is possibly due to the drug war in the country.

S factor analysis 2 – Data obtained from INEG

Data source and treatment

Since the results based on Wikipedia data were dubious, I searched further for more data. I found it in the Spanish-language statistical database of the Instituto Nacional De Estadística Y Geografía, which, however, has the option of showing poorly done English translations. This is not optimal, as the many translation errors may result in choosing the wrong variable for further analysis. If any Spanish speaker reads this, I would be happy if they would go over my chosen variables and confirm that they are correct. I ended up with the following variables:

  1. Cost of crime against individuals and households
  2. Cost of crime on economic units
  3. Annual percentage change of GDP at 2008 prices
  4. Crime prevalence rate per 10,000 economic units
  5. Crime prevalence rate per hundred thousand inhabitants aged 18 years and over, by state
  6. Dark figure of crime on economic units
  7. Dark figure (crimes not reported and crimes reported that were not investigated)
  8. Doctors per 100 000 inhabitants
  9. Economic participation of population aged 12 to 14 years
  10. Economic participation of population aged 65 and over
  11. Economic units
  12. Economically active population. Age 15 and older
  13. Economically active population. Unemployed persons. Age 15 and older
  14. Electric energy users
  15. Employed population by income level. Up to one minimum wage. Age 15 and older
  16. Employed population by income level. More than 5 minimum wages. Age 15 and older
  17. Employed population by income level. Do not receive income. Age 15 and older
  18. Fertility rate of adolescents aged 15 to 19 years
  19. Female mortality rate for cervical cancer
  20. Global rate of fertility
  21. Gross rate of women participation
  22. Hospital beds per 100 thousand inhabitants
  23. Inmates in state prisons at year end
  24. Life expectancy at birth
  25. Literacy rate of women 15 to 24 years
  26. Literacy rate of men 15 to 24 years
  27. Median age
  28. Nurses per 100 000 inhabitants
  29. Percentage of households victims of crime
  30. Percentage of births at home
  31. Percentage of population employed as professionals and technicians
  32. Prisoners rate (per 10,000 inhabitants age 18 and over)
  33. Rate of maternal mortality (deaths per 100 thousand live births)
  34. Rate of inhabitants aged 18 years and over that consider their neighborhood or locality as unsafe, per hundred thousand inhabitants aged 18 years and over
  35. Rate of inhabitants aged 18 years and over that consider their state as unsafe, per hundred thousand inhabitants aged 18 years and over
  36. Rate sentenced to serve a sentence (per 1,000 population age 18 and over)
  37. State Gross Domestic Product (GDP) at constant prices of 2008
  38. Total population
  39. Total mortality rate from respiratory diseases in children under 5 years
  40. Total mortality rate from acute diarrheal diseases (ADD) in population under 5 years
  41. Unemployment rate of men
  42. Unemployment rate of women
  43. Households
  44. Inhabited housings with available computer
  45. Inhabited housings that have toilet
  46. Inhabited housings that have a refrigerator
  47. Inhabited housings with available water from public net
  48. Inhabited housings that have drainage
  49. Inhabited housings with available electricity
  50. Inhabited housings that have a washing machine
  51. Inhabited housings with television
  52. Percentage of housing with piped water
  53. Percentage of housing with electricity
  54. Proportion of population with access to improved sanitation, urban and rural
  55. Proportion of population with sustainable access to improved sources of water supply, in urban and rural areas

There were data for multiple years for most of them. I used all data from approximately the last 10 years. For all variables with multiple years, I calculated the mean value.

For data given in raw numbers, I calculated the appropriate per unit measures (per person, per economically active person (?), per household).

A matrix plot for all the S factor relevant data (e.g. not population size) is shown below. It shows missing data in red, as well as the relative difference between datapoints. Thus, cells that are completely white or black are outliers compared to the other data.

[Figure: matrixplot – matrix plot of the S factor relevant variables]

One variable (inmates per person) has a few missing datapoints.

Multiple other variables had strong outliers. I examined these to determine if they were real or due to data error.

Inspection revealed that the GDP per person data were clearly incorrect for one state (Campeche), but I could not find the source of the error. The data were the same as on the website but did not match the data on Wikipedia. I deleted the variable to be safe.

The GDP change outlier seems to be real: Campeche has had negative growth, which, according to this site, is due to oil fields closing.

The rest of the outliers were hard to evaluate due to the odd nature of the data (“dark crime”?), or were plausible. E.g., Mexico City (aka the Federal District, the capital) was an outlier on nurses and doctors per capita, but this is presumably due to many large hospitals being located there.

Some data errors of my own were found and corrected but there is no guarantee there are not more. Compiling a large set of data like this frequently results in human errors.

Factor analysis

Since there were only 32 cases (31 states plus the federal district) and 47 variables (excluding the bogus GDP per capita one), factor analysis is problematic. There are various recommendations for the minimum case-to-variable ratio, but almost none of them are met by this dataset (Zhao, 2009). To test the limits, I decided to try factor analyzing all of the variables. This produced warnings:

The estimated weights for the factor scores are probably incorrect.  Try a different factor extraction method.
In factor.scores, the correlation matrix is singular, an approximation is used
Warning messages:
1: In cor.smooth(R) : Matrix was not positive definite, smoothing was done
2: In cor.smooth(R) : Matrix was not positive definite, smoothing was done
3: In cor.smooth(r) : Matrix was not positive definite, smoothing was done
4: In cor.smooth(r) : Matrix was not positive definite, smoothing was done

Warnings such as these do not always mean that the result is nonsense, but they often do. For that reason, I wanted to extract an S factor from a smaller number of variables. From the 47, I selected the following 21 variables as generally representative and interpretable:

  1. GDP.change,              #Economic
  2. Unemploy.men.rate,
  3. Unemploy.women.rate,
  4. Low.income.peap,
  5. High.income.peap,
  6. Prof.tech.employ.pct,
  7. crime.rate.per.adult,   #crime
  8. Inmates.per.pers,
  9. Unsafe.neighborhood.percept.rate,
  10. Has.water.net.per.hh,    #material goods
  11. Elec.pct,
  12. Has.wash.mach.per.hh,
  13. Doctors.per.pers,      #Health
  14. Nurses.per.pers,
  15. Hospital.beds.per.pers,
  16. Total.fertility,
  17. Home.births.pct,
  18. Maternal.death.rate,
  19. Life.expect,
  20. Women.participation,   #Gender equality
  21. Lit.young.women        #education

Note that peap = per economically active person, hh = household.

The selection was a judgment call on my part, and others may choose different variables.
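For readers unfamiliar with the procedure, the core of a one-factor extraction can be sketched as follows (Python/numpy for illustration; this uses a first principal factor rather than the MinRes method of R’s psych package, and the data are simulated, not the Mexican data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated data: 32 cases (states) and 6 indicators sharing one latent factor.
latent = rng.normal(size=(32, 1))
X = latent @ rng.uniform(0.6, 0.9, size=(1, 6)) + 0.5 * rng.normal(size=(32, 6))

# Standardize the indicators, then take the first principal factor
# of the correlation matrix as a crude stand-in for MinRes extraction.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(Z, rowvar=False)
vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
loadings = vecs[:, -1] * np.sqrt(vals[-1])  # one loading per indicator
scores = Z @ vecs[:, -1]                    # one factor score per case
```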

Automatic reduction of dataset

As a robustness check, and as evidence against a possible claim that I picked the variables so as to get an S factor that suited my prior beliefs, I decided to find an automatic method of selecting a subset of variables for factor analysis. I noticed that in the original dataset, some variables overlapped near perfectly. This means that whatever they measure would get measured twice or more when extracting a factor. Highly correlated variables can also create nonsense solutions, especially when extracting more than 1 factor.

Another piece of insight comes from the fact that for cognitive data, general factors extracted from a less broad selection of subtests are worse measures of general cognitive ability than those from broader selections (Johnson et al, 2008).

Lastly, subtests from different domains tend to be less correlated than those from the same domain (hence the existence of group factors).

Combining all this, it seems a decent idea that, to reduce a dataset by 1 variable, one should calculate all the intercorrelations, find the highest one, and remove one of the two variables responsible for it. One can do this repeatedly to remove more than 1 variable from a dataset. Concerning the question of which of the two variables to remove, I can think of three ways: always removing the first, always removing the second, or choosing at random. I implemented all three settings and chose the second as the default. This is because in many datasets the first of a set of highly correlated variables is usually the ‘primary’ one, e.g. unemployment, unemployment men, unemployment women. The algorithm also outputs step-by-step information concerning which variable was removed and what the correlation was.
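The reduction algorithm just described can be sketched like this (Python/pandas for illustration; the author’s actual implementation was in R, and which member of a pair counts as the one to drop is simplified here):

```python
import numpy as np
import pandas as pd

def remove_redundant(df, n):
    """Repeatedly locate the most correlated pair of variables and
    drop one member of the pair, n times in total."""
    df = df.copy()
    for _ in range(n):
        corr = df.corr().abs()
        np.fill_diagonal(corr.values, 0)   # ignore self-correlations
        first = corr.max().idxmax()        # column containing the global max
        second = corr[first].idxmax()      # its most correlated partner
        print(f"Most correlated vars are {first} and {second} "
              f"r={corr.loc[first, second]:.3f}; dropping {second}")
        df = df.drop(columns=second)
    return df
```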

Having written the R code for the algorithm, I ran it on the Mexican dataset. I wanted to obtain a solution using the largest possible number of variables without getting a warning from the factor extraction function. So I first removed 1 variable and ran the factor analysis; when I received a warning, I removed another, and so on. After having removed 20 variables, I no longer received a warning. This left the analysis with 27 variables, or 6 more than my chosen selection. The output from the reduction algorithm was:

> s3 = remove.redundant(s, 20)
[1] "Dropping variable number 1"
[1] "Most correlated vars are Good.water.prop and Piped.water.pct r=0.997"
[1] "Dropping variable number 2"
[1] "Most correlated vars are Piped.water.pct and Has.water.net.per.hh r=0.996"
[1] "Dropping variable number 3"
[1] "Most correlated vars are Fertility.teen and Total.fertility r=0.99"
[1] "Dropping variable number 4"
[1] "Most correlated vars are Good.sani.prop and Has.drainage.per.hh r=0.984"
[1] "Dropping variable number 5"
[1] "Most correlated vars are Victims.crime.households and crime.rate.per.adult r=0.97"
[1] "Dropping variable number 6"
[1] "Most correlated vars are Nurses.per.pers and Doctors.per.pers r=0.962"
[1] "Dropping variable number 7"
[1] "Most correlated vars are Lit.young.men and Lit.young.women r=0.938"
[1] "Dropping variable number 8"
[1] "Most correlated vars are Elec.pct and Has.elec.per.hh r=0.938"
[1] "Dropping variable number 9"
[1] "Most correlated vars are Has.wash.mach.per.hh and Has.refrig.per.household r=0.926"
[1] "Dropping variable number 10"
[1] "Most correlated vars are Prisoner.rate and Inmates.per.pers r=0.901"
[1] "Dropping variable number 11"
[1] "Most correlated vars are Unemploy.women.rate and Unemploy.men.rate r=0.888"
[1] "Dropping variable number 12"
[1] "Most correlated vars are Women.participation and Has.computer.per.household r=0.877"
[1] "Dropping variable number 13"
[1] "Most correlated vars are Hospital.beds.per.pers and Doctors.per.pers r=0.87"
[1] "Dropping variable number 14"
[1] "Most correlated vars are Has.computer.per.household and Prof.tech.employ.pct r=0.868"
[1] "Dropping variable number 15"
[1] "Most correlated vars are Unemploy.men.rate and Unemployed.15plus.peap r=0.866"
[1] "Dropping variable number 16"
[1] "Most correlated vars are Has.tv.per.hh and Has.elec.per.hh r=0.864"
[1] "Dropping variable number 17"
[1] "Most correlated vars are Has.elec.per.hh and Has.drainage.per.hh r=0.851"
[1] "Dropping variable number 18"
[1] "Most correlated vars are Median.age and Prof.tech.employ.pct r=0.846"
[1] "Dropping variable number 19"
[1] "Most correlated vars are Home.births.pct and Low.income.peap r=0.806"
[1] "Dropping variable number 20"
[1] "Most correlated vars are Life.expect and Has.water.net.per.hh r=0.796"

In my opinion, the output shows that the function works. In most cases, the pair of variables found was either a (near-)duplicate measure, e.g. percent of population with electricity and percent of households with electricity, or closely related, e.g. literacy in men and women. Sometimes, however, the pair did not seem to be closely related, e.g. women’s participation and percent of households with a computer.

Since the automatically reduced dataset retained the variable with missing data, I used the irmi() function from the VIM package to impute the missing datapoints (Templ et al, 2014).
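irmi() fits chained regression models to fill in missing values. The sketch below shows only the core idea, regressing the incomplete variable on the others over complete cases (Python/numpy for illustration; the real irmi is iterative and robust, and this toy version assumes rows missing the target column are complete elsewhere):

```python
import numpy as np

def regression_impute(X, col):
    """Fill NaNs in column `col` by linear regression on the other
    columns, fitted on complete cases (a bare-bones stand-in for
    model-based imputers such as VIM's irmi)."""
    X = X.copy()
    others = [j for j in range(X.shape[1]) if j != col]
    complete = ~np.isnan(X).any(axis=1)
    missing = np.isnan(X[:, col])
    # Fit: intercept + other columns, on complete cases only.
    A = np.column_stack([np.ones(complete.sum()), X[np.ix_(complete, others)]])
    beta, *_ = np.linalg.lstsq(A, X[complete, col], rcond=None)
    # Predict the missing entries (assumes those rows are complete elsewhere).
    B = np.column_stack([np.ones(missing.sum()), X[np.ix_(missing, others)]])
    X[missing, col] = B @ beta
    return X
```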

Factor loadings: stability

The factor loading plots are shown below.

[Figures: S_self_all, S_self_automatic, S_self_chosen – loadings plots]

Each analysis relied upon a unique but overlapping selection of variables. Thus, it is possible to correlate the loadings of the overlapping parts for each pair of analyses. This is a measure of loading stability across factor analytic environments, as also done by Ree and Earles (1991) for the general cognitive ability factor (g factor). The correlations were .98, 1.00, .98 (n’s 21, 27, 12), showing very high stability across datasets. Note that it was not possible to use the loadings from the Wikipedian data factor analysis because its variables were not, strictly speaking, overlapping with the others.

Factor loadings: interpretation

Examining the factor loadings reveals some things of interest. Generally, for all analyses, whatever is generally considered good loads positively, and whatever is considered bad loads negatively.

Unemployment (overall, men, women) has positive loadings, whereas it ‘should’ have negative loadings. This is perhaps because the lower-S states have dysfunctional or no social security nets, such that not working means starvation, which keeps people working. This is merely a conjecture, because I don’t know much about Mexico. Hopefully someone more knowledgeable than me will read this and have a better answer.

Crime variables (crime rate, victimization, inmates/prisoners per capita, sentencing rate) load positively, whereas they should load negatively. This pattern has been found before; see Kirkegaard (2015e) for a review of S factor studies and crime variables.

Factor scores

Next, I correlated the factor scores from all 4 analyses with each other as well as with HDI and cognitive ability as measured by PISA tests (the cognitive data are from Fuerst and Kirkegaard, 2015*; the HDI data from Wikipedia). The correlation matrix is shown below.

“regression” method    S.all   S.chosen   S.automatic   S.wiki   HDI     Cognitive ability
S.all                   1.00   -0.08      -0.02          0.08    -0.17   -0.12
S.chosen               -0.08    1.00       0.93          0.84     0.93    0.65
S.automatic            -0.02    0.93       1.00          0.89     0.88    0.74
S.wiki                  0.08    0.84       0.89          1.00     0.76    0.78
HDI                    -0.17    0.93       0.88          0.76     1.00    0.53
Cognitive ability      -0.12    0.65       0.74          0.78     0.53    1.00

Strangely, despite the similar factor loadings, the factor scores from the factor extracted from all the variables had almost no relation to the others. This probably indicates that the factor scoring method could not handle this odd case. The default scoring method in the factor analysis function is “regression”, but there are a few others. Bartlett’s method yielded scores for S.all that fit with the other factors, while none of the other methods did. See the psych package documentation for details (Revelle, 2015). I changed the scoring method for all the other analyses to Bartlett’s as well, to remove method-specific variance. The new correlation table is shown below:

Bartlett’s method      S.all   S.chosen   S.automatic   S.wiki   HDI.mean   Cognitive ability
S.all                   1.00    0.79       0.88          0.88     0.68       0.74
S.chosen                0.79    1.00       0.95          0.87     0.93       0.70
S.automatic             0.88    0.95       1.00          0.88     0.89       0.74
S.wiki                  0.88    0.87       0.88          1.00     0.75       0.78
HDI.mean                0.68    0.93       0.89          0.75     1.00       0.53
Cognitive ability       0.74    0.70       0.74          0.78     0.53       1.00

Intriguingly, all the correlations are now stronger. Perhaps Bartlett’s method is better for handling this type of extraction, involving general factors from datasets with low case-to-variable ratios. It certainly deserves empirical investigation, including reanalysis of prior datasets. I reran the earlier parts of this paper with Bartlett’s method; it did not substantially change the results. The correlations between loadings across analyses increased a bit (to .98, 1.00, .99).
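For reference, Bartlett’s method computes weighted least squares factor scores, weighting each standardized indicator by its loading and uniqueness: F = (L′Ψ⁻¹L)⁻¹L′Ψ⁻¹z. A minimal one-factor version in Python/numpy (the paper itself used psych’s factor.scores in R):

```python
import numpy as np

def bartlett_scores(Z, loadings):
    """Bartlett (weighted least squares) factor scores for a
    one-factor model: F = (L' Psi^-1 L)^-1 L' Psi^-1 z, where the
    uniquenesses in Psi are taken as 1 - loading^2."""
    L = loadings.reshape(-1, 1)
    psi_inv = np.diag(1.0 / (1.0 - loadings ** 2))
    W = psi_inv @ L @ np.linalg.inv(L.T @ psi_inv @ L)  # scoring weights
    return Z @ W
```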

One possibility, however, is that the stronger results are just due to Bartlett’s method creating outliers that happen to lie on the regression line. This did not seem to be the case; see the scatterplots below.

[Figure: Correlation_matrix – scatterplot matrix of the factor scores]

S factor scores and cognitive ability

The next question is to what degree the within-country differences in Mexico can be explained by cognitive ability. The correlations are in the above table as well; they are in the region .70 to .78 for the various S factors. In other words, fairly high. One could plot all of them vs. cognitive ability, but that would give us 4 plots. Instead, I plot only the S factor from my chosen variables, since this has the highest correlation with HDI and thus the best claim to construct validity. It is also the most conservative option because, of the 4 S factors, it has the lowest correlation with cognitive ability. The plot is shown below:

[Figure: CA_S_chosen – cognitive ability vs. S (chosen variables)]

We see that the federal district is a strong outlier, just as in the study of US states and Washington DC (Kirkegaard, 2015c). One should therefore remove it and rerun all the analyses. This includes the S factor extractions, because the presence of a strong ‘mixed case’ (to be explained further in a future publication) affects the S factor extracted (see again Kirkegaard, 2015c).

Analyses without Federal District

I reran all the analyses without the federal district. Generally, this did not change the loadings much. Crime and unemployment still had positive loadings.

The loadings correlations across analyses increased to 1.00, 1.00, 1.00.

                       S.all   S.chosen   S.automatic   S.wiki   HDI mean   Cognitive ability
S.all                   1.00    0.99       0.98          0.93     0.85       0.78
S.chosen                0.99    1.00       0.98          0.94     0.88       0.80
S.automatic             0.98    0.98       1.00          0.90     0.90       0.75
S.wiki                  0.93    0.94       0.90          1.00     0.75       0.77
HDI mean                0.85    0.88       0.90          0.75     1.00       0.56
Cognitive ability       0.78    0.80       0.75          0.77     0.56       1.00

The factor score correlations increased, meaning that the federal district outlier was a source of discrepancy between the analyses. This can be seen in the scatterplots above, in that there is noticeable variation in how far from the rest the federal district lies. Once it is removed, the S factors from the INEG dataset are in near-perfect agreement (.99, .98, .98), while the one from the Wikipedian data agrees less, but still respectably (.93, .94, .90). Correlations with cognitive ability also improved a bit.

Method of correlated vectors

In line with earlier studies, I examined whether the indicators that are better measures of the latent S factor also correlate more highly with the criterion variable, cognitive ability.
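Concretely, MCV correlates the vector of S loadings with the vector of indicator × criterion correlations; “with reversing”, indicators with negative S loadings have both of their entries sign-flipped first. A sketch (Python/numpy; the numbers in the test are illustrative, not the paper’s):

```python
import numpy as np

def mcv(loadings, r_with_criterion):
    """Method of correlated vectors with reversing: flip the sign of
    both vector entries for negatively loaded indicators, then
    correlate the two vectors."""
    s = np.sign(loadings)
    return np.corrcoef(loadings * s, r_with_criterion * s)[0, 1]
```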

[Figures: MCV_S_all, MCV_S_automatic, MCV_S_chosen, MCV_S_wiki – MCV scatterplots]

The MCV results are strong: .90, .78, .85 and .92 for the analyses with all variables, chosen variables, automatically chosen variables and Wikipedian variables, respectively. Note that these are for the analyses without the federal district, but results were similar with it included.

Discussion and conclusion

Generally, the present analysis reached findings similar to those before, especially the one about US states. Cognitive ability was a very strong correlate of the S factors, especially once the federal district outlier was removed before the analysis. Further work is needed to find out why unemployment and crime variables sometimes load positively in S factor analyses with regions or states as the unit of analysis.

MCV analysis supported the idea that cognitive ability is related to the S factor, not just some non-S factor source of variance also present in the dataset.

Supplementary material

Data files, R code, figures are available at the Open Science Framework repository.

References

  • Fuerst, J. and Kirkegaard, E. O. W. (2015*). Admixture in the Americas part 2: Regional and National admixture. (Publication venue undecided.)
  • Johnson, W., Nijenhuis, J. T., & Bouchard Jr, T. J. (2008). Still just 1g: Consistent results from five test batteries. Intelligence, 36(1), 81-95.
  • Kirkegaard, E. O. W. (2014). The international general socioeconomic factor: Factor analyzing international rankings. Open Differential Psychology.
  • Kirkegaard, E. O. W. (2015a). S and G in Italian regions: Re-analysis of Lynn’s data and new data. The Winnower.
  • Kirkegaard, E. O. W. (2015b). Indian states: G and S factors. The Winnower.
  • Kirkegaard, E. O. W. (2015c). Examining the S factor in US states. The Winnower.
  • Kirkegaard, E. O. W. (2015d). The S factor in China. The Winnower.
  • Kirkegaard, E. O. W. (2015e). The S factor in the British Isles: A reanalysis of Lynn (1979). The Winnower.
  • Lynn, R. (2010). In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence, 38(1), 93-100.
  • Ree, M. J., & Earles, J. A. (1991). The stability of g across different methods of estimation. Intelligence, 15(3), 271-278.
  • Revelle, W. (2015). psych: Procedures for Psychological, Psychometric, and Personality Research. CRAN
  • Templ, M., Alfons A., Kowarik A., Prantner, B. (2014). VIM: Visualization and Imputation of Missing Values. CRAN
  • Zhao, N. (2009). The Minimum Sample Size in Factor Analysis. Encorewiki.

* = not yet published, year is expected publication year.

Abstract
I reanalyze data reported by Richard Lynn in a 1979 paper concerning IQ and socioeconomic variables in 12 regions of the United Kingdom as well as Ireland. I find a substantial S factor across regions (66% of variance with MinRes extraction). I produce a new best estimate of the G scores of the regions; its correlation with the S scores is .79. The MCV correlation with reversal is .47.

Introduction
The interdisciplinary academic field examining the effect of general intelligence on large scale social phenomena has been called social ecology of intelligence by Richard Lynn (1979, 1980) and sociology of intelligence by Gottfredson (1998). One could also call it cognitive sociology by analogy with cognitive epidemiology (Deary, 2010; Special issue in Intelligence Volume 37, Issue 6, November–December 2009; Gottfredson, 2004). Whatever the name, it is a field that has received renewed attention recently. Richard Lynn and co-authors report data on Italy (Lynn 2010a, 2010b, 2012a, Piffer and Lynn 2014, see also papers by critics), Spain (Lynn 2012b), China (Lynn and Cheng, 2013) and India (Lynn and Yadav, 2015). Two of his older studies cover the British Isles and France (Lynn, 1979, 1980).

A number of my recent papers have reanalyzed data reported by Lynn, as well as additional data I collected. These cover Italy, India, United States, and China (Kirkegaard 2015a, 2015b, 2015c, 2015d). This paper reanalyzes Lynn’s 1979 paper.

Cognitive data and analysis

Lynn’s paper contains 4 IQ datasets covering 11 regions in Great Britain. He further summarizes some studies that report data on Northern Ireland and the Republic of Ireland, so that his cognitive data cover the entire British Isles. Lynn only uses the first 3 datasets to derive a best estimate of the IQs. The last dataset does not report cognitive scores as IQs, but merely the percentages of children falling into certain score intervals. Lynn converts these to a mean (method not disclosed). However, he is unable to convert this score to the IQ scale, since the inter-personal standard deviation (SD) is not reported in the study. Lynn thus overlooks the fact that one can use the inter-regional SD from the first 3 studies to convert the 4th study to the common scale. Furthermore, using the intervals, one could presumably estimate the inter-personal SD, altho I shall not attempt this. The method for converting the mean scores to the IQ scale is this:

  1. Standardize the values by subtracting the mean and dividing by the inter-regional SD.
  2. Calculate the inter-regional SD in the other studies, and find the mean of these. Do the same for the inter-regional means.
  3. Multiply the standardized scores by the mean inter-regional SD from the other studies and add the mean inter-regional mean.
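The three steps above can be sketched as follows (Python/numpy for illustration; the inputs in the test are hypothetical numbers, not Lynn’s data):

```python
import numpy as np

def to_common_scale(scores, ref_means, ref_sds):
    """Convert regional scores to the common IQ scale:
    1) standardize by the inter-regional mean and SD,
    2) take the mean inter-regional SD and mean of the reference studies,
    3) stretch by that SD and shift by that mean."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return z * np.mean(ref_sds) + np.mean(ref_means)
```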

However, I did not use this method. Instead, I factor analyzed the 4 IQ datasets as given and extracted 1 factor (extraction method = MinRes). All factor loadings were strongly positive, indicating that G could be reliably measured among the regions. The factor score from this analysis was put on the same scale as the first 3 studies by the method above. This is necessary because the IQs for Northern Ireland and the Republic of Ireland are given on that scale. Table 1 shows the correlations between the cognitive variables. The correlations between G and the 4 indicator variables are their factor loadings.

Table 1 – Correlations between cognitive datasets
             Vernon.navy   Vernon.army   Douglas   Davis   G      Lynn.mean
Vernon.navy  1             0.66          0.92      0.62    0.96   0.92
Vernon.army  0.66          1             0.68      0.68    0.75   0.89
Douglas      0.92          0.68          1         0.72    0.99   0.93
Davis        0.62          0.68          0.72      1       0.76   0.74
G            0.96          0.75          0.99      0.76    1      0.96
Lynn.mean    0.92          0.89          0.93      0.74    0.96   1

It can be noted that my use of factor analysis rather than simply averaging the datasets had little effect: the correlation between Lynn’s method (the mean of datasets 1-3) and my G factor is .96.

Socioeconomic data and analysis

Lynn furthermore reports 7 socioeconomic variables. I quote his description of these:

“1. Intellectual achievement: (a) first-class honours degrees. All first-class honours graduates of the year 1973 were taken from all the universities in the British Isles (with the exception of graduates of Birkbeck College, a London College for mature and part-time students whose inclusion would bias the results in favour of London). Each graduate was allocated to the region where he lived between the ages of 11 and 18. This information was derived from the location of the graduate’s school. Most of the data were obtained from The Times, which publishes annually lists of students obtaining first-class degrees and the schools they attended. Students who had been to boarding schools were written to requesting information on their home residence. Information from the Republic of Ireland universities was obtained from the college records.

The total number of students obtaining first-class honours degrees was 3477, and information was obtained on place of residence for 3340 of these, representing 96.06 per cent of the total.

There are various ways of calculating the proportions of first-class honours graduates produced by each region. Probably the most satisfactory is to express the numbers of firsts in each region per 1000 of the total age cohorts recorded in the census of 1961. In this year the cohorts were approximately 9 years old. The reason for going back to 1961 for a population base is that the criterion taken for residence is the school attended and the 1961 figures reduce the distorting effects of subsequent migration between the regions. However, the numbers in the regions have not changed appreciably during this period, so that it does not matter greatly which year is taken for picking up the total numbers of young people in the regions aged approximately 21 in 1973. (An alternative method of calculating the regional output of firsts is to express the output as a percentage of those attending university. This method yields similar figures.)

2. Intellectual achievement: (b) Fellowships of the Royal Society. A second measure of intellectual achievement taken for the regions is Fellowships of the Royal Society. These are well-known distinctions for scientific work in the British Isles and are open equally to citizens of both the United Kingdom and the Republic of Ireland. The population consists of all Fellows of the Royal Society elected during the period 1931-71 who were born after the year 1911. The number of individuals in this population is 321 and it proved possible to ascertain the place of birth of 98 per cent of these. The Fellows were allocated to the region in which they were born and the numbers of Fellows born in each region were then calculated per million of the total population of the region recorded in the census of 1911. These are the data shown in Table 2. The year 1911 was taken as the population base because the majority of the sample was born between the years 1911-20, so that the populations in 1911 represent approximately the numbers in the regions around the time most of the Fellows were born. (The populations of the regions relative to one another do not change greatly over the period, so that it does not make much difference to the results which census year is taken for the population base.)

3. Per capita income. Figures for per capita incomes for the regions of the United Kingdom are collected by the United Kingdom Inland Revenue. These have been analysed by McCrone (1965) for the standard regions of the UK for the year 1959/60. These results have been used and a figure for the Republic of Ireland calculated from the United Nations Statistical Yearbook.

4. Unemployment. The data are the percentages of the labour force unemployed in the regions for the year 1961 (Statistical Abstracts of the UK and of Ireland).

5. Infant mortality. The data are the numbers of deaths during the first year of life expressed per 1000 live births for the year 1961 (Registrar Generals’ Reports).

6. Crime. The data are offences known to the police for 1961 and expressed per 1000 population (Statistical Abstracts of the UK and of Ireland).

7. Urbanization. The data are the percentages of the population living in county boroughs, municipal boroughs and urban districts in 1961 (Census).”

Lynn furthermore reports historical achievement scores as well as an estimate of inter-regional migration (actually change in population which can also be due to differential fertility). I did not use these in my analysis but they can be found in the datafile in the supplementary material.

Since there are 13 regions in total and 7 variables, I can analyze all variables at once and still almost conform to the rule of thumb of having a case-to-variable ratio of 2 (Zhao, 2009). Table 2 shows the factor loadings from this factor analysis as well as the correlation with G for each socioeconomic variable.
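The extraction step can be sketched in code. The analysis itself was done in R (fa() from the psych package); the following Python sketch is only an illustration of the idea, simulating 13 regions whose 7 indicators are driven by one latent factor and using the first principal component as a simple stand-in for the MinRes factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 13 regions x 7 indicators driven by one latent factor
# (illustrative only; these are not Lynn's actual values).
latent = rng.normal(size=(13, 1))
true_loadings = np.array([[0.9, 0.6, 0.95, -0.8, -0.7, 0.8, 0.85]])
X = latent @ true_loadings + rng.normal(scale=0.3, size=(13, 7))

# Standardize indicators, then take the first principal component.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, 0] * s[0]  # factor-score approximation, one per region
# Loadings: correlation of each standardized indicator with the scores.
loadings = np.array([np.corrcoef(Z[:, j], scores)[0, 1] for j in range(7)])
```

Loadings here are simply the correlations between each standardized indicator and the factor scores, which is also how the S column of Table 2 should be read.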

Table 2 – Correlations between S, S indicators, and G
Variable            S       G
Fellows.RS          0.92    0.92
First.class         0.55    0.58
Income              0.99    0.72
Unemployment       -0.85   -0.79
Infant.mortality   -0.68   -0.69
Crime               0.83    0.52
Urbanization        0.88    0.64
S                   1       0.79

The crime variable had a strong positive loading on the S factor and also a positive correlation with the G factor. This is in contrast to the negative relationship found at the individual level between the g factor and crime variables, at about r = -.2 (Neisser et al., 1996). The difference in mean IQ between criminal and non-criminal samples is usually around 7-15 points, depending on the criminal group (sexual, violent and chronic offenders score lower than other offenders; Guay et al., 2005). Beaver and Wright (2011) found that county-level IQ was also negatively related to county-level crime rates, with r's ranging from -.29 to -.58 depending on the type of crime variable (violent crimes highest). At the level of country-of-origin groups, Kirkegaard and Fuerst (2014a) found that crime variables had strong negative loadings on the S factor (-.85 and -.89) and negative correlations with country-of-origin IQ. Although not reported in the paper, Kirkegaard (2014b) found that the loadings of 2 crime variables on the S factor among country-of-origin groups in Norway were -.63 and -.86 (larceny and violent crime; calculated from the supplementary material using the fully imputed dataset). Kirkegaard (2015a) found S loadings of .16 and -.72 for total crime and intentional homicide variables in Italy. Among US states, Kirkegaard (2015c) found S loadings of -.61 and -.71 for murder rate and prison rate. The scatter plot is shown in Figure 1.

Figure 1 – Scatter plot of regional G and S

G_S

So, the most similar finding in previous research is that from Italy. There are various possible explanations. Lynn (1979) thinks it is due to large differences in urbanization (which loads positively in multiple studies, .88 in this study). There may be some effect of the type of crime measurement. Future studies could examine this question by employing many different crime variables. My hunch is that it is a combination of differences in urbanization (which increases crime), immigration of crime prone persons into higher S areas, and differences in the justice system between areas.

Method of correlated vectors (MCV)

As in the previous analyses of S factors, I performed an MCV analysis to test whether the latent S factor was responsible for the association with the G factor scores. S factor indicators with negative loadings were reversed to avoid inflating the result (these are marked with “_r” in the plot). The result is shown in Figure 2.
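The mechanics of MCV are simple to sketch: correlate the vector of S loadings with the vector of the indicators' correlations with G, after reversing the negatively loaded indicators. A Python illustration using the S and G columns of Table 2 (the analyses themselves were done in R; the coefficient here only illustrates the procedure):

```python
import numpy as np

# S loadings and correlations with G for the 7 indicators, from Table 2.
s_loadings = np.array([0.92, 0.55, 0.99, -0.85, -0.68, 0.83, 0.88])
r_with_g   = np.array([0.92, 0.58, 0.72, -0.79, -0.69, 0.52, 0.64])

# Reverse the negatively loaded indicators (unemployment, infant mortality)
# so that indicator direction does not inflate the result. Reversing flips
# both the loading and the criterion correlation.
flip = np.sign(s_loadings)
mcv_r = np.corrcoef(s_loadings * flip, r_with_g * flip)[0, 1]
```

With these Table 2 values the procedure yields a positive correlation, consistent with the text.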

Figure 2 – MCV scatter plot

MCV

As in the previous analyses, the relationship was positive even after reversal.

Per capita income and the FLynn effect

An interesting quote from the paper is:

This interpretation [that the first factor of his factor analysis is intelligence] implies that the mean population IQs should be regarded as the cause of the other variables. When causal relationships between the variables are considered, it is obvious that some of the variables are dependent on others. For instance, people do not become intelligent as a consequence of getting a first-class honours degree. Rather, they get firsts because they are intelligent. The most plausible alternative causal variable, apart from IQ, is per capita income, since the remaining four are clearly dependent variables. The arguments against positing per capita income as the primary cause among this set of variables are twofold. First, among individuals it is doubtful whether there is any good evidence that differences in income in affluent nations are a major cause of differences in intelligence. This was the conclusion reached by Burt (1943) in a discussion of this problem. On the other hand, even Jencks (1972) admits that IQ is a determinant of income. Secondly, the very substantial increases in per capita incomes that have taken place in advanced Western nations since 1945 do not seem to have been accompanied by any significant increases in mean population IQ. In Britain the longest time series is that of Burt (1969) on London schoolchildren from 1913 to 1965 which showed that the mean IQ has remained approximately constant. Similarly in the United States the mean IQ of large national samples tested by two subtests from the WISC has remained virtually the same over a 16 year period from the early 1950s to the mid-1960s (Roberts, 1971). These findings make it doubtful whether the relatively small differences in per capita incomes between the regions of the British Isles can be responsible for the mean IQ differences. 
It seems more probable that the major causal sequence is from the IQ differences to the income differences although it may be that there is also some less important reciprocal effect of incomes on IQ. This is a problem which could do with further analysis.

Compare with Lynn’s recent overview of the history of the FLynn effect (Lynn, 2013).

References

  • Beaver, K. M., & Wright, J. P. (2011). The association between county-level IQ and county-level crime rates. Intelligence, 39, 22–26. doi:10.1016/j.intell.2010.12.002
  • Deary, I. J. (2010). Cognitive epidemiology: Its rise, its current issues, and its challenges. Personality and individual differences, 49(4), 337-343.
  • Guay, J. P., Ouimet, M., & Proulx, J. (2005). On intelligence and crime: A comparison of incarcerated sex offenders and serious non-sexual violent criminals. International journal of law and psychiatry, 28(4), 405-417.
  • Gottfredson, L. S. (1998). Jensen, Jensenism, and the sociology of intelligence. Intelligence, 26(3), 291-299.
  • Gottfredson, L. S. (2004). Intelligence: Is it the epidemiologists’ elusive “fundamental cause” of social class inequalities in health? Journal of Personality and Social Psychology, 86(1), 174.
  • Intelligence, Special Issue: Intelligence, health and death: The emerging field of cognitive epidemiology. Volume 37, Issue 6, November–December 2009
  • Kirkegaard, E. O. W., & Fuerst, J. (2014a). Educational attainment, income, use of social benefits, crime rate and the general socioeconomic factor among 71 immigrant groups in Denmark. Open Differential Psychology.
  • Kirkegaard, E. O. W. (2014b). Crime, income, educational attainment and employment among immigrant groups in Norway and Finland. Open Differential Psychology.
  • Kirkegaard, E. O. W. (2015a). S and G in Italian regions: Re-analysis of Lynn’s data and new data. The Winnower.
  • Kirkegaard, E. O. W. (2015b). Indian states: G and S factors. The Winnower.
  • Kirkegaard, E. O. W. (2015c). Examining the S factor in US states. The Winnower.
  • Kirkegaard, E. O. W. (2015d). The S factor in China. The Winnower.
  • Lynn, R. (1979). The social ecology of intelligence in the British Isles. British Journal of Social and Clinical Psychology, 18(1), 1-12.
  • Lynn, R. (1980). The social ecology of intelligence in France. British Journal of Social and Clinical Psychology, 19(4), 325-331.
  • Lynn, R. (2010a). In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence, 38(1), 93-100.
  • Lynn, R. (2010b). IQ differences between the north and south of Italy: A reply to Beraldo and Cornoldi, Belacchi, Giofre, Martini, and Tressoldi. Intelligence, 38(5), 451-455.
  • Lynn, R. (2012a). IQs in Italy are higher in the north: A reply to Felice and Giugliano. Intelligence, 40(3), 255-259.
  • Lynn, R. (2012b). North-south differences in Spain in IQ, educational attainment, per capita income, literacy, life expectancy and employment. Mankind Quarterly, 52(3/4), 265.
  • Lynn, R. (2013). Who discovered the Flynn effect? A review of early studies of the secular increase of intelligence. Intelligence, 41(6), 765-769.
  • Lynn, R., & Cheng, H. (2013). Differences in intelligence across thirty-one regions of China and their economic and demographic correlates. Intelligence, 41(5), 553-559.
  • Lynn, R., & Yadav, P. (2015). Differences in cognitive ability, per capita income, infant mortality, fertility and latitude across the states of India. Intelligence, 49, 179-185.
  • Neisser, U., et al. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51(2), 77–101.
  • Piffer, D., & Lynn, R. (2014). New evidence for differences in fluid intelligence between north and south Italy and against school resources as an explanation for the north–south IQ differential. Intelligence, 46, 246-249.
  • Zhao, N. (2009). The Minimum Sample Size in Factor Analysis. Encorewiki.

 

As Jason Malloy has mentioned, it is strange that in the race intelligence debates, people usually cite the same few studies over and over:

Shortly after writing that post, I decided that more needed to be written about transracial adoption research as a behavior genetic experiment. Arthur Jensen, Richard Lynn, and J. Philippe Rushton have all cited the Minnesota Transracial Adoption Study, as well as several IQ studies of transracially adopted Asians, in support of the hereditarian position. And Richard Nisbett has referenced several other adoption studies that suggest no racial gaps. However, I suspected there was more data for transracially adopted children than what this small cadre of scientists had already discussed (at the very least for important variables other than intelligence); research that could give us a more complete picture of what these unusual children become, and what this can tell us about the causes of ethnic differences in socially valued outcomes.

One way of finding hard-to-find studies is to go through articles that cite popular reviews of the topic. Sometimes this is not possible because the reviews have thousands of citations, but sometimes it is. In this case, I used the review: Kim, W. J. (1995). International adoption: A case review of Korean children. Child Psychiatry and Human Development, 25(3), 141-154. One then simply looks up the studies that cite it (101 results on Google Scholar). We are looking for studies that report country or area of origin as well as relevant criterion variables such as IQ scores, GPA, educational attainment/achievement, income, socioeconomic status/S factor, crime rates, use of public benefits and so on.

One such study is: Lindblad, F., Hjern, A., & Vinnerljung, B. (2003). Intercountry Adopted Children as Young Adults—‐A Swedish Cohort Study. American Journal of Orthopsychiatry, 73(2), 190-202. The abstract is promising:

In a national cohort study, the family and labor market situation, health problems, and education of 5,942 Swedish intercountry adoptees born between 1968 and 1975 were examined and compared with those of the general population, immigrants, and a siblings group—all age matched—in national registers from 1997 to 1999. Adoptees more often had psychiatric problems and were longtime recipients of social welfare. Level of education was on par with that of the general population but lower when adjusted for socioeconomic status.

The sample consists of:

There were 5,942 individuals in the adoptee study group: 3,237 individuals were born in the Far East (2,658 were born in South Korea), 1,422 in South Asia, 871 in Latin America, and 412 in Africa. In the other study groups there were 1,884 siblings, 8,834 European immigrants, 3,544 non-European immigrants, and 723,154 individuals in the general population.

So, by the usual standards, this is a very large study. We are interested in the region of birth results. They are in two tables:

Table 7, Table 8

We can note that the Far East — i.e. mostly South Korean, presumably the rest are North Korean (?), Chinese, Japanese, Vietnamese (?) — usually gets the better outcomes. They were less often married, mixed results for living with parents, more likely to have a university degree, less likely to have only primary school, more likely to be in the workforce, less likely to be unemployed, less likely to receive welfare, mixed results for hospital admissions for substance abuse, much less likely to be admitted for alcohol abuse (likely to be due to Asian alcohol flush syndrome), less likely to be admitted for a psychiatric diagnosis, and less likely to receive disability pension.

It would probably have been better if one could aggregate the results and look at the general socioeconomic factor instead. That is not possible with the above results, since there are only 4 cases and 11 variables. One could calculate a score by choosing some or all of the variables, or one could assign factor loadings manually and then calculate scores. I calculated a unit-weighted score based on all but the first two indicators (married and living with parents, since these are not socioeconomically important). Two indicators (uni degree and workforce) were reversed (by taking 1/OR). I also calculated the median score, which is resistant to outliers (e.g. the alcohol abuse indicator). Results:

 

Socioeconomic outcomes by region of origin, and estimated S scores
Group                Latin America   Africa   South Asia   Far East
Uni.degree           2.50            1.43     1.67         1
Only.primary.ed      1.60            1.50     1.00         1
Workforce            1.43            1.43     1.11         1
Unemployed           1.30            0.90     1.30         1
Welfare.use          1.90            1.50     1.30         1
Substance.abuse      2.70            0.70     1.00         1
Alcohol.abuse        4.50            4.90     3.60         1
Psychiatric.diag     1.50            1.40     1.20         1
Disability.pension   1.30            1.80     1.30         1
Mean.S               2.08            1.73     1.50         1
Median.S             1.60            1.43     1.30         1
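The aggregation just described is easy to reproduce. A small Python sketch using the Latin America column from the table above (the analysis itself was done in R):

```python
import statistics

# Direction-aligned odds ratios for Latin America from the table above
# (uni degree and workforce already reversed by 1/OR; Far East = 1 is
# the reference group).
latin_america = [2.50, 1.60, 1.43, 1.30, 1.90, 2.70, 4.50, 1.50, 1.30]

mean_s = sum(latin_america) / len(latin_america)  # unit-weighted score
median_s = statistics.median(latin_america)       # outlier-resistant score
```

The mean reproduces the 2.08 shown for Latin America and the median the 1.60.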

 

It is interesting to see that the Africans did better than the Latin Americans. Perhaps something strange is going on. Perhaps the Latin Americans come from countries with high African admixture, or perhaps it is some kind of selection effect.

In their discussion they write:

There were considerable differences between adoptees from different geographical regions with better outcomes in many respects for children from the Far East, in this context mainly South Korea. Similar positive adjustment results concerning Asian adoptees have been presented previously. For instance, an excellent prognosis concerning adjustment and identity development in Chinese adoptees in Britain was described (Bagley, 1993). A Dutch group recently presented data about academic achievement and intelligence in 7-year-old children adopted in infancy (Stams, Juffer, Rispens, & Hoksbergen, 2000). The South Korean group had high IQs with 31% above a score of 120. Pre- and postnatal care before adoption seems to be particularly well organized in South Korea (Kim, 1995), which may be one important reason for the positive outcome. The differences among the geographic regions may also, however, be due to a large number of other factors such as differences in nutrition, motives behind the adoption, quality of care in the orphanage-foster home before the adoption, genetic dispositions, and Swedish prejudices against “foreign-looking” people. Another explanation may be a larger number of younger infants in the South Korean group. However, that is not possible to verify from our register data.

The usual cultural explanations.

I have also contacted the Danish statistics office to hear if they have Danish data.


Abstract

I analyze the S factor in Italian regions by reanalyzing data published by Lynn (2010a) as well as new data compiled from the Italian statistics agency (7 and 10 socioeconomic variables, respectively). The S factors from the two datasets are highly correlated (.92) and both are strongly correlated with a G factor extracted from PISA scores (.93 and .88).

Introduction

One can study a given human trait at many levels. Probably the most common is the individual level. The next-most common is the international level, and the least common perhaps the intranational. This last one can be done at various levels too, e.g. state, region, commune, and city. These divisions usually vary by country.

The study of general intelligence (GI) at these higher levels has been called the ecology of intelligence by Richard Lynn (1979, 1980) and the sociology of intelligence by Gottfredson (1998). Lynn’s two old papers cited before actually contain quite a bit of data which can be re-analyzed too. I will do so in a future post. I also decided that this series of posts will have to turn into one big paper with a review and meta-analysis. There are strong patterns in the data not previously explored or synthesized by researchers.

Lynn has published a number of papers on the regions of Italy (2010a, 2010b, 2012, and Piffer and Lynn, 2014), and it is this topic I turn to in this post.

Lynn’s 2010 data

True to his style, Lynn (2010a) contains the raw data used for his analysis. It is fortunate because it means anyone can re-analyze them. His paper contains the following variables:

  1. 3x PISA subtests: reading, math, science
  2. An average of these PISA scores
  3. An IQ derived from the average
  4. Stature for 1855, 1910, 1927 and 1980
  5. Per capita income for 1970 and 2003
  6. Infant mortality for 1955 and 1999
  7. Literacy 1880
  8. Years of education 1951, 1971 and 2001
  9. Latitude.

These data are given for 12 Italian regions.

Lynn himself merely did a correlational analysis and discussed the results. The data, however, can be usefully factor analyzed to extract a G factor (from the three PISA subtests) and an S factor (from the socioeconomic variables). I imported the data into R.

Lynn’s choice of variables is quite odd. They are not all from the same years, presumably because he picked them from various other papers instead of going to the Italian statistics website to fetch some himself. This raises the question of how to analyze them. I did a factor analysis (MinRes, default settings for fa() from the psych package) on the newer socioeconomic data only, the older data only, and all of it. The two factor analyses of the limited datasets did not reveal anything interesting not shown in the full analysis, so I only show results from the full analysis. Note that by doing this, I broke the rule of thumb of having at least 2 cases per variable, because there are 7 variables in my analysis but only 13 cases with full data. The loading plot is:

S_loadings

This plot reveals no surprises.

The loadings for the G factor with the PISA subtests were all .99, so it is pointless to post a plot. The scatter plot for G and S is:

S_G

And MCV with reversing:

MCV_r

New data

Being dissatisfied with the data Lynn reported, I decided to collect more data. The PISA 2012 results include scores for more regions than before, which allows for an analysis with more cases. This also means that one can use more variables in the factor analysis. The new PISA data cover 22 regions, so one can use about 11 variables. However, one region (Südtirol) had missing data, leaving only 21 regions available for analysis, so I settled on using 10 variables.

To get data for the analysis, I followed the approach taken in the previous post on the S factor in US states. I went to the official statistics bank, IStat, and fetched data for the regions. Like before, for MCV to work well, one needs a diverse selection of variables, so that there is diversity in their S loadings (not just direction of loading). I settled on the following 10 variables:

  1. Political participation index, 9 years
  2. Percent with normal weight, 9 years
  3. Percent smokers, 10 years
  4. Intentional homicide rate, 4 years
  5. Total crime rate, 4 years
  6. Unemployment, 10 years
  7. Life expectancy males, 10 years
  8. Total fertility rate, 10 years
  9. Interpersonal trust index, 5 years
  10. No savings percent, 10 years

For each variable, I calculated the mean across the years, fetching the last 10 years of data when available.

For cognitive data, I fetched the regional scores for reading, mathematics and science subtests from PISA 2012, Annex B2.

Factor analysis

I proceeded like above. The loadings plot is:

S2_loadings

There are two odd results. Total crime rate has a slight positive loading (.16) while intentional homicide rate has a strong negative loading (-.72). Lynn reported a similar finding in his paper on the British Isles (Lynn, 1979). He explained it as being due to urbanization, which increases population density, which increases crime rates (more opportunities, more interpersonal conflicts). An alternative hypothesis is that the total crime rate is being increased by immigrants, who live mostly in the north. Perhaps one could get crime rates for natives only to test this. A third hypothesis is that it has to do with differences in the legal system, for instance prosecutorial practice in deciding which acts to prosecute.

The second odd result is that fertility has a positive loading. Generally, it has been found that fertility has a slight negative correlation with GI and the s factor at the individual level, see e.g. Lynn (2011). It has also been found that internationally, GI has a strong negative relationship to fertility, -.5 to -.7 depending on the measure (Shatz, 2008; Lynn and Harvey, 2008). I found something similar, -.5, when I examined Danish immigrant groups by country of origin (Kirkegaard, 2014). However, if one examines European countries only, one sees that fertility is relatively ‘high’ (a bit below 2) in the northern countries (Nordic countries, UK) and low in the southern and eastern countries. This means that across European countries, the correlation between fertility and IQ (e.g. PISA) is positive. Maybe this has some relevance to the current finding. Maybe immigrants are pulling fertility up in the northern regions.

There is little to report from the factor analysis of PISA results. All loadings between .98 and .99.

Scatter plot of S and G

S2_G2

MCV with reversing

MCV2_r

Inter-dataset scatter plots

To examine the inter-dataset stability of factor scores:

S_S2 G_G2

For one case, the Lynn dataset had data for a merged region. I merged the two regions in the new dataset to match it up against the one from Lynn’s. This is the conservative choice. One could have used Lynn’s data for both regions instead which would have increased the sample size by 1.

Discussion

The results for the regional G and S in Italian regions are especially strong. They rival even the international S factor in their correlation with the G estimates. Italy really is a very divided country. Stability across datasets was very strong too, so Lynn’s odd choice of data was not inflating the results.

MCV worked better in the dataset with more and more diverse indicator variables for S, as would be expected if the correlation was artificially low in the first dataset due to restriction of range in the S loadings.

Supplementary material

All project files (R source code, data files, plots) are available on the Open Science Framework repository.

Thanks to Davide Piffer for catching an error + help in matching the regions up from the two datasets.

References

  • Gottfredson, L. S. (1998). Jensen, Jensenism, and the sociology of intelligence. Intelligence, 26(3), 291-299.
  • Kirkegaard, E. O. (2014). Criminality and fertility among danish immigrant populations. Open Differential Psychology.
  • Lynn, R. (1979). The social ecology of intelligence in the British Isles. British Journal of Social and Clinical Psychology, 18(1), 1-12.
  • Lynn, R. (1980). The social ecology of intelligence in France. British Journal of Social and Clinical Psychology, 19(4), 325-331.
  • Lynn, R., & Harvey, J. (2008). The decline of the world’s IQ. Intelligence, 36(2), 112-120.
  • Lynn, R. (2010a). In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence, 38(1), 93-100.
  • Lynn, R. (2010b). IQ differences between the north and south of Italy: A reply to Beraldo and Cornoldi, Belacchi, Giofre, Martini, and Tressoldi. Intelligence, 38(5), 451-455.
  • Lynn, R. (2011). Dysgenics: Genetic deterioration in modern populations. Second edition. Westport CT.
  • Lynn, R. (2012). IQs in Italy are higher in the north: A reply to Felice and Giugliano. Intelligence, 40(3), 255-259.
  • Piffer, D., & Lynn, R. (2014). New evidence for differences in fluid intelligence between north and south Italy and against school resources as an explanation for the north–south IQ differential. Intelligence, 46, 246-249.
  • Shatz, S. M. (2008). IQ and fertility: A cross-national study. Intelligence, 36(2), 109-111.


Abstract

I analyzed the S factor in US states by compiling a dataset of 25 diverse socioeconomic indicators. Results show that Washington, DC is a strong outlier; if it is excluded, the S factor correlates strongly with state IQ at .75.

Ethnoracial demographics of the states are related to the state’s IQ and S in the expected order (White>Hispanic>Black).

Introduction and data sources

In my previous two posts, I analyzed the S factor in 33 Indian states and 31 Chinese regions. In both samples I found strongish S factors and they both correlated positively with cognitive estimates (IQ or G). In this post I used cognitive data from McDaniel (2006). He gives two sets of estimated IQs based on SAT-ACT and on NAEP. Unfortunately, they only correlate .58, so at least one of them is not a very accurate estimate of general intelligence.

His article also reports some correlations between these IQs and socioeconomic variables: gross state product per capita, median income and percent in poverty. However, data for these variables are not given in the article, so I did not use them. It is not quite clear where his data came from.

However, with cognitive data like this and the relatively large number of datapoints (50 or 51 depending on whether the District of Columbia is included), it is possible to do a rather good study of the S factor and its correlates. High quality data for US states are readily available, so results should be strong. Factor analysis requires a case-to-variable ratio of at least 2 to deliver reliable results (Zhao, 2009). So, this means that one can do an S factor analysis with about 25 variables.

Thus, I set out to find about 25 diverse socioeconomic variables. There are two reasons to gather a very diverse sample of variables. First, for method of correlated vectors to work (Jensen, 1998), there must be variation in the indicators’ loading on the factor. Lack of variation causes restriction of range problems. Second, lack of diversity in the indicators of a latent variable leads to psychometric sampling error (Jensen, 1994; review post here for general intelligence measures).

My primary source was The 2012 Statistical Abstract website. I simply searched for “state” and picked various measures. I tried to pick things that weren’t too dependent on geography. E.g., kilometers of coastline per capita would be very bad, since it is not socioeconomic and is almost entirely dependent on geographical factors. To increase reliability, I generally used all data for the last 10 years and averaged them. Curious readers should see the datafile for details.

I ended up with the following variables:

  1. Murder rate per 100k, 10 years
  2. Proportion with high school or more education, 4 years
  3. Proportion with bachelor or more education, 4 years
  4. Proportion with advanced degree or more, 4 years
  5. Voter turnout, presidential elections, 3 years
  6. Voter turnout, house of representatives, 6 years
  7. Percent below poverty, 10 years
  8. Personal income per capita, 1 year
  9. Percent unemployed, 11 years
  10. Internet usage, 1 year
  11. Percent smokers, male, 1 year
  12. Percent smokers, female, 1 year
  13. Physicians per capita, 1 year
  14. Nurses per capita, 1 year
  15. Percent with health care insurance, 1 year
  16. Percent in ‘Medicaid Managed Care Enrollment’, 1 year
  17. Proportion of population urban, 1 year
  18. Abortion rate, 5 years
  19. Marriage rate, 6 years
  20. Divorce rate, 6 years
  21. Incarceration rate, 2 years
  22. Gini coefficient, 10 years
  23. Top 1%, proportion of total income, 10 years
  24. Obesity rate, 1 year

Most of these are self-explanatory. For the economic inequality measures, I found 6 different measures (here). Since I wanted diversity, I chose the Gini and the top 1% because these correlated the least and are well-known.
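The selection rule just used — among several overlapping inequality measures, keep the pair that correlates least — can be sketched as follows. The data here are simulated for illustration, not the actual state-level measures:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Simulate 50 states x 6 inequality measures that share a common component
# but differ in noise, so some pairs correlate more than others.
common = rng.normal(size=(50, 1))
noise_sd = np.array([0.2, 0.3, 0.4, 0.6, 0.9, 1.2])
measures = common + rng.normal(size=(50, 6)) * noise_sd

# Correlation matrix across measures, then the pair with the smallest
# absolute correlation.
R = np.corrcoef(measures, rowvar=False)
least_pair = min(combinations(range(6), 2), key=lambda ij: abs(R[ij]))
```

With real data, this is one way to formalize the Gini/top-1% choice described above.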

Aside from the above, I also fetched the racial proportions for each state to see how they relate to the S factor (and to the various measures above, but to get these, run the analysis yourself).

I used R with RStudio for all analyses. Source code and data is available in the supplementary material.

Missing data

In large analyses like this there are nearly always some missing data. The matrixplot() looks like this:

matrixplot

(It does not seem possible to change the font size, so I have cut off the names at the 8th character.)

We see that there aren’t many missing values. I imputed all the missing values with the VIM package (deterministic imputation using multiple regression).
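The same kind of deterministic regression imputation can be sketched in a few lines. This is a rough Python illustration of the idea only, not VIM's actual algorithm or the R code used here:

```python
import numpy as np

# Toy matrix: 6 states x 3 indicators, one missing value in the last column.
X = np.array([
    [1.0,  2.1, 0.5],
    [2.0,  3.9, 1.1],
    [3.0,  6.2, 1.4],
    [4.0,  8.1, np.nan],
    [5.0,  9.8, 2.6],
    [6.0, 12.1, 3.0],
])

col = 2
miss = np.isnan(X[:, col])

# Fit a linear regression of the incomplete column on the complete ones,
# using only the fully observed rows.
obs = np.delete(X[~miss], col, axis=1)
design = np.column_stack([np.ones(obs.shape[0]), obs])
beta, *_ = np.linalg.lstsq(design, X[~miss, col], rcond=None)

# Deterministic imputation: predict the missing entries from the fit.
pred = np.delete(X[miss], col, axis=1)
X[miss, col] = np.column_stack([np.ones(pred.shape[0]), pred]) @ beta
```

VIM's implementation is more sophisticated, but the core idea is the same: predict missing entries from the complete columns.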

Extreme values

A useful feature of the matrixplot() is that it shows in greytone the relative outliers for each variable. We can see that some variables have some hefty outliers, which may be data errors. Therefore, I examined them.

The outlier in the two university degree variables is DC, surely because the government is based there and there is a huge lobbyist center. For the marriage rate, the outlier is Nevada. Many people go there and get married. Physician and nurse rates are also DC, same reason (maybe one could make up some story about how politics causes health problems!).

After imputation, the matrixplot() looks like this:

matrixplot_after

It is pretty much the same as before, which means that we did not substantially change the data — good!

Factor analyzing the data

Then we factor analyze the data (socioeconomic data only). We plot the loadings (sorted) with a dotplot:

S_loadings_US

We see a wide spread of variable loadings. All but two of them load in the expected direction — positive for socially valued outcomes, negative for the opposite — showing the existence of the S factor. The ‘exceptions’ are: abortion rate, which loads at +.60 but is often seen as a negative thing. This is however open to discussion; maybe higher abortion rates can be interpreted as less backward religiousness or more freedom for women (both good in my view). The other is marriage rate at -.19 (a weak loading). I’m not sure how to interpret that. In any case, for both of these it is debatable which direction is the desirable one.

Correlations with cognitive measures

And now comes the big question: does state S correlate with our IQ estimates? It does; the correlations are .14 (SAT-ACT) and .43 (NAEP). These are fairly low given our expectations. Perhaps we can work out what is happening if we plot them:

S_IQ1S_IQ2

Now we can see what is going on. First, the SAT-ACT estimates are pretty strange for three states: California, Arizona and Nevada. I note that these are three adjacent states, so it is quite possibly some kind of regional testing practice that’s throwing off the estimates. If someone knows, let me know. Second, DC is a huge outlier in S, as we may have expected from our short discussion of extreme values above. It is basically a city-state, roughly half composed of low-S (low-SES) African Americans and half of an upper class connected to the government.

Dealing with outliers – Spearman’s correlation, a.k.a. rank-order correlation

There are various ways to deal with outliers. One simple way is to convert the data into ranks and correlate those as normal, which yields Spearman’s rank-order correlation. Pearson’s correlation is sensitive to outliers and non-normal data, which are common at higher levels of aggregation (states, countries). Using rank-order correlations gets us these:

S_IQ1_rank S_IQ2_rank

So the correlations improved a lot for the SAT-ACT IQs and a bit for the NAEP ones.
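The rank trick can be sketched in a few lines. This is a toy illustration in Python (the analyses in this post were done in R) with made-up numbers, showing that Pearson's r computed on ranks is Spearman's rho, and that it is far less sensitive to a single outlier:

```python
def ranks(xs):
    """Ranks (1 = smallest); ties not handled, which is fine for this toy example."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

s  = [1.2, 0.8, -0.3, 0.1, 5.0]   # toy S scores with one extreme outlier (think DC)
iq = [101, 99, 96, 98, 100]       # toy IQ estimates

print(round(pearson(s, iq), 2))                # → 0.59 (dragged around by the outlier)
print(round(pearson(ranks(s), ranks(iq)), 2))  # → 0.9 (Spearman: Pearson on the ranks)
```

Ranking caps the leverage of the outlier: it becomes merely "the highest rank" rather than a point far out on the scale.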

Results without DC

Another idea is simply excluding the strange DC case, and then re-running the factor analysis. This procedure gives us these loadings:

S_loadings_noDC

(I have reversed the loadings because the factor came out reversed, e.g. with education loading negatively.)

These are very similar to before: excluding DC did not substantially change the results (good). Actually, the factor is a bit stronger without DC throwing off the results (using minres, proportion of variance = 36% vs. 30%). The reason is that DC is an odd case, scoring very high on some indicators (e.g. education) and very poorly on others (e.g. murder rate).

The correlations are:

S_IQ1_noDCS_IQ2_noDC

So, not surprisingly, we see an increase in the effect sizes from before: .14 to .31 and .43 to .69.

Without DC and rank-order

Still, one may wonder what the results would be with rank-order and DC removed. Like this:

S_IQ1_noDC_rankS_IQ2_noDC_rank

So compared to before, effect size increased for the SAT-ACT IQ and decreased slightly for the NAEP IQ.

Now, one could also run regressions weighted by some metric of state population, and this might further change the results. But I think it is safe to say that the cognitive measures correlate in the expected direction, and that with the removal of one strange case, the better measure performs at about the expected level, with or without rank-order correlations.

Method of correlated vectors

The MCV (Jensen, 1998) can be used to test whether a specific latent variable underlying some data is responsible for the observed correlations between the factor score (or a factor score approximation such as IQ, an unweighted sum) and some criterion variable. Although originally invented for use on cognitive test data and the general intelligence factor, I have previously used it in other areas (e.g. Kirkegaard, 2014). I also used it in the previous post on the S factor in India (but not China, because there was a lack of variation in the loadings of the socioeconomic variables on the S factor).

Using the dataset without DC, the MCV result for the NAEP dataset is:

MCV_NAEP_noDC

So, again we see that MCV can reach high r's when there is a large number of diverse variables. But note that the value can be considered inflated because of the negative loadings of some variables; it is debatable whether one should reverse them.
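Mechanically, MCV is just the correlation between two vectors: the indicators' loadings on the factor, and the indicators' correlations with the criterion. A minimal sketch in Python with made-up loadings and correlations (not values from the dataset), including the optional reversing of negatively loaded indicators mentioned above:

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical S loadings and criterion (IQ) correlations, made up for illustration
loadings   = [0.85, 0.70, 0.55, -0.60, -0.80]
crit_corrs = [0.60, 0.50, 0.20, -0.30, -0.55]

# Plain MCV: the negative loadings stretch both vectors, which can inflate r
print(round(pearson(loadings, crit_corrs), 2))  # → 0.98

# Reversed MCV: flip negatively loaded indicators so all load positively
rev_load = [abs(l) for l in loadings]
rev_crit = [c if l >= 0 else -c for l, c in zip(loadings, crit_corrs)]
print(round(pearson(rev_load, rev_crit), 2))  # → 0.97
```

With real data the gap between the plain and reversed versions can be much larger than in this toy case, which is why the choice matters.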

Racial proportions of states and S and IQ

A last question is whether the states’ racial proportions predict their S scores and their IQ estimates. There are several problems with this. First, the actual genomic ancestry proportions within these racial groups vary by state (Bryc, 2015). Second, even within ‘pure-breed’ groups, general intelligence varies by state (this was shown in the testing of draftees in the US in WWI). Third, there is an ‘other’ group that also varies from state to state, presumably comprising different kinds of Asians (Japanese, Chinese, Indians, other Southeast Asians). Fourth, it is unclear how one should combine these proportions into an estimate for correlation analysis, or how to model them. Standard multiple regression cannot handle this kind of data because of the perfect linear dependency: the proportions must sum to 1 (100%), so any one proportion is a linear function of the others. Methods exist for such compositional data, but I am not familiar with them. Given these four problems, one should not expect near-perfect results, but one would expect most relationships to go in the right direction and to be of non-trivial size.

Perhaps the simplest way of analyzing it is with bivariate correlations. These are susceptible to confounding when e.g. white% correlates differentially with the other racial proportions. However, they should get the basic directions correct, if not the effect size ordering too.

Racial proportions, NAEP IQ and S

For this analysis I use only the NAEP IQs and without DC, as I believe this is the best subdataset to rely on. I correlate this with the S factor and each racial proportion. The results are:

Racial group NAEP IQ S
White 0.69 0.18
Black -0.5 -0.42
Hispanic -0.38 -0.08
Other -0.26 0.2

For NAEP IQ, depending on what one thinks of the ‘other’ category, these have either exactly or roughly the order one expects: W>O>H>B. If one thinks “other” is mostly East Asian (Japanese, Chinese, Korean) with higher cognitive ability than Europeans, one would expect O>W>H>B. For S, however, the order is O>W>H>B and the effect sizes are much weaker. In general, given the limitations above, these results are perhaps reasonable, if somewhat on the weak side for S.

Estimating state IQ from racial proportions using racial IQs

One way to utilize all four variables (white, black, hispanic and other) without having MR assign them weights is to weight them by known group IQs and then calculate a mean estimated IQ for each state.

Depending on which estimates for group IQs one accepts, one might use something like the following:

State IQ est. = White*100+Other*100+Black*85+Hispanic*90

Or, if one thinks the other group is somewhat higher than whites (not entirely unreasonable, but recall that the NAEP includes reading tests, on which foreigners and Asians perform less well), one might use 105 for the other group (#2). Or one might raise black and hispanic IQs a bit, perhaps to 88 and 93 (#3). Or do both (#4). I ran all of these variations, and the results are:

Variable Race.IQ Race.IQ2 Race.IQ3 Race.IQ4
Race.IQ 1 0.96 1 0.93
Race.IQ2 0.96 1 0.96 0.99
Race.IQ3 1 0.96 1 0.94
Race.IQ4 0.93 0.99 0.94 1
NAEP IQ 0.67 0.56 0.67 0.51
S 0.41 0.44 0.42 0.45

As far as I can tell, there is no strong reason to pick any one of these over the others. However, what we learn is that the correlation between the racial IQ estimate and the NAEP IQ is somewhere between .51 and .67, and between the racial IQ estimate and S somewhere between .41 and .45. I think these are reasonable results given the problems of this analysis described above.
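The weighted-mean estimate itself is simple arithmetic; here is a sketch in Python (the post's analyses were done in R) with hypothetical proportions for one state, using the variant #1 and variant #2 weights described above:

```python
def state_iq(white, black, hispanic, other,
             w_iq=100, b_iq=85, h_iq=90, o_iq=100):
    """Population-weighted mean IQ estimate; proportions must sum to 1."""
    assert abs(white + black + hispanic + other - 1) < 1e-9
    return white * w_iq + black * b_iq + hispanic * h_iq + other * o_iq

# Hypothetical state: 60% white, 15% black, 20% hispanic, 5% other
print(round(state_iq(0.60, 0.15, 0.20, 0.05), 2))            # variant 1 → 95.75
print(round(state_iq(0.60, 0.15, 0.20, 0.05, o_iq=105), 2))  # variant 2 → 96.0
```

Because the weights are fixed in advance rather than fitted, the estimate sidesteps the collinearity problem that blocks multiple regression on the raw proportions.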

Added March 11: New NAEP data

I came across a series of posts by science blogger The Audacious Epigone, who has also estimated state IQs based on NAEP data. He has done this three times (for the 2013, 2009 and 2005 data), so along with McDaniel’s estimates this gives us 4 non-identical estimates. First, we check their intercorrelations, which should be very high (r>.9) for this kind of data. Second, we extract the general factor and use it as the best estimate of NAEP IQ for the states (I deleted DC again). Third, we see how all 5 variables relate to the S factor from before.

Results:

NAEP.IQ.13 NAEP.IQ.09 NAEP.IQ.05 NAEP M. NAEP.1
NAEP.IQ.09 0.96        
NAEP.IQ.05 0.83 0.89      
NAEP M. 0.88 0.93 0.96    
NAEP.1 0.95 0.99 0.95 0.97  
S 0.81 0.76 0.64 0.69 0.75

Where NAEP.1 is the general NAEP factor. We see that the intercorrelations between the NAEP estimates are not that high; they average only .86. Their loadings on the common factor are very high though, .95 to .99. Still, using the common factor should improve results by reducing measurement error. And it does: NAEP IQ x S is now .75, up from .69.
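As a rough illustration of what extracting a common factor from a few highly correlated estimates does, here is a Python sketch using the first principal component of a toy correlation matrix (the post itself used factor analysis in R; the numbers below are made up, not the NAEP data):

```python
def first_pc_weights(corr):
    """Leading eigenvector of a correlation matrix via power iteration."""
    n = len(corr)
    v = [1.0] * n
    for _ in range(200):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy correlation matrix for three highly intercorrelated IQ estimates
R = [[1.00, 0.95, 0.90],
     [0.95, 1.00, 0.92],
     [0.90, 0.92, 1.00]]
w = first_pc_weights(R)
# Near-equal weights: the common factor is close to a plain average here,
# but it downweights whichever estimate correlates least with the others
print([round(x, 2) for x in w])
```

The averaging-out of estimate-specific error is why the factor score correlates more strongly with S than most of the individual estimates do.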

Scatter plot

NAEP_S_new

Supplementary material

Data files and R source code available on the Open Science Framework repository.

References

Bryc, K., Durand, E. Y., Macpherson, J. M., Reich, D., & Mountain, J. L. (2015). The Genetic Ancestry of African Americans, Latinos, and European Americans across the United States. The American Journal of Human Genetics, 96(1), 37-53.

Jensen, A. R., & Weng, L. J. (1994). What is a good g?. Intelligence, 18(3), 231-258.

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Kirkegaard, E. O. W. (2014). The international general socioeconomic factor: Factor analyzing international rankings. Open Differential Psychology.

McDaniel, M. A. (2006). State preferences for the ACT versus SAT complicates inferences about SAT-derived state IQ estimates: A comment on Kanazawa (2006). Intelligence, 34(6), 601-606.

Zhao, N. (2009). The Minimum Sample Size in Factor Analysis. Encorewiki.org.

Abstract

I analyze the S factor in Chinese states using data obtained from Lynn and Cheng as well as new data obtained from the Chinese statistical agency. I find that S correlates .42 with IQ and .48 with ethnic Han%.

Introduction

Richard Lynn has been publishing a number of papers on IQ in regions/areas/states within countries along with various socioeconomic correlates. However, usually his and his co-authors’ analyses are limited to reporting the correlation matrix. This is a pity, because the data allow for a more interesting analysis with the S factor (see Kirkegaard, 2014). I have previously re-analyzed Lynn and Yadav (2015) in a blogpost to be published in a journal sometime ‘soon’. In this post I re-analyze the data reported in Lynn and Cheng (2013), as well as more data I obtained from the official Chinese statistical database.

Data sources

In their paper, Lynn and Cheng report 6 variables: 1) IQ; 2) sample size for the IQ measurement; 3) % of population ethnic Han; 4) years of education; 5) percent of higher education (percent with higher education?); and 6) GDP per capita. This only includes 3 socioeconomic variables, the bare minimum for an S factor analysis, so I decided to see if I could find some more.

I spent some time on the database and found various useful variables:

  • Higher education per capita for 6 years
  • Medical technical personnel for 5 years
  • Life expectancy for 1 year
  • Proportion of population illiterate for 9 years
  • Internet users per capita for 10 years
  • Invention patents per capita for 10 years
  • Proportion of population urban for 9 years
  • Scientific personnel for 8 years

I used all available data for the last 10 years in each case. This was done to increase the reliability of the measurements, just in case there was some measurement error, and to reduce transient effects. In general, though, regional differences were very consistent throughout the years, so this had little effect. One could instead factor analyze the yearly data and use the factor scores, but this would make the scores harder for the reader to interpret.

For the variables with data for multiple years, I calculated the average yearly intercorrelation to see how reliable the measures were. In all but one case the average intercorrelation was >=.94; in the last case it was .86. There would be little to gain from factor analyzing these data, and simply averaging the years, unlike factor scores, preserves interpretable values. Thus, I averaged each variable over the years to produce one variable. This left me with 11 socioeconomic variables.
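The averaging and reliability check can be sketched as follows in Python (hypothetical yearly values for four regions; the actual analysis was done in R):

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical yearly values of one indicator for four regions
y2010 = [0.10, 0.20, 0.35, 0.50]
y2011 = [0.12, 0.19, 0.36, 0.52]
y2012 = [0.11, 0.22, 0.33, 0.55]
years = [y2010, y2011, y2012]

# Average yearly intercorrelation as a reliability check
pairs = [(i, j) for i in range(len(years)) for j in range(i + 1, len(years))]
avg_r = sum(pearson(years[i], years[j]) for i, j in pairs) / len(pairs)

# Average over years for each region -> one interpretable variable
region_avg = [sum(v) / len(v) for v in zip(*years)]
print(round(avg_r, 2), [round(x, 2) for x in region_avg])
```

When the yearly intercorrelations are this high, the average carries essentially the same information as a factor score, while staying in the original units.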

Examining the S factor and MCV

The next step was to factor analyze the 11 variables and see if one general factor emerged with the expected directions of loadings. It did; the loadings are as follows:

S_loadings

All the loadings are in the expected direction. Aside from the one negative loading (illiteracy), they are all fairly strong. This means that MCV (method of correlated vectors) analysis is rather uninformative here, since there is little inter-loading variation. One could probably fix this by going back to the databank and fetching some variables that are worse measures of S and whose loadings vary more.

Doing the MCV anyway results in r=.89 (inflated by the one negative loading). Excluding the negative loading gives r=.38, which is however solely due to the scientific personnel datapoint. To properly test it, one needs to fetch more data that varies more in its S loading.

MCV

S, IQ and Han%

We are now ready for the two main results, i.e. correlation of S with IQs and % ethnic Han.

S_IQS_Han

The correlations are of moderate strength, r=.42 and r=.48. This is somewhat lower than found in analyses of Danish and Norwegian immigrant groups (Kirkegaard, 2014, r’s about .55), much lower than that found between countries (r=.86), and lower than that found in India (r=.61). The IQ result is mostly driven by the two large city areas of Beijing and Shanghai, so it is not that convincing. But it is tentative and consistent with previous results.

Han ethnicity seems to be a somewhat more reasonable predictor in this dataset. The effect need not be due to higher general intelligence; the Han may have other qualities that cause them to do well, perhaps being more conscientious or more rule-conforming, the latter arguably being rather important in an authoritarian society like China.

Supplementary material

The R code and datasets are available at the Open Science Framework repository for this study.

References


Abstract

I reanalyze data published by Lynn and Yadav (2015) for Indian states. I find both G and S factors which correlate at .61.

The statistical language R is used throughout the paper and the code is explained. The paper is thus both an analysis and a walkthrough of how to conduct this type of study.

Introduction

Richard Lynn and Prateek Yadav (2015) have a new paper out in Intelligence reporting various cognitive measures, socioeconomic outcomes and environmental factors for some 33 states and areas of India. Their analyses consist entirely of reporting the correlation matrix, but they also list the actual data in two tables. This means that someone like me can reanalyze it.

They have data for the following variables:

 

1. Language Scores Class III (T1). These data consisted of the language scores of class III 11–12 year old school students in the National Achievement Survey (NAS) carried out in Cycle-3 by the National Council of Educational Research and Training (2013). The population sample comprised 104,374 students in 7046 schools across 33 states and union territories (UTs). The sample design for each state and UT involved a three-stage cluster design which used a combination of two probability sampling methods. At the first stage, districts were selected using the probability proportional to size (PPS) sampling principle in which the probability of selecting a particular district depended on the number of class 5 students enrolled in that district. At the second stage, in the chosen districts, the requisite number of schools was selected. PPS principles were again used so that large schools had a higher probability of selection than smaller schools. At the third stage, the required number of students in each school was selected using the simple random sampling (SRS) method. In schools where class 5 had multiple sections, an extra stage of selection was added with one section being sampled at random using SRS.

The language test consisted of reading comprehension and vocabulary, assessed by identifying the word for a picture. The test contained 50 items and the scores were analyzed using both Classical Test Theory (CTT) and Item Response Theory (IRT). The scores were transformed to a scale of 0–500 with a mean of 250 and standard deviation of 50. There were two forms of the test, one in English and the other in Hindi.

2. Mathematics Scores Class III (T2). These data consisted of the mathematics scores of Class III school students obtained by the same sample as for the Language Scores Class III described above. The test consisted of identifying and using numbers, learning and understanding the values of numbers (including basic operations), measurement, data handling, money, geometry and patterns. The test consisted of 50 multiple-choice items scored on a scale of 0–500 with a mean of 250 and a standard deviation of 50.

3. Language Scores Class VIII (T3). These data consisted of the language scores of class VIII (14–15 year olds) obtained in the NAS (National Achievement Survey) Class VIII (Cycle-3), a program carried out by the National Council of Educational Research and Training (2013). The sampling methodology was the same as that for class III described above. The population sample comprised 188,647 students in 6722 schools across 33 states and union territories. The test was a more difficult version of that for class III, and as for class III, scores were analyzed using both Classical Test Theory (CTT) and Item Response Theory (IRT), and were transformed to a scale of 0–500 with a mean of 250.

4. Mathematics Scores Class VIII (T4). These data consisted of the mathematics scores of Class VIII (14–15 year olds) school students obtained by the same sample as for the Language Scores Class VIII described above. As with the other tests, the scores were transformed to a scale of 0–500 with a mean of 250 and standard deviation of 50.

5. Science Scores Class VIII (T5). These data consisted of the science scores of Class VIII (14–15 year olds) school students obtained by the same sample as for the Language Scores Class VIII described above. As with the other tests, the scores were transformed to a scale of 0–500 with a mean of 250 and standard deviation of 50. The data were obtained in 2012.

6. Teachers’ Index (TI). This index measures the quality of the teachers and was taken from the Elementary State Education Report compiled by the District Information System for Education (DISE, 2013). The data were recorded in September 2012 for teachers of grades 1–8 in 35 states and union territories. The sample consisted of 1,431,702 schools recording observations from 199.71 million students and 7.35 million teachers. The Teachers’ Index is constructed from the percentages of schools with a pupil–teacher ratio in primary greater than 35, and the percentages of single-teacher schools, teachers without professional qualification, and female teachers (in schools with 2 or more teachers).

7. Infrastructure Index (II). These data were taken from the Elementary State Education Report 2012–13 compiled by the District Information System for Education (2013). The sample was the same as for the Teachers’ Index described above. This index measures the infrastructure for education and was constructed from the percentages of schools with proper chairs and desks, drinking water, toilets for boys and girls, and kitchens.

8. GDP per capita (GDP per cap). These data are the net state domestic product of the Indian states in 2008–09 at constant prices, given by the Reserve Bank of India (2013). Data are not available for the Union Territories.

9. Literacy Rate (LR). This consists of the percentage of the population aged 7 and above that is literate, given in the 2011 census published by the Registrar General and Census Commission of India (2011).

10. Infant Mortality Rate (IMR). This consists of the number of deaths of infants less than one year of age per 1000 live births in 2005–06, given in the National Family Health Survey, Infant and Child Mortality, by the Indian Institute of Population Sciences (2006).

11. Child Mortality Rate (CMR). This consists of the number of deaths of children 1–4 years of age per 1000 live births in 2005–06, given by the Indian Institute of Population Sciences (2006).

12. Life Expectancy (LE). This consists of the number of years an individual is expected to live after birth, given in a 2007 survey carried out by the Population Foundation of India (2008).

13. Fertility Rate (FR). This consists of the number of children born per woman in each state and union territory in 2012, given by the Registrar General and Census Commission of India (2012).

14. Latitude (LAT). This consists of the latitude of the center of the state.

15. Coast Line (CL). This consists of whether states have a coastline or are landlocked, and is included to examine whether the possession of a coastline is related to the state IQs.

16. Percentage of Muslims (MS). This is included to examine a possible relation to the state IQs.

This article will include the R code line for line commented as a helping exercise for readers not familiar with R but who can perhaps be convinced to give it a chance! :)

library(devtools) #source_url
source_url("https://osf.io/j5nra/?action=download&version=2") #mega functions from OSF
#source("mega_functions.R")
library(psych) #various
library(car) #scatterplot
library(Hmisc) #rcorr
library(VIM) #imputation

This loads a variety of libraries that are useful.

Getting the data into R

cog = read.csv("Lynn_table1.csv",skip=2,header=TRUE,row.names = 1) #load cog data
socio = read.csv("Lynn_table2.csv",skip=2,header=TRUE,row.names = 1) #load socio data

The files are the two files one can download from ScienceDirect: Lynn_table1 and Lynn_table2. The code reads the files assuming values are separated by commas (CSV = comma-separated values), skips the first two lines because they do not contain data, reads the first line as headers, and uses the first column as rownames.

Merging data into one object

Ideally, I’d like all the data in one object for easier use. However, since it comes in two, they have to be merged. For this purpose, I rely upon a dataset merger function I wrote some months ago to handle international data. It can however handle any merging where one wants to match rows by name across datasets and combine them into one. This function, merge_datasets(), is found in the mega_functions we imported earlier.

However, first, it is a good idea to make sure the names do match when they are supposed to. To check this we can type:

cbind(rownames(cog),rownames(socio))

I put the output into Excel to check for mismatches:

Andhra Pradesh Andhra Pradesh TRUE
Arunachal Pradesh Arunachal Pradesh TRUE
Bihar Bihar TRUE
Chattisgarh Chattisgarh TRUE
Goa Goa TRUE
Gujarat Gujarat TRUE
Haryana Haryana TRUE
Himanchal Pradesh Himanchal Pradesh TRUE
Jammu Kashmir Jammu & Kashmir FALSE
Jharkhand Jharkhand TRUE
Karnataka Karnataka TRUE
Kerala Kerala TRUE
Madhya Pradesh Madhya Pradesh TRUE
Maharashtra Maharashtra TRUE
Manipur Manipur TRUE
Meghalaya Meghalaya TRUE
Mizoram Mizoram TRUE
Nagaland Nagaland TRUE
Odisha Odisha TRUE
Punjab Punjab TRUE
Rajashthan Rajasthan FALSE
Sikkim Sikkim TRUE
Tamil Nadu TamilNadu FALSE
Tripura Tripura TRUE
Uttarkhand Uttarkhand TRUE
Uttar Pradesh Uttar Pradesh TRUE
West Bengal West Bengal TRUE
A & N Islands A & N Islands TRUE
Chandigarh Chandigarh TRUE
D & N Haveli D & N Haveli TRUE
Daman & Diu Daman & Diu TRUE
Delhi Delhi TRUE
Puducherry Puducherry TRUE

So we see that the order is the same; however, there are three names that don’t match despite being supposed to. We can fix this discrepancy by using the rownames of one dataset for the other:

rownames(cog) = rownames(socio) #use rownames from socio for cog

This makes the rownames of cog the same as those for socio. Now they are ready for merging.

Incidentally, since the order is the same, we could have simply merged with the command:

cbind(cog, socio)

However, it is good practice to use merge_datasets(), since it is much more generally useful.

Missing and broken data

Next up, we examine missing data and broken data.

#examine missing data
miss.table(socio)
miss.table(cog)
table(miss.case(socio))
matrixplot(socio)

The first, miss.table(), is another custom function from mega_functions. It outputs the number of missing values per variable. The outputs are:

Lit  II  TI GDP IMR CMR FER  LE LAT  CL  MS 
  0   0   0   6   0   4   0   0   0   0   0
T1 T2 T3 T4 T5 CA 
 0  0  0  0  0  0

So we see that there are 10 missing values in the socio and 0 in cog.

Next we want to see how these are missing. We can do this e.g. by plotting it with a nice function like matrixplot() (from VIM) or by tabling the missing cases. Output:

 0  1  2 
27  2  4

matrixplot

So we see that there are a few cases that miss data from 1 or 2 variables. Nothing serious.

One could simply ignore this, but that would not be utilizing the data to the fullest extent possible. The better solution is to impute the missing data rather than remove cases with missing data.

However, before we do this, look at the TI variable above. The greyscale shows the standardized values of the datapoints, and in this variable we see one very strong outlier. If we take a look back at the data table, we see that it is likely an input error. All the other datapoints have values between 0 and 1, but the one for Uttarkhand is 247,200.595… I don’t see how the input error happened, so the best way is to remove it:

#fix broken datapoint
socio["Uttarkhand","TI"] = NA

Then, we impute the missing data in the socio variable:

#impute data
socio2 = irmi(socio, noise.factor = 0) #no noise

The second parameter concerns the noise used for multiple imputation, which we don’t use here. Setting it to 0 means that the imputation is deterministic and hence exactly reproducible for other researchers.
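irmi() performs regression-based imputation; as a toy illustration of what "deterministic" means here, consider simple mean imputation in Python (this is not what irmi actually does, just the simplest deterministic scheme):

```python
def impute_mean(column):
    """Deterministically replace missing values (None) with the column mean."""
    known = [x for x in column if x is not None]
    m = sum(known) / len(known)
    return [m if x is None else x for x in column]

ti = [0.43, None, 0.55, 0.61]  # toy column with one missing value
print([round(x, 2) for x in impute_mean(ti)])  # → [0.43, 0.53, 0.55, 0.61]
```

Because no random noise is added, running the procedure twice on the same data yields identical imputed values, which is what makes the analysis exactly reproducible.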

Finally, we can compare the non-imputed dataset to the imputed one:

#compare desp stats
describe(socio)
describe(socio2)
round(describe(socio)-describe(socio2),2) #discrepancy values, rounded

The output is large, so I won’t show it here, but it shows that the means, standard deviations, ranges, etc. of the variables with and without imputation are similar, which means that the procedure did not substantially distort the data.

Finally, we merge the data to one dataset:

#merge data
data = merge_datasets(cog,socio2,1) # merge above

Next, we want to do factor analysis to extract the general socioeconomic factor and the general intelligence factor from their respective indicators. And then we add them back to the main dataset:

#factor analysis
fa = fa(data[1:5]) #fa on cognitive data
fa
data["G"] = as.numeric(fa$scores)

fa2 = fa(data[7:14]) #fa on SE data
fa2
data["S"] = as.numeric(fa2$scores)

Columns 1-5 are the 5 cognitive measures; columns 7-14 are the socioeconomic ones. One can disagree about the illiteracy variable, which could be taken as belonging to the cognitive variables rather than the socioeconomic ones; it is similar to the third cognitive variable, which is a language test. I follow the practice of the authors.

The output from the first factor analysis is:

    MR1     h2   u2 com
T1 0.40 0.1568 0.84   1
T2 0.10 0.0096 0.99   1
T3 0.46 0.2077 0.79   1
T4 0.93 0.8621 0.14   1
T5 0.92 0.8399 0.16   1
Proportion Var 0.42

This is using the default settings, which is minimum residuals. Since the method used typically does not matter except for PCA on small datasets, this is fine.

All loadings are positive as expected, but T2 is only slightly so.

We put the factor scores back into the dataset and call it “G” (Rindermann, 2007).

The factor analysis output for socioeconomic variables is:

      MR1    h2   u2 com
Lit  0.79 0.617 0.38   1
II   0.36 0.128 0.87   1
TI   0.91 0.824 0.18   1
GDP  0.76 0.579 0.42   1
IMR -0.92 0.842 0.16   1
CMR -0.85 0.721 0.28   1
FER -0.84 0.709 0.29   1
LE   0.14 0.019 0.98   1
Proportion Var 0.55

Strong positive loadings for: proportion of population literate (LIT), teacher index (TI), GDP, medium positive for infrastructure index (II), weak positive for life expectancy (LE). Strong negative for infant mortality rate (IMR), child mortality rate (CMR) and fertility. All of these are in the expected direction.

Then we extract the factor scores and add them back to the dataset and call them “S”.

Correlations

Finally, we want to check out the correlations with G and S.

#Pearson results
results = rcorr2(data)
View(results$r)  #view all correlations
results$r[,18:19] #S and G correlations
results$n #sample size

#Spearman
results.s = rcorr2(data, type="spearman") #spearman
View(results.s$r) #view all correlations

#discrepancies
results.c = results$r-results.s$r

We look at both the Pearson and Spearman correlations because the data may not be normal and may have outliers; Spearman’s is resistant to these problems. The discrepancy values show how much larger the Pearson correlation is than the Spearman.

There are too many correlations to output here, so we focus on those involving G and S (columns 18:19).

 Variable G S
T1 0.41 0.41
T2 0.10 -0.39
T3 0.48 0.16
T4 0.97 0.62
T5 0.96 0.53
CA 0.87 0.38
Lit 0.66 0.81
II 0.45 0.37
TI 0.40 0.93
GDP 0.40 0.78
IMR -0.60 -0.94
CMR -0.54 -0.87
FER -0.56 -0.86
LE 0.01 0.14
LAT -0.53 -0.34
CL -0.63 -0.54
MS -0.24 -0.08
G 1.00 0.59
S 0.59 1.00

So we see that G and S correlate at .59, fairly high and similar to previous within-country results with immigrant groups (.54 in Denmark and .59 in Norway; Kirkegaard (2014a), Kirkegaard and Fuerst (2014)), but not quite as high as the between-country results (.86-.87; Kirkegaard (2014b)). Lynn and Yadav mention that similar data exist for France, Britain and the US. These can serve for reanalysis with respect to S factors at the regional/state level.

Finally, we may want to plot the main result:

#Plots
title = paste0("India: State G-factor correlates ",round(results$r["S","G"],2)," with state S-factor, N=",results$n["S","G"])
scatterplot(S ~ G, data, smoother=FALSE, id.n=nrow(data),
            xlab = "G, extracted from 5 indicators",
            ylab = "S, extracted from 8 indicators",
            main = title)

G_S

It would be interesting if one could obtain genomic admixture measures for each state and see how they relate to these results, since such relationships have been found repeatedly elsewhere and are a strong prediction from genetic theory.

Update

Lynn has sent me the correct datapoint. It is 0.595. The imputed value was around .72. I reran the analysis with this value and imputed the rest. It doesn’t change much. The new results are slightly stronger.

Variable New G New S Disc. G Disc. S
T1 0.41 0.42 0.00 -0.01
T2 0.10 -0.37 0.00 -0.02
T3 0.48 0.18 0.00 -0.02
T4 0.97 0.63 0.00 -0.02
T5 0.96 0.54 0.00 -0.01
CA 0.87 0.40 0.00 -0.02
Lit 0.66 0.81 0.00 -0.01
II 0.45 0.37 0.00 0.00
TI 0.42 0.92 -0.02 0.01
GDP 0.40 0.78 0.00 0.00
IMR -0.60 -0.95 0.00 0.00
CMR -0.54 -0.87 0.00 0.00
FER -0.56 -0.86 0.00 -0.01
LE 0.01 0.14 0.00 0.00
LAT -0.53 -0.35 0.00 0.01
CL -0.63 -0.54 0.00 0.00
MS -0.24 -0.09 0.00 0.00
G 1.00 0.61 0.00 -0.01
S 0.61 1.00 -0.01 0.00

Method of correlated vectors

This study is special in that we have two latent variables, each with its own set of indicator variables. This means that we can use Jensen’s method of correlated vectors (MCV; Jensen (1998)), and also a new version which I shall creatively dub “double MCV” (DMCV), which uses both latent factors instead of only one.

The method consists of correlating the factor loadings of a set of indicator variables with the correlations of each indicator variable with a criterion variable. Jensen used this with the general intelligence factor (g factor) and its subtests, with criterion variables such as inbreeding depression in IQ scores and brain size.

So, to do regular MCV in this study, we first choose either the S or the G factor. Then we correlate the loadings of each indicator on that factor with the indicator’s correlation with the criterion variable, i.e. whichever of the S/G factors we didn’t choose.

Doing this analysis is in fact very easy here, because the results reported in the table above with S and G are exactly what we need to correlate.

## MCV
#Double MCV
round(cor(results$r[1:14,18],results$r[1:14,19]),2)
#MCV on G
round(cor(results$r[1:5,18],results$r[1:5,19]),2)
#MCV on S
round(cor(results$r[7:14,18],results$r[7:14,19]),2)

The results are: .87, .89, and .97. In other words, MCV gives a strong indication that it is the latent traits that are responsible for the observed correlations.

References

Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.

Kirkegaard, E. O. W. (2014a). Crime, income, educational attainment and employment among immigrant groups in Norway and Finland. Open Differential Psychology.

Kirkegaard, E. O. W., & Fuerst, J. (2014). Educational attainment, income, use of social benefits, crime rate and the general socioeconomic factor among 71 immigrant groups in Denmark. Open Differential Psychology.

Kirkegaard, E. O. W. (2014b). The international general socioeconomic factor: Factor analyzing international rankings. Open Differential Psychology.

Lynn, R., & Yadav, P. (2015). Differences in cognitive ability, per capita income, infant mortality, fertility and latitude across the states of India. Intelligence, 49, 179-185.

Rindermann, H. (2007). The g‐factor of international cognitive ability comparisons: The homogeneity of results in PISA, TIMSS, PIRLS and IQ‐tests across nations. European Journal of Personality, 21(5), 667-706.


So I decided to try it out, since I’m taking a break from reading Lilienfeld, which I had been doing for about 5 hours straight.

So the question is whether inbreeding measures have incremental validity over IQ and Islam, which I have previously used to examine immigrant performance in a number of studies.

So, to get the data into R, I OCR’d the PDF in ABBYY FineReader, since this program allows for easy copying of table data by row or column. I only wanted columns 1-2 and didn’t want to deal with the hassle of importing via a spreadsheet (which needs a consistent separator, e.g. comma or space). Then I merged it with the megadataset to create a new version, 2.0d.

Then I created a subset of the data with variables of interest, and renamed them (otherwise results would be unwieldy). Intercorrelations are:

row.names Cousin% CoefInbreed IQ Islam S.in.DK
1 Cousin% 1.00 0.52 -0.59 0.78 -0.76
2 CoefInbreed 0.52 1.00 -0.28 0.40 -0.55
3 IQ -0.59 -0.28 1.00 -0.27 0.54
4 Islam 0.78 0.40 -0.27 1.00 -0.71
5 S.in.DK -0.76 -0.55 0.54 -0.71 1.00

 

Spearman correlations, which are probably better due to the non-normal data:

row.names Cousin% CoefInbreed IQ Islam S.in.DK
1 Cousin% 1.00 0.91 -0.63 0.67 -0.73
2 CoefInbreed 0.91 1.00 -0.55 0.61 -0.76
3 IQ -0.63 -0.55 1.00 -0.23 0.72
4 Islam 0.67 0.61 -0.23 1.00 -0.61
5 S.in.DK -0.73 -0.76 0.72 -0.61 1.00

 

The fairly high correlations of the inbreeding measures with IQ and Islam mean that their incremental validity will likely be modest.

However, let’s try modeling them. I create 7 models of interest and compile the primary measure of interest from them, adjusted R2, into an object. It looks like this:

row.names R2 adj.
1 S.in.DK ~ IQ+Islam 0.5472850
2 S.in.DK ~ IQ+Islam+CousinPercent 0.6701305
3 S.in.DK ~ IQ+Islam+CoefInbreed 0.7489312
4 S.in.DK ~ Islam+CousinPercent 0.6776841
5 S.in.DK ~ Islam+CoefInbreed 0.7438711
6 S.in.DK ~ IQ+CousinPercent 0.5486674
7 S.in.DK ~ IQ+CoefInbreed 0.4979552

 

So we see that either of them adds a fair amount of incremental validity to the base model (line 1 vs. 2-3). They are in fact better than IQ if one substitutes them in (1 vs. 4-5). They can also substitute for Islam, but only with about the same predictive power (1 vs 6-7).

Replication for Norway

Replication is important for science. Let’s try the Norwegian data. The Finnish and Dutch data are not well-suited for this (too few immigrant groups; few outcome variables, i.e. only crime).

Pearson intercorrelations:

row.names CousinPercent CoefInbreed IQ Islam S.in.NO
1 CousinPercent 1.00 0.52 -0.59 0.78 -0.78
2 CoefInbreed 0.52 1.00 -0.28 0.40 -0.46
3 IQ -0.59 -0.28 1.00 -0.27 0.60
4 Islam 0.78 0.40 -0.27 1.00 -0.72
5 S.in.NO -0.78 -0.46 0.60 -0.72 1.00

 

Spearman:

row.names CousinPercent CoefInbreed IQ Islam S.in.NO
1 CousinPercent 1.00 0.91 -0.63 0.67 -0.77
2 CoefInbreed 0.91 1.00 -0.55 0.61 -0.71
3 IQ -0.63 -0.55 1.00 -0.23 0.75
4 Islam 0.67 0.61 -0.23 1.00 -0.47
5 S.in.NO -0.77 -0.71 0.75 -0.47 1.00

 

These look fairly similar to Denmark.

And the regression results:

row.names R2 adj.
1 S.in.NO ~ IQ+Islam 0.5899682
2 S.in.NO ~ IQ+Islam+CousinPercent 0.7053999
3 S.in.NO ~ IQ+Islam+CoefInbreed 0.7077162
4 S.in.NO ~ Islam+CousinPercent 0.6826272
5 S.in.NO ~ Islam+CoefInbreed 0.6222364
6 S.in.NO ~ IQ+CousinPercent 0.6080922
7 S.in.NO ~ IQ+CoefInbreed 0.5460777

 

Fairly similar too. If added, they have incremental validity (line 1 vs. 2-3). They perform better than IQ if substituted in, but not by as much as in the Danish data (1 vs. 4-5). They can also substitute for Islam (1 vs. 6-7).

How to interpret?

Since inbreeding does not seem to have any direct influence on the behavior that is reflected in the S factor, these findings are not so easy to interpret. Inbreeding leads to various health problems and lower g in offspring, the latter of which may have some effect. However, national IQs presumably already reflect the IQ lowered by inbreeding, so there should be no additional effect beyond national IQs. Perhaps inbreeding results in other psychological problems that are relevant.

Another idea is that inbreeding rates reflect non-g psychological traits that are relevant to adapting to life in Denmark. Perhaps it is a useful measure of clannishness, which would be reflected in hostility towards integration into Danish society (such as not getting an education, or antipathy towards ethnic Danes and resulting higher crime rates against them), which would in turn be reflected in the S factor.

The lack of relatively well established causal routes for interpreting the finding makes me somewhat cautious about how to interpret this.


 

##Code for merging cousin marriage + inbreeding data with megadataset
inbreed = read.table("clipboard", sep="\t",header=TRUE, row.names=1) #load data from clipboard
source("merger.R") #load mega functions
mega20d = read.mega("Megadataset_v2.0d.csv") #load latest megadataset
names = as.abbrev(rownames(inbreed)) #get abbreviated names
rownames(inbreed) = names #set them as rownames

#merge and save
mega20e = merge.datasets(mega20d,inbreed,1) #merge to create v. 2.0e
write.mega(mega20e,"Megadataset_v2.0e.csv") #save it

#select subset of interesting data
dk.data = subset(mega20e, select=c("Weighted.mean.consanguineous.percentage.HobenEtAl2010",
                                  "Weighted.mean.coefficient.of.inbreeding.HobenEtAl2010",
                                  "LV2012estimatedIQ",
                                  "IslamPewResearch2010",
                                  "S.factor.in.Denmark.Kirkegaard2014"))
colnames(dk.data) = c("CousinPercent","CoefInbreed","IQ","Islam","S.in.DK") #shorter var names
rcorr = rcorr(as.matrix(dk.data)) #correlation object
View(round(rcorr$r,2)) #view correlations, round to 2
rcorr.S = rcorr(as.matrix(dk.data),type = "spearman") #spearman correlation object
View(round(rcorr.S$r,2)) #view correlations, round to 2

#Multiple regression
library(QuantPsyc) #for beta coef
results = as.data.frame(matrix(data = NA, nrow=0, ncol = 1)) #empty matrix for results
colnames(results) = "R2 adj."
models = c("S.in.DK ~ IQ+Islam", #base model,
           "S.in.DK ~ IQ+Islam+CousinPercent", #1. inbreeding var
           "S.in.DK ~ IQ+Islam+CoefInbreed", #2. inbreeding var
           "S.in.DK ~ Islam+CousinPercent", #without IQ
           "S.in.DK ~ Islam+CoefInbreed", #without IQ
           "S.in.DK ~ IQ+CousinPercent", #without Islam
           "S.in.DK ~ IQ+CoefInbreed") #without Islam

for (model in models){ #run all the models
  fit.model = lm(model, dk.data) #fit model
  sum.stats = summary(fit.model) #summary stats object
  summary(fit.model) #summary stats
  lm.beta(fit.model) #standardized betas
  results[model,] = sum.stats$adj.r.squared #add result to results object
}
View(results) #view results

##Let's try Norway too
no.data = subset(mega20e, select=c("Weighted.mean.consanguineous.percentage.HobenEtAl2010",
                                  "Weighted.mean.coefficient.of.inbreeding.HobenEtAl2010",
                                  "LV2012estimatedIQ",
                                  "IslamPewResearch2010",
                                  "S.factor.in.Norway.Kirkegaard2014"))

colnames(no.data) = c("CousinPercent","CoefInbreed","IQ","Islam","S.in.NO") #shorter var names
rcorr = rcorr(as.matrix(no.data)) #correlation object
View(round(rcorr$r,2)) #view correlations, round to 2
rcorr.S = rcorr(as.matrix(no.data),type = "spearman") #spearman correlation object
View(round(rcorr.S$r,2)) #view correlations, round to 2

results = as.data.frame(matrix(data = NA, nrow=0, ncol = 1)) #empty matrix for results
colnames(results) = "R2 adj."
models = c("S.in.NO ~ IQ+Islam", #base model,
           "S.in.NO ~ IQ+Islam+CousinPercent", #1. inbreeding var
           "S.in.NO ~ IQ+Islam+CoefInbreed", #2. inbreeding var
           "S.in.NO ~ Islam+CousinPercent", #without IQ
           "S.in.NO ~ Islam+CoefInbreed", #without IQ
           "S.in.NO ~ IQ+CousinPercent", #without Islam
           "S.in.NO ~ IQ+CoefInbreed") #without Islam

for (model in models){ #run all the models
  fit.model = lm(model, no.data) #fit model
  sum.stats = summary(fit.model) #summary stats object
  summary(fit.model) #summary stats
  lm.beta(fit.model) #standardized betas
  results[model,] = sum.stats$adj.r.squared #add result to results object
}
View(results) #view results

There was some talk on Twitter about prison rates and inequality, and about IQ and inequality.

But what about prison data beyond those given above? I have downloaded the newest data from ICPS (rate data, not totals).

Now, what about all three variables?

#load mega20d as the datafile
ineqprisoniq = subset(mega20d, select=c("Fact1_inequality","LV2012estimatedIQ","PrisonRatePer100000ICPS2015"))
rcorr(as.matrix(ineqprisoniq),type = "spearman")
                            Fact1_inequality LV2012estimatedIQ PrisonRatePer100000ICPS2015
Fact1_inequality                        1.00             -0.51                        0.22
LV2012estimatedIQ                      -0.51              1.00                        0.16
PrisonRatePer100000ICPS2015             0.22              0.16                        1.00

n
                            Fact1_inequality LV2012estimatedIQ PrisonRatePer100000ICPS2015
Fact1_inequality                         275               119                         117
LV2012estimatedIQ                        119               275                         193
PrisonRatePer100000ICPS2015              117               193                         275

So IQ is slightly positively related to prison rates, and so is inequality. Positive? Isn’t it bad having people in prison? Well, not if the alternative is having them dead… historically, the punishment for most crimes was death. Although one need not be as excessive as the US is. Somewhere in the middle is perhaps best?

What if we combine them into a model?

model = lm(PrisonRatePer100000ICPS2015 ~ Fact1_inequality+LV2012estimatedIQ,ineqprisoniq)
summary = summary(model)
library(QuantPsyc) #for lm.beta
lm.beta(model) #standardized betas
prediction = as.data.frame(predict(model))
colnames(prediction) = "Predicted"
ineqprisoniq = merge.datasets(ineqprisoniq,prediction,1)
library(car) #for scatterplot
scatterplot(PrisonRatePer100000ICPS2015 ~ Predicted, ineqprisoniq,
            smoother=FALSE,id.n=nrow(ineqprisoniq))
> summary

Call:
lm(formula = PrisonRatePer100000ICPS2015 ~ Fact1_inequality + 
    LV2012estimatedIQ, data = ineqprisoniq)

Residuals:
    Min      1Q  Median      3Q     Max 
-153.61  -75.05  -31.53   44.62  507.34 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)   
(Intercept)       -116.451     88.464  -1.316  0.19069   
Fact1_inequality    31.348     11.872   2.640  0.00944 **
LV2012estimatedIQ    3.227      1.027   3.142  0.00214 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 113.6 on 114 degrees of freedom
  (158 observations deleted due to missingness)
Multiple R-squared:  0.09434,	Adjusted R-squared:  0.07845 
F-statistic: 5.938 on 2 and 114 DF,  p-value: 0.003523

> lm.beta(model)
Fact1_inequality LV2012estimatedIQ 
        0.2613563         0.3110241

This is a pretty bad model (var% = 8), but the directions held from before and were stronger; standardized betas are .26 and .31. The R2 seems awkwardly low to me given the betas.

More importantly, the residuals are clearly not normal as can be seen above. The QQ-plot is:

[Figure: QQ plot of the model residuals (QQ_plot)]

It is concave, so the residuals are not normally distributed. To get the diagnostic plots, simply use plot(model).
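For instance, a minimal sketch on toy data (not the prison dataset), just to illustrate calling plot() on an lm object:

```r
#toy regression with made-up variables
toy = data.frame(x = rnorm(100), z = rnorm(100))
toy$y = 2*toy$x + toy$z + rnorm(100)
fit = lm(y ~ x + z, toy)
par(mfrow = c(2,2)) #2x2 grid for the four standard plots
plot(fit) #residuals vs. fitted, QQ, scale-location, residuals vs. leverage
```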

Perhaps try using rank-order data:

ineqprisoniq = as.data.frame(apply(ineqprisoniq,2,rank,na.last="keep")) #rank order the data

And then rerunning the model gives:

> summary

Call:
lm(formula = PrisonRatePer100000ICPS2015 ~ Fact1_inequality + 
    LV2012estimatedIQ, data = ineqprisoniq)

Residuals:
     Min       1Q   Median       3Q      Max 
-100.236  -46.753   -8.507   46.986  125.211 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)        1.08557   18.32052   0.059    0.953    
Fact1_inequality   0.84766    0.16822   5.039 1.78e-06 ***
LV2012estimatedIQ  0.50094    0.09494   5.276 6.35e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 54.36 on 114 degrees of freedom
  (158 observations deleted due to missingness)
Multiple R-squared:  0.2376,	Adjusted R-squared:  0.2242 
F-statistic: 17.76 on 2 and 114 DF,  p-value: 1.924e-07

> lm.beta(model)
 Fact1_inequality LV2012estimatedIQ 
        0.4757562         0.4981808

Much better R2, the directions are the same but the betas are stronger, and the residuals look normalish from the above. The QQ plot, however, shows that they are still not normal:

[Figure: QQ plot of the rank-order model residuals (QQplot2)]

Prediction plots based on the models:

[Figures: predicted vs. observed prison rates for the raw model (prison) and the rank-order model (prison_rank)]

So is something strange going on with IQ, inequality and prison rates? Perhaps something nonlinear. Let’s plot them by IQ bins:

library(psych) #for describeBy
bins = cut(unlist(ineqprisoniq["LV2012estimatedIQ"]),5) #divide IQs into 5 bins
ineqprisoniq["IQ.bins"] = bins
describeBy(ineqprisoniq["PrisonRatePer100000ICPS2015"],bins) #prison rate stats per bin
library(gplots) #for plotmeans
plotmeans(PrisonRatePer100000ICPS2015 ~ IQ.bins, ineqprisoniq,
          main = "Prison rate by national IQ bins",
          xlab = "IQ bins (2012 data)", ylab = "Prison rate per 100000 (2014 data)")

[Figure: mean prison rate by national IQ bins (prison_IQ_bins)]

That looks like “bingo!” to me. We found the pattern.

What about inequality? The trouble is that the inequality data are horribly skewed, with almost all countries having low and nearly identical inequality compared with the extremes. The above approach therefore does not work well. I tried different numbers of bins too. Results look something like this:

bins = cut(unlist(ineqprisoniq["Fact1_inequality"]),5) #divide inequality scores into 5 bins
ineqprisoniq["inequality.bins"] = bins
plotmeans(PrisonRatePer100000ICPS2015 ~ inequality.bins, ineqprisoniq,
          main = "Prison rate by national inequality bins",
          xlab = "inequality bins", ylab = "Prison rate per 100000 (2014 data)")

[Figure: mean prison rate by national inequality bins (prison_inequality)]

So basically, the most equal countries to the left have low rates; rates are somewhat higher in the more unequal countries within the main group; and they vary, averaging lowish, among the very unequal countries (African countries without much infrastructure?).

Perhaps this is why the Equality Institute limited their analyses to the group on the left; otherwise they would not get the nice clear pattern they want. One can see it a little bit if one uses a high number of bins and ignores the groups to the right. E.g. 10 bins:

[Figure: mean prison rate by national inequality, 10 bins (prison_inequality_10bins)]

Among the first 3 groups, there is a slight upward trend.

www.goodreads.com/book/show/875481.Race

gen.lib.rus.ec/book/index.php?md5=5624936a816b96dd3e6a4af6808ee69b

I had seen references to this book in a number of places which got me curious. I am somewhat hesitant to read older books since I know much of what they discuss is dated and has been superseded by newer science. Sometimes, however, science (or the science culture) has gone wrong so one may actually learn more reading an older book than a newer one. Since fewer people read older books, one can sometimes find relevant but forgotten facts in them. Lastly, they can provide much needed historical information about the development of thinking about some idea or of some field. All of these remarks are arguably relevant to the race/population genetics controversy.

Still, I did not read the book immediately although I had a PDF of it. I ended up starting to read it more or less at random due to a short talk I had with John Fuerst about it (we are writing together on racial admixture, intelligence and socioeconomic outcomes in the Americas, and also wrote a paper on immigrant performance in Denmark).

So, the book really is dated. It spends hundreds of pages on arcane physical anthropology which requires one to have mastered human anatomy. Most readers have not mastered this discipline, so these parts of the book are virtually incomprehensible. However, they do give one a distinct impression of how physical anthropology was done in the old days: lots of observations of crania, other bones, noses, eyes and lids, teeth, lips, buttocks, etc., followed by attempts to find clusters in these data manually. No wonder they did not reach high agreement. The data are too scarce to find clusters in, and humans are not sufficiently good at cluster analysis at the intuitive level. Still, they did notice some patterns that are surely correct, such as the division between various African populations, Ainu vs. Japanese, that Europeans and Asians are more closely related, that Afghans etc. belong to the European supercluster, and so on. Clearly, these pre-genetic ideas were not all totally wrongheaded. Here’s the table of races and subraces from the end of the book. They seem reasonably in line with modern evidence.

[Table: Baker’s races and subraces, from the end of the book]

Some quotes:

The story of 7 ‘kinds’ of mosquitoes.

[Dobzhansky’s definition = ‘Species in sexual cross-fertilizing organisms can be defined as groups of populations which are reproductively isolated to the extent that the exchange of genes between them is absent or so slow that the genetic differences are not diminished or swamped.’]

Strict application of Dobzhansky’s definition results in certain very similar animals being assigned to different species. The malarial mosquitoes and their relatives provide a remarkable example of this. The facts are not only extremely interesting from the purely scientific point of view, but also of great practical importance in the maintenance of public health in malarious districts. It was discovered in 1920 that one kind of the genus Anopheles, called elutus, could be distinguished from the well-known malarial mosquito, A. maculipennis, by certain minute differences in the adult, and by the fact that its eggs looked different; but for our detailed knowledge of this subject we are mainly indebted to one Falleroni, a retired inspector of public health in Italy, who began in 1924 to breed Anopheles mosquitoes as a hobby. He noticed that several different kinds of eggs could be distinguished, that the same female always laid eggs having the same appearance, and that adult females derived from those eggs produced eggs of the same type. He realized that although the adults all appeared similar, there were in fact several different kinds, which he could recognize by the markings on their eggs. Falleroni named several different kinds after his friends, and the names he gave are the accepted ones today in scientific nomenclature.

It was not until 1931 that the matter came to the attention of L. W. Hackett, who, with A. Missiroli, did more than anyone else to unravel the details of this curious story.[449, 447, 448] The facts are these. There are in Europe six different kinds of Anopheles that cannot be distinguished with certainty from one another in the adult state, however carefully they are examined under the microscope by experts; a seventh kind, elutus, can be distinguished by minor differences if its age is known. The larvae of two of the kinds can be distinguished from one another by minute differences (in the type of palmate hair on the second segment, taken in conjunction with the number of branches of hair no. 2 on the fourth and fifth segments). Other supposed differences between the kinds, apart from those in the eggs, have been shown to be unreal.

In nature the seven kinds are not known to interbreed, and it is therefore necessary, under Dobzhansky’s definition, to regard them all as separate species.

The males of six of the seven species have the habit of ‘swarming’ when ready to copulate. They join in groups of many individuals, humming, high in the air; suddenly the swarm bursts asunder and rejoins. The females recognize the swarms of males of their own species, and are attracted towards them. Each female dashes in, seizes a male, and flies off, copulating.

With the exceptions mentioned, the only visible differences between the species occur at the egg-stage. The eggs of six of the seven species are shown in Fig. 8 (p. 76).

[Figure: the eggs of six of the seven Anopheles species (Baker’s Fig. 8)]

It will be noticed that each egg is roughly sausage-shaped, with an air-filled float at each side, which supports it in the water in which it is laid. The eggs of the different species are seen to differ in the length and position of the floats. The surface of the rest of the egg is covered all over with microscopic finger-shaped papillae, standing up like the pile of a carpet. It is these papillae that are responsible for the distinctive patterns seen on the eggs of the different species. Where the papillae are long and their tips rough, light is reflected to give a whitish appearance; where they are short and smooth, light passes through to reveal the underlying surface of the egg, which is black. The biological significance of these apparently trivial differences is unknown.

From the point of view of the ethnic problem the most interesting fact is this. Although the visible differences between the species are trivial and confined or almost confined to the egg-stage, it is evident that the nervous and sensory systems are different, for each species has its own habits. The males of one species (atroparvus) do not swarm. It has already been mentioned that the females recognize the males of their own species. Some of the species lay their eggs in fresh water, others in brackish. The females of some species suck the blood of cattle, and are harmless to man; those of other species suck the blood of man, and in injecting their saliva transmit malaria to him.

Examples could be quoted of other species that are distinguishable from one another by morphological differences no greater than those that separate the species of Anopheles; but the races of a single species—indeed, the subraces of a single race—are often distinguished from one another, in their typical forms, by obvious differences, affecting many parts of the body. It is not the case that species are necessarily very distinct, and races very similar. [p. 74ff]

Nature is very odd indeed! More on Wiki.

Some very strange examples of abnormalities of this sort have been recorded by reputable authorities. Buffon quotes two examples of an ‘amour violent’ between a dog and a sow. In one case the dog was a large spaniel on the property of the Comte de Feuillee, in Burgundy. Many persons witnessed ‘the mutual ardour of these two animals; the dog even made prodigious and oft-repeated efforts to copulate with the sow, but the unsuitability of their reproductive organs prevented their union.’ Another example, still more remarkable, occurred on Buffon’s own property. A miller kept a mare and a bull in the same stable. These two animals developed such a passion for one another that on all occasions when the mare was on heat, over a period of several years, the bull copulated with her three or four times a day, whenever he was free to do so. The act was witnessed by all the inhabitants of the place. [p. 92]

Of smelly Japanese:

There is, naturally enough, a correlation between the development of the axillary organ and the smelliness of the secretion of this gland (and probably this applies also to the a glands of the genito-anal region). Briefly, the Europids and Negrids are smelly, the Mongolids scarcely or not at all, so far as the axillary secretion is concerned. Adachi, who has devoted more study to this subject than anyone else, has summed up his findings in a single, short sentence: ‘The Mongolids are essentially an odourless or very slightly smelly race with dry ear-wax.’[5] Since most of the Japanese are free or almost free from axillary smell, they are very sensitive to its presence, of which they seem to have a horror. About 10% of Japanese have smelly axillae. This is attributed to remote Ainuid ancestry, since the Ainu are invariably smelly, like most other Europids, and a tendency to smelliness is known to be inherited among the Japanese.[5] The existence of the odour is regarded among Japanese as a disease, osmidrosis axillae, which warrants (or used to warrant) exemption from military service. Certain doctors specialize in its treatment, and sufferers are accustomed to enter hospital. [p. 173]

Japan always takes these things to a new level.

Measurements of adult stature, made on several thousand pairs of persons, show a rather close correspondence with these figures, namely, 0.507, 0.322, 0.543, and 0.287 respectively.[172] It will be noticed that the correlations are all somewhat higher than one would expect; that is to say, the members of each pair are, on average, rather more nearly of the same height than the simple theory would suggest. This is attributed in the main to the tendency towards assortative mating, the reality of which had already been recognized by Karl Pearson and Miss Lee in their paper published in 1903. [p. 462]

I didn’t know assortative mating was recognized so far back. This may be a good source to understand the historical development of understanding of assortative mating.

The reference is: Pearson, K., & Lee, A. (1903). On the laws of inheritance in man. I. Inheritance of physical characters. Biometrika, 2, 357-462.

Definition of intelligence?

What has been said on p. 496 may now be rewritten in the form of a short definition of intelligence, in the straightforward, everyday sense of that word. It is the ability to perceive, comprehend, and reason, combined with the capacity to choose worth-while subjects for study, eagerness to acquire, use, transmit, and (if possible) add to knowledge and understanding, and the faculty for sustained effort towards these ends (cf. p. 438). One might say briefly that a person is intelligent in so far as his cognitive ability and personality tend towards productiveness through mental activity. [p. 495ff]

Baker prefers a broader definition of “intelligence” which includes certain non-cognitive parts. He uses “cognitive ability” the way many people nowadays use “general cognitive ability”.

And now surely at the end of the book, the evil master-racist privileged white male John Baker tells us what to do with the information we just learned in the book:

Here, on reaching the end of the book, I must repeat some words that I wrote years ago when drafting the Introduction (p. 6), for there is nothing in the whole work that would tend to contradict or weaken them:
Every ethnic taxon of man includes many persons capable of living responsible and useful lives in the communities to which they belong, while even in those taxa that are best known for their contributions to the world’s store of intellectual wealth, there are many so mentally deficient that they would be inadequate members of any society. It follows that no one can claim superiority simply because he or she belongs to a particular ethnic taxon. [p. 534]

So, clearly according to our anti-racist heroes, Baker tells us to revel in our (sorry Jayman if you are reading!) European master ancestry, right?

edited: removed joke because public image -_-