It works in practice, but does it work in (my) theory?

There’s a certain type of person who doesn’t produce any empirical contribution to “Reducing the heredity-environment uncertainty”. Instead, they contribute various theoretical arguments which they take to undermine the empirical data others provide. Usually, these people have a background in philosophy or some other theoretical field. A recent example of this pattern comes from Reddit, where Jinkinson Payne Smith (u/EverymorningWP) made this thread:

“Heritability and Heterogeneity: The Irrelevance of Heritability in Explaining Differences between Means for Different Human Groups or Generations” includes (on its page 398, section 2.1) some interesting paragraphs that decisively refute the claims of Neven Sesardic regarding “heritability”. One particularly relevant quote is this one: “The shortcomings I describe involve matters of logic and methodology; empirical considerations are beside the point.” So those who wish to use the “hitting-them-over-the-head” style* so common among behavior geneticists, involving the deflection of conceptual, logical criticisms to focus on narrow technical issues, should keep in mind that superficial empirical concerns are not the only ones worth taking seriously.

*The term “hitting them over the head” was coined by Aaron Panofsky in his 2014 book Misbehaving Science. As defined by Burt & Simons (2015, p. 104), “This approach involves dodging criticisms by misrepresenting arguments and insinuating that critics are politically motivated and reject scientific truths as well as focusing on a few “‘tractable’ empirical objections” while “ignoring the deeper theoretical objections””.

So: It works in practice, but does it work in (my) theory? These philosophical arguments are useless. Any physics professor knows this well: they get a steady stream of emails allegedly refuting relativity and quantum mechanics using thought experiments and logical arguments (like Time Cube). Such arguments convince no one, even when one cannot immediately spot the error (as with the ontological argument). It works the same way for these anti-behavioral-genetics theoretical arguments. If their proponents want to be taken seriously, they should 1) produce contrasting models, 2) that yield empirically testable predictions, and 3) show that the data fit their model and do not fit the current behavioral/quantitative genetics models.

For a historical example of this, see Jensen’s reply (pp. 451ff) to Schönemann’s sophistry (chapter 18) along the same lines regarding an obscure and empirically irrelevant problem in factor analysis (factor score indeterminacy). An excerpt:

Components analysis and factor analysis were invented and developed by the pioneers of differential psychology as a means of dealing with substantive problems in the measurement and analysis of human abilities. The first generation of factor analysts—psychologists such as Spearman, Burt, and Thurstone—were first of all psychologists, with a primary interest in the structure and nature of individual differences. For them factor analysis was but one methodological means of advancing empirical research and theory in the domain of abilities. But in subsequent generations experts in factor analysis have increasingly become more narrowly specialized. They show little or no interest in psychology, but confine their thinking to the ‘pure mathematics’ of factor analysis, without reference to any issues of substantive or theoretical importance. For some it is methodology for methodology’s sake, isolated from empirical realities, and disdainful of substantive problems and ‘dirty data’. Cut off from its origin, which was rooted in the study of human ability, some of the recent esoterica in factor analysis seem like a sterile, self-contained intellectual game, good fun perhaps, but having scarcely more relevance to anything outside itself than the game of chess. Schönemann is impressive as one of the game’s grandmasters. The so-called ‘factor indeterminacy’ problem, which is an old issue recognized in Spearman’s time, has, thanks to Schönemann, been revived as probably the most esoteric weapon in the ‘IQ controversy’.

A useful follow-up here is also:

  • Jensen, A. R., & Weng, L.-J. (1994). “What is a good g?” Intelligence, 18, 231–258.

which shows that if one extracts the g factor in many different ways, the resulting factor scores all correlate about .99 with each other, so the fact that one cannot uniquely determine the true factor scores for a given dataset is empirically irrelevant.
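To see what this looks like in practice, here is a minimal simulation sketch (my own illustration, not Jensen and Weng’s actual analysis): it generates a battery of tests sharing a single general factor, extracts “g” scores once as the first principal component and once via a one-factor maximum-likelihood-style factor analysis (using sklearn), and correlates the two. All variable names and parameter values are hypothetical.

```python
# Sketch: extract "g" two different ways and compare the scores.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)
n_people, n_tests = 1000, 10

# True general factor and (hypothetical) test loadings
g_true = rng.standard_normal(n_people)
loadings = rng.uniform(0.5, 0.8, n_tests)

# Each test score = loading * g + unique noise
tests = g_true[:, None] * loadings + 0.6 * rng.standard_normal((n_people, n_tests))

# Method 1: first principal component
g_pca = PCA(n_components=1).fit_transform(tests).ravel()

# Method 2: one-factor factor analysis (posterior-mean factor scores)
g_fa = FactorAnalysis(n_components=1).fit_transform(tests).ravel()

# Factors are identified only up to sign, so align before correlating
if np.corrcoef(g_pca, g_fa)[0, 1] < 0:
    g_fa = -g_fa

print(np.corrcoef(g_pca, g_fa)[0, 1])   # typically ~.99
print(np.corrcoef(g_pca, g_true)[0, 1])  # high but below 1: the indeterminacy
```

On data like these, the two sets of estimated scores typically correlate around .99, while neither correlates perfectly with the true factor scores. That residual gap is the indeterminacy; the near-identity of the estimates across methods is why it does not matter empirically.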

I must say that I do feel some sympathy with Jinkinson’s approach. I am myself somewhat of a verbal-tilt person who used to study philosophy (for my bachelor’s degree), and who used to engage in some of these ‘my a priori argument beats your data’ type arguments. I eventually wised up; I probably owe some of this to my years of drinking with the good physicists at Aarhus University, who do not care much for such empirically void arguments.
