Someone sent me this email:

Hello Emil I was wondering if you knew about any similar papers to this
one:
https://www.sciencedirect.com/science/article/abs/pii/S0160289608001591
Also, what are your thoughts on mutualism (which I assume you think is
false)? Does mutualism being wrong disprove gxe theories of
heritability? I know it disproves the Flynn model

How about these two? There are probably many more of these if one searches.
I don’t care much about these various models of the g factor. I consider the matter empirically undetermined at present. Personally, I think g is a useful index of some brain meta-property, which is basically just following what Arthur Jensen noted on occasion. Since the brain is a neural network, whatever g exactly is, it is related to the network models. Thus, I am happy to welcome more network psychometrics, and I don’t consider these models a challenge to g in any important sense. The challenge for the network models will be to explain typical research findings, such as the failure of training efforts to improve general intelligence despite producing gains in specific skills. Some recent work on this:

Ultimately, this work needs to be merged with the related work on artificial intelligence. The human brain is a supremely complicated neural network, and it is going to be impossible for us to understand directly. One needs orders of magnitude more complex neural networks to understand a given neural network, so only super-human levels of intelligence can really understand the human one. What we can do is collect a lot of detailed data, train computers to make predictions for us, and from these predictions extract some useful summary of the underlying model. I consider the various efforts in the neuroscience of intelligence to be steps toward this goal, even though they are quite basic at the moment. I follow the field only from a distance, as I lack the time to learn to work with neurodata, and these datasets are also still mostly hidden away (e.g. UK Biobank, Human Connectome). If researchers wanted rapid progress, they would pool their various private datasets and publish them as a prediction challenge on Kaggle, with a decent reward to attract good talent from the machine learning community. ISIR could spearhead such a challenge, and indeed should. But everybody knows psychologists aren’t serious people, so I am not holding my breath.
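To make the Kaggle-style idea concrete, here is a minimal sketch of that workflow in Python: simulate some stand-in "neurodata", fit a ridge regression to predict a criterion score, and check out-of-sample validity on a held-out split. Everything here (the feature count, the sparse true signal, the penalty value) is invented purely for illustration, not taken from any real dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for pooled neurodata: 500 subjects, 200 brain features.
n, p = 500, 200
X = rng.standard_normal((n, p))

# Assume (arbitrarily) a sparse true signal: only 10 features carry
# information about the criterion score.
beta = np.zeros(p)
beta[:10] = rng.standard_normal(10)
g = X @ beta + rng.standard_normal(n)  # noisy criterion scores

# Train/test split, as a prediction challenge would enforce.
X_tr, X_te = X[:400], X[400:]
g_tr, g_te = g[:400], g[400:]

# Ridge regression in closed form: w = (X'X + lam*I)^(-1) X'y
lam = 10.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ g_tr)

# Out-of-sample predictive validity, the number a leaderboard would rank.
pred = X_te @ w
r = np.corrcoef(pred, g_te)[0, 1]
print(round(r, 2))
```

The point of the sketch is only the shape of the pipeline: pooled data in, held-out predictions out, a single validity figure to compete on.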

Edited to add: I should note that the network research above is based on cognitive data, whereas where we ultimately need to go is network models of neurodata. However, since cognitive data is a crude mirror of the neurodata, I think whatever advances we make by ‘training on’ it will transfer to the neurodata in time. There are also some neuroscience papers using network models; something like this theory paper is what I have in mind.
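For readers unfamiliar with what network psychometrics actually computes on cognitive data, here is a minimal sketch of its typical object, a partial-correlation network, estimated from simulated test scores generated by a single latent factor. The subject count, loadings, and noise level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated scores on 6 cognitive tests for 1000 subjects, driven by one
# latent factor plus test-specific noise (a crude one-factor g world).
n_subj, n_tests = 1000, 6
g = rng.standard_normal(n_subj)
loadings = np.array([0.8, 0.7, 0.6, 0.7, 0.5, 0.6])
scores = np.outer(g, loadings) + rng.standard_normal((n_subj, n_tests)) * 0.6

# A partial-correlation network: invert the correlation matrix and rescale
# the off-diagonal entries. Each edge is the association between two tests
# after controlling for all the other tests.
R = np.corrcoef(scores, rowvar=False)
P = np.linalg.inv(R)
d = np.sqrt(np.diag(P))
pcor = -P / np.outer(d, d)
np.fill_diagonal(pcor, 0.0)

print(np.round(pcor, 2))
```

Note that in this one-factor world the edges come out positive: the positive manifold survives conditioning, which is why a network estimated from cognitive data is a mirror of g rather than a refutation of it.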