The most direct way to test this is to employ latent variable modeling (SEM/CFA) and correlate the general factors extracted from different IQ batteries administered to the same sample. Below I quote the abstracts from all such studies I am aware of.
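The logic of these studies can be sketched in a toy simulation: generate two dissimilar batteries whose subtests all load on a single latent g, extract one factor from each battery separately, and correlate the resulting factor scores. The battery sizes, loadings, and sample size below are purely illustrative assumptions, and the factor-score correlation is only a rough stand-in for the latent correlation a full SEM would estimate.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1000                      # hypothetical sample size
g = rng.standard_normal(n)    # the latent general factor

def simulate_battery(n_tests, loading=0.7):
    """Each subtest = loading * g + unique noise (loadings are illustrative)."""
    noise = rng.standard_normal((n, n_tests))
    return loading * g[:, None] + np.sqrt(1 - loading**2) * noise

battery_a = simulate_battery(10)   # e.g. a verbally oriented battery
battery_b = simulate_battery(8)    # e.g. a perceptual battery

# Extract one factor from each battery independently and score it
score_a = FactorAnalysis(n_components=1).fit_transform(battery_a).ravel()
score_b = FactorAnalysis(n_components=1).fit_transform(battery_b).ravel()

r = abs(np.corrcoef(score_a, score_b)[0, 1])  # sign of a factor is arbitrary
print(f"correlation between the two extracted g factors: {r:.2f}")
```

Because factor scores are imperfect measures of the latent factor, this observed correlation falls somewhat below the latent correlation of 1.0 built into the simulation; latent variable modeling corrects for exactly this attenuation, which is why the studies below can report correlations at or near unity.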
Johnson, W., Bouchard Jr, T. J., Krueger, R. F., McGue, M., & Gottesman, I. I. (2004). Just one g: consistent results from three test batteries. Intelligence, 32(1), 95-107.
The concept of a general intelligence factor or g is controversial in psychology. Although the controversy swirls at many levels, one of the most important involves g’s identification and measurement in a group of individuals. If g is actually predictive of a range of intellectual performances, the factor identified in one battery of mental ability tests should be closely related to that identified in another dissimilar aggregation of abilities. We addressed the extent to which this prediction was true using three mental ability batteries administered to a heterogeneous sample of 436 adults. Though the particular tasks used in the batteries reflected varying conceptions of the range of human intellectual performance, the g factors identified by the batteries were completely correlated (correlations were .99, .99, and 1.00). This provides further evidence for the existence of a higher-level g factor and suggests that its measurement is not dependent on the use of specific mental ability tasks.
Johnson, W., te Nijenhuis, J., & Bouchard Jr, T. J. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81-95.
In a recent paper, Johnson, Bouchard, Krueger, McGue, and Gottesman (2004) addressed a long-standing debate in psychology by demonstrating that the g factors derived from three test batteries administered to a single group of individuals were completely correlated. This finding provided evidence for the existence of a unitary higher-level general intelligence construct whose measurement is not dependent on the specific abilities assessed. In the current study we constructively replicated this finding utilizing five test batteries. The replication is important because there were substantial differences in both the sample and the batteries administered from those in the original study. The current sample consisted of 500 Dutch seamen of very similar age and somewhat truncated range of ability. The batteries they completed included many tests of perceptual ability and dexterity, and few verbally oriented tests. With the exception of the g correlations involving the Cattell Culture Fair Test, which consists of just four matrix reasoning tasks of very similar methodology, all of the g correlations were at least .95. The lowest g correlation was .77. We discuss the implications of this finding.
The Cattell battery is a nonverbal battery with only 4 subtests, all of very similar methodology. Its lower g correlations are likely due to psychometric sampling error: a small, homogeneous set of subtests yields a less stable estimate of g.
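The effect of battery size can be illustrated with a small simulation (all numbers are illustrative assumptions, and unit-weighted composites stand in for extracted g factors): a four-subtest battery like the Cattell simply samples fewer indicators of g, so its composite is a noisier estimate and correlates less with the g of a reference battery.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                     # hypothetical sample size
g = rng.standard_normal(n)  # latent general factor

def composite(n_tests, loading=0.6):
    """Unit-weighted sum of subtests, each loading on g (illustrative values)."""
    tests = loading * g[:, None] + np.sqrt(1 - loading**2) * rng.standard_normal((n, n_tests))
    return tests.sum(axis=1)

reference = composite(12)   # a broad reference battery
short = composite(4)        # a short battery, like the 4-subtest Cattell
broad = composite(12)       # another broad battery

r_short = np.corrcoef(reference, short)[0, 1]
r_broad = np.corrcoef(reference, broad)[0, 1]
print(f"g correlation with 4-subtest battery:  {r_short:.2f}")
print(f"g correlation with 12-subtest battery: {r_broad:.2f}")
```

The short battery's composite correlates visibly less with the reference battery's, even though every subtest in the simulation measures the same latent g, mirroring the lower Cattell correlations reported above.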
Keith, T. Z., Kranzler, J. H., & Flanagan, D. P. (2001). What does the Cognitive Assessment System (CAS) measure? Joint confirmatory factor analysis of the CAS and the Woodcock-Johnson Tests of Cognitive Ability. School Psychology Review, 30(1), 89-119.
Results of recent research by Kranzler and Keith (1999) raised important questions concerning the construct validity of the Cognitive Assessment System (CAS; Naglieri & Das, 1997), a new test of intelligence based on the planning, attention, simultaneous, and sequential (PASS) processes theory of human cognition. Their results indicated that the CAS lacks structural fidelity, leading them to hypothesize that the CAS Scales are better understood from the perspective of Cattell-Horn-Carroll (CHC) theory as measures of psychometric g, processing speed, short-term memory span, and fluid intelligence/broad visualization. To further examine the constructs measured by the CAS, this study reports the results of the first joint confirmatory factor analysis (CFA) of the CAS and a test of intelligence designed to measure the broad cognitive abilities of CHC theory—the Woodcock-Johnson Tests of Cognitive Abilities-3rd Edition (WJ III; Woodcock, McGrew, & Mather, 2001). In this study, 155 general education students between 8 and 11 years of age (M = 9.81) were administered the CAS and the WJ III. A series of joint CFA models was examined from both the PASS and the CHC theoretical perspectives to determine the nature of the constructs measured by the CAS. Results of these analyses do not support the construct validity of the CAS as a measure of the PASS processes. These results, therefore, question the utility of the CAS in practical settings for differential diagnosis and intervention planning. Moreover, results of this study and other independent investigations of the factor structure of preliminary batteries of PASS tasks and the CAS challenge the viability of the PASS model as a theory of individual differences in intelligence.
The correlation between g factors was .98.
Floyd, R. G., Bergeron, R., Hamilton, G., & Parra, G. R. (2010). How do executive functions fit with the Cattell–Horn–Carroll model? Some evidence from a joint factor analysis of the Delis–Kaplan executive function system and the Woodcock–Johnson III tests of cognitive abilities. Psychology in the Schools, 47(7), 721-738.
This study investigated the relations among executive functions and cognitive abilities through a joint exploratory factor analysis and joint confirmatory factor analysis of 25 test scores from the Delis–Kaplan Executive Function System and the Woodcock–Johnson III Tests of Cognitive Abilities. Participants were 100 children and adolescents recruited from general education classrooms. Principal axis factoring followed by an oblique rotation yielded a six-factor solution. The Schmid–Leiman transformation was then used to examine the relations between specific cognitive ability factors and a general factor. A variety of hypothesis-driven models were also tested using confirmatory factor analysis. Results indicated that all tests measure the general factor, and 24 tests measure at least one of five broad cognitive ability factors outlined by the Cattell–Horn–Carroll theory of cognitive abilities. These results, with limitations considered, add to the body of evidence supporting the confluence of measures of executive functions and measures of cognitive abilities derived from individual testing.
Correlations between latent g’s were .99 and 1.00.
Floyd, R. G., Reynolds, M. R., Farmer, R. L., & Kranzler, J. H. (2013). Are the general factors from different child and adolescent intelligence tests the same? Results from a five-sample, six-test analysis. School Psychology Review, 42(4), 383-401.
Psychometric g is the largest, most general, and most predictive factor underlying individual differences across cognitive tasks included in intelligence tests. Given that the overall score from intelligence tests is interpreted as an index of psychometric g, we examined the correlations between general factors extracted from individually administered intelligence tests using data from five samples of children and adolescents (n = 83 to n = 200) who completed at least two of six intelligence tests. We found strong correlations between the general factors indicating that these intelligence tests measure the same construct, psychometric g. A total of three general-factor correlations exceeded .95, but two other correlations were somewhat lower (.89 and .92). In addition, specific ability factors correlated highly across tests in most (but not all) cases. School psychologists and other professionals should know that psychometric g and several specific abilities are measured in remarkably similar ways across a wide array of intelligence tests.
The somewhat lower correlations may be due to sampling error and to temporal changes related to cognitive growth.
Results like these are not limited to the general intelligence construct. Using ordinary exploratory factor analysis, I found that the general socioeconomic factor (S factor) extracted from the 54 variables of the Social Progress Index correlated .98 with that extracted from the 42-variable Democracy Index. Perhaps the correlation would be closer to 1.00 had I used structural equation modeling (a question left for another time).
Kirkegaard, E. O. W. (2014). The international general socioeconomic factor: Factor analyzing international rankings. Open Differential Psychology.