Anon writes me:


Sorry for not tweeting this at you; I don’t have a Twitter.
I saw this post:
This is more conjecture than anything, but perhaps when people fill out personality surveys and report having a lot of some trait, what that really means is that they have a lot of that trait compared to the people they primarily interact with. Since races, like any groups, tend to disproportionately interact with their own members (in part discussed here: ), perhaps this inherently biases group-differences research in personality towards finding no differences, above and beyond the effects of publication bias.
I have thought of a way to test for this. Suppose we ran the personality equivalent of the tests for racial bias in IQ tests, i.e. checking for racial differences in predictive validity.
If personality tests then turned out not to be racially biased in their predictive validity, that would be evidence that, to the extent personality self-reports are relative, the races share the same reference point, which would be further evidence of racial equality beyond the self-report data itself.
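The predictive-validity comparison Anon proposes is essentially the classic Cleary regression approach to test bias: fit criterion on test score, then ask whether adding group intercept and group × score slope terms improves the fit. A minimal sketch on simulated data (all variable names and numbers are illustrative, and the data are generated with no bias by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = np.repeat([0.0, 1.0], n)      # two hypothetical groups
score = rng.normal(0, 1, 2 * n)       # self-report trait score
# Criterion generated with the SAME slope and intercept in both groups,
# i.e. no predictive bias by construction.
crit = 0.5 * score + rng.normal(0, 1, 2 * n)

def fit(X, y):
    """OLS fit; returns residual sum of squares and coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), beta

ones = np.ones(2 * n)
rss_r, beta_r = fit(np.column_stack([ones, score]), crit)
rss_f, _ = fit(np.column_stack([ones, score, group, group * score]), crit)

# F-test for the two added terms (group intercept, group x score slope).
# A large F would indicate differing regression lines, i.e. predictive bias.
F = ((rss_r - rss_f) / 2) / (rss_f / (2 * n - 4))
print(f"common slope = {beta_r[1]:.3f}, F(2, {2 * n - 4}) = {F:.2f}")
```

Since the data were generated without bias, the F statistic should be unremarkable; a real analysis would run this with actual test scores and criterion outcomes per group.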

I replied:

The thing you are talking about is called the reference group effect, and also the shifting standards model. Check out e.g.:

To expand a bit: such measurement biases should show up in analyses using DIF (differential item functioning) and MGCFA (multi-group confirmatory factor analysis), though these methods only detect bias that isn’t present at the same level across all items or tests. Such studies appear to be somewhat rare. Stuff like this is in the right direction:

Although previous research has examined cross-cultural differences in personality, many of these studies neglected to first establish that the measures being used were equivalent in meaning across cultures. Using samples of Chinese, Greek, and American respondents, the measurement equivalence of the Big Five Mini-Markers [Saucier, G. (1994). Mini-markers: A brief version of Goldberg’s unipolar Big-Five markers. Journal of Personality Assessment, 63, 506–516] was assessed using confirmatory factor analysis. The results indicate that all of the scales demonstrate configural invariance, but fail to show metric or scalar invariance. Several adjectives from these scales were found to exhibit bias at the item-level. The practical implications of these results are discussed and future research is suggested.
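For illustration, item-level DIF of the kind such studies look for can be checked with something as simple as the Mantel-Haenszel procedure: stratify respondents by a matching score and compare the odds of endorsing an item across groups within each stratum. A minimal sketch on synthetic binary item data (everything here, including the amount of bias injected, is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_odds_ratio(item, group, match):
    """Mantel-Haenszel common odds ratio for one dichotomous item,
    stratifying on the matching score. 1.0 means no DIF; values far
    from 1.0 mean the item favours one group at equal trait levels."""
    num = den = 0.0
    for s in np.unique(match):
        m = match == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, endorsed
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, not endorsed
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, endorsed
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, not endorsed
        t = a + b + c + d
        if t == 0:
            continue
        num += a * d / t
        den += b * c / t
    return num / den

n = 2000
theta = rng.normal(0, 1, 2 * n)            # identical trait distributions
group = np.repeat([0, 1], n)               # 0 = reference, 1 = focal
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # item thresholds
shift = np.zeros((2 * n, 5))
shift[group == 1, 0] = 1.0                 # item 0 biased against the focal group
p = 1 / (1 + np.exp(-(theta[:, None] - b - shift)))
resp = (rng.random((2 * n, 5)) < p).astype(int)

rest = lambda j: resp.sum(axis=1) - resp[:, j]   # purified matching score
alpha_biased = mh_odds_ratio(resp[:, 0], group, rest(0))
alpha_clean = mh_odds_ratio(resp[:, 2], group, rest(2))
print(f"biased item OR = {alpha_biased:.2f}, clean item OR = {alpha_clean:.2f}")
```

Despite identical trait distributions in the two groups, the biased item's odds ratio departs clearly from 1.0 while the clean item's stays near it; MGCFA-based invariance tests such as those in the abstract above probe the same kind of non-uniform bias at the factor level.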