Learning – theory and practice

This is a rewritten version of this old post.

Motive

Most people want to learn. Some prioritize learning more highly than others. But whatever the priority, everyone should learn as much as possible in the time they spend on it. This essay is about just that: optimizing learning speed.

The concept of information concentration

Think of a given chemical that is soluble in water. The water can contain more or less of this chemical; we call this the concentration. It is the same way with language and information. Think of language as a way of communicating information. Expressions vary in how much information they ‘contain’ (communicate). From this, we can think of the amount of information per unit of language as the information concentration.

Relevant information

There is a lot of information communicated by a stream of language, not all of which we care about. There are two conditions for information being relevant: 1) it must concern the topic in which we are interested, and 2) it must be information we do not already possess. In some cases, one is just generally curious, so the first condition may be easily satisfied. In other cases, one seeks information about a very particular topic. One sometimes does read a nonfiction book twice, but usually that is because one did not get all of the information from it the first time, having read it too fast or without paying sufficient attention. Let R be the fraction of relevant information out of the total amount of information.

Speed of the language stream

The speed at which we are exposed to the language stream varies. In reading, it varies with reading speed, which is adjustable to some degree. In listening, it depends on whether the stream is live. If it isn’t live, then one can perhaps speed up the language stream. For instance, if one is watching a lecture in VLC, one can speed up the playback (this feature is also found in some YouTube videos now, e.g. for Khan Academy). If the speech stream is live, one can perhaps ask the speaker to speak faster. But in many cases this is not possible, such as at lectures, or perhaps the person is already at their maximum speed (the maximum speed before becoming incomprehensible to the listener).

There are physical limitations as to how fast things can go. Even given a very low information concentration, sooner or later it does not work to increase the speed of the speech stream. There are limits on how fast speech can be while one still is able to recognize the words. For reading there is a similar limit. It is possible to increase one’s reading speed, especially with training. But there are limits, sooner or later it is simply not possible to recognize the words any faster due to physical limitations of eye movement.

Speed of the information stream

If one combines the two concepts introduced above, speed of the language stream and information concentration, one gets the new concept of speed of the information stream. It is the speed at which information is communicated by the language stream. This is generally faster for written language streams, since reading a word is faster than listening to it.
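The relation can be put as a simple product: information stream speed = language stream speed × information concentration. A minimal sketch; the rates and the info-per-word figure below are made-up illustrative numbers, not measurements:

```python
# Toy illustration of the product relation between language stream speed
# and information concentration. All numbers are hypothetical.

def info_stream_speed(words_per_minute, info_per_word):
    """Information units communicated per minute."""
    return words_per_minute * info_per_word

# Rough assumed rates: typical speech vs. typical reading, same text.
speech = info_stream_speed(150, 0.5)   # 75.0 info units/min
reading = info_stream_speed(300, 0.5)  # 150.0 info units/min
print(speech, reading)
```

Doubling the language stream speed at a fixed concentration doubles the information stream speed, which is why speeding up playback or reading faster pays off as long as comprehension holds.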

Generally, the higher the speed of the information stream, the smaller the fraction of the information one will acquire. If one reads very fast, one will miss out on a lot of details. Just how high a fraction of the information is acquired depends on many factors, such as intelligence, mastery of the language, tiredness, interest in the topic and so on.

Skipping irrelevant information

Usually, some parts of the language stream will contain irrelevant information as defined above. When this happens, we want to skip ahead to the next part of the stream that contains relevant information. If the language stream is non-live speech or written, one can generally skip past the irrelevant parts. It depends on whether the irrelevant parts are grouped together. If the speaker/writer spreads irrelevant information throughout the stream, then it is more difficult to skip the irrelevant parts.

If the stream is live (spoken or written), one can generally not skip. Although in some cases, e.g. live streaming over the internet/TV, one can stop paying attention to the stream and get back to it when it begins containing relevant information again. If it is a one-to-one conversation, then one can perhaps ask the speaker to skip ahead, though this may be considered rude. If it is a one-to-many live speech situation, one generally cannot skip past. This is the case with live lectures.

Conceptually, the idea of skipping information is to keep R (the fraction of relevant information) as close to 1 as possible.

Attention span, mental energy, and available time

People differ in how long they can stay focused, that is, how long they can concentrate at a time before needing a break. Let’s call this concentration time.

Some people have inexhaustible amounts of mental energy. They can spend their entire day learning without getting mentally tired. Other people cannot. Let’s call this mental energy.

I imagine that concentration time and mental energy correlate positively, but probably not perfectly. Thus, there are going to be people who can spend practically all day learning as long as it is not in intervals of more than 1 hour. On the other hand, there are going to be people who only have the mental energy to learn for 4 hours a day, but can keep concentration up for 4 hours straight.

Music seems to have some effect on levels of mental energy. Music perhaps works only when one is reading, although it seems possible that some people could get something out of listening to music while listening to speech as well. I find that I can go on for hours if I listen to the right music. Generally one wants to avoid distracting music. Distraction often happens when one starts paying attention to the lyrics instead of the language stream. For this reason, I generally prefer music with either no lyrics or inaudible lyrics. That way I don’t get distracted.

The relationships between time spent, speed of the language stream, information concentration and mental energy are probably not so straightforward. Perhaps when one is near maximum speed of acquiring information, the consumption rate of mental energy is higher than if one were learning at 80% speed. Depending on one’s levels of mental energy, then, it might be an idea not to work at full speed, but to slow down so as not to run out of mental energy during the day.

Time constraints are another issue. If one has close to every waking hour available, then one should spread out one’s mental energy over the entire day to optimize the information acquired per day. If one only has a few hours a day to learn, learning at maximum speed will be more important.

Information stream speed and concentration

My thoughts tend to wander. This happens especially when my brain is not working at or close to maximum capacity. When this happens, I stop paying attention to the stream of language before me and think of other things. That way, I am not learning very much from the stream.

For my focus to work efficiently, I need to avoid language streams with too low an information concentration. For this reason, I almost never use spoken language streams, because I cannot effectively vary the speed so as to match the information stream speed that I want.

Self-control and distractions

People differ in their levels of self-control. In regards to learning, a relevant aspect is the ability to avoid getting distracted by other things. Most relevant today is perhaps the ability to avoid spending many hours talking about irrelevant matters on instant messengers, and not spending a lot of time on social media sites (Facebook, Google+, Twitter) or other similar sites (reddit, 4chan, 9gag).

People who have poor self-control may need to take measures to avoid falling prey to such temptations. Perhaps a good idea is to not read on a computer where there is a browser ready nearby that can take one to one of the sites mentioned before. I do most of my serious reading on a tablet in my bed, where there are fewer distractions. You may need to log off instant messengers, turn off the phone etc. to avoid distractions.

If you are not getting things done, you may need to sit down and think about how you want to structure your learning. Many people have trouble concentrating at home. In that case, it may help to go to a local library. If learning is important to you, you should find ways to optimize it. Perhaps this involves turning off the computer.

If you own a TV, you should sell it immediately. TVs are a waste of time.

Lectures and study groups

To put things together: we want to avoid irrelevant information. This is important when choosing which source to learn from. Lectures are generally not a good way of learning, so avoid them if possible. If that is not possible, then spend the time at lectures reading. This may involve drowning out the noise of the lecture with music in a headset. I have done this extensively for lectures at high school and university, with and without music.

If one still wants to watch lectures, even for social reasons, it might still be an idea to stay at home. This is because teachers vary in their teaching abilities as well, so one can perhaps find better lectures on the same subject on the internet and watch those instead. If so, one can perhaps arrange a study group at home where one watches the lectures. That way one can also skip material that is irrelevant, if it is irrelevant for everybody in the group.

Since people vary so much in intelligence, learning speed, mental energy etc., it may be a good idea to learn by yourself instead of using study groups. If you have to use a study group, make sure to end up in one with people with a similar desire to learn as you have.

Review: Misbehaving Science: Controversy and the Development of Behavior Genetics (Aaron Panofsky)

www.goodreads.com/book/show/18526647-misbehaving-science

libgen.org/book/index.php?md5=ac86923b7bf1ed0639abf0e1c22810f8

The book is a sociologist’s attempt to interpret the history of behavior genetics through sociological theories. I didn’t pay much attention to the theorizing, being familiar with that kind of nonsense or useless theory. It generally employs the kind of terminology that sociologists are known for: reductionism here, genetic determinism there, racism, eugenics, Nazis, blahblah. It is somewhat dated despite having just been released. This is the nature of legacy publishers, since it takes so long to get thru their machinery. It spends a lot of time talking about how the molecular (GWA) studies did not fulfill the dreams of behavior geneticists. This is however semi-moot now, due to the fact that recent studies have replicated findings of g-genes and used GCTA to estimate heritability values that make extreme environmentalism impossible to hold onto.

It did, however, contain a lot of interesting quotes from unnamed persons, and various other stuff. It is recommended for those who have an interest in the history of behavior genetics and the race and IQ debate. I cannot give it 4 or 5 stars, despite it being interesting, due to the aforementioned problems.

Criticism of BMI and simple IQ tests – a conceptual link?

BMI is often used as a proxy for fat percentage or similar measures. BMI has a proven track record of predicting many health conditions, yet it still receives lots of criticism because it gives misleading results for some groups, notably bodybuilders. There is a conceptual link here with the criticism of simple IQ tests, such as Raven’s, which ‘only measure ability to spot figures’. Nonverbal matrix tests such as Raven’s or Cattell’s do indeed not measure g as well as more diverse batteries do (Johnson et al, 2008). These visual tests could be similarly criticized for not working well on those with bad eyesight. However, they are still useful for a broad sample of the population.

Criticisms like this strike me as an incarnation of the perfect solution/Nirvana fallacy:

The perfect solution fallacy (aka the nirvana fallacy) is a fallacy of assumption: if an action is not a perfect solution to a problem, it is not worth taking. Stated baldly, the assumption is obviously false. The fallacy is usually stated more subtly, however. For example, arguers against specific vaccines, such as the flu vaccine, or vaccines in general often emphasize the imperfect nature of vaccines as a good reason for not getting vaccinated: vaccines aren’t 100% effective or 100% safe. Vaccines are safe and effective; however, they are not 100% safe and effective. It is true that getting vaccinated is not a 100% guarantee against a disease, but it is not valid to infer from that fact that nobody should get vaccinated until every vaccine everywhere prevents anybody anywhere from getting any disease the vaccines are designed to protect us from without harming anyone anywhere.

Any measure that has more than 0 validity can be useful in the right circumstances. If a measure has some validity and is easy to administer (BMI, or nonverbal pen-and-paper group tests), it can be very useful even if it has less validity than better measures (fat% tests, or full-battery IQ tests).

Anyway, BMI should probably/perhaps be retired now, because we have found a more effective (but surely not the best either!) measure:

Our aim was to differentiate the screening potential of waist-to-height ratio (WHtR) and waist circumference (WC) for adult cardiometabolic risk in people of different nationalities and to compare both with body mass index (BMI). We undertook a systematic review and meta-analysis of studies that used receiver operating characteristics (ROC) curves for assessing the discriminatory power of anthropometric indices in distinguishing adults with hypertension, type-2 diabetes, dyslipidaemia, metabolic syndrome and general cardiovascular outcomes (CVD). Thirty one papers met the inclusion criteria. Using data on all outcomes, averaged within study group, WHtR had significantly greater discriminatory power compared with BMI. Compared with BMI, WC improved discrimination of adverse outcomes by 3% (P < 0.05) and WHtR improved discrimination by 4–5% over BMI (P < 0.01). Most importantly, statistical analysis of the within-study difference in AUC showed WHtR to be significantly better than WC for diabetes, hypertension, CVD and all outcomes (P < 0.005) in men and women.
For the first time, robust statistical evidence from studies involving more than 300 000 adults in several ethnic groups, shows the superiority of WHtR over WC and BMI for detecting cardiometabolic risk factors in both sexes. Waist-to-height ratio should therefore be considered as a screening tool. (Ashwell et al, 2012)

Ashwell, M., Gunn, P., & Gibson, S. (2012). Waist-to-height ratio is a better screening tool than waist circumference and BMI for adult cardiometabolic risk factors: systematic review and meta-analysis. Obesity Reviews, 13(3), 275-286.

Johnson, W., Nijenhuis, J. T., & Bouchard Jr, T. J. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36(1), 81-95.

Handbooks of intelligence

Sometimes, one sees references to this or that handbook of intelligence. I have not previously read any of these, not even in part, because I could not find any anywhere online. However, today I took a new look and found to my surprise three such books:

Sternberg, R. J. (Ed.). (2000). Handbook of intelligence. Cambridge University Press.

Sternberg, R. J. (Ed.). (2004). International handbook of intelligence. Cambridge University Press.

Sternberg, R. J., & Kaufman, S. B. (Eds.). (2011). The Cambridge handbook of intelligence. Cambridge University Press.

I was actually looking for one chapter in the first book:

Loehlin, John (2000). “Group Differences in Intelligence”. In Robert J. Sternberg. The Handbook of Intelligence. Cambridge University Press

which I saw on Wikipedia when I was re-reading the Minnesota Transracial Adoption Study article. Skimming the contents of the books leaves one unsurprised, given that the editor is Sternberg, who does little quality research himself and endlessly promotes his triarchic theory, even tho every study I’ve seen of it shows that it does not fit the data better than traditional g models, and despite its lacking support from mainstream scholars in the field.

One might thus wonder why Sternberg edits all these books, if his opinions are not mainstream. One can only speculate, but presumably because he’s at a rich university and has politically respectable opinions. Surely, printing these behemoth books costs a fortune.

Still, some of the chapters are by respectable authors (looking at the newest handbook, I see: Mackintosh, Fagan, Haier, Nettelbeck, Rindermann, Deary, and Hunt), and will surely be useful to have access to. :)

Review: The Roma: A Balkan Underclass (Jelena Cvorovic)

www.goodreads.com/book/show/23621169-the-roma

Richard Lynn is so nice to periodically send me books for free. He is working on establishing his publisher, of course, and so needs media coverage.

In this case, he sent me a new book on the Roma by Jelena Cvorovic, who was also present at the London conference on intelligence in the spring of 2014. She has previously published a number of papers on the Roma from her field studies. Of most interest to differential psychologists (such as me) is that they obtain very low scores on g tests, scores not generally seen outside sub-Saharan Africa. In the book, she reviews much of the literature on the Roma, covering their history, migration in Europe, religious beliefs and other strange cultural beliefs. For instance, did you know that many Roma consider themselves ‘Egyptians’? Very odd! Her review also covers the more traditional stuff like medical problems, sociological conditions, crime rates and the like. Generally, they do very poorly, probably only on par with the very worst performing immigrant groups in Scandinavia (Somalis, Lebanese, Syrians and similar). Perhaps they are part of the reason why people from Serbia do so poorly in Denmark; perhaps those immigrants are mostly Roma? To my knowledge, there are no records of more specific ethnicities for immigrant groups in Denmark. Similar puzzles concern immigrants coded as “stateless” (presumably mostly from Palestine), immigrants from Israel (perhaps mostly Muslims?) and, conversely, immigrants from South Africa (perhaps mostly Europeans?).

Another interesting part of the book is the second-to-last chapter, covering the Roma kings. I had never heard of these, but apparently there are or were a few very rich Roma. They built elaborate castles with their money, which one can now see in various places in Eastern Europe. After they lost their income (which came from black market trading during communism and similar activities), they seem to have reverted to the normal Roma pattern of unemployment, fast lifestyle, crime and state benefits. This provides another illustration of the idea that if a group of persons for some reason acquires wealth, it will not generally boost their g or other capabilities, and the wealth will go away again once the particular circumstance that gave rise to it disappears. Other examples of this pattern are the story of Nauru and people who get rich from sports but are not very clever (e.g. African American athletes such as Mike Tyson). Oil states have not seen any massive increase in g due to their oil riches either, nor are people who win lotteries known to suddenly acquire higher g. Clearly, there cannot be a strong causal link from income to g.

In general, this book was better than expected and definitely worth a read for those interested in psychologically informed history.

Admixture in the Americas: Introduction, partial correlations and IQ predictions based on ancestry

For those who have been living under a rock (i.e. not following me on Twitter), John Fuerst has been very good at compiling data from published research. Have a look at Human Varieties under the tag Admixture Mapping. He asked me to help him analyze it and write it up, and I gladly obliged; you can read the draft here. John thinks we should write it all up as one huge paper instead of splitting it up, as is standard practice. The standard practice perhaps exists not entirely for gaming the reputation system, but also because writing huge papers like that can seem overwhelming, and such papers may take a long time to get thru review.

So the project summarized so far is this:

  • Genetic models of trait admixture predict that mixed groups will fall between the two source populations on the trait, in proportion to their admixture.
  • For psychological traits such as general intelligence (g), this has previously primarily been studied unsystematically in African Americans, but this line of research seems to have dried up, perhaps because it became too politically sensitive over there.
  • However, there have been some studies using the same method, just examining illness-related traits (e.g. diabetes). These studies usually include socioeconomic variables as controls. In doing so, they have found robust correlations between admixture at the individual level and socioeconomic outcomes: income, occupation, education and the like.
  • John has found quite a lot of these and compiled the results into a table that can be found here.
  • The results clearly show the expected pattern, namely that more European ancestry is associated with more favorable outcomes, and more African or Amerindian ancestry with less favorable outcomes. A few of the results are non-significant, but none contradicts the pattern. A meta-analysis of this would find a very small p value indeed.
  • One study actually included cognitive measures as covariates and found results in the generally expected direction. See the material under the headline “Cognitive differences in the Americans” in the draft file.
  • One need not look only at the individual level; one can look at the group level too. For this reason, John has compiled data about the ancestry proportions of American countries and Mexican regions.
  • For the countries, he has tested this against self-identified proportions, CIA World Factbook estimates, skin reflection data and stuff like that, see: humanvarieties.org/2014/10/19/racial-ancestry-in-the-americas-part-1-genomic-continental-racial-admixture-estimate-and-validation/ The results are pretty solid. The estimates are clearly in the right ballpark.
  • Now, genetic models of the world distribution of general intelligence clearly predict that these estimates will be strongly related to the countries’ estimated mean levels of general intelligence. To test this, John has carried out a number of multiple regressions using European ancestry along with various controls (such as parasite prevalence or cold weather), with the dependent variables being skin color and national achievement scores (PISA tests and the like). Results are in the expected directions even with controls.
  • Using the Mexican regional data, John has compared the Amerindian estimates with PISA scores, Raven’s scores, and Human Development Index (a proxy for S factor (see here and here)). Post is here: humanvarieties.org/2014/10/15/district-level-variation-in-continental-racial-admixture-predicts-outcomes-in-mexico/

This is where we are. Basically, the data is all there, ready to be analyzed. Someone needs to do the other part of the grunt work, namely running all the obvious tests and writing everything up for a big paper. This is where I come in.

The first thing I did was to create an OSF repository for the data and code, since John had been manually keeping track of versions on HV. Not too good. I also converted his SPSS datafile to one that works on all platforms (CSV with semicolons).
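The semicolon-delimited format can be sketched as follows. This is a Python illustration of the file format only (the actual analysis was done in R), and the column names and row values are invented placeholders, not the real dataset:

```python
# Round-trip a semicolon-delimited CSV using only the standard library.
# Rows and column names are hypothetical placeholders.
import csv
import io

rows = [["country", "IQ", "euro_ancestry"],
        ["BRA", "85.6", "0.62"]]

# Write with ';' as the field delimiter.
buf = io.StringIO()
csv.writer(buf, delimiter=";").writerows(rows)

# Read it back.
buf.seek(0)
data = list(csv.reader(buf, delimiter=";"))
print(data[1])
```

Semicolon delimiters avoid clashes with the comma used as a decimal separator in many European locales, which is presumably why that variant travels well across platforms.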

Then I started writing code in R. First I wanted to look at the more obvious relationships, such as that between IQ and ancestry estimates (ratios). Here I discovered that John had used a newer dataset of IQ estimates that Meisenberg had sent him. However, it seems to have wrong data (Guatemala) and covers fewer relevant countries (25 vs. 35) than the standard dataset from Lynn and Vanhanen 2012 (+Malloyian fixes) that I have been using. So for this reason I merged John’s already enormous dataset (126 variables) with the latest Megadataset (365 variables), to create the cleverly named supermegadataset to be used for this study.

IQ x Ancestry zero-order correlations

Here are the three scatterplots:

[Scatterplot: IQ × European ancestry (LV12 data)]

[Scatterplot: IQ × Amerindian ancestry]

[Scatterplot: IQ × African ancestry]

So the reader might wonder: what is wrong with the Amerindian data? Why is the correlation about nil? Simply inspecting the data reveals the problem. The countries with low Amerindian ancestry have very mixed European vs. African ancestry, which keeps their mean IQ around 80-85, thus creating no correlation.

Partial correlations

So my idea was this, as I wrote it in my email to John:

Hey John, I wrote my bachelor’s in 4 days (5 pages per day), so now I’m back to working on more interesting things. I use the LV12 data because it seems better and is larger.

One thing that had been annoying me was that correlations between ancestry and IQ do not take into account that there are three variables that vary, not just two. Remember that odd low correlation Amer x IQ r = .14, compared with Euro x IQ = .68 and Afr x IQ = -.66. The reason for this, it seems to me, is that the countries with low Amer% are a mix of high and low Afr countries. That’s why you get a flat scatterplot. See attached.

Unfortunately, one cannot just use MR with these three variables, since the following equation is true of them: 1 = Euro + Afr + Amer. They are structurally dependent. Remember that MR attempts to hold the other variables constant while changing one. This is impossible.
The solution, it seems to me, is to use partial correlations. In this way, one can partial out one of them and look at the remaining two. There are six possible ways to do this:
Amer x IQ, partial out Afr = -.51
Amer x IQ, partial out Euro = .29
Euro x IQ, partial out Afr = .41
Euro x IQ, partial out Amer = .70
Afr x IQ, partial out Euro = -.37
Afr x IQ, partial out Amer = -.76
Assuming that genotypically Amer = 85, Afr = 80, Euro = 97 (or so), these results are all in the expected direction. In the first case, we partial out Afr, so we are comparing Amer vs. Euro; we expect negative since Amer < Euro.
In the second, we expect positive because Amer > Afr.
In the third, we expect positive because Euro > Amer.
In the fourth, we expect positive because Euro > Afr.
In the fifth, we expect negative because Afr < Amer.
In the sixth, we expect negative because Afr < Euro.
All six predictions were as expected. The sample size is quite small at N = 34, and LV12 isn’t perfect, certainly not for these countries. Still, the overall results are quite reasonable in my view.
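The six values above can be computed from zero-order correlations alone, using the standard formula for a first-order partial correlation. A minimal sketch: the Euro x IQ (.68) and Afr x IQ (-.66) values are from the text, but the Euro x Afr correlation of -.8 used in the example call is a made-up placeholder, so the printed value is only illustrative:

```python
# First-order partial correlation from zero-order correlations:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """Correlation of x and y with z partialled out."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Euro x IQ, partialling out Afr; r(Euro, Afr) = -.8 is a placeholder.
print(round(partial_r(0.68, -0.8, -0.66), 2))
```

Each of the six combinations in the text is one call of this function with the appropriate pair of ancestry variables and the third partialled out.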
Estimates of IQ directly from ancestry
But instead of merely looking at it via correlations or regressions, one can try to predict the IQs directly from the ancestry: simply create a predicted IQ from the ancestry proportions and these populations’ estimated IQs. I tried a number of variations, but they were all close to this: Euro*95+Amer*85+Afro*70. The reason to use Euro 95 and not, say, 100 is that 100 is the IQ of Northern Europeans, in particular the British (‘Greenwich Mean IQ’). The European genes found in the Americas are mostly from Spain and Portugal, which have estimated IQs of 96.6 and 94.4 (mean = 95.5). This creates a problem, since the European ancestry in the US and Canada is mostly not from these somewhat lower-IQ Europeans, but the error source is small (one can always just try excluding them).

So, do the predictions work? Yes.

Now, there is another kind of error with such estimates, called elevation. It refers to getting the intervals between countries right, but generally either over- or underestimating the levels. This kind of error is undetectable in correlation analysis. But one can calculate it by taking the predicted IQs, subtracting the measured IQs, and then taking the mean of these differences. Positive values mean that one is overestimating; negative values mean underestimating. The value for the above is 1.9, so we are overestimating a little bit, but it’s fairly close. A bit of this is due to USA and CAN, but then again, LCA (St. Lucia) and DMA (Dominica) are strong negative outliers, perhaps just wrong estimates by Lynn and Vanhanen (the only study for St. Lucia is this, but I don’t have the norms so I can’t calculate the IQ).
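The prediction and the elevation check can be sketched as follows. The weights (Euro 95, Amer 85, Afr 70) are those given above; the ancestry proportions and measured IQs are invented for illustration:

```python
# Predicted IQ as a weighted average of assumed source-population IQs,
# plus the elevation check. Input data are hypothetical.
def predicted_iq(euro, amer, afr):
    """Weighted average using the Euro 95 / Amer 85 / Afr 70 weights."""
    return euro * 95 + amer * 85 + afr * 70

countries = {"A": (0.60, 0.30, 0.10),   # (euro, amer, afr), sums to 1
             "B": (0.20, 0.10, 0.70)}
measured = {"A": 88.0, "B": 75.0}       # invented measured IQs

preds = {c: predicted_iq(*p) for c, p in countries.items()}

# Elevation: mean(predicted - measured). Positive = overestimation.
elevation = sum(preds[c] - measured[c] for c in preds) / len(preds)
print(preds, elevation)
```

A correlation between predicted and measured values would be unchanged if a constant were added to every prediction, which is exactly why elevation has to be checked separately.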

I told Davide Piffer about these results and he suggested that I use his PCA factor scores instead. Now, these are not themselves meaningful, but they have the intervals directly estimated from the genetics. His numbers are: Africa: -1.71; Native American: -0.9; Spanish: -0.3. Ok, let’s try:

[Figure: predicted IQs from Piffer’s PCA factor scores]

Astonishingly, the correlation is almost the same, within .01. However, this fact is less impressive than it seems at first, because it arises simply because the correlations between the three racial estimates are .999 (95.5
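The reason the PCA-based scores give nearly the same correlation can be shown with a toy example: two different weightings of the same three ancestry proportions are nearly collinear, so they correlate with any criterion almost equally. All numbers below are invented for illustration:

```python
# Two near-collinear weighted scores correlate with a criterion almost
# identically. Ancestry proportions and criterion values are hypothetical;
# the weight sets mirror the IQ weights and Piffer's PCA-style scores.
from math import sqrt

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

ancestry = [(0.6, 0.3, 0.1), (0.2, 0.1, 0.7),
            (0.8, 0.1, 0.1), (0.4, 0.5, 0.1)]  # (euro, amer, afr)
iq = [88, 75, 92, 83]                          # invented criterion

pred_iq = [e * 95 + m * 85 + a * 70 for e, m, a in ancestry]
pred_pca = [e * -0.3 + m * -0.9 + a * -1.71 for e, m, a in ancestry]

print(corr(pred_iq, pred_pca))                # near 1: nearly collinear
print(corr(pred_iq, iq), corr(pred_pca, iq))  # hence near-identical
```

Since the two predictor vectors are an almost perfect linear function of each other, replacing one with the other can shift a correlation with any outcome only marginally.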

Review: Writing Systems: An Introduction to Their Linguistic Analysis

www.goodreads.com/book/show/16641082-writing-systems

libgen.org/search.php?req=coulmas+writing&open=0&view=simple&phrase=1&column=def

I read this book as part of the background reading for my bachelor’s (which im writing here), after seeing it referred to in a few other books. As a textbook it seems fine, except for the chapter dealing with psycholinguistics. Nearly all the references in that section are clearly dated, and the author is not up to speed.

Some quotes and comments.

Over time, the gap between spelling and pronunciation is bound to widen in alphabetic orthographies, as spoken forms change and written forms are retained. Many of the so-called ‘silent’ letters in French can be explained in this way. Catach (1978: 65) states that 12.83 per cent of letters are mute letters in French, that is, letters that have no phonetic interpretation whatever.

Imagine how much money and time have been spent on typing silent letters. Several hundred years of typing 13% more letters than necessary. 13% more paper use. Remember when books were actually expensive.

14 ways of writing u in English

A neat little overview. English is probably unique in this degree of linguistic insanity.

hebrew

Perhaps that’s where the name of the Danish letter J comes from (jʌð). I always wondered.

We are like sailors who must rebuild their boat on the open sea without ever being able to take it apart in a dock and reassemble it from scratch. -Otto Neurath

I have seen this one before, but i couldnt verify it via Wikiquote while writing this (on laptop).

The conflicting views about the role of phonological recoding in fluent reading are mirrored in a long-standing controversy that pervades reading teaching methods. On one hand, the phonics and decoding method views reading as a process that converts written forms of language to speech forms and then to meaning. A teaching method, consequently, should emphasize phonological knowledge. As one leading proponent of the phonics/decoding approach puts it, ‘phonological skills are not merely concomitants or by-products of reading ability; they are true antecedents that may account for up to 60 per cent of the variance in children’s reading ability’ (Mann 1991: 130). On the other hand, the whole-word method sees reading as a form of communication that consists of the reception of information through the written form, the recovery of meaning being the essential purpose. ‘Since it is the case that learning to recognize whole words is necessary to be a fluent reader, therefore, the learning of whole words right from the start may be easier and more effective’ (Steinberg, Nagata and Aline 2001: 97).

This sounds like another case of social scientists identifying g without realizing it. Phonological awareness surely correlates with g.

onlinelibrary.wiley.com/doi/10.1002/(SICI)1099-0909(199806)4:2%3C73::AID-DYS104%3E3.0.CO;2-%23/abstract

This study reports a factor analysis of 4 WAIS subtests plus a phonological awareness test. PA had a loading on g of .61.

See also: psycnet.apa.org/journals/psp/86/1/174/ and www.sciencedirect.com/science/article/pii/S0160289697900167

Although a general correlation between literacy rate and prosperity can be observed, relatively poor countries with high literacy rates, such as Vietnam and Sri Lanka, and very rich countries with residual illiteracy, such as the United States, do exist.

This pattern is easily explainable if one knows that the national g of Vietnam and Sri Lanka is around the world average, while that of the US is much higher. The comparatively high illiteracy rate of the US is because of its minority populations of Hispanics and African Americans.

Discrimination against females in grant applications or publication bias?

While looking for peer-review related studies, I came across a meta-analysis of gender bias in grant applications. That sounded promising.

Bornmann, L., Mutz, R., & Daniel, H. D. (2007). Gender differences in grant peer review: A meta-analysis. Journal of Informetrics, 1(3), 226-238.

Abstract
Narrative reviews of peer review research have concluded that there is negligible evidence of gender bias in the awarding of grants based on peer review. Here, we report the findings of a meta-analysis of 21 studies providing, to the contrary, evidence of robust gender differences in grant award procedures. Even though the estimates of the gender effect vary substantially from study to study, the model estimation shows that all in all, among grant applicants men have statistically significant greater odds of receiving grants than women by about 7%.

Sounds intriguing? Knowing that publication bias, especially on the part of the authors, would be a problem with this kind of study (there is strong political motivation to publish studies finding bias against females), I immediately searched for keywords related to publication bias… but didn’t find any. Then I skimmed the article. Very odd. Who does a meta-analysis on stuff like this, or on most stuff anyway, without checking for publication bias?

TL;DR: funnel plot. Briefly, publication bias happens when authors tend to submit papers that found results they liked rather than studies that failed to find such results. It can also result from biased reviewing. Since most social scientists believe in the magic of p < .05, scholars tend to publish studies meeting that arbitrary threshold and not those that don’t. Furthermore, when scholars do a lot of sciencing, gathering a lot of data, they are also more likely to submit, having invested a large amount of time in the project. Together, these two create an interaction between the effect’s size and direction and the N. People who spent lots of time doing a huge study will generally be very likely to publish it even though the results weren’t as expected. But people who run small studies will more often not bother writing them up when the results come out negative. This means that there will be a negative correlation between sample size and the preferred outcome, in this case bias against females.

Back to the meta-analysis. Luckily, the authors provide the data needed to check for this bias. Their Table 1 has sample sizes (“Number of submitted applications”) and two datapoints one can calculate an effect size from (“Proportion of women among all applicants”, “Proportion of women among approved applicants”). Simply subtract the proportion of women among approved applicants from the proportion among all applicants to get a female-disfavor measure. (Well, actually it may just be that women write worse applications, so it does not imply bias at all.) Then correlate this with the sample size.

So the data is here. The funnel plot is below.

Funnel plot of studies examining gender bias in grant applications

There were indeed signs of publication bias. The simple N × effect size correlation did not reach p < .05. However, there is some question as to which measure of sample size one should use. The distribution is clearly not linear, thus violating the assumptions of linear regression/Pearson correlation. This page lists a variety of other options, the best perhaps being the standard error, which however is not given by the authors. Here’s a funnel plot for log-transformed sample sizes. This one has p = .04, so barely below the magic line.

Funnel plot of studies examining gender bias in grant applications (log-transformed sample sizes)