Goodreads. Libgen.

This book is background material for CGPGrey’s great short film:

So, if you saw that and are more curious, perhaps this book is for you. If the film above is not interesting to you, the book will be useless. Generally, the film conveys the topic better than the book, but the book of course contains more information.

The main flaw of the book is that the authors speculate on various economic and educational changes without drawing on differential psychology or behavior genetics. For instance, they note that the median income has been falling. They don’t seem to consider that this may be partly due to a changing population in the US (relatively fewer Europeans, more Hispanics). Another example: they track the average income of people with only high school over time and compare it with that of people with college. They don’t realize that, due to the increased uptake of college education, the mean GMA (general mental ability) of people with only high school has been falling steadily. So the diverging incomes do not necessarily have anything to do with educational attainment itself, as they think.

The most interesting section was this:

What This Problem Needs Are More Eyeballs and Bigger Computers

If this response is at least somewhat accurate—if it captures something about how innovation and economic growth work in the real world—then the best way to accelerate progress is to increase our capacity to test out new combinations of ideas. One excellent way to do this is to involve more people in this testing process, and digital technologies are making it possible for ever more people to participate. We’re interlinked by global ICT [Information and Communication Technology], and we have affordable access to masses of data and vast computing power. Today’s digital environment, in short, is a playground for large-scale recombination. The open source software advocate Eric Raymond has an optimistic observation: “Given enough eyeballs, all bugs are shallow.”20 The innovation equivalent to this might be, “With more eyeballs, more powerful combinations will be found.”

NASA experienced this effect as it was trying to improve its ability to forecast solar flares, or eruptions on the sun’s surface. Accuracy and plenty of advance warning are both important here, since solar particle events (or SPEs, as flares are properly known) can bring harmful levels of radiation to unshielded gear and people in space. Despite thirty-five years of research and data on SPEs, however, NASA acknowledged that it had “no method available to predict the onset, intensity or duration of a solar particle event.”21

The agency eventually posted its data and a description of the challenge of predicting SPEs on Innocentive, an online clearinghouse for scientific problems. Innocentive is ‘non-credentialist’; people don’t have to be PhDs or work in labs in order to browse the problems, download data, or upload a solution. Anyone can work on problems from any discipline; physicists, for example, are not excluded from digging in on biology problems.

As it turned out, the person with the insight and expertise needed to improve SPE prediction was not part of any recognizable astrophysics community. He was Bruce Cragin, a retired radio frequency engineer living in a small town in New Hampshire. Cragin said that, “Though I hadn’t worked in the area of solar physics as such, I had thought a lot about the theory of magnetic reconnection.”22 This was evidently the right theory for the job, because Cragin’s approach enabled prediction of SPEs eight hours in advance with 85 percent accuracy, and twenty-four hours in advance with 75 percent accuracy. His recombination of theory and data earned him a thirty-thousand-dollar reward from the space agency.

In recent years, many organizations have adopted NASA’s strategy of using technology to open up their innovation challenges and opportunities to more eyeballs. This phenomenon goes by several names, including ‘open innovation’ and ‘crowdsourcing,’ and it can be remarkably effective. The innovation scholars Lars Bo Jeppesen and Karim Lakhani studied 166 scientific problems posted to Innocentive, all of which had stumped their home organizations. They found that the crowd assembled around Innocentive was able to solve forty-nine of them, for a success rate of nearly 30 percent. They also found that people whose expertise was far away from the apparent domain of the problem were more likely to submit winning solutions. In other words, it seemed to actually help a solver to be ‘marginal’—to have education, training, and experience that were not obviously relevant for the problem. Jeppesen and Lakhani provide vivid examples of this:

[There were] different winning solutions to the same scientific challenge of identifying a food-grade polymer delivery system by an aerospace physicist, a small agribusiness owner, a transdermal drug delivery specialist, and an industrial scientist. . . . All four submissions successfully achieved the required challenge objectives with differing scientific mechanisms. . . .

[Another case involved] an R&D lab that, even after consulting with internal and external specialists, did not understand the toxicological significance of a particular pathology that had been observed in an ongoing research program. . . . It was eventually solved, using methods common in her field, by a scientist with a Ph.D. in protein crystallography who would not normally be exposed to toxicology problems or solve such problems on a routine basis.23

Like Innocentive, the online startup Kaggle also assembles a diverse, non-credentialist group of people from around the world to work on tough problems submitted by organizations. Instead of scientific challenges, Kaggle specializes in data-intensive ones where the goal is to arrive at a better prediction than the submitting organization’s starting baseline prediction. Here again, the results are striking in a couple of ways. For one thing, improvements over the baseline are usually substantial. In one case, Allstate submitted a dataset of vehicle characteristics and asked the Kaggle community to predict which of them would have later personal liability claims filed against them.24 The contest lasted approximately three months and drew in more than one hundred contestants. The winning prediction was more than 270 percent better than the insurance company’s baseline.

Another interesting fact is that the majority of Kaggle contests are won by people who are marginal to the domain of the challenge—who, for example, made the best prediction about hospital readmission rates despite having no experience in health care—and so would not have been consulted as part of any traditional search for solutions. In many cases, these demonstrably capable and successful data scientists acquired their expertise in new and decidedly digital ways.

Between February and September of 2012 Kaggle hosted two competitions about computer grading of student essays, which were sponsored by the Hewlett Foundation.* Kaggle and Hewlett worked with multiple education experts to set up the competitions, and as they were preparing to launch many of these people were worried. The first contest was to consist of two rounds. Eleven established educational testing companies would compete against one another in the first round, with members of Kaggle’s community of data scientists invited to join in, individually or in teams, in the second. The experts were worried that the Kaggle crowd would simply not be competitive in the second round. After all, each of the testing companies had been working on automatic grading for some time and had devoted substantial resources to the problem. Their hundreds of person-years of accumulated experience and expertise seemed like an insurmountable advantage over a bunch of novices.

They needn’t have worried. Many of the ‘novices’ drawn to the challenge outperformed all of the testing companies in the essay competition. The surprises continued when Kaggle investigated who the top performers were. In both competitions, none of the top three finishers had any previous significant experience with either essay grading or natural language processing. And in the second competition, none of the top three finishers had any formal training in artificial intelligence beyond a free online course offered by Stanford AI faculty and open to anyone in the world who wanted to take it. People all over the world did, and evidently they learned a lot. The top three individual finishers were from, respectively, the United States, Slovenia, and Singapore.

Quirky, another Web-based startup, enlists people to participate in both phases of Weitzman’s recombinant innovation—first generating new ideas, then filtering them. It does this by harnessing the power of many eyeballs not only to come up with innovations but also to filter them and get them ready for market. Quirky seeks ideas for new consumer products from its crowd, and also relies on the crowd to vote on submissions, conduct research, suggest improvements, figure out how to name and brand the products, and drive sales. Quirky itself makes the final decisions about which products to launch and handles engineering, manufacturing, and distribution. It keeps 70 percent of all revenue made through its website and distributes the remaining 30 percent to all crowd members involved in the development effort; of this 30 percent, the person submitting the original idea gets 42 percent, those who help with pricing share 10 percent, those who contribute to naming share 5 percent, and so on. By the fall of 2012, Quirky had raised over $90 million in venture capital financing and had agreements to sell its products at several major retailers, including Target and Bed Bath & Beyond. One of its most successful products, a flexible electrical power strip called Pivot Power, sold more than 373 thousand units in less than two years and earned the crowd responsible for its development over $400,000.

I take this to mean that: 1) polymathy/interdisciplinarity is not dead or dying at all; in fact it is very useful; 2) to make oneself very useful, one should learn a bunch of unrelated methods for analyzing data and, when studying a field, attempt to use methods not commonly used in that field; 3) work related to AI, machine learning, etc. is the future (until we are completely unable to compete with computers).

G.M. IQ & Economic growth

I noted down some comments while reading it.

In Table 1, the Dominican birth cohort is reversed.

 

“0.70 and 0.80 in world-wide country samples. Figure 1 gives an impression of this relationship.”

 

Figure 1 shows regional IQs, not GDP relationships.

“We still depend on these descriptive methods of quantitative genetics because only a small proportion of individual variation in general intelligence and school achievement can be explained by known genetic polymorphisms (e.g., Piffer, 2013a,b; Rietveld et al, 2013).”

 

We don’t. Modern behavioral genetic studies can estimate additive heritability (A²) directly from measured genetic variants.

E.g.:

Davies, G., Tenesa, A., Payton, A., Yang, J., Harris, S. E., Liewald, D., … & Deary, I. J. (2011). Genome-wide association studies establish that human intelligence is highly heritable and polygenic. Molecular psychiatry, 16(10), 996-1005.

Marioni, R. E., Davies, G., Hayward, C., Liewald, D., Kerr, S. M., Campbell, A., … & Deary, I. J. (2014). Molecular genetic contributions to socioeconomic status and intelligence. Intelligence, 44, 26-32.

The estimates are fairly low, though, in the 20s (percent), presumably due to non-additive heritability and rarer variants.

 

“Even in modern societies, the heritability of intelligence tends to be higher for children from higher socioeconomic status (SES) families (Turkheimer et al, 2003; cf. Nagoshi and Johnson, 2005; van der Sluis et al, 2008). Where this is observed, most likely environmental conditions are of similar high quality for most high-SES children but are more variable for low-SES children.”

 

Or maybe not. There are also big studies that don’t find this interaction effect. en.wikipedia.org/wiki/Heritability_of_IQ#Heritability_and_socioeconomic_status

 

“Schooling has only a marginal effect on growth when intelligence is included, consistent with earlier results by Weede & Kämpf (2002) and Ram (2007).”

In the regression model for all countries, schooling has a larger beta than IQ (.158 vs. .125). But these appear to be unstandardized values, so they are not directly comparable.
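A minimal R sketch (made-up data, not the paper’s numbers) of the point: unstandardized slopes depend on each predictor’s scale, whereas standardizing the variables first makes the coefficients comparable.

# Made-up data purely to illustrate standardized vs. unstandardized coefficients.
set.seed(1)
d <- data.frame(growth    = rnorm(50),
                IQ        = rnorm(50, mean = 90, sd = 8),
                schooling = rnorm(50, mean = 8,  sd = 3))
coef(lm(growth ~ IQ + schooling, data = d))                       # unstandardized b's, scale-dependent
coef(lm(growth ~ IQ + schooling, data = as.data.frame(scale(d)))) # standardized betas, comparable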

“Also, earlier studies that took account of earnings and cognitive test scores of migrants in the host country or IQs in wealthy oil countries have concluded that there is a substantial causal effect of IQ on earnings and productivity (Christainsen, 2013; Jones & Schneider, 2010)”

 

National IQs were also found to predict migrant income, as well as most other socioeconomic traits, in Denmark and Norway (and Finland and the Netherlands).

Kirkegaard, E. O. W. (2014). Crime, income, educational attainment and employment among immigrant groups in Norway and Finland. Open Differential Psychology.

Kirkegaard, E. O. W., & Fuerst, J. (2014). Educational attainment, income, use of social benefits, crime rate and the general socioeconomic factor among 71 immigrant groups in Denmark. Open Differential Psychology.

 

 

Figures 3A-C are of too low quality.

 

 

“Allocation of capital resources has been an element of classical growth theory (Solow, 1956). Human capital theory emphasizes that individuals with higher intelligence tend to have lower impulsivity and lower time preference (Shamosh & Gray, 2008). This is predicted to lead to higher savings rates and greater resource allocation to investment relative to consumption in countries with higher average intelligence.”

 

Time preference data for 45 countries are given by:

Wang, M., Rieger, M. O., & Hens, T. (2011). How time preferences differ: evidence from 45 countries.

They are in the megadataset from version 1.7f

Correlations among some variables of interest:

r
             SlowTimePref Income.in.DK Income.in.NO   IQ lgGDP
SlowTimePref         1.00         0.45         0.48 0.57  0.64
Income.in.DK         0.45         1.00         0.89 0.55  0.59
Income.in.NO         0.48         0.89         1.00 0.65  0.66
IQ                   0.57         0.55         0.65 1.00  0.72
lgGDP                0.64         0.59         0.66 0.72  1.00

n
             SlowTimePref Income.in.DK Income.in.NO  IQ lgGDP
SlowTimePref          273           32           12  45    40
Income.in.DK           32          273           20  68    58
Income.in.NO           12           20          273  23    20
IQ                     45           68           23 273   169
lgGDP                  40           58           20 169   273

So time preferences predict income in DK and NO only slightly worse than national IQs or lgGDP do.
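For reference, a correlation and sample-size matrix like the one above can be computed with pairwise-complete cases roughly as follows. This is only a sketch: the data frame mega and the column names are stand-ins for the megadataset, not the actual analysis code.

library(psych)  # for corr.test()
vars <- mega[, c("SlowTimePref", "Income.in.DK", "Income.in.NO", "IQ", "lgGDP")]  # 'mega' is a hypothetical stand-in
ct <- corr.test(vars, use = "pairwise")  # pairwise-complete correlations
round(ct$r, 2)  # correlation matrix (r)
ct$n            # pairwise sample sizes (n)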

 

 

“Another possible mediator of intelligence effects that is difficult to measure at the country level is the willingness and ability to cooperate. A review by Jones (2008) shows that cooperativeness, measured in the Prisoner’s dilemma game, is positively related to intelligence. This correlate of intelligence may explain some of the relationship of intelligence with governance. Other likely mediators of the intelligence effect include less red tape and restrictions on economic activities (“economic freedom”), higher savings and/or investment, and technology adoption in developing countries.”

 

There are data for IQ and trust too. Presumably trust is closely related to willingness to cooperate.

Carl, N. (2014). Does intelligence explain the association between generalized trust and economic development? Intelligence, 47, 83–92. doi:10.1016/j.intell.2014.08.008

 

 

“There is no psychometric evidence for rising intelligence before that time because IQ tests were introduced only during the first decade of the 20th century, but literacy rates were rising steadily after the end of the Middle Age in all European countries for which we have evidence (Mitch, 1992; Stone, 1969), and the number of books printed per capita kept rising (Baten & van Zanden, 2008).”

 

There are also age heaping scores, which are a crude measure of numeracy. AH scores for 1800 to 1970 are in the megadataset. They have been going up for centuries too, just like literacy rates. See:

A’Hearn, B., Baten, J., & Crayen, D. (2009). Quantifying quantitative literacy: Age heaping and the history of human capital. The Journal of Economic History, 69(03), 783–808.

 

 

“Why did this spiral of economic and cognitive growth take off in Europe rather than somewhere else, and why did it not happen earlier, for example in classical Athens or the Roman Empire? One part of the answer is that this process can start only when technologies are already in place to translate rising economic output into rising intelligence. The minimal requirements are a writing system that is simple enough to be learned by everyone without undue effort, and a means to produce and disseminate written materials: paper, and the printing press. The first requirement had been present in Europe and the Middle East (but not China) since antiquity, and the second was in place in Europe from the 15th century. The Arabs had learned both paper-making and printing from the Chinese in the 13th century (Carter, 1955), but showed little interest in books. Their civilization was entering into terminal decline at about that time (Huff, 1993).”

 

Are there no FLynn effects in China? They still have a difficult writing system.

 

“Most important is that Flynn effect gains have been decelerating in recent years. Recent losses (anti-Flynn effects) were noted in Britain, Denmark, Norway and Finland. Results for the Scandinavian countries are based on comprehensive IQ testing of military conscripts aged 18-19. Evidence for losses among British teenagers is derived from the Raven test (Flynn, 2009) and Piagetian tests (Shayer & Ginsburg, 2009). These observations suggest that for cohorts born after about 1980, the Flynn effect is ending or has ended in many and perhaps most of the economically most advanced countries. Messages from the United States are mixed, with some studies reporting continuing gains (Flynn, 2012) and others no change (Beaujean & Osterlind, 2008).”

 

These results are confounded with immigration of low-g migrants, however. Maybe the FLynn effect is still there, just being masked by dysgenics + low-g immigration.

 

 

“The unsustainability of this situation is obvious. Estimating that one third of the present IQ differences between countries can be attributed to genetics, and adding this to the consequences of dysgenic fertility within countries, leaves us with a genetic decline of between 1 and 2 IQ points per generation for the entire world population. This decline is still more than offset by Flynn effects in less developed countries, and the average IQ of the world’s population is still rising. This phase of history will end when today’s developing countries reach the end of the Flynn effect. “Peak IQ” can reasonably be expected in cohorts born around the mid-21st century. The assumptions of the peak IQ prediction are that (1) Flynn effects are limited by genetic endowments, (2) some countries are approaching their genetic limits already, and others will follow, and (3) today’s patterns of differential fertility favoring the less intelligent will persist into the foreseeable future.”

 

It is possible that embryo selection for higher g will kick in and change this.

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92. doi:10.1111/1758-5899.12123

 

 

“Fertility differentials between countries lead to replacement migration: the

movement of people from high-fertility countries to low-fertility countries,

with gradual replacement of the native populations in the low-fertility

countries (Coleman, 2002). The economic consequences depend on the

quality of the migrants and their descendants. Educational, cognitive and

economic outcomes of migrants are influenced heavily by prevailing

educational, cognitive and economic levels in the country of origin (Carabaña,

2011; Kirkegaard, 2013; Levels & Dronkers, 2008), and by the selectivity of

migration. Brain drain from poor to prosperous countries is extensive already,

for example among scientists (Franzoni, Scellato & Stephan, 2012; Hunter,

Oswald & Charlton, 2009). “

 

There are quite a few more papers on the spatial transferability hypothesis. I have 5 papers on this alone in ODP: openpsych.net/ODP/tag/country-of-origin/

But there are also as-yet-unpublished data on crime in the Netherlands and more crime data for Norway. Papers based on these data are on their way.

 

Posted on reddit.

 

This is your best film yet, and that says something.

For automation of clinical decisions, it has been known for decades that simple algorithms are better than humans. This has so far not been put into much practice, but it will be eventually. See the review article: Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: a meta-analysis.[1] Psychological Assessment, 12(1), 19.

There is only one temporary solution for this problem. It is to make humans smarter. I say temporary because these new smarter humans will quickly make robots even smarter and so they can replace even the new smarter humans.

How do we make humans more intelligent? The only effective way to do that is applied human genetics, aka eugenics. This is because general intelligence (the g factor) is about 80% heritable in adults (and pretty much everything else is also moderately to highly heritable). There are two things we must do: 1) Find the genes for g. This effort is underway and we have found a few SNPs so far.[1-2] It is estimated that there are about 1k-10k genes for g. 2) Find out how to apply this genetic knowledge in practice to make both existing humans and new ones smarter. The first effective technology for this is embryo selection.[2] Perhaps CRISPR[3] can work for existing humans.

  1. Rietveld, C.A., Medland, S.E., Derringer, J., Yang, K., Esko, T., et al. (2013). GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science 340: 1467-1471.
  2. Ward, M.E., McMahon, G., St Pourcain, B., Evans, D.M., Rietveld, C.A., et al. (2014) Genetic Variation Associated with Differential Educational Attainment in Adults Has Anticipated Associations with School Performance in Children. PLoS ONE 9(7): e100248. doi:10.1371/journal.pone.0100248

This book is very popsci and can be read in one day by any reasonably fast reader. It doesn’t contain much new information for anyone who has read a few books on the topic. As can be seen below, it has a lot of nonsense/errors, since the author is clearly not used to this area of science. It is not recommended except as a light introduction for people with political problems with these facts.

gen.lib.rus.ec/book/index.php?md5=7a48b9a42d89294ca1ade9f76e26a63c

www.goodreads.com/book/show/18667960-a-troublesome-inheritance?from_search=true

 

But a drawback of the system is its occasional drift toward extreme conservatism. Researchers get attached to the view of their field they grew up with and, as they grow older, they may gain the influence to thwart change. For 50 years after it was first proposed, leading geophysicists strenuously resisted the idea that the continents have drifted across the face of the globe. “Knowledge advances, funeral by funeral,” the economist Paul Samuelson once observed.

 

Wrong quote origin. en.wikiquote.org/wiki/Max_Planck

>A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

 

Academics, who are obsessed with intelligence, fear the discovery of a gene that will prove one major race is more intelligent than another. But that is unlikely to happen anytime soon. Although intelligence has a genetic basis, no genetic variants that enhance intelligence have yet been found. The reason, almost certainly, is that there are a great many such genes, each of which has too small an effect to be detectable with present methods.8 If researchers should one day find a gene that enhances intelligence in East Asians, say, they can hardly argue on that basis that East Asians are more intelligent than other races, because hundreds of similar genes remain to be discovered in Europeans and Africans.

Even if all the intelligence-enhancing variants in each race had been identified, no one would try to compute intelligence on the basis of genetic information: it would be far easier just to apply an intelligence test. But IQ tests already exist, for what they may be worth.

 

We have found a number of SNPs already. And we have already begun counting them in racial groups. See e.g.: openpsych.net/OBG/2014/05/opposite-selection-pressures-on-stature-and-intelligence-across-human-populations/

 

 

It is social behavior that is of relevance for understanding pivotal—and otherwise imperfectly explained—events in history and economics. Although the emotional and intellectual differences between the world’s peoples as individuals are slight enough, even a small shift in social behavior can generate a very different kind of society. Tribal societies, for instance, are organized on the basis of kinship and differ from modern states chiefly in that people’s radius of trust does not extend too far beyond the family and tribe. But in this small variation is rooted the vast difference in political and economic structures between tribal and modern societies. Variations in another genetically based behavior, the readiness to punish those who violate social rules, may explain why some societies are more conformist than others.

 

See: www.goodreads.com/book/show/3026168-the-expanding-circle

 

 

The lure of Galton’s eugenics was his belief that society would be better off if the intellectually eminent could be encouraged to have more children. What scholar could disagree with that? More of a good thing must surely be better. In fact it is far from certain that this would be a desirable outcome. Intellectuals as a class are notoriously prone to fine-sounding theoretical schemes that lead to catastrophe, such as Social Darwinism, Marxism or indeed eugenics.

By analogy with animal breeding, people could no doubt be bred, if it were ethically acceptable, so as to enhance specific desired traits. But it is impossible to know what traits would benefit society as a whole. The eugenics program, however reasonable it might seem, was basically incoherent.

 

Obviously wrong.

 

 

The principal organizer of the new eugenics movement was Charles Davenport. He earned a doctorate in biology from Harvard and taught zoology at Harvard, the University of Chicago, and the Brooklyn Institute of Arts and Sciences Biological Laboratory at Cold Spring Harbor on Long Island. Davenport’s views on eugenics were motivated by disdain for races other than his own: “Can we build a wall high enough around this country so as to keep out these cheaper races, or will it be a feeble dam . . . leaving it to our descendants to abandon the country to the blacks, browns and yellows and seek an asylum in New Zealand?” he wrote.9

 

Well, about that… In this century Europeans will be <50% in the US. I wonder if the sociologists will then stop talking about minorities, as if that somehow makes a difference.

 

 

One of the most dramatic experiments on the genetic control of aggression was performed by the Soviet scientist Dmitriy Belyaev. From the same population of Siberian gray rats he developed two strains, one highly sociable and the other brimming with aggression. For the tame rats, the parents of each generation were chosen simply by the criterion of how well they tolerated human presence. For the ferocious rats, the criterion was how adversely they reacted to people. After many generations of breeding, the first strain was now so tame that when visitors entered the room where the rats were caged, the animals would press their snouts through the bars to be petted. The other strain could not have been more different. The rats would hurl themselves screaming toward the intruder, thudding ferociously against the bars of their cage.12

 

Didn’t know this one. The ref is:

Nicholas Wade, “Nice Rats, Nasty Rats: Maybe It’s All in the Genes,” New York Times, July 25, 2006, www.nytimes.com/2006/07/25/health/25rats.html?pagewanted=all&_r=0 (accessed Sept. 25, 2013).

 

 

Rodents and humans use many of the same genes and brain regions to control aggression. Experiments with mice have shown that a large number of genes are involved in the trait, and the same is certainly true of people. Comparisons of identical twins raised together and separately show that aggression is heritable. Genes account for between 37% and 72% of the heritability, the variation of the trait in a population, according to various studies. But very few of the genes that underlie aggression have yet been identified, in part because when many genes control a behavior, each has so small an effect that it is hard to detect. Most research has focused on genes that promote aggression rather than those at the other end of the behavioral spectrum.

 

The sentence about genes accounting for “between 37% and 72% of the heritability” is nonsensical: heritability is itself the share of trait variance attributable to genes, so genes cannot account for a fraction of it.

 

 

Standing in sharp contrast to the economists’ working assumption that people the world over are interchangeable units is the idea that national disparities in wealth arise from differences in intelligence. The possibility should not be dismissed out of hand: where individuals are concerned, IQ scores do correlate, on average, with economic success, so it is not unreasonable to inquire if the same might be true of countries.

 

The marked sentence is nonsensical.

 

 

Turning to economic indicators, they find that national IQ scores have an extremely high correlation (83%) with economic growth per capita and also associate strongly with the rate of economic growth between 1950 and 1990 (64% correlation).44

 

More conceptual confusion.

 

 

And indeed with Lynn and Vanhanen’s correlations, it is hard to know which way the arrow of causality may be pointing, whether higher IQ makes a nation wealthier or whether a wealthier nation enables its citizens to do better on IQ tests. The writer Ron Unz has pointed out from Lynn and Vanhanen’s own data examples in which IQ scores increase 10 or more points in a generation when a population becomes richer, showing clearly that wealth can raise IQ scores significantly. East German children averaged 90 in 1967 but 99 in 1984. In West Germany, which has essentially the same population, averages range from 99 to 107. This 17 point range in the German population, from 90 to 107, was evidently caused by the alleviation of poverty, not genetics.

 

Ron Unz, the cherry picker. conservativetimes.org/?p=11790

 

 

East Asia is a vast counterexample to the Lynn/Vanhanen thesis. The populations of China, Japan and Korea have consistently higher IQs than those of Europe and the United States, but their societies, despite their many virtues, are not obviously more successful than those of Europe and its outposts. Intelligence can’t hurt, but it doesn’t seem a clear arbiter of a population’s economic success. What is it then that determines the wealth or poverty of nations?

 

No. But it does disprove the claim that national IQs are just a product of GDP. The oil states have low IQs, had them both before and after they got rich on oil, and will still have them in the future when they run out of oil again. Money cannot buy you intelligence (yet).

 

 

From about 900 AD to 1700 AD, Ashkenazim were concentrated in a few professions, notably moneylending and later tax farming (give the prince his money up front, then extract the taxes due from his subjects). Because of the strong heritability of intelligence, the Utah team calculates that 20 generations, a mere 500 years, would be sufficient for Ashkenazim to have developed an extra 16 points of IQ above that of Europeans. The Utah team assumes that the heritability of intelligence is 0.8, meaning that 80% of the variance, the spread between high and low values in a population, is due to genetics. If the parents of each generation have an IQ of just 1 point above the mean, then average IQ increases by 0.8% per generation. If the average human generation time in the Middle Ages was 25 years, then in 20 human generations, or 500 years, Ashkenazi IQ would increase by 20 x 0.8 = 16 IQ points.

 

More conceptual confusion. One cannot use percentages on IQs because IQs are not on a ratio scale, and hence division makes no sense. en.wikipedia.org/wiki/Levels_of_measurement#Comparison
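Written out as the breeder's equation, the calculation the Utah team presumably intends is in IQ points, not percent:

\[
R = h^2 S = 0.8 \times 1\ \text{IQ point} = 0.8\ \text{points per generation}; \qquad 20 \times 0.8 = 16\ \text{points}.
\]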

Bryan Caplan: The Myth of the Rational Voter (Bookos.org)

 

This is a very interesting book. The most interesting I’ve read in a while.

 

 

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

 

A common problem with reincarnation reports. Also: en.wikipedia.org/wiki/Emperor_Norton

 

 

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impression, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

 

There was a recent Danish study on the quality of bottled water vs. tap water, and they were found to be the same. Bottled water is seriously a waste of money. www.bt.dk/test/stor-test-kildevand-er-det-rene-snyd

 

 

Mosca and Jihad. In the Jain example, stubborn belief leads to discomfort. Gaetano Mosca presents a case where stubborn belief leads to death.

Mohammed, for instance, promises paradise to all who fall in a holy war. Now if every believer were to guide his conduct by that assurance in the Koran, every time a Mohammedan army found itself faced by unbelievers it ought either to conquer or to fall to the last man. It cannot be denied that a certain number of individuals do live up to the letter of the Prophet’s word, but as between defeat and death followed by eternal bliss, the majority of Mohammedans normally elect defeat.45

 

Yes, religious people are irrational, even about their own irrational beliefs: chaospet.com/2008/10/08/110-jesus-loves-abortion/

 

They should also try to get themselves killed as soon as possible. After all, heaven is infinitely good, so it’s obviously infinitely better than being on earth. An infinite improvement!

 

 

If you listen to your fellow citizens, you get the impression that they disagree. How many times have you heard, “Every vote matters”? But people are less credulous than they sound. The infamous poll tax—which restricted the vote to those willing to pay for it—provides a clean illustration. If individuals acted on the belief that one vote makes a big difference, they would be willing to pay a lot to participate. Few are. Historically, poll taxes significantly reduced turnout.65 There is little reason to think that matters are different today. Imagine setting a poll tax to reduce presidential turnout from 50% to 5%. How high would it have to be? A couple hundred dollars? What makes the poll tax alarming is that most of us subconsciously know that most of us subconsciously know that one vote does not count.

 

Citizens often talk as if they personally have power over electoral outcomes. They deliberate about their options as if they were ordering dinner. But their actions tell a different tale: They expect to be served the same meal no matter what they “order.”

 

What does this imply about the material price a voter pays for political irrationality? Let D be the difference between a voter’s willingness to pay for policy A instead of policy B. Then the expected cost of voting the wrong way is not D, but the probability of decisiveness p times D. If p = 0, pD = 0 as well. Intuitively, if one vote cannot change policy outcomes, the price of irrationality is zero.
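A worked illustration with made-up numbers (both figures are arbitrary, chosen only to show the orders of magnitude): suppose a voter values policy A over policy B at D = $5,000 and the probability that his single vote decides the election is p = 10^-7. Then

\[
pD = 10^{-7} \times \$5{,}000 = \$0.0005,
\]

so the expected material cost of voting the “wrong” way is a twentieth of a cent.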

 

 

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

 

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.

Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.

Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.

Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.

Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.

 

Relevant to the ethics of belief:

 

ajburger.homestead.com/ethics.html

www.utilitarianism.net/singer/by/200303–.htm

 

 

So Classical Public Choice’s stories about rational ignorance prove too much. But not much too much. By any absolute measure, average levels of political knowledge are low.8 Less than 40% of American adults know both of their senators’ names.9 Slightly fewer know both senators’ parties—a particularly significant finding given its oft-cited informational role.10 Much of the public has forgotten—or never learned—the elementary and unchanging facts taught in every civics class. About half knows that each state has two senators, and only a quarter knows the length of their terms in office.11 Familiarity with politicians’ voting records and policy positions is predictably close to nil even on high-profile issues, but amazingly good on fun topics irrelevant to policy. As Delli Carpini and Keeter remark:

During the 1992 presidential campaign 89 percent of the public knew that Vice President Quayle was feuding with the television character Murphy Brown, but only 19 percent could characterize Bill Clinton’s record on the environment. . . 86 percent of the public knew that the Bushes’ dog was named Millie, yet only 15 percent knew that both presidential candidates supported the death penalty. Judge Wapner (host of the television series “People’s Court”) was identified by more people than were Chief Justices Burger or Rehnquist.1

 

sigh!

 

 

Apparently irrational cultural beliefs are quite remarkable: They do not appear irrational by slightly departing from common sense, or timidly going beyond what the evidence allows. They appear, rather, like down-right provocations against common sense rationality.

—Richard Shweder1

 

 

Economists’ love of qualification is notorious, but most doubt that the protechnology position needs to be qualified. Technology often creates new jobs; without the computer, there would be no jobs in computer programming or software development. But the fundamental defense of labor-saving technology is that employing more workers than you need wastes valuable labor. If you pay a worker to twiddle his thumbs, you could have paid him to do something socially useful instead.

Economists add that market forces readily convert this potential social benefit into an actual one. After technology throws people out of work, they have an incentive to find a new use for their talents. Cox and Alm aptly describe this process as “churn”: “Through relentless turmoil, the economy re-creates itself, shifting labor resources to where they’re needed, replacing old jobs with new ones.”75 They illustrate this process with history’s most striking example: the drastic decline in agricultural employment:

In 1800, it took nearly 95 of every 100 Americans to feed the country. In 1900, it took 40. Today, it takes just 3…. The workers no longer needed on farms have been put to use providing new homes, furniture, clothing, computers, pharmaceuticals, appliances, medical assistance, movies, financial advice, video games, gourmet meals, and an almost dizzying array of other goods and services. . . . What we have in place of long hours in the fields is the wealth of goods and services that come from allowing the churn to work, wherever and whenever it might occur.76

 

These arguments sound harsh. That is part of the reason why they are so unpopular: people would rather feel compassionately than think logically. Many economists advocate government assistance to cushion displaced workers’ transition, and retain public support for a dynamic economy. Alan Blinder recommends extended unemployment insurance, retraining, and relocation subsidies.77 Other economists disagree. But almost all economists grant that stopping transitions has a grave cost.

 

While this is correct in general, it does not work in the case where some workers have no possible jobs left, or too few jobs they can perform. Humans are limited by their intelligence; if we can make robots that do what humans do better or equally well at lower cost, this WILL be a problem.

 

 

 

Economists are especially critical of the antiforeign outlook because it does not just happen to be wrong; it frequently conflicts with elementary economics. Textbooks teach that total output increases if producers specialize and trade. On an individual level, who could deny it? Imagine how much time it would take to grow your own food, when a few hours’ wages spent at the grocery store feed you for weeks. Analogies between individual and social behavior are at times misleading, but this is not one of those times. International trade is, as Steven Landsburg explains, a technology:

There are two technologies for producing automobiles in America. One is to manufacture them in Detroit, and the other is to grow them in Iowa. Everybody knows about the first technology; let me tell you about the second. First you plant seeds, which are the raw materials from which automobiles are constructed. You wait a few months until wheat appears. Then you harvest the wheat, load it onto ships, and sail the ships westward into the Pacific Ocean. After a few months, the ships reappear with Toyotas on them.59

 

Great quote! I will remember that one.

 

 

Skipping ahead to the present, Alan Blinder blames opposition to tradable pollution permits on antimarket bias.39 Why let people “pay to pollute,” when we can force them to cease and desist? The textbook answer is that tradable permits get you more pollution abatement for the same cost. The firms able to cheaply cut their emissions do so, selling their excess pollution quota to less flexible polluters. End result: More abatement bang for your buck. A price for pollution is therefore not a pure transfer; it creates incentives to improve environmental quality as cheaply as possible. But noneconomists disagree—including relatively sophisticated policy insiders. Blinder discusses a fascinating survey of 63 environmentalists, congressional staffers, and industry lobbyists. Not one could explain economists’ standard rationale for tradable permits.4

 

Sounds like: citizensclimatelobby.org/carbon-fee-and-dividend-faq/
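To make the textbook answer concrete, here is a tiny R illustration with hypothetical abatement costs (not from Blinder): two firms must jointly abate 100 tons, firm A at $10/ton and firm B at $50/ton.

# Hypothetical numbers, purely to illustrate why tradable permits cut total cost.
cost_A <- 10   # $/ton for firm A (cheap abatement)
cost_B <- 50   # $/ton for firm B (expensive abatement)
uniform_mandate <- 50 * cost_A + 50 * cost_B  # each firm forced to abate 50 tons: $3000
with_trading    <- 100 * cost_A               # A abates all 100 tons and sells its spare permits to B: $1000
c(uniform_mandate, with_trading)

Same 100 tons abated either way, at a third of the cost, because trading shifts the abatement to the firm that can do it cheaply.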

 

 

Good intentions are ubiquitous in politics; what is scarce is accurate beliefs. The pertinent question about selective participation is whether voters are more biased than nonvoters, not whether voters take advantage of nonvoters.59 Empirically, the opposite holds: The median voter is less biased than the median nonvoter. One of the main predictors of turnout, education, substantially increases economic literacy. The other two—age and income—have little effect on economic beliefs.

Though it sounds naive to count on the affluent to look out for the interests of the needy, that is roughly what the data advise. All kinds of voters hope to make society better off, but the well educated are more likely to get the job done.60 Selective turnout widens the gap between what the public gets and what it wants. But it narrows the gap between what the public gets and what it needs.

 

Great quote: “Good intentions are ubiquitous in politics; what is scarce is accurate beliefs.”

 

If people don’t vote out of self-interest, then representation is not necessary. So complaints about lack of representation are not well-founded, at least to some degree.

 

 

In financial and betting markets, there are intrinsic reasons why clearer heads wield disproportionate influence.61 People who know more can expect to earn higher profits, giving them a stronger incentive to participate. Furthermore, past winners have more assets to influence the market price. In contrast, the disproportionate electoral influence of the well educated is a lucky surprise. Indeed, since the value of their time is greater, one would expect them to vote less. To be blunt, the problem with democracy is not that clearer heads have surplus influence. The problem is that, compared to financial and betting markets, the surplus is small.

 

More meritocracy is needed, it seems.

 

 

If education causes better economic understanding, there is an argument for education subsidies—albeit not necessarily higher subsidies than we have now.62 If the connection is not causal, however, throwing money at education treats a symptom of economic illiteracy, not the disease. You would get more bang for your buck by defunding efforts to “get out the vote.”63 One intriguing piece of evidence against the causal theory is that educational attainment rose substantially in the postwar era, but political knowledge stayed about the same.64

 

This indicates that it is g, not education, that causes greater political knowledge. In other words, g is a common cause of both more education and greater political knowledge. This isn’t surprising at all. But it might still be that education has some beneficial effect and the study referred to is faulty in some way, or perhaps we’re doing education wrong. Perhaps we need incentives for people to increase their political knowledge? After all, if greater political knowledge causes better democratic results, and better democratic results cause more economic growth for the country, then it pays for itself. It might even be a good investment.

 

The reference in note 64 is: Delli Carpini, Michael, and Scott Keeter. 1996. What Americans Know About Politics and Why It Matters. New Haven: Yale University Press.

 

It can’t be found on either Bookos or Libgen, so I can’t look it up.

 

 

 

Before studying public opinion, many wonder why democracy does not work better. After one becomes familiar with the public’s systematic biases, however, one is struck by the opposite question: Why does democracy work as well as it does? How do the unpopular policies that sustain the prosperity of the West survive? Selective participation is probably one significant part of the answer. It is easy to criticize the beliefs of the median voter, but at least he is less deluded than the median nonvoter.

 

lol’d

 

 

If voters are systematically mistaken about what policies work, there is a striking implication: They will not be satisfied by the politicians they elect. A politician who ignores the public’s policy preferences looks like a corrupt tool of special interests. A politician who implements the public’s policy preferences looks incompetent because of the bad consequences. Empirically, the shoe fits: In the GSS, only 25% agree that “People we elect to Congress try to keep the promises they have made during the election,” and only 20% agree that “most government administrators can be trusted to do what is best for the country.”71 Why does democratic competition yield so few satisfied customers? Because politicians are damned if they do and damned if they don’t. The public calls them venal for failing to deliver the impossible.

 

 

As in economics, laymen reject the basics, not merely details. Toxicologists are vastly more likely than the public to affirm that “use of chemicals has improved our health more than it has harmed it,” to deny that natural chemicals are less harmful than man-made chemicals, and to reject the view that “it can never be too expensive to reduce the risks associated with chemicals.”81 While critics might like to impugn the toxicologists’ objectivity, it is hard to take such accusations seriously. The public’s views are often patently silly, and toxicologists who work in industry, academia, and regulatory bureaus largely see eye to eye.82

 

Seems worth looking up these studies.

 

81. Kraus, Nancy, Torbjörn Malmfors, and Paul Slovic. “Intuitive toxicology: Expert and lay judgments of chemical risks.” Risk analysis 12.2 (1992): 215-232.

 

82. Lichter and Rothman (1999) similarly document that cancer researchers’ ideology has little effect on their scientific judgment. Liberal cancer researchers who do not work in the private sector still embrace their profession’s contrarian views. “As a group, the experts—whether conservative or liberal, Democratic or Republican—viewed cancer risks along roughly the same lines. Thus, their perspectives on this topic do not appear to be ‘contaminated’ by either narrow self-interest or broader ideological commitments” (1999: 116).

 

 

 

 

Why then does environmental policy put as much emphasis on dosage as it does? Selective participation is probably part of the story. Mirroring my results, Kraus, Malmfors, and Slovic (1992) find that education makes people think like toxicologists.84 The bulk of the explanation, though, is probably that voters care about economic well-being as well as safety from toxic substances. Moving from low dosage to zero is expensive. It might absorb all of GDP. This puts a democratic leader in a tight spot. If he embraces the public’s doseless worldview and legislates accordingly, it would spark economic disaster. Over 60% of the public agrees that “It can never be too expensive to reduce the risks associated with chemicals,”85 but the leader who complied would be a hated scapegoat once the economy fell to pieces. On the other hand, a leader who dismisses every low-dose scare as “unscientific” and “paranoid” would soon be a reviled symbol of pedantic insensitivity. Given their incentives, politicians cannot disregard the public’s misconceptions, but they often drag their feet.

 

Nowhere is this as clear as with pesticides and radiation. The public’s extreme fear of these does not at all mirror the scientific evidence on their harmfulness at low dosages.

 

 

Leaders’ incentive to rationally assess the effects of policy might be perverse, not just weak. Machiavelli counsels the prince “to do evil if constrained” but at the same time “take great care that nothing goes out of his mouth which is not full of” “mercy, faith, integrity, humanity and religion.” One can freely play the hypocrite because “everybody sees what you appear to be, few feel what you are, and those few will not dare oppose themselves to the many.”10 Yet, contra Machiavelli, psychologists have documented humans’ real if modest ability to detect dishonesty from body language, tone of voice, and more.11 George Costanza memorably counseled Jerry Seinfeld, “Just remember, it’s not a lie if you believe it.”12 The honestly mistaken politician appears more genuine because he is more genuine. This gives leaders who sincerely share their constituents’ policy views a competitive advantage over Machiavellian rivals.13

 

I’ve sometimes heard the claim that privately, politicians really do acknowledge that e.g. the war on drugs does not work and is counterproductive, but that they go along with voter opinion anyway. Perhaps this isn’t true. Perhaps the politicians really are as deluded as the voters? Or even more! Polls in Denmark show that politicians are firmly against legalization, while the public/voters are slightly positive.

 

 

To get ahead in politics, leaders need a blend of naive populism and realistic cynicism. No wonder the modal politician has a law degree. Dye and Zeigler report that “70 percent of the presidents, vice presidents, and cabinet officers of the United States and more than 50 percent of the U.S. senators and House members” have been lawyers.14 The economic role of government has greatly expanded since the New Deal, but the percentage of congressmen with economic training remains negligible.15 Economic issues are important to voters, but they do not want politicians with economic expertise—especially not ones who lecture them and point out their confusions.

 

No wonder they think new laws can solve everything…

 

 

It helps to sell the right kind of favors. Like a journalist with an ax to grind, a shrewd politician moves along the margins of voter indifference. The public is protectionist, but rarely has strong opinions about which industries need help. This is a great opportunity for a politician and a struggling industry to make a deal. Steel manufacturers could pay a politician to take (a) a popular stand against foreigners combined with (b) a not unpopular stand for American steel. In maxim form: Do what the public wants when it cares; take bids from interested parties when it doesn’t. Bear in mind, though, that the important thing is not how burdensome a concession is, but how burdensome voters perceive it to be.

 

Always lean to the green, as it is said in Congress. www.huffingtonpost.com/lawrence-lessig/neoprogressives_b_704715.html

 

 

Consider the insurance market failure known as “adverse selection.” If people who want insurance know their own riskiness, but insurers only know average riskiness, the market tends to shrink. Low-risk people drop out, which raises consumers’ average riskiness, which raises prices, which leads more low-risk customers to drop out.52 In the worst-case scenario, the market “unravels.” Prices get so high that no one buys insurance, and consumers get so risky that firms cannot afford to sell for less.
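A minimal sketch (my own, not from the book) of the unraveling dynamic described above, using invented numbers: the insurer prices at the average risk of whoever is still buying, the lowest-risk buyers then find the price a bad deal and leave, and the price ratchets up.

import numpy as np

rng = np.random.default_rng(0)
risk = rng.uniform(100, 1000, 10_000)    # each person's expected annual loss (invented)
value_of_cover = 1.10 * risk             # assume people value insurance at 10% above their own expected loss
buyers = np.ones(risk.size, dtype=bool)

while buyers.any():
    price = risk[buyers].mean()          # insurer breaks even by charging the average risk of current buyers
    stay = buyers & (value_of_cover >= price)
    if stay.sum() == buyers.sum():       # nobody else wants to leave: market has stabilized
        break
    buyers = stay
    print(f"price {price:6.0f}  remaining buyers {buyers.sum():5d}")

With little or no risk aversion (replace 1.10 with 1.00) the loop keeps shedding buyers until essentially nobody is left; with the 10% valuation premium assumed here the market merely shrinks to the riskiest customers.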

 

Interesting. This should happen to some degree because of the new consumer genomics. It may also be illegal for the insurance companies to use known information to change rates. For instance, feminists’ ideas about equality of the sexes made it illegal in the EU to set rates conditional on sex. This means that prices rose for women and fell for men, even though men cause most of the accidents.

 

ec.europa.eu/justice/newsroom/gender-equality/news/121220_en.htm

 

 

The main upshot of my analysis of democracy is that it is a good idea to rely more on private choice and the free market. But what—if anything—can be done to improve outcomes, taking the supremacy of democracy over the market as fixed? The answer depends on how flexibly you define “democracy.” Would we still have a “democracy” if you needed to pass a test of economic literacy to vote? If you needed a college degree? Both of these measures raise the economic understanding of the median voter, leading to more sensible policies. Franchise restrictions were historically used for discriminatory ends, but that hardly implies that they should never be used again for any reason. A test of voter competence is no more objectionable than a driving test. Both bad driving and bad voting are dangerous not merely to the individual who practices them, but to innocent bystanders. As Frederic Bastiat argues, “The right to suffrage rests on the presumption of capacity”:

 

And why is incapacity a cause of exclusion? Because it is not the voter alone who must bear the consequences of his vote; because each vote involves and affects the whole community; because the community clearly has the right to require some guarantee as to the acts on which its welfare and existence depend.56

 

A more palatable way to raise the economic literacy of the median voter is by giving extra votes to individuals or groups with greater economic literacy. Remarkably, until the passage of the Representation of the People Act of 1949, Britain retained plural voting for graduates of elite universities and business owners. As Speck explains, “Graduates had been able to vote for candidates in twelve universities in addition to those in their own constituencies, and businessmen with premises in a constituency other than their own domicile could vote in both.”57 Since more educated voters think more like economists, there is much to be said for such weighting schemes. I leave it to the reader to decide whether 1948 Britain counts as a democracy.

 

wow, never knew this!

 

 

Since well-educated people are better voters, another tempting way to improve democracy is to give voters more education. Maybe it would work. But it would be expensive, and as mentioned in the previous chapter, education may be a proxy for intelligence or curiosity. A cheaper strategy, and one where a causal effect is more credible, is changing the curriculum. Steven Pinker argues that schools should try to “provide students with the cognitive skills that are most important for grasping the modern world and that are most unlike the cognitive tools they are born with,” by emphasizing “economics, evolutionary biology, and probability and statistics.”60 Pinker essentially wants to give schools a new mission: rooting out the biased beliefs that students arrive with, especially beliefs that impinge on government policy.61 What should be cut to make room for the new material?

 

There are only twenty-four hours in a day, and a decision to teach one subject is also a decision not to teach another one. The question is not whether trigonometry is important, but whether it is more important than statistics; not whether an educated person should know the classics, but whether it is more important for an educated person to know the classics than elementary economics.62

 

Indeed

 

 

 

The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t – Nate Silver, 544 pp.

 

It is a pretty interesting book, especially because it covers some areas of science not usually covered in popular science writing (geology, meteorology), and I learned a lot. It is also clearly written and easy to read, which speeds up reading and makes the 450-ish pages rather quick to devour. From a learning perspective this is great, as it allows for faster learning. It should also be mentioned that it has a lot of very useful illustrations, which I shared on my social networks while reading it.

 

“Fortunately, Dustin is really cocky, because if he was the kind of person who was intimidated—if he had listened to those people—it would have ruined him. He didn’t listen to people. He continued to dig in and swing from his heels and eventually things turned around for him.”

Pedroia has what John Sanders calls a “major league memory”—which is to say a short one. He isn’t troubled by a slump, because he is damned sure that he’s playing the game the right way, and in the long run, that’s what matters. Indeed, he has very little tolerance for anything that distracts him from doing his job. This doesn’t make him the most generous human being, but it is exactly what he needs in order to play second base for the Boston Red Sox, and that’s the only thing that Pedroia cares about.

“Our weaknesses and our strengths are always very intimately connected,” James said. “Pedroia made strengths out of things that would be weaknesses for other players.”

 

This sounds like low agreeableness to me. I wonder if Big Five can predict baseball success?

 

 

The statistical reality of accuracy isn’t necessarily the governing paradigm when it comes to commercial weather forecasting. It’s more the perception of accuracy that adds value in the eyes of the consumer.

For instance, the for-profit weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers.41 Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.42
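A quick way to see why the rounding hurts accuracy is a proper scoring rule such as the Brier score. This is my own toy check, not Floehr’s analysis: if the true chance of rain really is 50 percent, reporting 60 or 40 instead adds exactly 0.10 squared, i.e. 0.01, to the expected Brier score.

import numpy as np

rng = np.random.default_rng(1)
rain = rng.random(100_000) < 0.5                      # days where the true chance of rain is 50%

honest = np.full(rain.size, 0.50)                     # calibrated forecast
fudged = rng.choice([0.40, 0.60], size=rain.size)     # "flip a coin and round up to 60, or down to 40"

def brier(p, outcome):
    return np.mean((p - outcome) ** 2)                # lower is better

print(brier(honest, rain))   # about 0.25
print(brier(fudged, rain))   # about 0.26, i.e. 0.25 + 0.10**2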

 

Floehr also uncovered a more flagrant example of fudging the numbers, something that may be the worst-kept secret in the weather industry. Most commercial weather forecasts are biased, and probably deliberately so. In particular, they are biased toward forecasting more precipitation than will actually occur43—what meteorologists call a “wet bias.” The further you get from the government’s original data, and the more consumer facing the forecasts, the worse this bias becomes. Forecasts “add value” by subtracting accuracy.

 

That’s interesting; I had never heard of this.

 

 

This logic is a little circular. TV weathermen say they aren’t bothering to make accurate forecasts because they figure the public won’t believe them anyway. But the public shouldn’t believe them, because the forecasts aren’t accurate. This becomes a more serious problem when there is something urgent—something like Hurricane Katrina. Lots of Americans get their weather information from local sources49 rather than directly from the Hurricane Center, so they will still be relying on the goofball on Channel 7 to provide them with accurate information. If there is a mutual distrust between the weather forecaster and the public, the public may not listen when they need to most.

 

Nicely illustrates the importance of honesty in reporting data, even on local TV.

 

 

In fact, the actual value for GDP fell outside the economists’ prediction interval six times in eighteen years, or fully one-third of the time. Another study,18 which ran these numbers back to the beginnings of the Survey of Professional Forecasters in 1968, found even worse results: the actual figure for GDP fell outside the prediction interval almost half the time. There is almost no chance19 that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions.

 

In reality, when a group of economists give you their GDP forecast, the true 90 percent prediction interval—based on how these forecasts have actually performed20 and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent).*

 

When you hear on the news that GDP will grow by 2.5 percent next year, that means it could quite easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving. The old joke about economists’ having called nine out of the last six recessions correctly has some truth to it; one actual statistic is that in the 1990s, economists predicted only 2 of the 60 recessions around the world a year ahead of time.21
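The arithmetic behind those numbers is simply 2.5 − 3.2 = −0.7 and 2.5 + 3.2 = 5.7. Below is a sketch of how such an honest interval could be built from the forecasters’ own track record rather than from their stated confidence; the error history is invented, not Silver’s data.

import numpy as np

# Invented history of (forecast minus actual) GDP-growth errors, in percentage points.
errors = np.array([-3.1, 0.4, 1.9, -0.8, 2.6, -1.5, 0.2, 3.3, -2.2, 1.1])

forecast = 2.5
lo_err, hi_err = np.percentile(errors, [5, 95])       # empirical 90% band of past errors
print(f"90% interval: {forecast - hi_err:.1f} to {forecast - lo_err:.1f} percent growth")

# With the +/- 3.2 point margin Silver reports: 2.5 - 3.2 = -0.7 and 2.5 + 3.2 = +5.7.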

 

And this is why we can’t have nice things, I mean macroeconomics.

 

 

I have no idea whether I was really a good player at the very outset. But the bar set by the competition was low, and my statistical background gave me an advantage. Poker is sometimes perceived to be a highly psychological game, a battle of wills in which opponents seek to make perfect reads on one another by staring into one another’s souls, looking for “tells” that reliably betray the contents of the other hands. There is a little bit of this in poker, especially at the higher limits, but not nearly as much as you’d think. (The psychological factors in poker come mostly in the form of self-discipline.) Instead, poker is an incredibly mathematical game that depends on making probabilistic judgments amid uncertainty, the same skills that are important in any type of prediction.

 

The obvious idea is to program computers to play poker for you online. If they play against bad humans, they should bring in a steady flow of cash almost for free.

 

 

 


The g Factor: The Science of Mental Ability – Arthur R. Jensen (ebook, PDF)

 

This is a very interesting book. Without a doubt the best about intelligence that I have read so far. I definitely recommend reading it if one is interested in psychometrics. It can serve as a long, good, but somewhat dated introduction to the subject. For a shorter introduction, Gottfredson’s “Why g Matters” is probably better.

 

 

Quotes and comments below. Red text = quotes.

——-

 

Galton had no tests for obtaining direct measurements of cognitive ability. Yet he tried to estimate the mean levels of mental capacity possessed by different racial and national groups on his interval scale of the normal curve. His estimates—many would say guesses—were based on his observations of people of different races encountered on his extensive travels in Europe and Africa, on anecdotal reports of other travelers, on the number and quality of the inventions and intellectual accomplishments of different racial groups, and on the percentage of eminent men in each group, culled from biographical sources. He ventured that the level of ability among the ancient Athenian Greeks averaged “two grades” higher than that of the average Englishmen of his own day. (Two grades on Galton’s scale is equivalent to 20.9 IQ points.) Obviously, there is no possibility of ever determining if Galton’s estimate was anywhere near correct. He also estimated that African Negroes averaged “at least two grades” (i.e., 1.39σ, or 20.9 IQ points) below the English average. This estimate appears remarkably close to the results for phenotypic ability assessed by culture-reduced IQ tests. Studies in sub-Saharan Africa indicate an average difference (on culture-reduced nonverbal tests of reasoning) equivalent to 1.43σ, or 21.5 IQ points between blacks and whites.8 U.S. data from the Armed Forces Qualification Test (AFQT), obtained in 1980 on large representative samples of black and white youths, show an average difference of 1.36σ (equivalent to 20.4 IQ points)—not far from Galton’s estimate (1.39σ, or 20.9 IQ points).9 But intuition and informed guesses, though valuable in generating hypotheses, are never acceptable as evidence in scientific research. Present-day scientists, therefore, properly dismiss Galton’s opinions on race. Except as hypotheses, their interest is now purely biographical and historical.

 

Yes, there is. First, one can check the historical record for dysgenic effects. If the British are less smart than the ancient Greeks, there would probably have been some dysgenic effect somewhere in history. Still, this is not a good method, since the population groups are somewhat different.

 

Second, soon we will know the genes that cause different levels of intelligence. We can then analyze the remains of ancient Greeks to see which gene variants they had. This should give a pretty good estimate, although not a perfect one, since 1) new mutations have arisen since then, 2) some gene variants have perhaps disappeared, 3) it is difficult to get a representative sample of ancient Greeks to test, and 4) it is hard to get DNA of good enough quality to run tests on. Still, I don’t think these problems are impossible to overcome, and I predict that some decent estimate can be made.

 

 

A General Factor Is Not Inevitable. Factor analysis is not by its nature bound to produce a general factor regardless of the nature of the correlation matrix that is analyzed. A general factor emerges from a hierarchical factor analysis if, and only if, a general factor is truly latent in the particular correlation matrix. A general factor derived from a hierarchical analysis should be based on a matrix of positive correlations that has at least three latent roots (eigenvalues) greater than 1.

For proof that a general factor is not inevitable, one need only turn to studies of personality. The myriad of inventories that measure various personality traits have been subjected to every type of factor analysis, yet no general factor has ever emerged in the personality domain. There are, however, a great many first-order group factors and several clearly identified second-order group factors, or “superfactors” (e.g., introversion-extraversion, neuroticism, and psychoticism), but no general factor. In the abilities domain, on the other hand, a general factor, g, always emerges, provided the number and variety of mental tests are sufficient to allow a proper factor analysis. The domain of body measurements (including every externally measurable feature of anatomy) when factor analyzed also shows a large general factor (besides several small group factors). Similarly, the correlations among various measures of athletic ability show a substantial general factor.
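A toy illustration of the point (mine, not Jensen’s, and using the first principal axis as a crude stand-in for a proper hierarchical g): an “ability-like” matrix in which every test correlates positively with every other yields one dominant factor, while a “personality-like” matrix with two unrelated clusters does not. All correlations below are invented.

import numpy as np

def first_factor_share(R):
    # Share of total variance captured by the first principal axis of a correlation matrix.
    eigvals = np.linalg.eigvalsh(R)[::-1]
    return eigvals[0] / eigvals.sum()

# "Ability-like": every test correlates positively with every other (positive manifold).
ability = np.full((6, 6), 0.45)
np.fill_diagonal(ability, 1.0)

# "Personality-like": two clusters of traits that correlate within but not between clusters.
personality = np.eye(6)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    personality[i, j] = personality[j, i] = 0.45

print(first_factor_share(ability))      # large first factor (~0.54)
print(first_factor_share(personality))  # no dominant factor (~0.32)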

 

 

Jensen was wrong about this, although the significance of that is disputed as far as I can tell. See: How important is the General Factor of Personality? A General Critique (William Revelle and Joshua Wilt), PDF.

 

 

In jobs where assurance of competence is absolutely critical, however, such as airline pilots and nuclear reactor operators, government agencies seem to have recognized that specific skills, no matter how well trained, though essential for job performance, are risky if they are not accompanied by a fairly high level of g. For example, the TVA, a leader in the selection and training of reactor operators, concluded that results of tests of mechanical aptitude and specific job knowledge were inadequate for predicting an operator’s actual performance on the job. A TVA task force on the selection and training of reactor operators stated: “intelligence will be stressed as one of the most important characteristics of superior reactor operators. . . . intelligence distinguishes those who have merely memorized a series of discrete manual operations from those who can think through a problem and conceptualize solutions based on a fundamental understanding of possible contingencies.”[6] This reminds one of Carl Bereiter’s clever definition of “intelligence” as “what you use when you don’t know what to do.”

 

funny and true

 

 

The causal underpinnings of mental development take place at the neurological level even in the absence of any specific environmental inputs such as those that could possibly explain mental growth in something like figure copying in terms of transfer from prior learning. The well-known “Case of Isabel” is a classic example.[8] From birth to age six, Isabel was totally confined to a dimly lighted attic room, where she lived alone with her deaf-mute mother, who was her only social contact. Except for food, shelter, and the presence of her mother, Isabel was reared in what amounted to a totally deprived environment. There were no toys, picture books, or gadgets of any kind for her to play with. When found by the authorities, at age six, Isabel was tested and found to have a mental age of one year and seven months and an IQ of about 30, which is barely at the imbecile level. In many ways she behaved like a very young child; she had no speech and made only croaking sounds. When handed toys or other unfamiliar objects, she would immediately put them in her mouth, as infants normally do. Yet as soon as she was exposed to educational experiences she acquired speech, vocabulary, and syntax at an astonishing rate and gained six years of tested mental age within just two years. By the age of eight, she had come up to a mental age of eight, and her level of achievement in school was on a par with her age-mates. This means that her rate of mental development—gaining six years of mental age in only two years—was three times faster than that of the average child. As she approached the age of eight, however, her mental development and scholastic performance drastically slowed down and proceeded thereafter at the rate of an average child. She graduated from high school as an average student.

 

What all this means to the g controversy is that the neurological basis of information processing continued developing autonomously throughout the six years of Isabel’s environmental deprivation, so that as soon as she was exposed to a normal environment she was able to learn those things for which she was developmentally “ready” at an extraordinarily fast rate, far beyond the rate for typically reared children over the period of six years during which their mental age normally increases from two to eight years. But the fast rate of manifest mental development slowed down to an average rate at the point where the level of mental development caught up with the level of neurological development. Clearly, the rate of mental development during childhood is not just the result of accumulating various learned skills that transfer to the acquisition of new skills, but is largely based on the maturation of neural structures.

 

This reminds me of the person who suggested that we delay teaching math in schools for the same reason. It is simply more time-effective, and time is costly, both for the child, who has limited freedom during the time spent in school, and for society, because that time could have been spent teaching something else, or not spent at all, saving money on teachers.

 

The idea is that some math subjects take very long to teach to, say, 8-year-olds, but can be taught rapidly to 12-year-olds. So, using some invented numbers, instead of spending 10 hours teaching long division to 8-year-olds, we could spend 2 hours teaching it to 12-year-olds, saving 8 hours that can either be used on something that can be taught easily to 8-year-olds, or simply freed up for non-teaching activities.

 

See www.inference.phy.cam.ac.uk/sanjoy/benezet/ for the original papers.

 

 

Perhaps the most problematic test of overlapping neural elements posited by the sampling theory would be to find two (or more) abilities, say, A and B, that are highly correlated in the general population, and then find some individuals in whom ability A is severely impaired without there being any impairment of ability B. For example, looking back at Figure 5.2, which illustrates sampling theory, we see a large area of overlap between the elements in Test A and the elements in Test B. But if many of the elements in A are eliminated, some of its elements that are shared with the correlated Test B will also be eliminated, and so performance on Test B (and also on Test C in this diagram) will be diminished accordingly. Yet it has been noted that there are cases of extreme impairment in a particular ability due to brain damage, or sensory deprivation due to blindness or deafness, or a failure in development of a certain ability due to certain chromosomal anomalies, without any sign of a corresponding deficit in other highly correlated abilities.22 On this point, behavioral geneticists Willerman and Bailey comment: “Correlations between phenotypically different mental tests may arise, not because of any causal connection among the mental elements required for correct solutions or because of the physical sharing of neural tissue, but because each test in part requires the same ‘qualities’ of brain for successful performance. For example, the efficiency of neural conduction or the extent of neuronal arborization may be correlated in different parts of the brain because of a similar epigenetic matrix, not because of concurrent functional overlap.”22 A simple analogy to this would be two independent electric motors (analogous to specific brain functions) that perform different functions both running off the same battery (analogous to g). As the battery runs down, both motors slow down at the same rate in performing their functions, which are thus perfectly correlated although the motors themselves have no parts in common. But a malfunction of one machine would have no effect on the other machine, although a sampling theory would have predicted impaired performance for both machines.

 

I know it’s only an analogy, but whether there are one or two motors drawing from one battery might have an effect on their speed. That depends on the setup, I think.

 

 

Gc is most highly loaded in tests based on scholastic knowledge and cultural content where the relation-eduction demands of the items are fairly simple. Here are two examples of verbal analogy problems, both of about equal difficulty in terms of percentage of correct responses in the English-speaking general population, but the first is more highly loaded on Gf and the second is more highly loaded on Gc.

1. Temperature is to cold as Height is to
(a) hot (b) inches (c) size (d) tall (e) weight

2. Bizet is to Carmen as Verdi is to
(a) Aida (b) Elektra (c) Lakme (d) Manon (e) Tosca

 

For the first one, I wanted to answer <small>, since <cold> is at the bottom of the temperature scale, so I wanted something at the bottom of the height scale. There is no such option, but tall is at least on the height scale, just as cold is on the temperature scale. With no better option, I went with (d), which was correct.

 

The second one, however, made no sense to me. I looked for patterns in spelling, vowels, length, etc., and found nothing. I then googled it: it’s composers and their operas.

en.wikipedia.org/wiki/Georges_Bizet

en.wikipedia.org/wiki/Carmen

en.wikipedia.org/wiki/Giuseppe_Verdi

en.wikipedia.org/wiki/Aida

 

 

Another blood variable of interest is the amount of uric acid in the blood (serum urate level). Many studies have shown it to have only a slight positive correlation with IQ. But it is considerably more correlated with measures of ambition and achievement. Uric acid, which has a chemical structure similar to caffeine, seems to act as a brain stimulant, and its stimulating effect over the course of the individual’s life span results in more notable achievements than are seen in persons of comparable IQ, social and cultural background, and general life-style, but who have a lower serum urate level. High school students with elevated serum urate levels, for example, obtain higher grades than their IQ-matched peers with an average or below-average serum urate level, and, amusingly, one study found a positive correlation between university professors’ serum urate levels and their publication rates. The undesirable aspect of high serum urate level is that it predisposes to gout. In fact, that is how the association was originally discovered. The English scientist Havelock Ellis, in studying the lives and accomplishments of the most famous Britishers, discovered that they had a much higher incidence of gout than occurs in the general population.

Asthma and other allergies have a much-higher-than-average frequency in children with higher IQs (over 130), particularly those who are mathematically gifted, and this is an intrinsic relationship. The intellectually gifted show some 15 to 20 percent more allergies than their siblings and parents. The gifted are also more apt to be left-handed, as are the mentally retarded; the reason seems to be that the IQ variance of left-handed persons is slightly greater than that of the right-handed, hence more of the left-handed are found in the lower and upper extremes of the normal distribution of IQ.

 

Then there are also a number of odd and less-well-established physical correlates of IQ that have each shown up in only one or two studies, such as vital capacity (i.e., the amount of air that can be expelled from the lungs), handgrip strength, symmetrical facial features, light hair color, light eye color, above-average basic metabolic rate (all these are positively correlated with IQ), and being unable to taste the synthetic chemical phenylthiocarbamide (nontasters are higher both in g and in spatial ability than tasters; the two types do not differ in tests of clerical speed and accuracy). The correlations are small and it is not yet known whether any of them are within-family correlations. Therefore, no causal connection with g has been established.

 

Finally, there is substantial evidence of a positive relation between g and general health or physical well-being.[36] In a very large national sample of high school students (about 10,000 of each sex) there was a correlation of +.381 between a forty-three-item health questionnaire and the composite score on a large number of diverse mental tests, which is virtually a measure of g. By comparison, the correlation between the health index and the students’ socioeconomic status (SES) was only +.222. Partialing out g leaves a very small correlation (+.076) between SES and health status. In contrast, the correlation between health and g when SES is partialed out is +.326.
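The excerpt does not give the g–SES correlation, but if one assumes a value around .40 (my assumption, purely for illustration), the standard partial-correlation formula reproduces numbers very close to those quoted.

import math

r_health_g   = 0.381
r_health_ses = 0.222
r_g_ses      = 0.40     # assumed value, not given in the excerpt

def partial_r(r_xy, r_xz, r_yz):
    # Correlation between x and y with z partialed out.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(partial_r(r_health_ses, r_health_g, r_g_ses))  # ~0.08 (text: +.076)
print(partial_r(r_health_g, r_health_ses, r_g_ses))  # ~0.33 (text: +.326)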

 

How very curious!

 

 

Certainly psychometric tests were never constructed with the intention of measuring inbreeding depression. Yet they most certainly do. At least fourteen studies of the effects of inbreeding on mental ability test scores—mostly IQ—have been reported in the literature.[32] Without exception, all of the studies show inbreeding depression both of IQ and of IQ-correlated variables such as scholastic achievement. As predicted by genetic theory, the IQ variance of the inbred is greater than that of the noninbred samples. Moreover, the degree to which IQ is depressed is an increasing monotonic function of the coefficient of inbreeding. The severest effects are seen in the offspring of first-degree incestuous matings (e.g., father-daughter, brother-sister); the effect is much less for first-cousin matings and still less for second-cousin matings. The degree of IQ depression for first cousins is about half a standard deviation (seven or eight IQ points).

 

In most of these studies, social class and other environmental factors are well controlled. Studies in Muslim populations in the Middle East and India are especially pertinent. Cousin marriages there are more prevalent in the higher social classes, as a means of keeping wealth in family lines, so inbreeding and high SES would tend to have opposite and canceling effects. The observed effect of inbreeding depression on IQ in the studies conducted in these groups, therefore, cannot be attributed to the environmental effects of SES that are often claimed to explain IQ differences between socioeconomically advantaged and disadvantaged groups.

 

These studies unquestionably show inbreeding depression for IQ and other single measures of mental ability. The next question, then, concerns the extent to which g itself is affected by inbreeding. Inbreeding depression could be mainly manifested in factors other than g, possibly even in each test’s specificity. To answer this question, we can apply the method of correlated vectors to inbreeding data based on a suitable battery of diverse tests from which g can be extracted in a hierarchical factor analysis. I performed these analyses[33] for the several large samples of children born to first- and second-cousin matings in Japan, for whom the effects of inbreeding were intensively studied by geneticists William Schull and James Neel (1965). All of the inbred children and comparable control groups of noninbred children were tested on the Japanese version of the Wechsler Intelligence Scale for Children (WISC). The correlations among the eleven subtests of the WISC were subjected to a hierarchical factor analysis, separately for boys and girls, and for different age groups, and the overall average g loadings were obtained as the most reliable estimates of g for each subtest. The analysis revealed the typical factor structure of the WISC—a large g factor and two significant group factors: Verbal and Spatial (Performance). (The Memory factor could not emerge because the Digit Span subtest was not used.) Schull and Neel had determined an index of inbreeding depression on each of the subtests. In each subject sample, the column vector of the eleven subtests’ g loadings was correlated with the column vector of the subtests’ index of inbreeding depression (ID). (Subtest reliabilities were partialed out of these correlations.) The resulting rank-order correlation between subtests’ g loadings and their degree of inbreeding depression was +.79 (p < .025). The correlation of ID with the Verbal factor loadings (independent of g) was +.50 and with the Spatial (or Performance) factor the correlation was −.46. (The latter two correlations are nonsignificant, each with p > .05.) Although this negative correlation of ID with the spatial factor (independent of g) falls short of significance, the negative correlation was found in all four independent samples. Moreover, it is consistent with the hypothesis that spatial visualization ability is affected by an X-linked recessive allele.34 Therefore, it is probably not a fluke.

 

A more recent study[35] of inbreeding depression, performed in India, was based entirely on the male offspring of first-cousin parents and a control group of the male offspring of genetically unrelated parents. Because no children of second-cousin marriages were included, the degree of inbreeding depression was considerably greater than in the previous study, which included offspring of second-cousin marriages. The average inbreeding effect on the WISC-R Full Scale IQ was about ten points, or about two-thirds of a standard deviation.[36] The inbreeding index was reported for the ten subtests of the WISC-R used in this study. To apply the method of correlated vectors, however, the correlations among the subtests for this sample are needed to calculate their g loadings. Because these correlations were not reported, I have used the g loadings obtained from a hierarchical factor analysis of the 1,868 white subjects in the WISC-R standardization sample.[37] The column vector of these g loadings and the column vector of the ID index have a rank-order correlation (with the tests’ reliability coefficients partialed out) of +.83 (p < .01), which is only slightly larger than the corresponding correlation between the g and ID vectors in the Japanese study.
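The mechanics of the method of correlated vectors are simple enough to sketch. The numbers below are invented, not Jensen’s, and the step where subtest reliabilities are partialed out of both vectors is omitted for brevity.

import numpy as np
from scipy.stats import spearmanr

# Invented values for ten subtests: one vector of g loadings, one vector of
# inbreeding-depression (ID) indices. The method simply rank-correlates them.
g_loadings = np.array([0.83, 0.80, 0.76, 0.74, 0.71, 0.68, 0.66, 0.62, 0.58, 0.55])
id_index   = np.array([0.09, 0.08, 0.08, 0.06, 0.07, 0.05, 0.06, 0.04, 0.03, 0.04])

rho, p = spearmanr(g_loadings, id_index)
print(f"rank-order correlation between g loadings and ID: {rho:.2f} (p = {p:.3f})")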

 

In sum, then, the g factor significantly predicts the degree to which performance on various mental tests is affected by inbreeding depression, a theoretically predictable effect for traits that manifest genetic dominance. The larger a test’s g loading, the greater is the depression of the test scores of the inbred offspring of consanguineous parents, as compared with the scores of noninbred persons. The evidence in these studies of inbreeding rules out environmental variables as contributing to the observed depression of test scores. Environmental differences were controlled statistically, or by matching the inbred and noninbred groups on relevant indices of environmental advantage.

 

Pretty large effects. The footnote with the fourteen studies mentioned is:

 

Adams & Neel, 1967; Afzal, 1988; Afzal & Sinha, 1984; Agrawal et al., 1984; Badaruddoza & Afzal, 1993; Bashi, 1977; Book, 1957; Carter, 1967; Cohen et al., 1963; Inbaraj & Rao, 1978; Neel et al., 1970; Schull & Neel, 1965; Seemanova, 1971; Slatis & Hoene, 1961.

 

 

Semantic Verification Test. The SVT uses the binary response console (Figure 8.3) and a computer display screen. Following the preparatory “beep,” a simple statement appears on the screen. The statement involves the relative positions of the three letters A, B, C as they may appear (equally spaced) in a horizontal array. Each trial uses one of the six possible permutations of these three letters chosen at random. The statement appears on the screen for three seconds, allowing more than enough time for the subject to read it. There are fourteen possible statements of the following types: “A after B,” “C before A,” “A between B and C,” “B first,” “B last,” “C before A and B,” “C after B and A”; and the negative form of each of these statements, for instance, “A not after B.” Following the three-second appearance of one of these statements, the screen goes blank for one second and then one of the permutations of the letters A B C appears. The subject responds by pressing either the TRUE or FALSE button, depending on whether the positions of the letters does or does not agree with the immediately previous statement.
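A toy version of the trial logic (my own sketch, covering only a subset of the fourteen statement types, not Jensen’s actual software):

import random

# One SVT-style trial: show a statement about the order of A, B, C, then show a
# random permutation and score it TRUE/FALSE against the statement.
def holds(statement, arrangement):
    negated = " not " in statement
    s = statement.replace(" not ", " ")
    x, relation, *rest = s.split()
    if relation == "first":
        result = arrangement[0] == x
    elif relation == "last":
        result = arrangement[-1] == x
    elif relation == "before":
        result = arrangement.index(x) < arrangement.index(rest[0])
    elif relation == "after":
        result = arrangement.index(x) > arrangement.index(rest[0])
    else:
        raise ValueError(statement)       # "between" statements not implemented here
    return result != negated

statement = random.choice(["A after B", "C before A", "B first", "B last", "A not after B"])
arrangement = "".join(random.sample("ABC", 3))
print(statement, "|", arrangement, "|", "TRUE" if holds(statement, arrangement) else "FALSE")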

 

Although the SVT is the most complex of the many ECTs that have been tried in my lab, the average RT for university students is still less than 1 second. The various “problems” differ widely in difficulty, with average RTs ranging from 650 msec to 1,400 msec. Negative statements take about 200 msec longer than the corresponding positive statements. MT, on the other hand, is virtually constant across conditions, indicating that it represents something other than speed of information processing.

 

The overall median RT and RTSD as measured in the SVT each correlates about −.50 with scores on the Raven’s Advanced Progressive Matrices given without time limit. The average RT on the SVT also shows large differences between Navy recruits and university students,[20] and between academically gifted children and their less gifted siblings.[21] The fact that there is a within-families correlation between RT and IQ indicates that these variables are intrinsically and functionally related.

 

One study20 reveals that the average processing time for each of the fourteen types of SVT statements in university students predicts the difficulty level of the statements (in terms of error responses) in children (third-graders) who were given the SVT as a nonspeeded paper-and-pencil test. While the SVT is of such trivial difficulty for college students that individual differences are much more reliably reflected by RT rather than by errors, the SVT items are relatively difficult for young children. Even when they take the SVT as a nonspeeded paper-and-pencil test, young children make errors on about 20 percent of the trials. (The few university students who made even a single error under these conditions, given as a pretest, were screened out.) The fact that the rank order of the children’s error rates on the various types of SVT statements closely corresponds to the rank order of the college students’ average RTs on the same statements indicates that item difficulty is related to speed of processing, even when the test is nonspeeded.

 

It appears that if information exceeds a critical level of complexity for the individual, the individual’s speed of processing is too slow to handle the information all at once; the system becomes overloaded and processing breaks down, with resulting errors, even for nonspeeded tests on which subjects are told to take all the time they need. There are some items in Raven’s Advanced Matrices, for example, that the majority of college students cannot solve with greater than chance success, even when given any amount of time, although the problems do not call for the retrieval of any particular knowledge. As already noted, the scores on such nonspeeded tests are correlated with the speed of information processing in simple ECTs that are easily performed by all subjects in the study.

 

Interesting test. The threshold hypothesis is also interesting for makers of IQ tests.

 

 

There are many other kinds of simple tasks that do not resemble the contents of conventional psychometric tests but that have significant correlations with IQ. Many studies have confirmed Spearman’s finding that pitch discrimination is g-loaded, and other musical discriminations, in duration, timbre, rhythmic pattern, pitch interval, and harmony, are correlated with IQ, independently of musical training.28 The strength of certain optical illusions is also significantly related to IQ.[29] Surprisingly, higher-IQ subjects experience certain illusions more strongly than subjects with lower IQ, probably because seeing the illusion implies a greater amount of mental transformation of the stimulus, and tasks that involve transformation of information (e.g., backward digit span) are typically more g loaded than tasks involving less transformation of the input (e.g., forward digit span). The positive correlation between IQ and susceptibility to illusions is consistent with the fact that susceptibility to optical illusions also increases with age, from childhood to maturity, and then decreases in old age—the same trajectory we see for raw-score performance on IQ tests and for speed and intraindividual consistency of RT in ECTs. The speed and consistency of information processing generally show an inverted U curve across the life span.

 

interesting.

 

 

Jensen mentions the Yerkes–Dodson law: en.wikipedia.org/wiki/Yerkes-Dodson_law. Interesting. I link to Wikipedia since I think its explanation of the law is better than Jensen’s, who only briefly mentions it.

 

 

[…Localized damage to the brain areas that normally subserve one of these group factors can leave the person severely impaired in the expression of the abilities loaded on the group factor, but with little or no impairment of abilities that are loaded on other group factors or on g.]

 

A classic example of this is females who are born with a chromosomal anomaly known as Turner’s syndrome.[70] Instead of having the two normal female sex chromosomes (designated XX), they lack one X chromosome (hence are designated XO). Provided no spatial visualization tests are included in the IQ battery, the IQs of these women (and presumably their levels of g) are normally distributed and virtually indistinguishable from that of the general population. Yet their performance on all tests that are highly loaded on the spatial-visualization factor is extremely low, typically borderline retarded, even in Turner’s syndrome women with verbal IQs above 130. It is as if their level of g is almost totally unreflected in their level of performance on spatial tasks.

 

It is much harder to imagine the behavior of persons who are especially deficient in all abilities involving g and all of the major group factors, but have only one group factor that remains intact. In our everyday experience, persons who are highly verbal, fluent, articulate, and use a highly varied vocabulary, speaking with perfect syntax and appropriate expression, are judged to be of at least average or probably superior IQ. But there is a rare and, until recently, little-known genetic anomaly, Williams syndrome,[71] in which the above-listed characteristics of high verbal ability are present in persons who are otherwise severely mentally deficient, with IQs averaging about 50. In most ways, Williams syndrome persons appear to behave with no more general capability of getting along in the world than most other persons with similarly low IQs. As adults, they display only the most rudimentary scholastic skills and must live under supervision. Only their spoken verbal ability has been spared by this genetic defect. But their verbal ability appears to be “hollow” with respect to g. They speak in complete, often complex, sentences, with good syntax, and even use unusual words appropriately. (They do surprisingly well on the Peabody Picture Vocabulary Test.) In response to a series of pictures, they can tell a connected and fully elaborated story, accompanied by appropriate, if somewhat exaggerated, emotional expression. Yet they have exceedingly little ability to reason, or to explain or summarize the meaning of what they say. On most spatial ability tests they generally perform on a par with Down syndrome persons of comparable IQ, but they also differ markedly from Down persons in peculiar ways. Williams syndrome subjects are more handicapped than IQ-matched Down subjects in figure copying and block designs.

 

Comparing Turner’s syndrome with Williams syndrome obviously suggests the generalization that a severe deficiency of one group factor in the presence of an average level of g is far less a handicap than an intact group factor in the presence of a very low level of g.

 

Never heard of Williams syndrome before.

 

en.wikipedia.org/wiki/Williams_syndrome

 

 

The correlation of IQ with grades and achievement test scores is highest (.60 to .70) in elementary school, which includes virtually the entire child population and hence the full range of mental ability. At each more advanced educational level, more and more pupils from the lower end of the IQ distribution drop out, thereby restricting the range of IQs. The average validity coefficients decrease accordingly: high school (.50 to .60), college (.40 to .50), graduate school (.30 to .40). All of these are quite high, as validity coefficients go, but they permit far less than accurate prediction of a specific individual. (The standard error of estimate is quite large for validity coefficients in this range.)
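The parenthetical can be made concrete with the usual formula for the standard error of estimate, SEE = SD × sqrt(1 − r²) (criterion SD set to 1 in this small check of mine):

import math

# Standard error of estimate when predicting a criterion from a predictor with validity r.
for r in (0.30, 0.50, 0.70):
    see = math.sqrt(1 - r**2)
    print(f"r = {r:.2f}  ->  SEE = {see:.2f} of the criterion SD")

Even at r = .70 the errors of prediction still have about 71% of the criterion’s original spread, which is why individual-level prediction stays rough.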

 

Interesting. One thing I have been thinking about is that my GPA throughout my life has always been a bit above average, but not close to the top. Given that the intelligence requirement increases with each new step through the school system, one would have expected a drop in GPA, but no such thing happened. In fact, it’s the other way around. My GPA in the Danish elementary school was 9.3 (9th grade); the average is ~8.1. This includes grades from non-intellectual subjects such as the ‘subject’ of having nice handwriting (yes, seriously). In 10th grade my average was 8.7, against an overall average of ~6.6. The max is 13 in all cases, although grades above 11 were normally not given.

 

In gymnasiet (roughly the high school equivalent), my GPA was 7.8 and the average is 7.0. The slightly lower grades are because the system was changed from a 13-step to a 7-step scale. For comparison, one can note that I went to HTX, which has lower grades. The percentile level is the 65th.

 

My university grades before dropping out of philosophy were rather good, lots of 10’s, but I don’t know the average, so I can’t compare. I suspect they were above average again.

 

 

Unless an individual has made the transition from word reading to reading comprehension of sentences and paragraphs, reading is neither pleasurable nor practically useful. Few adults with an IQ of eighty (the tenth percentile of the overall population norm) ever make the transition from word reading skill to reading comprehension. The problem of adult illiteracy (defined as less than a fourth-grade level of reading comprehension) in a society that provides an elementary school education to virtually its entire population is therefore largely a problem of the lower segment of the population distribution of g. In the vast majority of people with low reading comprehension, the problem is not word reading per se, but lack of comprehension. These individuals score about the same on tests of reading comprehension even if the test paragraphs are read aloud to them by the examiner. In other words, individual differences in oral comprehension and in reading comprehension are highly correlated.[21]

 

An IQ of 80… but the American black average is only about 85. Is it really true that ~37% of them are too dull to learn to read properly, compared with ~10% of whites?
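Those percentages follow from assuming normal distributions with means of 85 and 100, an SD of 15, and asking what share falls below 80; a quick check:

from statistics import NormalDist

threshold = 80
for group_mean in (85, 100):
    share_below = NormalDist(mu=group_mean, sigma=15).cdf(threshold)
    print(f"mean {group_mean}: {share_below:.0%} below IQ {threshold}")

# mean 85: about 37% below 80; mean 100: about 9% below 80 (roughly the ~10% above).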

 

 

Virtually every type of work calls for behavior that is guided by cognitive processes. As all such processes reflect g to some extent, work proficiency is g loaded. The degree depends on the level of novelty and cognitive complexity the job demands. No job is so simple as to be totally without a cognitive component. Several decades of empirical studies have shown thousands of correlations of various mental tests with work proficiency. One of the most important conclusions that can be drawn from all this research is that mental ability tests in general have a higher success rate in predicting job performance than any other variables that have been researched in this context, including (in descending order of average predictive validity) skill testing, reference checks, class rank or grade-point average, experience, interview, education, and interest measures.[22] In recent years, one personality constellation, characterized as “conscientiousness,” has emerged near the top of the list (just after general mental ability) as a predictor of occupational success.

 

Reminds me that I ought to look into this field of psychology. It’s called I/O psychology. Some time back I talked with a PhD (I think) on 4chan who studied that area. He said that if he had his way, he would just rely on g alone to predict job performance, training, etc. He recommended me a textbook, which I found on the internet.

 

Psychology Applied to Work, An Introduction to Industrial and Organizational Psychology – Paul M. Muchinsky

 

it seems decent.

 

 

A person cannot perform a job successfully without the specific knowledge required by the job. Possibly such job knowledge could be acquired on the job after a long period of trial-and-error learning. For all but the very simplest jobs, however, trial-and-error learning is simply too costly, both in time and in errors. Job training inculcates the basic knowledge much more efficiently, provided that later on-the-job experience further enhances the knowledge or skills acquired in prior job training. Because knowledge and skill acquisition depend on learning, and because the rate of learning is related to g, it is a reasonable hypothesis that g should be an effective predictor of individuals’ relative success in any specific training program.

 

The best studies for testing this hypothesis have been performed in the armed forces. Many thousands of recruits have been selected for entering different training programs for dozens of highly specialized jobs based on their performance on a variety of mental tests. As the amount of time for training is limited, efficiency dictates assigning military personnel to the various training schools so as to maximize the number who can complete the training successfully and minimize the number who fail in any given specialized school. When a failed trainee must be rerouted to a different training school better suited to his aptitude, it wastes time and money. Because the various schools make quite differing demands on cognitive abilities, the armed services employ psychometric researchers to develop and validate tests to best predict an individual’s probability of success in one or another of the various specialized schools.

 

 

One is tempted to say “common sense”, but apparently only the military dares to do such things.

 

 

A rough analogy may help to make the essential point. Suppose that for some reason it was impossible to measure persons’ heights directly in the usual way, with a measuring stick. However, we still could accurately measure the length of the shadow cast by each person when the person is standing outdoors in the sunlight. Provided everyone’s shadow is measured at the same time of day, at the same day of the year, and at the same latitude on the earth’s surface, the shadow measurements would show exactly the same correlations with persons’ weight, shoe size, suit or dress size, as if we had measured everyone directly with a yardstick; and the shadow measurements could be used to predict perfectly whether or not a given person had to stoop when walking through a door that is only 5½ feet high. However, if one group of persons’ shadows were measured at 9:00 a.m. and another group’s at 10:00 a.m., the pooled measurements would show a much smaller correlation with weight and other factors than if they were all measured at the same time, date, and place, and the measurements would have poor validity for predicting which persons could walk through a 5½-foot door without stooping. We would say, correctly, that these measurements are biased. In order to make them usefully accurate as predictors of a person’s weight and so forth, we would have to know the time the person’s shadow was measured and could then add or subtract a value that would adjust the measurement so as to make it commensurate with measurements obtained at some other specific time, date, and location. This procedure would permit the standardized shadow measurements of height, which in principle would be as good as the measurements obtained directly with a measuring stick.

 

Standardized IQs are somewhat analogous to the standardized shadow measurements of height, while the raw scores on IQ tests are more analogous to the raw measurements of the shadows themselves. If we naively remain unaware that the shadow measurements vary with the time of day, the day of the year, and the degrees of latitude, our raw measurements would prove practically worthless for comparing individuals or groups tested at different times, dates, or places. Correlations and predictions could be accurate only within each unique group of persons whose shadows were measured at the same time, date, and place. Since psychologists do not yet have the equivalent of a yardstick for measuring mental ability directly, their vehicles of mental measurement—IQ scores—are necessarily “shadow” measurements, as in our height analogy, albeit with amply demonstrated practical predictive validity and construct validity within certain temporal and cultural limits.

 

 

Interesting. However, biologically based tests should allow for absolute measurement: say, tests based on reaction times in elementary cognitive tasks (ECTs), the amount of myelination in the brain, brain pH levels, or brain size via imaging scans, if we can make these better measurements of g, etc.
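To make the book's adjustment idea concrete, here is a minimal sketch in Python. The sun elevations and shadow lengths below are made up for illustration; the point is just that a known correction (here simple trigonometry) turns raw shadow lengths taken under different conditions into commensurate height estimates.

```python
import math

# Hypothetical sun elevation angles (degrees) at two measurement times.
# These numbers are placeholders, chosen only to illustrate the correction.
SUN_ELEVATION = {"09:00": 25.0, "10:00": 35.0}

def height_from_shadow(shadow_length_m: float, measured_at: str) -> float:
    """Convert a raw shadow length into a standardized height estimate.

    height = shadow * tan(sun elevation); the tan() term is the adjustment
    that makes measurements taken at different times commensurate.
    """
    elevation = math.radians(SUN_ELEVATION[measured_at])
    return shadow_length_m * math.tan(elevation)

# Two people of the same true height cast different raw shadows at 09:00
# and 10:00, but the standardized estimates agree (~1.80 m).
print(height_from_shadow(3.86, "09:00"))
print(height_from_shadow(2.57, "10:00"))
```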

 

 

Many possible factors determine whether a person passes or fails a particular test item. Does the person understand the item at all (e.g., “What is the sum of all the latent roots of a 7 × 7 R matrix?”)? Has the person acquired the specific knowledge called for by the item (e.g., “Who wrote Faust?”), or perhaps has he acquired it in the past and has since forgotten it? Did the person really know the answer, but just couldn’t recall it at the moment of being tested? Does the item call for a cognitive skill the person either never acquired or has forgotten through disuse (e.g., “How much of a whole apple is two-thirds of one-half of the apple?”)? Does the person understand the problem and know how to solve it, but is unable to do it within the allotted time limit (e.g., substituting the corresponding letter of the alphabet for each of the numbers from one to twenty-six listed in a random order in one minute)? Or even when there is a liberal time limit does the person give up on the item or just guess at the answer prematurely, perhaps because the item looks too complicated at first glance (e.g., “If it takes six garden hoses, all running for three hours and thirty minutes to fill a tank, how many additional hoses would be needed to fill the tank in thirty minutes?”)?

 

1) Dunno.

2) Goethe.

3) 2/3 × 1/2 = 1/3.

4) 6 hoses × 3.5 hours = 21 hose-hours, so the tank “holds” 21 hose-hours. To fill it in half an hour: 21 = 0.5 × #hoses, so #hoses = 42, and 42 − 6 = 36 additional hoses.
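Just to double-check the hose arithmetic with a throwaway script:

```python
# Tank "size" in hose-hours: six hoses running 3.5 hours fill it.
tank_hose_hours = 6 * 3.5            # 21 hose-hours

# To fill the same tank in 0.5 hours we need tank_hose_hours / 0.5 hoses.
hoses_needed = tank_hose_hours / 0.5  # 42 hoses
additional_hoses = hoses_needed - 6   # 36 additional hoses

print(hoses_needed, additional_hoses)  # 42.0 36.0
```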

 

 

The only study I have found that investigated whether there has been a secular change (over thirty years) in the heritability of g-loaded test scores concluded that “the results revealed no unambiguous evidence for secular trends in the heritability of intelligence test scores.”[35] However, the heritability coefficients (based on twenty-two same-age cohort samples of MZ and DZ male twins born in Norway between 1930 and 1960) showed some statistically reliable nonlinear trends over the thirty-year period, as shown in Figure 10.2. The overall trend line goes equally down-up-down-up with heritability coefficients ranging from slightly above .80 to slightly below .40. The heritability coefficient was the same for the cohort born in 1930 as for the cohort born in 1960 (for both, h² = .80). The authors offer only weak ad hoc speculations about possible causes of this erratic fluctuation of h² across 22 points in time.

 

The hole is the German occupation of Norway. The data from the 1930s make sense to me: the Depression would result in civil unrest and a reshuffling of society, and after a period of that, heritabilities should stabilize again, as seen in the post-war period. I don't understand the 1950s downswing in heritability.

 

So I thought it might be something economic. I gathered GDP data and looked at it. Nope, not true.

 

www.norges-bank.no/pages/77409/p1_c6.xlsx

 

data from 1901 to 2000 looks like this:

[Figure: Norwegian GDP, 1901–2000]

 

It doesn't fit with the GDP hypothesis at all, except for the missing data during the war.
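For anyone who wants to redo this check, the basic recipe is short. The cohort heritabilities below are placeholders (read the real values off Figure 10.2), and the GDP series is a random stand-in for the numbers in the Norges Bank spreadsheet linked above:

```python
import numpy as np

# Placeholder values only: 22 same-age twin cohorts, h2 read off Figure 10.2.
cohort_h2 = [0.80, 0.75, 0.62, 0.55, 0.48, 0.52, 0.60, 0.68, 0.74, 0.78, 0.80,
             0.76, 0.70, 0.62, 0.50, 0.42, 0.48, 0.58, 0.66, 0.72, 0.78, 0.80]

# Stand-in GDP growth around each cohort's birth year; replace with the
# Norges Bank series (p1_c6.xlsx) matched to the cohorts.
gdp_growth = np.random.default_rng(1).normal(3.0, 2.0, size=22)

r = np.corrcoef(cohort_h2, gdp_growth)[0, 1]
print(f"r(h2, GDP growth) = {r:.2f}")
```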

 

I don't know; perhaps: www.newsinenglish.no/2010/06/16/the-50s-in-norway-werent-so-nifty/

 

The authors of the study that found the drop in heritability also don't know: “We are, however, quite at a loss in explaining the dip from about 1950 to 1954. Thus, we feel that the best strategy at present is to leave the issue of secular trends open.”

On the question of secular trends in the heritability of intelligence scores: A study of Norwegian twins

 

Head Start. The federal preschool intervention known as Head Start, which has been in continual existence now since 1964, is undoubtedly the largest-scale, though not the most intensive, educational intervention program ever undertaken, with an annual expenditure over $2 billion. The program is aimed at improving the health status and the learning and social skills of preschoolers from poor backgrounds so they can begin regular school more on a par with children from more privileged backgrounds. The intervention is typically short-term, with various programs lasting anywhere from a few months to two years.

 

The general conclusion of the hundreds of studies based on Head Start data is that the program has little, if any, effect on IQ or scholastic achievement that endures beyond more than two to three years after exposure to Head Start. The program does, however, have some potential health benefits, such as inoculations of enrollees against common childhood diseases and improved nutrition (by school-provided breakfast or lunch). The documented behavioral effects are less retention-in-grade and lower dropout rates. The cause(s) of these effects are uncertain. Because eligible children were not randomly enrolled in Head Start, but were selected by parents and program administrators, these scholastic correlates of Head Start are uninterpretable from a causal standpoint. Selection, rather than direct causation by the educational intervention itself, could be the explanation of Head Start’s beneficial outcomes.

 

A crazy amount of money spent for some slight health benefits. Perhaps there is a cheaper way to get such benefits.

 

 

The Milwaukee Project. Aside from Head Start, this is the most highly publicized of all intervention experiments. It was the most intensive and extensive educational intervention ever conducted for which the final results have been published.[55] It was also the most costly single experiment in the history of psychology and education—over $14 million. In terms of the highest peak of IQ gains for the seventeen children in the treatment condition (before the gains began to vanish), the cost was an estimated $23,000 per IQ point per child.

 

Holy shit, even though I think I've seen this figure before (in The g Factor by Chris Brand).
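A back-of-the-envelope check of what that per-point figure implies (my own arithmetic, not Jensen's):

```python
total_cost = 14_000_000              # dollars, "over $14 million"
children = 17                        # treatment group size
cost_per_iq_point_per_child = 23_000 # Jensen's estimate

# Implied peak IQ gain per child if the $23,000/point figure is taken at face value.
implied_peak_gain = total_cost / (children * cost_per_iq_point_per_child)
print(round(implied_peak_gain, 1))   # roughly 36 IQ points at the peak, before fade-out
```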

 

Jensen also doesn't mention the end of the project, but Wikipedia does:

en.wikipedia.org/wiki/Milwaukee_Project

 

The Milwaukee Project’s claimed success was celebrated in the popular media and by famous psychologists. However, later in the project Rick Heber, the principal investigator, was discharged from the University of Wisconsin–Madison and convicted and imprisoned for large-scale abuse of federal funding for private gain. Two of Heber’s colleagues in the project were also convicted for similar abuses. The project’s results were not published in any refereed scientific journals, and Heber did not respond to requests from colleagues for raw data and technical details of the study. Consequently, even the existence of the project as described by Heber has been called into question. Nevertheless, many college textbooks in psychology and education have uncritically reported the project’s results.[3][4]

 

This reminds me why open data is necessary in science.

 

 

[The Abecedarian Early Intervention Project.]

Both the T and C groups (each with about fifty subjects) were given age-appropriate mental tests (Bayley, Stanford-Binet, McCarthy, WPPSI) at six-month intervals from age six months to sixty months. The important comparisons here are the mean T-C differences at each testing. (Because the test scores do not have the same factor composition across this wide age range, the absolute scores of the T group alone are not as informative of the efficacy of the intervention as are the mean T-C differences.) At every testing from six months to five years of age, the T group outperformed the C group, and the overall average T-C difference (103.3 − 95.5 = 7.8 IQ points) was highly significant (p < .001). Peculiarly, however, the largest T-C differences (averaging fifteen IQ points) occurred between eighteen and thirty-six months of age and then declined during the last two years of intervention. At sixty months, the average T-C difference was 7.5 IQ points. This decrease might simply reflect the fact that with the children’s increasing age the tests become increasingly more g-loaded. The tests used before two or three years of age measure mainly perceptual-motor functions that have relatively little g saturation. Only later does g become the predominant component of variance in IQ. In follow-up studies at eight and twelve years of age, the T-C difference on the WISC-R was about five IQ points,[57] a difference that has remained up to age fifteen. At the last reported testing, the T-C difference was 4.6 IQ points, or a difference of 0.35σ. Scholastic achievement test scores showed a somewhat larger effect of the intervention up to age fifteen.[57] The intervention effect on other criteria of the project’s success was demonstrated by the decreased percentage of children who repeated at least one grade by age twelve (T = 28 percent, C = 55 percent) and the percentage of children with borderline or retarded intelligence (IQ < 85) (T = 12.8 percent, C = 44.2 percent).[56]

 

Thus this five-year program of intensive intervention beginning in early infancy increased IQ (at age fifteen years) by about five points. Judging from a comparable gain in scholastic achievement, the effect had broad transfer, suggesting that it probably raised the level of g to some extent. The finding that the T subjects did better than the C subjects on a battery of Piaget’s tests of conservation, which reflect important stages in mental development, is further evidence. The Piagetian tests are not only very different in task demands from anything in the conventional IQ tests used in the conventional assessments, but are also highly g loaded.[57] The mean T-C difference on the Piagetian conservation tests was equal to 0.33σ (equivalent to five IQ points). Assuming that the instructional materials in the intervention program did not closely resemble Piaget’s tests, it is a warranted conclusion that the intervention appreciably raised the level of g.

 

I'm still skeptical as to the g effects. I'd like to see the data on them as adults, and a larger sample size.

 

Again, Wikipedia has more on the issue, both positive and negative:

en.wikipedia.org/wiki/Abecedarian_Early_Intervention_Project

Significant findings

Follow-up assessment of the participants involved in the project has been ongoing. So far, outcomes have been measured at ages 3, 4, 5, 6.5, 8, 12, 15, 21, and 30.[5] The areas covered were cognitive functioning, academic skills, educational attainment, employment, parenthood, and social adjustment. The significant findings of the experiment were as follows:[6][7]

Impact of child care/preschool on reading and math achievement, and cognitive ability, at age 21:

  • An increase of 1.8 grade levels in reading achievement
  • An increase of 1.3 grade levels in math achievement
  • A modest increase in Full-Scale IQ (4.4 points), and in Verbal IQ (4.2 points).

Impact of child care/preschool on life outcomes at age 21:

  • Completion of a half-year more of education
  • Much higher percentage enrolled in school at age 21 (42 percent vs. 20 percent)
  • Much higher percentage attended, or still attending, a 4-year college (36 percent vs. 14 percent)
  • Much higher percentage engaged in skilled jobs (47 percent vs. 27 percent)
  • Much lower percentage of teen-aged parents (26 percent vs. 45 percent)
  • Reduction of criminal activity

Statistically significant outcomes at age 30:

  • Four times more likely to have graduated from a four-year college (23 percent vs. 6 percent)
  • More likely to have been employed consistently over the previous two years (74 percent vs. 53 percent)
  • Five times less likely to have used public assistance in the previous seven years (4 percent vs. 20 percent)
  • Delayed becoming parents by average of almost two years

(Most recent information from Developmental Psychology, January 18, 2012, cited in uncnews.unc.edu, January 19, 2012)

The project concluded that high quality, educational child care from early infancy was therefore of utmost importance.

Other, less intensive programs, notably the Head Start Program, but also others, have not been as successful. It may be that they provided too little too late compared with the Abecedarian program.[4]

Criticisms

Some researchers have advised caution about the reported positive results of the project. Among other things, they have pointed out analytical discrepancies in published reports, including unexplained changes in sample sizes between different assessments and publications. It has also been noted that the intervention group’s reported 4.6 point advantage in mean IQ at age 15 was not statistically significant. Herman Spitz has noted that a mean IQ difference of similar magnitude to the final difference between the intervention and control groups was apparent already at age six months, indicating that “4 1/2 years of massive intervention ended with virtually no effect.” Spitz has suggested that the IQ difference between the intervention and control groups may have been present from the outset due to faulty randomization.[8]

 

Not quite sure what to think. The sample sizes are still kind of small, and if Spitz is right in his criticism, the studies have not shown much.

 

The reason that I'm skeptical to begin with is that modern twin studies show that shared environment, which is what these studies change to a large degree, has no effect on adult IQ.

 

In any case, if it requires such expensive spending to get slightly less dumb kids, it's hard to justify as public policy. At the very least, I'd like to see the calculation that finds that this has a net positive benefit for society. It is possible, for instance, because crime rates are (supposedly) down and job retention is up, which leads to more taxes being paid, and so on.
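The kind of calculation I mean would look roughly like this. Every number below is a placeholder, not an estimate from any study; the point is only the structure (discounted benefit stream versus up-front program cost):

```python
# All inputs are placeholders to be replaced by real per-child estimates.
cost_per_child = 90_000          # total program cost per treated child
extra_taxes_per_year = 800       # from higher employment/earnings
crime_savings_per_year = 400     # from (supposedly) lower crime
years_of_benefit = 40
discount_rate = 0.03

# Present value of a constant annual benefit stream.
annual_benefit = extra_taxes_per_year + crime_savings_per_year
pv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                  for t in range(1, years_of_benefit + 1))

print(f"PV of benefits: {pv_benefits:,.0f} vs. cost: {cost_per_child:,}")
print("Net benefit positive?", pv_benefits > cost_per_child)
```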

 

 

Error distractors in multiple-choice answers are of interest as a method of discovering bias. When a person fails to select the correct answer but instead chooses one of the alternative erroneous responses (called “distractors”) offered for an item in a multiple-choice test, the person’s incorrect choice is not random, but is about as reliable as is the choice of the correct answer. In other words, error responses, like correct responses, are not just a matter of chance, but reflect certain information processes (or the failure of certain crucial steps in information processing) that lead the person to choose not just any distractor, but a particular one. Some types of errors result from a solution strategy that is more naive or less sophisticated than other types of errors. For example, consider the following test item:

 

If you mix a pint of water at 50° temperature with two pints of water at 80° measured on the same thermometer, what will be the temperature of the mixture? (a) 65°, (b) 70°, (c) 90°, (d) 130°, (e) Can’t say without knowing whether the temperatures are Centigrade or Fahrenheit.

 

We see that the four distractors differ in the level of sophistication in mental processing that would lead to their choice. The most naive distractor, for example, is D, which is arrived at by simple addition of 50° and 80°. The answer A at least shows that the subject realized the necessity for averaging the temperatures. The answer 90° is the most sophisticated distractor, as it reveals that the subject had a glimmer of the necessity for a weighted average (i.e., 50° + 80°/2 = 90°) but didn’t know how to go about calculating it. (The correct answer, of course, is B, because the weighted average is [1 pint × 50° + 2 pints × 80°]/3 pints = 70°.) Preference for selecting different distractors changes across age groups, with younger children being attracted to the less sophisticated type of distractor, as indicated by comparing the percentage of children in different age groups that select each distractor. The kinds of errors made, therefore, appear to reflect something about the children’s level of cognitive development.

 

Interesting.
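The weighted average behind the correct answer, as a tiny function:

```python
def mix_temperature(volumes, temps):
    """Weighted average temperature of mixed water (any consistent units)."""
    return sum(v * t for v, t in zip(volumes, temps)) / sum(volumes)

# 1 pint at 50° mixed with 2 pints at 80° -> 70° (answer B).
print(mix_temperature([1, 2], [50, 80]))  # 70.0
```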

 

 

What is termed a cline results where groups overlap at their fuzzy boundaries in some characteristic, with intermediate gradations of the phenotypic characteristic, often making the classification of many individuals ambiguous or even impossible, unless they are classified by some arbitrary rule that ignores biology. The fact that there are intermediate gradations or blends between racial groups, however, does not contradict the genetic and statistical concept of race. The different colors of a rainbow do not consist of discrete bands but are a perfect continuum, yet we readily distinguish different regions of this continuum as blue, green, yellow, and red, and we effectively classify many things according to these colors. The validity of such distinctions and of the categories based on them obviously need not require that they form perfectly discrete Platonic categories.

 

While the rainbow analogy works to some extent, it is not that good. The reason is that with rainbows, all the colors (groups) are on a continuum in such a way that there isn't a blend between every two colors (groups). This is not how races work, as there is always the possibility of a blend between any two groups, even odd pairings such as Amerindians and Aboriginals.

 

 

Of the approximately 100,000 human polymorphic genes, about 50,000 are functional in the brain and about 30,000 are unique to brain functions.[12] The brain is by far the structurally and functionally most complex organ in the human body and the greater part of this complexity resides in the neural structures of the cerebral hemispheres, which, in humans, are much larger relative to total brain size than in any other species. A general principle of neural organization states that, within a given species, the size and complexity of a structure reflect the behavioral importance of that structure. The reason, again, is that structure and function have evolved conjointly as an integrated adaptive mechanism. But as there are only some 50,000 genes involved in the brain’s development and there are at least 200 billion neurons and trillions of synaptic connections in the brain, it is clear that any single gene must influence some huge number of neurons—not just any neurons selected at random, but complex systems of neurons organized to serve special functions related to behavioral capacities.

 

It is extremely improbable that the evolution of racial differences since the advent of Homo sapiens excluded allelic changes only in those 50,000 genes that are involved with the brain.

 

The same point was made, although less technically, in Hjernevask. There is no good a priori reason to think that natural selection for some reason only worked on non-brain, non-behavioral genes. It simply makes no sense at all to suppose that.

 

 

Bear in mind that, from the standpoint of natural selection, a larger brain size (and its corresponding larger head size) is in many ways decidedly disadvantageous. A large brain is metabolically very expensive, requiring a high-calorie diet. Though the human brain is less than 2 percent of total body weight, it accounts for some 20 percent of the body’s basal metabolic rate (BMR). In other primates, the brain accounts for about 10 percent of the BMR, and for most carnivores, less than 5 percent. A larger head also greatly increases the difficulty of giving birth and incurs much greater risk of perinatal trauma or even fetal death, which are much more frequent in humans than in any other animal species. A larger head also puts a greater strain on the skeletal and muscular support. Further, it increases the chances of being fatally hit by an enemy’s club or missile. Despite such disadvantages of larger head size, the human brain, in fact, evolved markedly in size, with its cortical layer accommodating to a relatively lesser increase in head size by becoming highly convoluted in the endocranial vault. In the evolution of the brain, the effects of natural selection had to have reflected the net selective pressures that made an increase in brain size disadvantageous versus those that were advantageous. The advantages obviously outweighed the disadvantages to some degree or the increase in hominid brain size would not have occurred.

 

This brain must have been very useful for something. If some of this use has to do with non-social things, like the environment, one would expect to see different levels of 'brain adaptation' due to the relative differences in selection pressure in populations that evolved in different environments.

 

 

How then can the default hypothesis be tested empirically? It is tested exactly as is any other scientific hypothesis; no hypothesis is regarded as scientific unless predictions derived from it are capable of risking refutation by an empirical test. Certain predictions can be made from the default hypothesis that are capable of empirical test. If the observed result differs significantly from the prediction, the hypothesis is considered disproved, unless it can be shown that the tested prediction was an incorrect deduction from the hypothesis, or that there are artifacts in the data or methodological flaws in their analysis that could account for the observed result. If the observed result does in fact accord with the prediction, the hypothesis survives, although it cannot be said to be proven. This is because it is logically impossible to prove the null hypothesis, which states that there is no difference between the predicted and the observed result. If there is an alternative hypothesis, it can also be tested against the same observed result.

 

For example, if we hypothesize that no tiger is living in the Sherwood Forest and a hundred people searching the forest fail to find a tiger, we have not proved the null hypothesis, because the searchers might have failed to look in the right places. If someone actually found a tiger in the forest, however, the hypothesis is absolutely disproved. The alternative hypothesis is that a tiger does live in the forest; finding a tiger clearly proves the hypothesis. The failure of searchers to find the tiger decreases the probability of its existence, and the more searching, the lower is the probability, but it can never prove the tiger’s nonexistence.

 

Similarly, the default hypothesis predicts certain outcomes under specified conditions. If the observed outcome does not differ significantly from the predicted outcomes, the default hypothesis is upheld but not proved. If the prediction differs significantly from the observed result, the hypothesis must be rejected. Typically, it is modified to accord better with the existing evidence, and then its modified predictions are empirically tested with new data. If it survives numerous tests, it conventionally becomes a “fact.” In this sense, for example, it is a “fact” that the earth revolves around the sun, and it is a “fact” that all present-day organisms have evolved from primitive forms.

 

Meh, mediocre or bad philosophy of science.

 

 

 

 

The problem with this data is that the women were not done having children; the data are from women aged 34. Since especially smart women (and so relatively more whites) have children later than that age, their fertility estimates are spuriously low. See also the data in Intelligence: A Unifying Construct for the Social Sciences (Richard Lynn and Tatu Vanhanen, 2012).

 

 

Whites perform significantly better than blacks on the subtests called Comprehension, Block Design, Object Assembly, and Mazes. The latter three tests are loaded on the spatial visualization factor of the WISC-R. Blacks perform significantly better than whites on Arithmetic and Digit Span. Both of these tests are loaded on the short-term memory factor of the WISC-R. (As the test of arithmetic reasoning is given orally, the subject must remember the key elements of the problem long enough to solve it.) It is noteworthy that Vocabulary is the one test that shows zero W-B difference when g is removed. Along with Information and Similarities, which even show a slight (but nonsignificant) advantage for blacks, these are the subtests most often claimed to be culturally biased against blacks. The same profile differences on the WISC-R were found in another study[81b] based on 270 whites and 270 blacks who were perfectly matched on Full Scale IQ.

 

Seems inconsistent with typical environment-only theories.

 

 

 

Do Bad Things Happen When Works Enter the Public Domain? Empirical Tests of Copyright Term Extension

papers.ssrn.com/sol3/papers.cfm?abstract_id=2130008

 

The most interesting thing about this paper was the arguments put forward by the supporters of copyright extension. They are so distressingly bad that it seems pointless to empirically test them. Theoretical arguments are sufficient to show them to be faulty. Nevertheless, the authors carried out some experiments that show the obvious to be true.

Abstract:

According to the current copyright statute, in 2018, copyrighted works of music,
film, and literature will begin to transition into the public domain. While this will
prove a boon for users and creators, it could be disastrous for the owners of these
valuable copyrights. Accordingly, the next few years will witness another round of
aggressive lobbying by the film, music, and publishing industries to extend the
terms of already-existing works. These industries, and a number of prominent
scholars, claim that when works enter the public domain bad things will happen
to them. They worry that works in the public domain will be underused, overused,
or tarnished in ways that will undermine the works’ cultural and economic value.
Although the validity of their assertions turn on empirically testable hypotheses,
very little effort has been made to study them.  
 
This Article attempts to fill that gap by studying the market for audiobook
recordings of bestselling novels. Data from our research, including a novel
human subjects experiment, suggest that the claims about the public domain are
suspect. Our data indicate that audio books made from public domain bestsellers
(1913-22) are significantly more available than those made from copyrighted
bestsellers (1923-32). In addition, our experimental protocol suggests that
professionally made recordings of public domain and copyrighted books are of
similar quality. Finally, while a low quality recording seems to lower a listener’s
valuation of the underlying work, our data do not suggest any correlation
between that valuation and legal status of the underlying work. Accordingly, our
research indicates that the significant costs of additional copyright protection for
already-existing works are not justified by the benefits claimed for it.  These
findings will be crucially important to the inevitable congressional and judicial
debate over copyright term extension in the next few years.

Richard Lynn was so kind as to send me a signed copy of his latest book. I immediately paused the reading of another book to read this one. Some comments and quotes are below. Quotes are from the ebook version of the book, which I found on the internet.

Richard Lynn, Tatu Vanhanen – Intelligence: A Unifying Construct for the Social Sciences, 2012

Review

Some general conclusions about the book. All in all this is a typical Richard Lynn book: it has a very dry style and is somewhat repetitive. On the other hand, it is not overly long at 400 pages, many of which are long lists of tables that are not normally read except when one wants to look up specific countries. It would perhaps have been a good idea to just publish those on the internet for the curious and for other researchers. The book contains a wealth of citations revealing very impressive scholarship. The areas investigated on a global level are many, and the results interesting. The people who think that national IQs are “meaningless” and that human races do not exist or are social constructions (whatever that means, if anything) have the difficult job of explaining why, if these numbers are meaningless, they fare so well in predicting things on a global level. In other words, why do they have such high validity for a multitude of things? One cannot just regard IQ as “academic intelligence” or some such thing if one can effectively use national IQs to predict things like the lack of proper sanitation. Most often national IQs are found to be better predictors than various non-IQ variables, although on some occasions I would have liked the authors to use some more variables to see whether they made an impact. I think the authors are sometimes a bit too pessimistic about the possibilities of changing the situation for the low-IQ countries, but I agree with them that one should not expect many of these correlations to change drastically in the near future.

 

Thoughts and comments to various things

The introduction of the book neatly and briefly explains what the book is about:

The physical sciences are unified by a few common theoretical
constructs, such as mass, energy, pressure, atoms, molecules and
momentum, that are defined and measured in the same ways and
explain a wide range of phenomena in physics, astrophysics,
chemistry and biochemistry. This has been beneficial for the
development of the physical sciences, because it has allowed the
transfer of concepts from one field to others. It has allowed
interface subjects like chemical physics and biochemistry to
develop their own insights and concepts on the basis of those
already developed in their parent fields. Physics is the most basic
of the natural sciences, because the phenomena of the others can
be explained by the laws of physics. For this reason, physics has
been called the queen of the physical sciences.

Hitherto, the social sciences have lacked common unifying
constructs of this kind. The disciplines of the social sciences,
comprising psychology, economics, political science,
demography, sociology, criminology, anthropology and
epidemiology are largely isolated from one another, each with
their own vocabulary and theoretical constructs.
Psychology can be considered the most basic of the social
sciences because it is concerned with differences between
individuals, while the other social sciences are principally
concerned with differences between groups such as socio-
economic classes, ethnic and racial populations, regions within
countries, and nations. These groups are aggregates of
individuals, so the laws that have been established in psychology
should be applicable to the group phenomena that are the concern
of the other social sciences.
Our objective in this book is to develop the case that the
psychological construct of intelligence can be a unifying
explanatory construct for the social sciences. Intelligence is
measured by the intelligence test that was constructed by Alfred
Binet in 1905. During the succeeding century it has been shown
that intelligence, measured as the IQ (the intelligence quotient),
is a determinant of many important social phenomena,
including educational attainment, earnings, socio-economic
status, crime and health. Our theme is that the explanatory value
of intelligence that has been established for individuals can be
extended to the explanation of the differences between groups,
that have been found in the other social sciences, and in
particular to the explanation of the differences between nations.
Thus, we propose that psychology is potentially the queen of
the social sciences, analogous to the position of physics as the
queen of the physical sciences. (p. 1-2)

It is difficult to disagree with this.

One of the things that bothers me about the Health chapter is that it doesn't try to compare with and incorporate the data from The Spirit Level. The authors of The Spirit Level contend that many of the things that Lynn & Vanhanen (LV) think are due to intelligence are really due to economic (in)equality. Unfortunately, LV do not try to control for this. It would be interesting to see whether the effects of high economic equality go away if one controls for intelligence; in other words, whether the effects of economic equality are really just intelligence working through it.

For a video introduction to the SPL, see this:

One annoying thing about this book is that it is full of data tables, and the data from these cannot easily be copied into something useful. At least, I have failed to do it in any easy way; it requires a lot of fiddling to get the formatting right in Calc/Excel. Hopefully, LV will make the data tables available on their websites where they can easily be downloaded, so that others can test other hypotheses.
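For others fighting the same problem, something like this sometimes gets a usable CSV out of a PDF table. It is only a sketch: it assumes the ebook is a text-based PDF, and the filename and page number below are placeholders.

```python
import csv
import pdfplumber  # third-party: pip install pdfplumber

PDF_PATH = "lynn_vanhanen_2012.pdf"  # placeholder filename
PAGE_NUMBER = 83                     # placeholder: the page holding the table

with pdfplumber.open(PDF_PATH) as pdf:
    page = pdf.pages[PAGE_NUMBER - 1]
    tables = page.extract_tables()   # list of tables, each a list of rows

# Dump the first detected table to CSV for Calc/Excel.
with open("table.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in tables[0]:
        writer.writerow(row)
```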

Many of the tables span two pages but are not that big and could easily fit into a single table on one page. Unfortunately, using the image now requires that one either zooms out a lot to fit it all onto one screen before taking a screenshot, which makes the text small, or takes two screenshots and stitches them together in an image editor. It would be very nice if the tables were made available on the website for free use.

A recurrent thing about the book is that the editor did quite a poor job. There are a lot of easily visible typographical mistakes that are a bit annoying. They don't distract too much from the reading of the book, except in the rare cases where a missing word makes interpretation necessary. For instance, on pp. 83-84, Table 4.5, the 10th line is missing the prefix “in”, which makes it appear as if the data presented vary wildly, from a positive 0.61 correlation to three other strong negative correlations between -0.52 and -0.60.

There was also another place where a “not” was missing, and this left me confused for a few seconds.

As for formatting, look at Table 7.1, line 1: the word “All” is strangely located on a line below the other information. Look also at lines 10-11 and notice how the two “F”s are floating to the left.

These mistakes should be fixed and a new online edition released. This can't be too difficult to do.

Notice how low the dysgenic effects are; I was under the impression that they were stronger. Also keep in mind that lines 14-17 are those with the best data. The reason for that is that:

Rows 2, 3 and 4 give negative correlations between
intelligence and fertility based on a nationally representative
American sample showing that the negative correlation is higher
for white women than for white men, and higher for white
women than for black women. This study is not wholly
satisfactory because the age of the sample was 25 to 34 years and
many of them would not have completed their fertility.

To overcome this problem, Vining (1995) published data on
the fertility of his female sample of the ages between 35 and 44,
which can be regarded as close to completed fertility. The results
are given in rows 4 and 5 for white and black women and show
that the correlations between intelligence and fertility are still
significantly negative and are higher for black women (-0.226)
than for white women (-0.062). These correlations are probably
underestimates because the samples excluded high-school
dropouts, who were about 14 per cent of whites and 26 per cent
of blacks at this time, and who likely had low IQs and high
average fertility. (p. 201-2)

Which is to say that if one gathers the data before women are done having children, one will miss some older women who have children late. Since such women are especially likely to be well-educated (and hence smart), this is an important bias.

Still, given that there are some consistent negative correlations, there is a dysgenic effect – it's just smaller than I had imagined, at least on a within-population basis.

It would be interesting to explore to what extent differences
in geographical circumstances and water resources affect the
access to clean water, but unfortunately it is difficult to find
appropriate indicators of geographical factors. However, there is
one indicator for this purpose. WDI-09 (Table 3.5) includes data
on renewable internal freshwater resources per capita in cubic
metres in 2007 (Freshwater). It measures internal renewable
resources (internal river flows and groundwater from rainfall) in
the country. It is noted that these “estimates are based on different
sources and refer to different years, so cross-country
comparisons should be made with caution” (WDI-09, p. 153). It
could be assumed that freshwater resources per capita are
negatively correlated with Water-08, but in fact there is no
correlation between these variables (0.050, N=139). The
correlation between national IQ and Freshwater is also in zero
(0.014, N=147). Access to clean water seems to be completely
independent from freshwater resources, whereas it is
significantly dependent on national IQ (39%) and several
environmental variables. Therefore, it is interesting to see how
well national IQ explains the variation in Water-08 at the level of
single countries and what kinds of countries deviate most from
the regression line. Figure 8.1 summarizes the results of the
regression analysis of Water-08 on national IQ in the group of
166 countries. Detailed results for single countries are reported in
Table 8.3. (p. 246)

Very interesting! Is this a direct disproof of Jared Diamond's (1997) environmental theory as regards access to water?

Figure 8.1 shows that the relationship between national IQ
and Water-08 is linear as hypothesized, but many highly
deviating countries weaken the relationship. In the countries
above the regression line, the percentage of people without
access to improved water services is higher than expected on the
basis of the regression equation, and in the countries below the
regression line it is lower than expected. In all countries above
the national IQ level of 90, the percentage of the population
without access to clean water is zero or near zero, except in
Cambodia, China and Mongolia, whereas this percentage varies
greatly in the countries below the national IQ level of 85.
National IQ is not able to explain the great variation in Water-08
in the group of countries with low national IQs. Most of that
variation seems to be due to some environmental and local
factors, perhaps also to measurement errors. ( p. 247-8)

In the case of China it seems very unhelpful to categorize it as one country; it is a huge place. It would be better to split it up into provinces and calculate these instead (en.wikipedia.org/wiki/Provinces_of_the_People%27s_Republic_of_China), although this will result in many of them having no data. I doubt that there is IQ data for all the regions of China; perhaps those in the regions away from the ocean are not quite as clever as those near the ocean and near Japan. But surely there is data for Hong Kong, Macau, and some other city or city-like regions.

One thing that bothers me a bit is that when LV discuss outliers from their correlations, they use a seemingly arbitrarily picked number. Here's a random example (p. 258):

Table 8.3 shows the countries which deviate most from the
regression line and for which positive or negative residuals are
large. An interesting question is whether some systematic
differences between large positive and negative outliers could
help to explain their deviations from the regression line. Let us
regard as large outliers countries whose residuals are ±15 or
higher (one standard deviation is 13).

They note that the SD is 13, but instead opt to use 15 without an explanation. This is the same every time they do such an analysis, which they do in every chapter; normally, they choose some number slightly larger than 1 SD. On p. 155 the SD is 1.7 and they use 2; on p. 146 they use 11 while the SD is 10.1; on p. 103 they use 12 while the SD is 12.017. The general rule seems to be: choose an arbitrary but nice-looking number just a bit larger than the SD. I don't think this skews the analysis much, but I would have preferred if they simply used 1 SD as the cutoff for counting as an outlier, as in the sketch below.
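What I have in mind is simply flagging countries whose residual from the regression exceeds one standard deviation of the residuals, rather than a hand-picked cutoff. A sketch (the function and variable names are mine; the real data would come from their tables):

```python
import numpy as np

def flag_outliers(national_iq, outcome, names):
    """Flag countries whose residual from a simple linear regression of
    `outcome` on `national_iq` exceeds 1 SD of the residuals."""
    x = np.asarray(national_iq, dtype=float)
    y = np.asarray(outcome, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    cutoff = residuals.std()
    return [(name, round(r, 1)) for name, r in zip(names, residuals)
            if abs(r) > cutoff]

# Toy example with made-up numbers:
print(flag_outliers([70, 80, 90, 100], [60, 30, 25, 2], ["A", "B", "C", "D"]))
```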

One odd thing is that when LV find that a relationship between national IQs and some other variable is curvilinear, they still go on to use the linear model in their explanation. They do this time and time again, and it results in some bad points of analysis, for instance:

It is remarkable that this group does not include any
economically highly developed countries, Caribbean tourist
countries, Latin American countries, or oil exporting countries.
Most of them are poor sub-Saharan African countries (17). China
is not really a large positive outlier for the reason that its
predicted value of Water-08 is negative -6. The other eight
positive outliers are poor Asian and Oceanian countries. Most of
them (especially Afghanistan, Cambodia, Myanmar and Timor-
Leste) have suffered from serious civil wars, which have
hampered socio-economic development. (p.259)

If they had made a proper model, one where negative values are impossible, they would have avoided such oddities. It's not that LV don't know this, as they discuss on p. 79:

Rows 13 through 18 give six correlations between national
IQs and various measures of per capita income reported. The
author analyzed further the relationship by fitting linear, quadratic
and exponential curves to the data for 81 and 185 nations and
found that fitting exponential curves gave the best results. His
interpretation was that “a given increment in IQ, anywhere along
the IQ scale, results in a given percentage in GDP, rather than a
given dollar increase as linear fitting would predict” (Dickerson,
2006, p. 291). He suggests that

exponential fitting of GDP to IQ is logically
meaningful as well as mathematically valid. It is
inherently reasonable that a given increment of IQ
should improve GDP by the same proportional ratio,
not the same number of dollars. An increase of GDP
from $500 to $600 is a much more significant change
than is a linear increase from $20,000 to $20,100. The
same proportional change would increase $20,000 to
$24,000. These data tell us that the influence of
increasing IQ is a proportional effect, not an absolute
one (p. 294).

Here's an example of a plot where LV acknowledge that the relationship is curvilinear:

I would replicate this plot myself, fit an exponential function to it, and then look for outliers, but I would need the raw data for that in a usable form; see the previous point about how difficult it is to extract the data from the PDF, and the need to publish it in some other format, preferably Excel/Calc. The replication itself would be short, as sketched below.
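A sketch of the fit, following Dickerson's suggestion of an exponential (proportional) rather than linear relationship between national IQ and GDP. The arrays are placeholders to be replaced by the real columns once the data tables are available:

```python
import numpy as np

# Placeholder data; substitute the real national IQ and per-capita GDP columns.
iq = np.array([65, 70, 75, 80, 85, 90, 95, 100, 105])
gdp = np.array([600, 900, 1500, 2500, 4500, 8000, 14000, 25000, 40000])

# Fit log(GDP) = log(a) + b*IQ, i.e. the exponential curve GDP = a * exp(b*IQ).
b, log_a = np.polyfit(iq, np.log(gdp), 1)
predicted = np.exp(log_a + b * iq)
residuals = gdp - predicted  # outlier hunting would start from these

print(f"each extra IQ point multiplies GDP by ~{np.exp(b):.3f}")
print("residuals:", residuals.round(0))
```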

Some systematic differences in the characteristics of large
positive and negative outliers provide partial explanations for
their large residuals. Most countries with large negative residuals
have benefitted from investments, technologies, and
management from countries of higher national IQs, whereas
most countries with large positive residuals have received much
less such foreign help. (p.260)

Tourism is not the only way to receive money from the rich countries. It would be interesting to look at the effects of foreign aid to poor countries: is there any discernible effect of it? Perhaps it has had effects on water supply, for instance.

Table 8.4 shows that the indicators of sanitation are a little
more strongly correlated with national IQ than the indicators of
water (cf. Table 8.2). The explained part of variation varies from
41 to 60 percent. Differences between the three groups of
countries are relatively small, although the correlations are
strongest in the group of countries with more than one million
inhabitants. It should be noted that the correlations between
national IQ and Sanitation-08 are negative because Sanitation-08
concerns the percentage of the population without access to
improved sanitation services (see section 2). (p. 261)

I understand their wish to stay true to the source's numbers, but I would have preferred if they had multiplied the numbers by -1 to make them fit the direction of the other numbers.

Row 7 gives a low but statistically significant positive
correlation of 0.18 between national IQ and son preference. This
may be a surprising result, because it might be expected that
liberal and more modern populations would not have such a
strong preference for sons as more traditional peoples. (p. 273)

Surprising indeed.

Consistent with Frazer’s analysis, it has been found in a
number of studies of individuals within nations that there is a
negative relationship between intelligence and religious belief.
This negative relationship was first reported in the United States
in the 1920s by Howells (1928) and Sinclair (1928), who both
reported studies showing negative correlations between
intelligence and religious belief among college students of -0.27
to -0.36 (using different measures of religious belief). A number
of subsequent studies confirmed these early results, and a review
of 43 of these studies by Bell (2002) found that all but four found
a negative correlation. To these can be added a study in the
Netherlands of a nationally representative sample (total N=1,538)
that reported that agnostics scored 4 IQs higher than believers
(Verhage, 1964). In a more recent study Kanazawa (2010) has
analyzed the data of the American National Longitudinal Study of
Adolescent Health, a national sample initially tested for
intelligence with the PPVT (Peabody Picture Vocabulary Test) as
adolescents and interviewed as young adults in 2001-2
(N=14,277). At this interview they were asked: “To what extent
are you a religious person?” The responses were coded “not
religious at all”, “slightly religious”, “moderately religious”, and
“very religious”. The results showed that the “not religious at all”
group had the highest IQ (103.09), followed in descending order
by the other three groups (IQs = 99.34, 98.28, 97.14). The
negative relationship between IQ and religious belief is highly
statistically significant. (p. 278)

The Bell article sounds interesting, but after spending some time trying to locate it, I failed. It seems that I'm not the only one having such problems.

Regardless of that, there is a similar article: “The Effect of Intelligence on Religious Faith,” Free Inquiry, Spring 1986 (1). There is an online paraphrase of it here.

This is one of the interesting datasets that I'd love to see a nonlinear function fitted to. I want to know how much we would need to boost intelligence to almost remove religiousness. Perhaps one can discover this by using high-IQ samples: at which IQ are there <5% religious people? A toy sketch of such a fit is below.
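One toy way to get at that number: fit a decreasing logistic curve of P(religious) on IQ and solve for the IQ at which it drops to 5 percent. The data points below are invented purely to show the mechanics, not taken from Kanazawa or anyone else:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(iq, a, b):
    """P(religious) as a decreasing logistic function of IQ."""
    return 1.0 / (1.0 + np.exp(a + b * iq))

# Invented group-level data (IQ, proportion religious), for illustration only.
iq = np.array([85, 95, 105, 115, 125, 135])
p_religious = np.array([0.90, 0.80, 0.65, 0.45, 0.25, 0.10])

(a, b), _ = curve_fit(logistic, iq, p_religious, p0=(-10, 0.1))

# Solve logistic(iq) = 0.05 for iq: a + b*iq = ln(1/0.05 - 1).
iq_at_5pct = (np.log(1 / 0.05 - 1) - a) / b
print(f"IQ at which ~5% remain religious (toy data): {iq_at_5pct:.0f}")
```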

Another of those tables that have problems with the direction of the numbers. Legatum and Newsweek should correlate positively with each other, right? Since they measure in the same direction, that is, the one opposite of HDI and IHC (which correlate positively with each other).

LV mention the 2008 study by Kanazawa, “Temperature and evolutionary novelty as forces behind the evolution of general intelligence”. The interesting thing about this study is that it sort of tests the idea that I wrote about earlier. Kanazawa pursues his novelty hypothesis using distance from Africa to predict national IQs. However, compared with the Ashraf and Galor (2012) paper, he just uses as-the-crow-flies distance instead of actual travel distance (humans are not birds, after all, nor did they sail straight from Africa to populate America), so I'm not really sure what his computed r's are useful for. It would be interesting to add the data from the Ashraf and Galor (2012) paper about distances and genetic diversity to the climate model. LV do mention at one point that lack of genetic diversity makes evolution slower:

A further
anomaly is that the Australian Aborigines inhabit a relatively
warm region but have small brain sizes and low IQs. The
explanation for this anomaly is that these were a small isolated
population numbering only around 300,000 at the time of
European colonization, so the mutant alleles for higher IQs did
not appear in them. (p. 381)

Consider also the criticism of Kanazawa's paper in “Why national IQs do not support evolutionary theories of intelligence”, Wicherts et al. (2009):

5. Migration and geographic distance

Kanazawa (2008) was concerned with the relation between levels of general intelligence, as they were distributed geographically thousands of years ago, and the degree of “evolutionary novelty” of the relevant geographic locations. Lacking data regarding evolutionary novelty, Kanazawa proposed, as a measure of evolutionary novelty, the geographic distance to the EEA, i.e., a large region of sub-Saharan Africa. The idea is that the greater the distance from the EEA, the more evolutionarily novel the corresponding environment. There are several problems with this operationalization.

First, Kanazawa operationalized geographic distance using Pythagoras’ first theorem (a² + b² = c²). However, Pythagoras’ theorem applies to Euclidian space, not to the surface of a sphere. Second, even if these calculations were accurate, distances as traveled on foot do not in general correspond to distances “as the crow flies” (Kanazawa 2008, p. 102). According to most theories, ancestors of the indigenous people in Australia (i.e., the Aborigines) moved out of Africa on foot. They probably crossed the Red Sea from Africa to present day Saudi Arabia, went on to India, and then through Indonesia to Australia. Thus the distance covered on foot must have been much larger than the distances computed by Kanazawa. This suggests that the real distances covered by humans to reach a given location, i.e., data of central interest to Kanazawa, are likely to differ appreciably from the distances as the crow flies. One can avoid this problem by using maps that exist of the probable routes that humans followed in their exodus from Africa, and estimating the distances between the cradle of humankind and various other locations accordingly (Relethford, 2004).

Third, it is not obvious that locations farther removed from the African Savannah are geographically and ecologically more dissimilar than locations closer to the African Savannah. For instance, the rainforests of central Africa or the mountain ranges of Morocco are relatively close to the Savannah, but arguably are more dissimilar to it than the great plains of North America or the steppes of Mongolia. In addition, some parts of the world were quite similar to the African savannas during the relevant period of evolution (e.g., Ray & Adams, 2001). Clearly, there is no strict correspondence between evolutionary novelty and geographic distance. This leaves the use of distances in need of theoretical justification. It is also noteworthy that given the time span of evolutionary theories, it is hardly useful to speak of environmental effects as if these were fixed at a certain geographical location.

People migrate, and have done so extensively in the time since the evolutionarily period relevant to the evolutionary theories by Kanazawa and others. A simple, yet imperfect, solution to this problem is to use data solely from countries that have predominantly indigenous inhabitants (Templer, 2008; Templer & Arikawa, 2006). However, Kanazawa used national IQs of all countries in Lynn and Vanhanen’s survey, including Australia and the United States. This casts further doubt on the relevance of Kanazawa’s data vis-à-vis the evolutionary theories that he set out to test. Given persistent migration, it is likely that many of the people, whose test scores Lynn and Vanhanen used to calculate national IQs, are genetically unrelated to the original inhabitants of their respective countries. In at least 50 of the 192 countries in Kanazawa’s (2008) study, the indigenous people represent the ethnic minority.
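The Pythagoras point is easy to illustrate: on a sphere the appropriate measure is the great-circle (haversine) distance, and for long distances a flat-plane formula applied to latitude/longitude differences gives a very different number. A quick sketch; the coordinates are rough illustrative points (roughly the East African EEA and interior Alaska, on the route toward the Americas):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flat_plane_km(lat1, lon1, lat2, lon2):
    """'Pythagorean' distance treating degrees as flat coordinates (~111 km/degree)."""
    return 111.0 * math.hypot(lat2 - lat1, lon2 - lon1)

print(round(haversine_km(1.0, 36.8, -64.8 + 129.6, -147.7)))   # placeholder guard; see below
```

Correction to the last line of the sketch: call the two functions on the same pair of points, e.g. `haversine_km(1.0, 36.8, 64.8, -147.7)` versus `flat_plane_km(1.0, 36.8, 64.8, -147.7)`; the haversine figure comes out around 12,700 km while the naive flat-plane figure is over 21,000 km, because the flat formula mishandles the longitude wrap-around that the spherical formula takes care of.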

Via Steve Sailer.

The Out of Africa Hypothesis, Human Genetic Diversity, and Comparative Economic Development

Quamrul Ashraf and Oded Galor

Abstract
This research advances and empirically establishes the hypothesis that, in the course of the prehistoric exodus of Homo sapiens out of Africa, variation in migratory distance to various settlements across the globe affected genetic diversity and has had a long-lasting hump-shaped effect on comparative economic development, reflecting the trade-offs between the beneficial and the detrimental effects of diversity on productivity. While intermediate levels of genetic diversity prevalent among Asian and European populations have been conducive for development, the high diversity of African populations and the low diversity of Native American populations have been detrimental for the development of these regions.

A very interesting paper. As can be seen in the link, it receives the usual backlash of dumbness.

The interesting thing about this that wasn't explored – even though it screamed to be explored – is how it works together with Lynn's worldwide IQ data. Lynn's cold-climate theory has difficulties explaining why the Arctic peoples are not smarter than they are. They are by no means dumb like Africans, but they should be smarter than they are going by the latitude and climate theories. I suggested that this might be due to inbreeding in small populations. Perhaps. Perhaps it's due to less genetic variation. It should be possible to run a multiple regression analysis and see how these together explain IQ and income per capita.

The theory behind this is: First humans were in Africa, then they migrated out to live in other places. These other places differed in environment by coldness among other things. Those that lived in colder places were under (stronger) selection pressure for intelligence. How fast this adaptation happens is controlled by population size and genetic variation in the populations.
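The multiple regression I have in mind is straightforward once the columns are lined up per country. A sketch with invented data, using ordinary least squares via statsmodels; the predictor names and values are placeholders to be replaced with the real per-country figures (e.g. winter temperature and the Ashraf & Galor heterozygosity estimates):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100  # invented "countries"

# Invented predictor columns; replace with real values per country.
winter_temp = rng.normal(5, 12, n)             # mean winter temperature, °C
genetic_diversity = rng.normal(0.7, 0.05, n)   # expected heterozygosity

# Invented outcome, just so the script runs end to end.
national_iq = 95 - 0.4 * winter_temp + rng.normal(0, 4, n)

X = sm.add_constant(np.column_stack([winter_temp, genetic_diversity]))
model = sm.OLS(national_iq, X).fit()
print(model.summary())
```

The same design matrix could be reused with per-capita income as the outcome to see how much each predictor contributes there.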

Very strange that the paper does not even cite Lynn or mention IQ anywhere. These seem like obvious candidates for explaining differences in income per capita.