Archive for the ‘Economics’ Category

This book is very pop-sci and can be read in a day by any reasonably fast reader. It doesn't contain much new information for anyone who has read a few books on the topic. As can be seen below, it has a lot of nonsense/errors, since the author is clearly not used to this area of science. It is not recommended except as a light introduction for people with political objections to these facts.

gen.lib.rus.ec/book/index.php?md5=7a48b9a42d89294ca1ade9f76e26a63c

www.goodreads.com/book/show/18667960-a-troublesome-inheritance?from_search=true

 

But a drawback of the system is its occasional drift toward extreme conservatism. Researchers get attached to the view of their field they grew up with and, as they grow older, they may gain the influence to thwart change. For 50 years after it was first proposed, leading geophysicists strenuously resisted the idea that the continents have drifted across the face of the globe. “Knowledge advances, funeral by funeral,” the economist Paul Samuelson once observed.

 

Wrong quote origin. en.wikiquote.org/wiki/Max_Planck

>A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

 

-

Academics, who are obsessed with intelligence, fear the discovery of a gene that will prove one major race is more intelligent than another. But that is unlikely to happen anytime soon. Although intelligence has a genetic basis, no genetic variants that enhance intelligence have yet been found. The reason, almost certainly, is that there are a great many such genes, each of which has too small an effect to be detectable with present methods.8 If researchers should one day find a gene that enhances intelligence in East Asians, say, they can hardly argue on that basis that East Asians are more intelligent than other races, because hundreds of similar genes remain to be discovered in Europeans and Africans.

Even if all the intelligence-enhancing variants in each race had been identified, no one would try to compute intelligence on the basis of genetic information: it would be far easier just to apply an intelligence test. But IQ tests already exist, for what they may be worth.

 

We have found a number of SNPs already. And we have already begun counting them in racial groups. See e.g.: openpsych.net/OBG/2014/05/opposite-selection-pressures-on-stature-and-intelligence-across-human-populations/

 

-

 

It is social behavior that is of relevance for understanding pivotal—and otherwise imperfectly explained—events in history and economics. Although the emotional and intellectual differences between the world’s peoples as individuals are slight enough, even a small shift in social behavior can generate a very different kind of society. Tribal societies, for instance, are organized on the basis of kinship and differ from modern states chiefly in that people’s radius of trust does not extend too far beyond the family and tribe. But in this small variation is rooted the vast difference in political and economic structures between tribal and modern societies. Variations in another genetically based behavior, the readiness to punish those who violate social rules, may explain why some societies are more conformist than others.

 

See: www.goodreads.com/book/show/3026168-the-expanding-circle

 

-

 

The lure of Galton’s eugenics was his belief that society would be better off if the intellectually eminent could be encouraged to have more children. What scholar could disagree with that? More of a good thing must surely be better. In fact it is far from certain that this would be a desirable outcome. Intellectuals as a class are notoriously prone to fine-sounding theoretical schemes that lead to catastrophe, such as Social Darwinism, Marxism or indeed eugenics.

By analogy with animal breeding, people could no doubt be bred, if it were ethically acceptable, so as to enhance specific desired traits. But it is impossible to know what traits would benefit society as a whole. The eugenics program, however reasonable it might seem, was basically incoherent.

 

Obviously wrong: it is not impossible to know which traits would benefit society as a whole; better health and higher intelligence are obvious candidates.

 

-

 

The principal organizer of the new eugenics movement was Charles Davenport. He earned a doctorate in biology from Harvard and taught zoology at Harvard, the University of Chicago, and the Brooklyn Institute of Arts and Sciences Biological Laboratory at Cold Spring Harbor on Long Island. Davenport’s views on eugenics were motivated by disdain for races other than his own: “Can we build a wall high enough around this country so as to keep out these cheaper races, or will it be a feeble dam . . . leaving it to our descendants to abandon the country to the blacks, browns and yellows and seek an asylum in New Zealand?” he wrote.9

 

Well, about that… In this century, Europeans will be <50% of the US population. I wonder if the sociologists will then stop talking about “minorities”, as if that somehow makes a difference.

 

-

 

One of the most dramatic experiments on the genetic control of aggression was performed by the Soviet scientist Dmitriy Belyaev. From the same population of Siberian gray rats he developed two strains, one highly sociable and the other brimming with aggression. For the tame rats, the parents of each generation were chosen simply by the criterion of how well they tolerated human presence. For the ferocious rats, the criterion was how adversely they reacted to people. After many generations of breeding, the first strain was now so tame that when visitors entered the room where the rats were caged, the animals would press their snouts through the bars to be petted. The other strain could not have been more different. The rats would hurl themselves screaming toward the intruder, thudding ferociously against the bars of their cage.12

 

Didn't know this one. The ref is:

Nicholas Wade, “Nice Rats, Nasty Rats: Maybe It’s All in the Genes,” New York Times, July 25, 2006, www.nytimes.com/2006/07/25/health/25rats.html?pagewanted=all&_r=0 (accessed Sept. 25, 2013)

 

-

 

Rodents and humans use many of the same genes and brain regions to control aggression. Experiments with mice have shown that a large number of genes are involved in the trait, and the same is certainly true of people. Comparisons of identical twins raised together and separately show that aggression is heritable. Genes account for between 37% and 72% of the heritability, the variation of the trait in a population, according to various studies. But very few of the genes that underlie aggression have yet been identified, in part because when many genes control a behavior, each has so small an effect that it is hard to detect. Most research has focused on genes that promote aggression rather than those at the other end of the behavioral spectrum.

 

The sentence about genes accounting for 37–72% of the heritability is nonsensical: heritability is itself the share of trait variance attributable to genes, so genes cannot account for a percentage of it. The intended claim is presumably that heritability estimates range from 37% to 72%.

 

-

 

Standing in sharp contrast to the economists’ working assumption that people the world over are interchangeable units is the idea that national disparities in wealth arise from differences in intelligence. The possibility should not be dismissed out of hand: where individuals are concerned, IQ scores do correlate, on average, with economic success, so it is not unreasonable to inquire if the same might be true of countries.

 

The marked sentence (presumably “IQ scores do correlate, on average, with economic success”) is nonsensical: a correlation is already an average, population-level statement, so “on average” adds nothing.

 

-

 

Turning to economic indicators, they find that national IQ scores have an extremely high correlation (83%) with economic growth per capita and also associate strongly with the rate of economic growth between 1950 and 1990 (64% correlation).44

 

More conceptual confusion: correlations are not percentages (presumably r = .83 and r = .64), and the first figure is presumably the correlation with GDP per capita, a level, not with “economic growth per capita”.

 

-

 

And indeed with Lynn and Vanhanen’s correlations, it is hard to know which way the arrow of causality may be pointing, whether higher IQ makes a nation wealthier or whether a wealthier nation enables its citizens to do better on IQ tests. The writer Ron Unz has pointed out from Lynn and Vanhanen’s own data examples in which IQ scores increase 10 or more points in a generation when a population becomes richer, showing clearly that wealth can raise IQ scores significantly. East German children averaged 90 in 1967 but 99 in 1984. In West Germany, which has essentially the same population, averages range from 99 to 107. This 17 point range in the German population, from 90 to 107, was evidently caused by the alleviation of poverty, not genetics.

 

Ron Unz, the cherry picker. conservativetimes.org/?p=11790

 

-

 

East Asia is a vast counterexample to the Lynn/Vanhanen thesis. The populations of China, Japan and Korea have consistently higher IQs than those of Europe and the United States, but their societies, despite their many virtues, are not obviously more successful than those of Europe and its outposts. Intelligence can’t hurt, but it doesn’t seem a clear arbiter of a population’s economic success. What is it then that determines the wealth or poverty of nations?

 

No. But it does disprove the claim that national IQs are just a product of GDP. The oil states have low IQs; they had them both before and after they got rich on oil, and they will still have them in the future when the oil runs out again. Money cannot buy you intelligence (yet).

 

-

 

From about 900 AD to 1700 AD, Ashkenazim were concentrated in a few professions, notably moneylending and later tax farming (give the prince his money up front, then extract the taxes due from his subjects). Because of the strong heritability of intelligence, the Utah team calculates that 20 generations, a mere 500 years, would be sufficient for Ashkenazim to have developed an extra 16 points of IQ above that of Europeans. The Utah team assumes that the heritability of intelligence is 0.8, meaning that 80% of the variance, the spread between high and low values in a population, is due to genetics. If the parents of each generation have an IQ of just 1 point above the mean, then average IQ increases by 0.8% per generation. If the average human generation time in the Middle Ages was 25 years, then in 20 human generations, or 500 years, Ashkenazi IQ would increase by 20 x 0.8 = 16 IQ points.

 

More conceptual confusion. One cannot use % on IQs because IQ is not a ratio scale, and hence division makes no sense; the intended claim is an increase of 0.8 IQ points per generation, not 0.8%. en.wikipedia.org/wiki/Levels_of_measurement#Comparison
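For what it's worth, the intended arithmetic is just the breeder's equation, response = heritability × selection differential, in IQ points. A minimal sketch, assuming the book's numbers (h² = 0.8, a 1-point selection differential, 20 generations):

```python
# Minimal sketch of the selection-response arithmetic in the quote
# (breeder's equation R = h^2 * S), using the book's assumed numbers.
# The response is in IQ points, not percent.
h2 = 0.8          # assumed narrow-sense heritability of IQ
S = 1.0           # selection differential: parents 1 IQ point above the population mean
generations = 20  # about 500 years at 25 years per generation

response_per_generation = h2 * S                   # 0.8 IQ points per generation
total_gain = response_per_generation * generations
print(total_gain)                                  # 16.0 IQ points
```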

Bryan Caplan – The Myth of the Rational Voter: Why Democracies Choose Bad Policies (Bookos.org)

 

This is a very interesting book. The most interesting I've read in a while.

 

-

 

If neither way of verifying the existence of preferences over beliefs appeals to you, a final one remains. Reverse the direction of reasoning. Smoke usually means fire. The more bizarre a mistake is, the harder it is to attribute to lack of information. Suppose your friend thinks he is Napoleon. It is conceivable that he got an improbable coincidence of misleading signals sufficient to convince any of us. But it is awfully suspicious that he embraces the pleasant view that he is a world-historic figure, rather than, say, Napoleon’s dishwasher. Similarly, suppose an adult sees trade as a zero-sum game. Since he experiences the opposite every day, it is hard to blame his mistake on “lack of information.” More plausibly, like blaming your team’s defeat on cheaters, seeing trade as disguised exploitation soothes those who dislike the market’s outcome.

 

A common problem with reincarnation reports too: people remember having been someone important, not Napoleon's dishwasher. Also: en.wikipedia.org/wiki/Emperor_Norton

 

-

 

In extreme cases, mistaken beliefs are fatal. A baby-proofed house illustrates many errors that adults cannot afford to make. It is dangerous to think that poisonous substances are candy. It is dangerous to reject the theory of gravity at the top of the stairs. It is dangerous to hold that sticking forks in electrical sockets is harmless fun.

But false beliefs do not have to be deadly to be costly. If the price of oranges is 50 cents each, but you mistakenly believe it is a dollar, you buy too few oranges. If bottled water is, contrary to your impression, neither healthier nor better-tasting than tap water, you may throw hundreds of dollars down the drain. If your chance of getting an academic job is lower than you guess, you could waste your twenties in a dead-end Ph.D. program.

 

There was a recent Danish study on the quality of bottled water vs. tap water, and they were found to be the same. Bottled water is a serious waste of money. www.bt.dk/test/stor-test-kildevand-er-det-rene-snyd

 

-

 

Mosca and Jihad. In the Jain example, stubborn belief leads to discomfort. Gaetano Mosca presents a case where stubborn belief leads to death.

Mohammed, for instance, promises paradise to all who fall in a holy war. Now if every believer were to guide his conduct by that assurance in the Koran, every time a Mohammedan army found itself faced by unbelievers it ought either to conquer or to fall to the last man. It cannot be denied that a certain number of individuals do live up to the letter of the Prophet’s word, but as between defeat and death followed by eternal bliss, the majority of Mohammedans normally elect defeat.45

 

Yes, religious people are irrational, even about their own irrational beliefs: chaospet.com/2008/10/08/110-jesus-loves-abortion/

 

they should also try to get themselves killed as soon as possible. After all, heaven is infinitely good, so it’s obviously infinitely better than being on earth. An infinite improvement!

 

-

 

If you listen to your fellow citizens, you get the impression that they disagree. How many times have you heard, “Every vote matters”? But people are less credulous than they sound. The infamous poll tax—which restricted the vote to those willing to pay for it—provides a clean illustration. If individuals acted on the belief that one vote makes a big difference, they would be willing to pay a lot to participate. Few are. Historically, poll taxes significantly reduced turnout.65 There is little reason to think that matters are different today. Imagine setting a poll tax to reduce presidential turnout from 50% to 5%. How high would it have to be? A couple hundred dollars? What makes the poll tax alarming is that most of us subconsciously know that most of us subconsciously know that one vote does not count.

 

Citizens often talk as if they personally have power over electoral outcomes. They deliberate about their options as if they were ordering dinner. But their actions tell a different tale: They expect to be served the same meal no matter what they “order.”

 

What does this imply about the material price a voter pays for political irrationality? Let D be the difference between a voter’s willingness to pay for policy A instead of policy B. Then the expected cost of voting the wrong way is not D, but the probability of decisiveness p times D. If p = 0, pD = 0 as well. Intuitively, if one vote cannot change policy outcomes, the price of irrationality is zero.
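The expected-cost claim is easy to make concrete. A minimal sketch, with hypothetical numbers for p and D:

```python
# Expected material cost of voting the wrong way, in the book's notation:
# D = willingness-to-pay difference between the two policies,
# p = probability that one's own vote is decisive.
def expected_cost_of_error(p: float, D: float) -> float:
    return p * D

print(expected_cost_of_error(1e-7, 10_000))  # 0.001: a thousandth of a dollar
print(expected_cost_of_error(0.0, 10_000))   # 0.0: if p = 0, irrationality is free
```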

 

-

 

But rational irrationality does not require Orwellian underpinnings. The psychological interpretation can be seriously toned down without changing the model. Above all, the steps should be conceived as tacit. To get in your car and drive away entails a long series of steps—take out your keys, unlock and open the door, sit down, put the key in the ignition, and so on. The thought processes behind these steps are rarely explicit. Yet we know the steps on some level, because when we observe a would-be driver who fails to take one—by, say, trying to open a locked door without using his key—it is easy to state which step he skipped.

 

Once we recognize that cognitive “steps” are usually tacit, we can enhance the introspective credibility of the steps themselves. The process of irrationality can be recast:

Step 1: Be rational on topics where you have no emotional attachment to a particular answer.
Step 2: On topics where you have an emotional attachment to a particular answer, keep a “lookout” for questions where false beliefs imply a substantial material cost for you.
Step 3: If you pay no substantial material costs of error, go with the flow; believe whatever makes you feel best.
Step 4: If there are substantial material costs of error, raise your level of intellectual self-discipline in order to become more objective.
Step 5: Balance the emotional trauma of heightened objectivity—the progressive shattering of your comforting illusions—against the material costs of error.

 

There is no need to posit that people start with a clear perception of the truth, then throw it away. The only requirement is that rationality remain on “standby,” ready to engage when error is dangerous.

 

Relevant to the ethics of belief:

 

ajburger.homestead.com/ethics.html

www.utilitarianism.net/singer/by/200303–.htm

 

-

 

So Classical Public Choice’s stories about rational ignorance prove too much. But not much too much. By any absolute measure, average levels of political knowledge are low.8 Less than 40% of American adults know both of their senators’ names.9 Slightly fewer know both senators’ parties—a particularly significant finding given its oft-cited informational role.10 Much of the public has forgotten—or never learned—the elementary and unchanging facts taught in every civics class. About half knows that each state has two senators, and only a quarter knows the length of their terms in office.11 Familiarity with politicians’ voting records and policy positions is predictably close to nil even on high-profile issues, but amazingly good on fun topics irrelevant to policy. As Delli Carpini and Keeter remark:

 

During the 1992 presidential campaign 89 percent of the public knew that Vice President Quayle was feuding with the television character Murphy Brown, but only 19 percent could characterize Bill Clinton’s record on the environment. . . 86 percent of the public knew that the Bushes’ dog was named Millie, yet only 15 percent knew that both presidential candidates supported the death penalty. Judge Wapner (host of the television series “People’s Court”) was identified by more people than were Chief Justices Burger or Rehnquist.1

 

sigh!

 

-

 

Apparently irrational cultural beliefs are quite remarkable: They do not appear irrational by slightly departing from common sense, or timidly going beyond what the evidence allows. They appear, rather, like downright provocations against common sense rationality.
—Richard Shweder1

 

-

 

Economists’ love of qualification is notorious, but most doubt that the protechnology position needs to be qualified. Technology often creates new jobs; without the computer, there would be no jobs in computer programming or software development. But the fundamental defense of labor-saving technology is that employing more workers than you need wastes valuable labor. If you pay a worker to twiddle his thumbs, you could have paid him to do something socially useful instead.

Economists add that market forces readily convert this potential social benefit into an actual one. After technology throws people out of work, they have an incentive to find a new use for their talents. Cox and Alm aptly describe this process as “churn”: “Through relentless turmoil, the economy re-creates itself, shifting labor resources to where they’re needed, replacing old jobs with new ones.”75 They illustrate this process with history’s most striking example: The drastic decline in agricultural employment:

 

In 1800, it took nearly 95 of every 100 Americans to feed the country. In 1900, it took 40. Today, it takes just 3…. The workers no longer needed on farms have been put to use providing new homes, furniture, clothing, computers, pharmaceuticals, appliances, medical assistance, movies, financial advice, video games, gourmet meals, and an almost dizzying array of other goods and services. . . . What we have in place of long hours in the fields is the wealth of goods and services that come from allowing the churn to work, wherever and whenever it might occur.76

 

These arguments sound harsh. That is part of the reason why they are so unpopular: people would rather feel compassionately than think logically. Many economists advocate government assistance to cushion displaced workers’ transition, and retain public support for a dynamic economy. Alan Blinder recommends extended unemployment insurance, retraining, and relocation subsidies.77 Other economists disagree. But almost all economists grant that stopping transitions has a grave cost.

 

While this is correct in general, it does not work in the case where some workers have no possible jobs left, or too few jobs they can perform. Humans are limited by their intelligence; if we can make robots that do what humans do better or equally well at lower cost, this WILL be a problem.

 

-

 

 

Economists are especially critical of the antiforeign outlook because it does not just happen to be wrong; it frequently conflicts with elementary economics. Textbooks teach that total output increases if producers specialize and trade. On an individual level, who could deny it? Imagine how much time it would take to grow your own food, when a few hours’ wages spent at the grocery store feed you for weeks. Analogies between individual and social behavior are at times misleading, but this is not one of those times. International trade is, as Steven Landsburg explains, a technology:

 

There are two technologies for producing automobiles in America. One is to manufacture them in Detroit, and the other is to grow them in Iowa. Everybody knows about the first technology; let me tell you about the second. First you plant seeds, which are the raw materials from which automobiles are constructed. You wait a few months until wheat appears. Then you harvest the wheat, load it onto ships, and sail the ships westward into the Pacific Ocean. After a few months, the ships reappear with Toyotas on them.59

 

Great quote! I will remember that one.

 

-

 

Skipping ahead to the present, Alan Blinder blames opposition to tradable pollution permits on antimarket bias.39 Why let people “pay to pollute,” when we can force them to cease and desist? The textbook answer is that tradable permits get you more pollution abatement for the same cost. The firms able to cheaply cut their emissions do so, selling their excess pollution quota to less flexible polluters. End result: More abatement bang for your buck. A price for pollution is therefore not a pure transfer; it creates incentives to improve environmental quality as cheaply as possible. But noneconomists disagree—including relatively sophisticated policy insiders. Blinder discusses a fascinating survey of 63 environmentalists, congressional staffers, and industry lobbyists. Not one could explain economists’ standard rationale for tradable permits.4

 

Sounds like: citizensclimatelobby.org/carbon-fee-and-dividend-faq/
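The “more abatement bang for your buck” point can be shown with a toy example; the numbers below are made up, not from the book:

```python
# Toy illustration of why tradable permits cut the cost of a fixed abatement target.
# Firm A abates at $10/ton, firm B at $50/ton; the regulator wants 100 tons abated.
cost_a, cost_b = 10, 50
target = 100

# Uniform mandate: each firm abates half the target itself.
uniform_cost = (target / 2) * cost_a + (target / 2) * cost_b   # 3000

# Tradable permits: B pays A to do the cheap abatement, so A abates all 100 tons.
trading_cost = target * cost_a                                  # 1000

print(uniform_cost, trading_cost)  # same abatement, one third of the cost
```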

 

-

 

Good intentions are ubiquitous in politics; what is scarce is accurate beliefs. The pertinent question about selective participation is whether voters are more biased than nonvoters, not whether voters take advantage of nonvoters.59 Empirically, the opposite holds: The median voter is less biased than the median nonvoter. One of the main predictors of turnout, education, substantially increases economic literacy. The other two—age and income—have little effect on economic beliefs.

Though it sounds naive to count on the affluent to look out for the interests of the needy, that is roughly what the data advise. All kinds of voters hope to make society better off, but the well educated are more likely to get the job done.60 Selective turnout widens the gap between what the public gets and what it wants. But it narrows the gap between what the public gets and what it needs.

 

great quote, “Good intentions are ubiquitous in politics; what is scarce is accurate beliefs.”

 

If people don't vote out of self-interest, then representation is not necessary. So complaints about lack of representation are not well founded, at least to some degree.

 

-

 

In financial and betting markets, there are intrinsic reasons why clearer heads wield disproportionate influence.61 People who know more can expect to earn higher profits, giving them a stronger incentive to participate. Furthermore, past winners have more assets to influence the market price. In contrast, the disproportionate electoral influence of the well educated is a lucky surprise. Indeed, since the value of their time is greater, one would expect them to vote less. To be blunt, the problem with democracy is not that clearer heads have surplus influence. The problem is that, compared to financial and betting markets, the surplus is small.

 

More meritocracy is needed, it seems.

 

-

 

If education causes better economic understanding, there is an argument for education subsidies—albeit not necessarily higher subsidies than we have now.62 If the connection is not causal, however, throwing money at education treats a symptom of economic illiteracy, not the disease. You would get more bang for your buck by defunding efforts to “get out the vote.”63 One intriguing piece of evidence against the causal theory is that educational attainment rose substantially in the postwar era, but political knowledge stayed about the same.64

 

This indicates that it is g, not education, that causes greater political knowledge. In other words, g is a common cause of both more education and greater political knowledge. This isn't surprising at all. But it might still be that education has some beneficial effect and the study referred to is faulty in some way, or that perhaps we're doing education wrong. Perhaps we need incentives for people to increase their political knowledge? After all, if greater political knowledge causes better democratic results, and better democratic results cause more economic growth for the country, then it pays for itself. It might even be a good investment.

 

The cite for 64 is: Delli Carpini, Michael, and Scott Keeter. 1996. What Americans Know About Politics and Why It Matters. New Haven: Yale University Press.

 

It can't be found on either Bookos or Libgen, so I can't look it up.

 

 

-

 

Before studying public opinion, many wonder why democracy does not work better. After one becomes familiar with the public’s systematic biases, however, one is struck by the opposite question: Why does democracy work as well as it does? How do the unpopular policies that sustain the prosperity of the West survive? Selective participation is probably one significant part of the answer. It is easy to criticize the beliefs of the median voter, but at least he is less deluded than the median nonvoter.

 

lol’d

 

-

 

If voters are systematically mistaken about what policies work, there is a striking implication: They will not be satisfied by the politicians they elect. A politician who ignores the public’s policy preferences looks like a corrupt tool of special interests. A politician who implements the public’s policy preferences looks incompetent because of the bad consequences. Empirically, the shoe fits: In the GSS, only 25% agree that “People we elect to Congress try to keep the promises they have made during the election,” and only 20% agree that “most government administrators can be trusted to do what is best for the country.”71 Why does democratic competition yield so few satisfied customers? Because politicians are damned if they do and damned if they don’t. The public calls them venal for failing to deliver the impossible.

 

-

 

As in economics, laymen reject the basics, not merely details. Toxicologists are vastly more likely than the public to affirm that “use of chemicals has improved our health more than it has harmed it,” to deny that natural chemicals are less harmful than man-made chemicals, and to reject the view that “it can never be too expensive to reduce the risks associated with chemicals.”81 While critics might like to impugn the toxicologists’ objectivity, it is hard to take such accusations seriously. The public’s views are often patently silly, and toxicologists who work in industry, academia, and regulatory bureaus largely see eye to eye.82

 

seems worth looking up these studies.

 

81. Kraus, Nancy, Torbjörn Malmfors, and Paul Slovic. “Intuitive toxicology: Expert and lay judgments of chemical risks.” Risk analysis 12.2 (1992): 215-232.

 

82. Lichter and Rothman (1999) similarly document that cancer researchers’ ideology has little effect on their scientific judgment. Liberal cancer researchers who do not work in the private sector still embrace their profession’s contrarian views. “As a group, the experts—whether conservative or liberal, Democratic or Republican—viewed cancer risks along roughly the same lines. Thus, their perspectives on this topic do not appear to be ‘contaminated’ by either narrow self-interest or broader ideological commitments” (1999: 116).

 

 

 

-

 

Why then does environmental policy put as much emphasis on dosage as it does? Selective participation is probably part of the story. Mirroring my results, Kraus, Malmfors, and Slovic (1992) find that education makes people think like toxicologists.84 The bulk of the explanation, though, is probably that voters care about economic well-being as well as safety from toxic substances. Moving from low dosage to zero is expensive. It might absorb all of GDP. This puts a democratic leader in a tight spot. If he embraces the public’s doseless worldview and legislates accordingly, it would spark economic disaster. Over 60% of the public agrees that “It can never be too expensive to reduce the risks associated with chemicals,”85 but the leader who complied would be a hated scapegoat once the economy fell to pieces. On the other hand, a leader who dismisses every low-dose scare as “unscientific” and “paranoid” would soon be a reviled symbol of pedantic insensitivity. Given their incentives, politicians cannot disregard the public’s misconceptions, but they often drag their feet.

 

Nowhere is this as clear as with pesticides and radiation. The public's extreme fear of these does not at all mirror the scientific evidence about their harmfulness at low dosages.

 

-

 

Leaders’ incentive to rationally assess the effects of policy might be perverse, not just weak. Machiavelli counsels the prince “to do evil if constrained” but at the same time “take great care that nothing goes out of his mouth which is not full of” “mercy, faith, integrity, humanity and religion.” One can freely play the hypocrite because “everybody sees what you appear to be, few feel what you are, and those few will not dare oppose themselves to the many.”10 Yet, contra Machiavelli, psychologists have documented humans’ real if modest ability to detect dishonesty from body language, tone of voice, and more.11 George Costanza memorably counseled Jerry Seinfeld, “Just remember, it’s not a lie if you believe it.”12 The honestly mistaken politician appears more genuine because he is more genuine. This gives leaders who sincerely share their constituents’ policy views a competitive advantage over Machiavellian rivals.13

 

I've sometimes heard the claim that privately, politicians really do acknowledge that e.g. the war on drugs does not work and is counterproductive, but that they go along with voter opinion anyway. Perhaps this isn't true. Perhaps the politicians really are as deluded as the voters? Or even more so! Polls in Denmark show that politicians are firmly against legalization, while the public/voters are slightly positive.

 

-

 

To get ahead in politics, leaders need a blend of naive populism and realistic cynicism. No wonder the modal politician has a law degree. Dye and Zeigler report that “70 percent of the presidents, vice presidents, and cabinet officers of the United States and more than 50 percent of the U.S. senators and House members” have been lawyers.14 The economic role of government has greatly expanded since the New Deal, but the percentage of congressmen with economic training remains negligible.15 Economic issues are important to voters, but they do not want politicians with economic expertise—especially not ones who lecture them and point out their confusions.

 

no wonder they think new laws can solve everything…

 

-

 

It helps to sell the right kind of favors. Like a journalist with an ax to grind, a shrewd politician moves along the margins of voter indifference. The public is protectionist, but rarely has strong opinions about which industries need help. This is a great opportunity for a politician and a struggling industry to make a deal. Steel manufacturers could pay a politician to take (a) a popular stand against foreigners combined with (b) a not unpopular stand for American steel. In maxim form: Do what the public wants when it cares; take bids from interested parties when it doesn’t. Bear in mind, though, that the important thing is not how burdensome a concession is, but how burdensome voters perceive it to be.

 

Always lean to the green, as it is said in Congress. www.huffingtonpost.com/lawrence-lessig/neoprogressives_b_704715.html

 

-

 

Consider the insurance market failure known as “adverse selection.” If people who want insurance know their own riskiness, but insurers only know average riskiness, the market tends to shrink. Low-risk people drop out, which raises consumers’ average riskiness, which raises prices, which leads more low-risk customers to drop out.52 In the worst-case scenario, the market “unravels.” Prices get so high that no one buys insurance, and consumers get so risky that firms cannot afford to sell for less.
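The unravelling dynamic is simple enough to simulate. A minimal sketch with made-up risk levels (not from the book):

```python
# Adverse-selection "unravelling" in miniature. Each consumer knows their own
# expected annual loss; the insurer can only charge the average loss of whoever
# is still in the market. Anyone whose own risk is below the premium drops out,
# which pushes the premium up and drives out the next-lowest risks in turn.
risks = [100, 200, 300, 400, 500]   # hypothetical expected losses per consumer

market = list(risks)
while market:
    premium = sum(market) / len(market)
    print(f"premium {premium:.0f} with {len(market)} consumers in the market")
    stayers = [r for r in market if r >= premium]
    if len(stayers) == len(market):   # no one else drops out; the shrunken market stabilizes
        break
    market = stayers
```

With any loading fee added on top of the actuarially fair premium, even the last consumer would drop out and the market would unravel completely.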

 

Interesting. This should happen to some degree because of the new consumer genomics. It may also be illegal for insurance companies to use known information to set rates. For instance, feminists' ideas about equality of the sexes had the result that it became illegal in the EU to set rates conditional on sex. This meant that prices rose for women and fell for men, even though men cause most of the accidents.

 

ec.europa.eu/justice/newsroom/gender-equality/news/121220_en.htm

 

-

 

The main upshot of my analysis of democracy is that it is a good idea to rely more on private choice and the free market. But what—if anything—can be done to improve outcomes, taking the supremacy of democracy over the market as fixed? The answer depends on how flexibly you define “democracy.” Would we still have a “democracy” if you needed to pass a test of economic literacy to vote? If you needed a college degree? Both of these measures raise the economic understanding of the median voter, leading to more sensible policies. Franchise restrictions were historically used for discriminatory ends, but that hardly implies that they should never be used again for any reason. A test of voter competence is no more objectionable than a driving test. Both bad driving and bad voting are dangerous not merely to the individual who practices them, but to innocent bystanders. As Frederic Bastiat argues, “The right to suffrage rests on the presumption of capacity”:

 

And why is incapacity a cause of exclusion? Because it is not the voter alone who must bear the consequences of his vote; because each vote involves and affects the whole community; because the community clearly has the right to require some guarantee as to the acts on which its welfare and existence depend.56

 

A more palatable way to raise the economic literacy of the median voter is by giving extra votes to individuals or groups with greater economic literacy. Remarkably, until the passage of the Representation of the People Act of 1949, Britain retained plural voting for graduates of elite universities and business owners. As Speck explains, “Graduates had been able to vote for candidates in twelve universities in addition to those in their own constituencies, and businessmen with premises in a constituency other than their own domicile could vote in both.”57 Since more educated voters think more like economists, there is much to be said for such weighting schemes. I leave it to the reader to decide whether 1948 Britain counts as a democracy.

 

wow, never knew this!

 

-

 

Since well-educated people are better voters, another tempting way to improve democracy is to give voters more education. Maybe it would work. But it would be expensive, and as mentioned in the previous chapter, education may be a proxy for intelligence or curiosity. A cheaper strategy, and one where a causal effect is more credible, is changing the curriculum. Steven Pinker argues that schools should try to “provide students with the cognitive skills that are most important for grasping the modern world and that are most unlike the cognitive tools they are born with,” by emphasizing “economics, evolutionary biology, and probability and statistics.”60 Pinker essentially wants to give schools a new mission: rooting out the biased beliefs that students arrive with, especially beliefs that impinge on government policy.61 What should be cut to make room for the new material?

 

There are only twenty-four hours in a day, and a decision to teach one subject is also a decision not to teach another one. The question is not whether trigonometry is important, but whether it is more important than statistics; not whether an educated person should know the classics, but whether it is more important for an educated person to know the classics than elementary economics.62

 

Indeed

 

-

 

 

The Signal and the Noise: Why So Many Predictions Fail – but Some Don't. Nate Silver. 544 pp.

 

It is a pretty interesting book, especially because it covers some areas of science not usually covered in pop-sci (geology, meteorology), and I learned a lot. It is also clearly written and easy to read, which speeds up reading, making the 450-ish pages rather quick to devour. From a learning perspective this is great, as it allows for faster learning. It should also be mentioned that it has a lot of very useful illustrations, which I shared on my social networks while reading it.

 

“Fortunately, Dustin is really cocky, because if he was the kind of person who was intimidated—if he had listened to those people—it would have ruined him. He didn’t listen to people. He continued to dig in and swing from his heels and eventually things turned around for him.”

Pedroia has what John Sanders calls a “major league memory”—which is to say a short one. He isn’t troubled by a slump, because he is damned sure that he’s playing the game the right way, and in the long run, that’s what matters. Indeed, he has very little tolerance for anything that distracts him from doing his job. This doesn’t make him the most generous human being, but it is exactly what he needs in order to play second base for the Boston Red Sox, and that’s the only thing that Pedroia cares about.

“Our weaknesses and our strengths are always very intimately connected,” James said. “Pedroia made strengths out of things that would be weaknesses for other players.”

 

This sounds like low agreeableness to me. I wonder if Big Five can predict baseball success?

 

-

 

The statistical reality of accuracy isn’t necessarily the governing paradigm when it comes to commercial weather forecasting. It’s more the perception of accuracy that adds value in the eyes of the consumer.

For instance, the for-profit weather forecasters rarely predict exactly a 50 percent chance of rain, which might seem wishy-washy and indecisive to consumers.41 Instead, they’ll flip a coin and round up to 60, or down to 40, even though this makes the forecasts both less accurate and less honest.42

 

Floehr also uncovered a more flagrant example of fudging the numbers, something that may be the worst-kept secret in the weather industry. Most commercial weather forecasts are biased, and probably deliberately so. In particular, they are biased toward forecasting more precipitation than will actually occur43—what meteorologists call a “wet bias.” The further you get from the government’s original data, and the more consumer-facing the forecasts, the worse this bias becomes. Forecasts “add value” by subtracting accuracy.

 

That's interesting. Never heard of this.
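The claim that rounding a calibrated 50% forecast to 60% makes it less accurate can be checked with the expected Brier score; a small sketch, assuming the true chance of rain really is 50%:

```python
# Expected Brier score of reporting probability `reported` when the true
# probability of rain is `true_prob`: lower is better.
def expected_brier(reported: float, true_prob: float) -> float:
    return true_prob * (reported - 1) ** 2 + (1 - true_prob) * reported ** 2

print(expected_brier(0.5, 0.5))  # 0.25 - the honest, calibrated forecast
print(expected_brier(0.6, 0.5))  # 0.26 - the "decisive" rounded forecast scores worse
```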

 

-

 

This logic is a little circular. TV weathermen say they aren’t bothering to make accurate forecasts because they figure the public won’t believe them anyway. But the public shouldn’t believe them, because the forecasts aren’t accurate.

This becomes a more serious problem when there is something urgent—something like Hurricane Katrina. Lots of Americans get their weather information from local sources49 rather than directly from the Hurricane Center, so they will still be relying on the goofball on Channel 7 to provide them with accurate information. If there is a mutual distrust between the weather forecaster and the public, the public may not listen when they need to most.

 

Nicely illustrates the importance of honesty in reporting data, even on local TV.

 

-

 

In fact, the actual value for GDP fell outside the economists’ prediction interval six times in eighteen years, or fully one-third of the time. Another study,18 which ran these numbers back to the beginnings of the Survey of Professional Forecasters in 1968, found even worse results: the actual figure for GDP fell outside the prediction interval almost half the time. There is almost no chance19 that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions.

 

In reality, when a group of economists give you their GDP forecast, the true 90 percent prediction interval—based on how these forecasts have actually performed20 and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent).*

 

When you hear on the news that GDP will grow by 2.5 percent next year, that means it could quite easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving. The old joke about economists’ having called nine out of the last six recessions correctly has some truth to it; one actual statistic is that in the 1990s, economists predicted only 2 of the 60 recessions around the world a year ahead of time.21

 

And this is why we can't have nice things, I mean macroeconomics.
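The interval arithmetic in the quote is straightforward; a quick check, taking the ±3.2-point margin as given:

```python
# The empirically calibrated 90% prediction interval around a GDP forecast,
# using the +/- 3.2 point margin quoted above.
forecast = 2.5
margin = 3.2
low, high = forecast - margin, forecast + margin
print(f"{low:.1f} to {high:.1f}")   # -0.7 to 5.7: anything from a serious recession to a boom
```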

 

-

 

I have no idea whether I was really a good player at the very outset. But the bar set by the competition was low, and my statistical background gave me an advantage. Poker is sometimes perceived to be a highly psychological game, a battle of wills in which opponents seek to make perfect reads on one another by staring into one another’s souls, looking for “tells” that reliably betray the contents of the other hands. There is a little bit of this in poker, especially at the higher limits, but not nearly as much as you’d think. (The psychological factors in poker come mostly in the form of self-discipline.) Instead, poker is an incredibly mathematical game that depends on making probabilistic judgments amid uncertainty, the same skills that are important in any type of prediction.

 

The obvious idea is to program computers to play poker for you online. If they play against bad humans, they should bring in a steady flow of cash almost for free.

 

-

The g Factor: The Science of Mental Ability – Arthur R. Jensen, ebook download pdf free

 

This is a very interesting book. Without a doubt the best about intelligence that I have read so far. I definitely recommend reading it if one is interested in psychometrics. It can serve as a long, good, but somewhat dated introduction to the subject. For a shorter introduction, Gottfredson's "Why g Matters" is probably better.

 

 

Quotes and comments below. Red text = quotes.

——-

 

Galton had no tests for obtaining direct measurements of cognitive ability. Yet he tried to estimate the mean levels of mental capacity possessed by different racial and national groups on his interval scale of the normal curve. His estimates—many would say guesses—were based on his observations of people of different races encountered on his extensive travels in Europe and Africa, on anecdotal reports of other travelers, on the number and quality of the inventions and intellectual accomplishments of different racial groups, and on the percentage of eminent men in each group, culled from biographical sources. He ventured that the level of ability among the ancient Athenian Greeks averaged “two grades” higher than that of the average Englishmen of his own day. (Two grades on Galton’s scale is equivalent to 20.9 IQ points.) Obviously, there is no possibility of ever determining if Galton’s estimate was anywhere near correct. He also estimated that African Negroes averaged “at least two grades” (i.e., 1.39σ, or 20.9 IQ points) below the English average. This estimate appears remarkably close to the results for phenotypic ability assessed by culture-reduced IQ tests. Studies in sub-Saharan Africa indicate an average difference (on culture-reduced nonverbal tests of reasoning) equivalent to 1.43σ, or 21.5 IQ points between blacks and whites.8 U.S. data from the Armed Forces Qualification Test (AFQT), obtained in 1980 on large representative samples of black and white youths, show an average difference of 1.36σ (equivalent to 20.4 IQ points)—not far from Galton’s estimate (1.39σ, or 20.9 IQ points).9 But intuition and informed guesses, though valuable in generating hypotheses, are never acceptable as evidence in scientific research. Present-day scientists, therefore, properly dismiss Galton’s opinions on race. Except as hypotheses, their interest is now purely biographical and historical.
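The σ-to-IQ-point conversions in the passage just use the conventional IQ standard deviation of 15 points; a quick check:

```python
# Convert standardized (sigma) gaps to IQ points, assuming SD = 15.
SD_IQ = 15
for gap_in_sd in (1.39, 1.43, 1.36):
    print(gap_in_sd, gap_in_sd * SD_IQ)   # roughly 20.9, 21.5 and 20.4 IQ points
```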

 

Yes there is. First, one can check the historical record to look for dysgenic effects. If the British are less smart than the ancient Greeks, there would probably have been some dysgenic effects somewhere in history. Still, this is not a good method, since the population groups are somewhat different.

 

Second, soon we will know the genes that cause different levels of intelligence. We can then analyze the remains of ancient Greeks to see which genes they had. This should give a pretty good estimate, although not a perfect one, given 1) new mutations have arisen since then, 2) some gene variants have perhaps disappeared, 3) the difficulty of getting a representative sample of ancient Greeks to test, and 4) the problems with getting DNA of good enough quality to run tests on. Still, I don't think these are impossible to overcome, and I predict that some decent estimate can be made.

 

-

 

A General Factor Is Not Inevitable. Factor analysis is not by its nature bound to produce a general factor regardless of the nature of the correlation matrix that is analyzed. A general factor emerges from a hierarchical factor analysis if, and only if, a general factor is truly latent in the particular correlation matrix. A general factor derived from a hierarchical analysis should be based on a matrix of positive correlations that has at least three latent roots (eigenvalues) greater than 1.

For proof that a general factor is not inevitable, one need only turn to studies of personality. The myriad of inventories that measure various personality traits have been subjected to every type of factor analysis, yet no general factor has ever emerged in the personality domain. There are, however, a great many first-order group factors and several clearly identified second-order group factors, or “superfactors” (e.g., introversion-extraversion, neuroticism, and psychoticism), but no general factor. In the abilities domain, on the other hand, a general factor, g, always emerges, provided the number and variety of mental tests are sufficient to allow a proper factor analysis. The domain of body measurements (including every externally measurable feature of anatomy) when factor analyzed also shows a large general factor (besides several small group factors). Similarly, the correlations among various measures of athletic ability show a substantial general factor.
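Jensen's eigenvalue point can be illustrated with toy correlation matrices (made-up numbers, not from the book): when everything intercorrelates positively, one dominant root appears; with a block structure, several comparable roots appear instead.

```python
# Toy illustration: eigenvalues of an "ability-like" correlation matrix
# (all positive intercorrelations) versus a "personality-like" one
# (two unrelated blocks of traits).
import numpy as np

ability = np.full((6, 6), 0.5)
np.fill_diagonal(ability, 1.0)

personality = np.eye(6)
for block in [(0, 1, 2), (3, 4, 5)]:
    for i in block:
        for j in block:
            if i != j:
                personality[i, j] = 0.5

print(np.linalg.eigvalsh(ability)[::-1])      # one large root (3.5), the rest 0.5
print(np.linalg.eigvalsh(personality)[::-1])  # two equal roots (2.0, 2.0), no general factor
```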

 

 

Jensen was wrong about this, although the significance of that is disputed, as far as I can tell. See:

How important is the General Factor of Personality? A General Critique (William Revelle and Joshua Wilt), PDF

 

-

 

In jobs where assurance of competence is absolutely critical, however, such as airline pilots and nuclear reactor operators, government agencies seem to have recognized that specific skills, no matter how well trained, though essential for job performance, are risky if they are not accompanied by a fairly high level of g. For example, the TVA, a leader in the selection and training of reactor operators, concluded that results of tests of mechanical aptitude and specific job knowledge were inadequate for predicting an operator’s actual performance on the job. A TVA task force on the selection and training of reactor operators stated: “intelligence will be stressed as one of the most important characteristics of superior reactor operators. . . . intelligence distinguishes those who have merely memorized a series of discrete manual operations from those who can think through a problem and conceptualize solutions based on a fundamental understanding of possible contingencies.”161 This reminds one of Carl Bereiter’s clever definition of “intelligence” as “what you use when you don’t know what to do.”

 

funny and true

 

-

 

The causal underpinnings of mental development take place at the neurological level even in the absence of any specific environmental inputs such as those that could possibly explain mental growth in something like figure copying in terms of transfer from prior learning. The well-known “Case of Isabel” is a classic example.181 From birth to age six, Isabel was totally confined to a dimly lighted attic room, where she lived alone with her deaf-mute mother, who was her only social contact. Except for food, shelter, and the presence of her mother, Isabel was reared in what amounted to a totally deprived environment. There were no toys, picture books, or gadgets of any kind for her to play with. When found by the authorities, at age six, Isabel was tested and found to have a mental age of one year and seven months and an IQ of about 30, which is barely at the imbecile level. In many ways she behaved like a very young child; she had no speech and made only croaking sounds. When handed toys or other unfamiliar objects, she would immediately put them in her mouth, as infants normally do. Yet as soon as she was exposed to educational experiences she acquired speech, vocabulary, and syntax at an astonishing rate and gained six years of tested mental age within just two years. By the age of eight, she had come up to a mental age of eight, and her level of achievement in school was on a par with her age-mates. This means that her rate of mental development—gaining six years of mental age in only two years—was three times faster than that of the average child. As she approached the age of eight, however, her mental development and scholastic performance drastically slowed down and proceeded thereafter at the rate of an average child. She graduated from high school as an average student.

 

What all this means to the g controversy is that the neurological basis of

information processing continued developing autonomously throughout the six

years of Isabel’s environmental deprivation, so that as soon as she was exposed

to a normal environment she was able to learn those things for which she was

developmentally “ ready” at an extraordinarily fast rate, far beyond the rate for

typically reared children over the period of six years during which their mental

age normally increases from two to eight years. But the fast rate of manifest

mental development slowed down to an average rate at the point where the level

of mental development caught up with the level of neurological development.

Clearly, the rate of mental development during childhood is not just the result

of accumulating various learned skills that transfer to the acquisition of new

skills, but is largely based on the maturation of neural structures.

 

this reminds me of the person who suggested that we delay teaching math in schools for the same reason. it is simply more time-effective, and time is costly, both for the child, who has limited freedom during the time spent in school, and for society, becus that time cud hav been spent on teaching somthing else, or not spent at all, saving money on teachers.

 

the idea is that som math subjects take very long to teach to, say, 8 year olds, but can be taught rapidly to 12 year olds. so, using som invented numbers: instead of spending 10 hours teaching long division to 8 year olds, we cud spend 2 hours teaching it to 12 year olds, thus saving 8 hours that can either be used on somthing that is easily taught to 8 year olds, or simply freed up for non-teaching activities.

 

see: www.inference.phy.cam.ac.uk/sanjoy/benezet/ for the original papers

 

-

 

Perhaps the most problematic test of overlapping neural elements posited by the sampling theory would be to find two (or more) abilities, say, A and B, that are highly correlated in the general population, and then find some individuals in whom ability A is severely impaired without there being any impairment of ability B. For example, looking back at Figure 5.2, which illustrates sampling theory, we see a large area of overlap between the elements in Test A and the elements in Test B. But if many of the elements in A are eliminated, some of its elements that are shared with the correlated Test B will also be eliminated, and so performance on Test B (and also on Test C in this diagram) will be diminished accordingly. Yet it has been noted that there are cases of extreme impairment in a particular ability due to brain damage, or sensory deprivation due to blindness or deafness, or a failure in development of a certain ability due to certain chromosomal anomalies, without any sign of a corresponding deficit in other highly correlated abilities.[22] On this point, behavioral geneticists Willerman and Bailey comment: "Correlations between phenotypically different mental tests may arise, not because of any causal connection among the mental elements required for correct solutions or because of the physical sharing of neural tissue, but because each test in part requires the same 'qualities' of brain for successful performance. For example, the efficiency of neural conduction or the extent of neuronal arborization may be correlated in different parts of the brain because of a similar epigenetic matrix, not because of concurrent functional overlap."[22] A simple analogy to this would be two independent electric motors (analogous to specific brain functions) that perform different functions both running off the same battery (analogous to g). As the battery runs down, both motors slow down at the same rate in performing their functions, which are thus perfectly correlated although the motors themselves have no parts in common. But a malfunction of one machine would have no effect on the other machine, although a sampling theory would have predicted impaired performance for both machines.

 

i know its only an analogy, but whether ther ar one or two motors tapping from one battery might hav an effect on their speed. that depends on the setup, i think.

 

-

 

Gc is most highly loaded in tests based on scholastic knowledge and cultural content where the relation-eduction demands of the items are fairly simple. Here are two examples of verbal analogy problems, both of about equal difficulty in terms of percentage of correct responses in the English-speaking general population, but the first is more highly loaded on Gf and the second is more highly loaded on Gc.

1. Temperature is to cold as Height is to
(a) hot (b) inches (c) size (d) tall (e) weight

2. Bizet is to Carmen as Verdi is to
(a) Aida (b) Elektra (c) Lakme (d) Manon (e) Tosca

 

first one, i wanted to answer <small>, since <cold> is at the bottom of the scale of temperature, so i wanted somthing at the bottom of the scale of height. ther is no such option, but tall is also on the scale of height, just as cold is on the scale of temperature. with no better option, i went with (d), which was correct.

 

second one, however, made no sense to me. i did look for patterns in spelling, vowels, length, etc., found nothing. i then googled it. its composers and their operas.

en.wikipedia.org/wiki/Georges_Bizet

en.wikipedia.org/wiki/Carmen

en.wikipedia.org/wiki/Giuseppe_Verdi

en.wikipedia.org/wiki/Aida

 

-

 

Another blood variable of interest is the amount of uric acid in the blood (serum urate level). Many studies have shown it to have only a slight positive correlation with IQ. But it is considerably more correlated with measures of ambition and achievement. Uric acid, which has a chemical structure similar to caffeine, seems to act as a brain stimulant, and its stimulating effect over the course of the individual's life span results in more notable achievements than are seen in persons of comparable IQ, social and cultural background, and general life-style, but who have a lower serum urate level. High school students with elevated serum urate levels, for example, obtain higher grades than their IQ-matched peers with an average or below-average serum urate level, and, amusingly, one study found a positive correlation between university professors' serum urate levels and their publication rates. The undesirable aspect of high serum urate level is that it predisposes to gout. In fact, that is how the association was originally discovered. The English scientist Havelock Ellis, in studying the lives and accomplishments of the most famous Britishers, discovered that they had a much higher incidence of gout than occurs in the general population.

Asthma and other allergies have a much-higher-than-average frequency in children with higher IQs (over 130), particularly those who are mathematically gifted, and this is an intrinsic relationship. The intellectually gifted show some 15 to 20 percent more allergies than their siblings and parents. The gifted are also more apt to be left-handed, as are the mentally retarded; the reason seems to be that the IQ variance of left-handed persons is slightly greater than that of the right-handed, hence more of the left-handed are found in the lower and upper extremes of the normal distribution of IQ.

Then there are also a number of odd and less-well-established physical correlates of IQ that have each shown up in only one or two studies, such as vital capacity (i.e., the amount of air that can be expelled from the lungs), handgrip strength, symmetrical facial features, light hair color, light eye color, above-average basic metabolic rate (all these are positively correlated with IQ), and being unable to taste the synthetic chemical phenylthiocarbamide (nontasters are higher both in g and in spatial ability than tasters; the two types do not differ in tests of clerical speed and accuracy). The correlations are small and it is not yet known whether any of them are within-family correlations. Therefore, no causal connection with g has been established.

Finally, there is substantial evidence of a positive relation between g and general health or physical well-being.[36] In a very large national sample of high school students (about 10,000 of each sex) there was a correlation of +.381 between a forty-three-item health questionnaire and the composite score on a large number of diverse mental tests, which is virtually a measure of g. By comparison, the correlation between the health index and the students' socioeconomic status (SES) was only +.222. Partialing out g leaves a very small correlation (+.076) between SES and health status. In contrast, the correlation between health and g when SES is partialed out is +.326.

 

how very curius!
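
the "partialing out" above is just the standard first-order partial correlation. as a back-of-envelope exercise of my own (not a figure from the book), one can even back out the g-SES correlation that Jensen does not report, since both partials and the two zero-order correlations are given; it comes out around .4.

```python
# Partial correlation: r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))
# Jensen reports r(health, g) = .381, r(health, SES) = .222, and the two partials,
# but not r(g, SES). Back out the value of r(g, SES) that best reproduces both
# reported partials (my inference, not a reported figure).
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_hg, r_hs = 0.381, 0.222   # health-g and health-SES correlations
best = min((abs(partial_r(r_hs, x, r_hg) - 0.076) +
            abs(partial_r(r_hg, x, r_hs) - 0.326), x)
           for x in [i / 1000 for i in range(1000)])
x = best[1]
print(f"implied r(g, SES) ~ {x:.2f}")
print(f"r(SES, health | g)  = {partial_r(r_hs, x, r_hg):.3f}")  # Jensen: +.076
print(f"r(g, health | SES)  = {partial_r(r_hg, x, r_hs):.3f}")  # Jensen: +.326
```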

 

-

 

Certainly psychometric tests were never constructed with the intention of measuring inbreeding depression. Yet they most certainly do. At least fourteen studies of the effects of inbreeding on mental ability test scores, mostly IQ, have been reported in the literature.[32] Without exception, all of the studies show inbreeding depression both of IQ and of IQ-correlated variables such as scholastic achievement. As predicted by genetic theory, the IQ variance of the inbred is greater than that of the noninbred samples. Moreover, the degree to which IQ is depressed is an increasing monotonic function of the coefficient of inbreeding. The severest effects are seen in the offspring of first-degree incestuous matings (e.g., father-daughter, brother-sister); the effect is much less for first-cousin matings and still less for second-cousin matings. The degree of IQ depression for first cousins is about half a standard deviation (seven or eight IQ points).

In most of these studies, social class and other environmental factors are well controlled. Studies in Muslim populations in the Middle East and India are especially pertinent. Cousin marriages there are more prevalent in the higher social classes, as a means of keeping wealth in family lines, so inbreeding and high SES would tend to have opposite and canceling effects. The observed effect of inbreeding depression on IQ in the studies conducted in these groups, therefore, cannot be attributed to the environmental effects of SES that are often claimed to explain IQ differences between socioeconomically advantaged and disadvantaged groups.

These studies unquestionably show inbreeding depression for IQ and other single measures of mental ability. The next question, then, concerns the extent to which g itself is affected by inbreeding. Inbreeding depression could be mainly manifested in factors other than g, possibly even in each test's specificity. To answer this question, we can apply the method of correlated vectors to inbreeding data based on a suitable battery of diverse tests from which g can be extracted in a hierarchical factor analysis. I performed these analyses[33] for the several large samples of children born to first- and second-cousin matings in Japan, for whom the effects of inbreeding were intensively studied by geneticists William Schull and James Neel (1965). All of the inbred children and comparable control groups of noninbred children were tested on the Japanese version of the Wechsler Intelligence Scale for Children (WISC). The correlations among the eleven subtests of the WISC were subjected to a hierarchical factor analysis, separately for boys and girls, and for different age groups, and the overall average g loadings were obtained as the most reliable estimates of g for each subtest. The analysis revealed the typical factor structure of the WISC: a large g factor and two significant group factors, Verbal and Spatial (Performance). (The Memory factor could not emerge because the Digit Span subtest was not used.) Schull and Neel had determined an index of inbreeding depression on each of the subtests. In each subject sample, the column vector of the eleven subtests' g loadings was correlated with the column vector of the subtests' index of inbreeding depression (ID). (Subtest reliabilities were partialed out of these correlations.) The resulting rank-order correlation between subtests' g loadings and their degree of inbreeding depression was +.79 (p < .025). The correlation of ID with the Verbal factor loadings (independent of g) was +.50 and with the Spatial (or Performance) factor the correlation was -.46. (The latter two correlations are nonsignificant, each with p > .05.) Although this negative correlation of ID with the spatial factor (independent of g) falls short of significance, the negative correlation was found in all four independent samples. Moreover, it is consistent with the hypothesis that spatial visualization ability is affected by an X-linked recessive allele.[34] Therefore, it is probably not a fluke.

A more recent study[35] of inbreeding depression, performed in India, was based entirely on the male offspring of first-cousin parents and a control group of the male offspring of genetically unrelated parents. Because no children of second-cousin marriages were included, the degree of inbreeding depression was considerably greater than in the previous study, which included offspring of second-cousin marriages. The average inbreeding effect on the WISC-R Full Scale IQ was about ten points, or about two-thirds of a standard deviation.[36] The inbreeding index was reported for the ten subtests of the WISC-R used in this study. To apply the method of correlated vectors, however, the correlations among the subtests for this sample are needed to calculate their g loadings. Because these correlations were not reported, I have used the g loadings obtained from a hierarchical factor analysis of the 1,868 white subjects in the WISC-R standardization sample.[37] The column vector of these g loadings and the column vector of the ID index have a rank-order correlation (with the tests' reliability coefficients partialed out) of +.83 (p < .01), which is only slightly larger than the corresponding correlation between the g and ID vectors in the Japanese study.

In sum, then, the g factor significantly predicts the degree to which performance on various mental tests is affected by inbreeding depression, a theoretically predictable effect for traits that manifest genetic dominance. The larger a test's g loading, the greater is the depression of the test scores of the inbred offspring of consanguineous parents, as compared with the scores of noninbred persons. The evidence in these studies of inbreeding rules out environmental variables as contributing to the observed depression of test scores. Environmental differences were controlled statistically, or by matching the inbred and noninbred groups on relevant indices of environmental advantage.

 

pretty large effects. the footnote with the 14 studies mentioned is:

 

Adams & Neel, 1967; Afzal, 1988; Afzal & Sinha, 1984; Agrawal et al., 1984; Badaruddoza & Afzal, 1993; Bashi, 1977; Book, 1957; Carter, 1967; Cohen et al., 1963; Inbaraj & Rao, 1978; Neel et al., 1970; Schull & Neel, 1965; Seemanova, 1971; Slatis & Hoene, 1961.
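
the method of correlated vectors itself is simple to sketch: rank-correlate the subtests' g loadings with their inbreeding-depression indices. the numbers below are made-up placeholders, not the Schull & Neel / Jensen values, and the reliability-partialing step Jensen mentions is omitted, so this is only a sketch of the procedure, not a reproduction of the result.

```python
# Method of correlated vectors, minimal sketch: Spearman rank correlation between
# the vector of subtest g loadings and the vector of inbreeding-depression (ID) indices.
# All numbers are illustrative placeholders, NOT the actual WISC data.
from scipy.stats import spearmanr

subtests  = ["Info", "Comp", "Arith", "Sim", "Vocab", "PictComp",
             "PictArr", "BlockDes", "ObjAssem", "Coding", "Mazes"]
g_loading = [0.74, 0.70, 0.68, 0.72, 0.78, 0.60, 0.58, 0.66, 0.62, 0.45, 0.50]
id_index  = [4.1, 3.8, 3.5, 4.0, 4.5, 2.9, 2.7, 3.3, 3.1, 1.8, 2.2]

rho, p = spearmanr(g_loading, id_index)
print(f"rank-order correlation between g loadings and ID index: {rho:.2f} (p = {p:.3f})")
```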

 

-

 

Semantic Verification Test. The SVT uses the binary response console (Figure 8.3) and a computer display screen. Following the preparatory "beep," a simple statement appears on the screen. The statement involves the relative positions of the three letters A, B, C as they may appear (equally spaced) in a horizontal array. Each trial uses one of the six possible permutations of these three letters chosen at random. The statement appears on the screen for three seconds, allowing more than enough time for the subject to read it. There are fourteen possible statements of the following types: "A after B," "C before A," "A between B and C," "B first," "B last," "C before A and B," "C after B and A"; and the negative form of each of these statements, for instance, "A not after B." Following the three-second appearance of one of these statements, the screen goes blank for one second and then one of the permutations of the letters A B C appears. The subject responds by pressing either the TRUE or FALSE button, depending on whether the positions of the letters does or does not agree with the immediately previous statement.

Although the SVT is the most complex of the many ECTs that have been tried in my lab, the average RT for university students is still less than 1 second. The various "problems" differ widely in difficulty, with average RTs ranging from 650 msec to 1,400 msec. Negative statements take about 200 msec longer than the corresponding positive statements. MT, on the other hand, is virtually constant across conditions, indicating that it represents something other than speed of information processing.

The overall median RT and RTSD as measured in the SVT each correlates about -.50 with scores on the Raven's Advanced Progressive Matrices given without time limit. The average RT on the SVT also shows large differences between Navy recruits and university students,[20] and between academically gifted children and their less gifted siblings.[21] The fact that there is a within-families correlation between RT and IQ indicates that these variables are intrinsically and functionally related.

One study[20] reveals that the average processing time for each of the fourteen types of SVT statements in university students predicts the difficulty level of the statements (in terms of error responses) in children (third-graders) who were given the SVT as a nonspeeded paper-and-pencil test. While the SVT is of such trivial difficulty for college students that individual differences are much more reliably reflected by RT rather than by errors, the SVT items are relatively difficult for young children. Even when they take the SVT as a nonspeeded paper-and-pencil test, young children make errors on about 20 percent of the trials. (The few university students who made even a single error under these conditions, given as a pretest, were screened out.) The fact that the rank order of the children's error rates on the various types of SVT statements closely corresponds to the rank order of the college students' average RTs on the same statements indicates that item difficulty is related to speed of processing, even when the test is nonspeeded.

It appears that if information exceeds a critical level of complexity for the individual, the individual's speed of processing is too slow to handle the information all at once; the system becomes overloaded and processing breaks down, with resulting errors, even for nonspeeded tests on which subjects are told to take all the time they need. There are some items in Raven's Advanced Matrices, for example, that the majority of college students cannot solve with greater than chance success, even when given any amount of time, although the problems do not call for the retrieval of any particular knowledge. As already noted, the scores on such nonspeeded tests are correlated with the speed of information processing in simple ECTs that are easily performed by all subjects in the study.

 

interesting test. the threshold hypothesis is also interesting for makers of IQ tests.
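
the trial logic is easy to mock up. the sketch below is my own reading of the description above (statement wording and sampling details are guesses, not Jensen's actual stimulus program), just to show how simple the task really is despite being his most "complex" ECT.

```python
# Minimal sketch of one SVT trial: a statement about the relative positions of
# A, B, C, then a random permutation, and the correct TRUE/FALSE response.
import random

LETTERS = ("A", "B", "C")

def holds(statement: str, arrangement: tuple) -> bool:
    """Evaluate a statement like 'A not after B' against an arrangement like ('B','A','C')."""
    pos = {letter: i for i, letter in enumerate(arrangement)}
    negated = " not " in statement
    words = statement.replace(" not ", " ").split()
    x = words[0]
    if words[1] == "first":
        result = pos[x] == 0
    elif words[1] == "last":
        result = pos[x] == 2
    elif words[1] == "between":
        y, z = words[2], words[4]
        result = min(pos[y], pos[z]) < pos[x] < max(pos[y], pos[z])
    else:                               # "before"/"after", one or two reference letters
        others = [w for w in words[2:] if w in LETTERS]
        if words[1] == "before":
            result = all(pos[x] < pos[o] for o in others)
        else:
            result = all(pos[x] > pos[o] for o in others)
    return not result if negated else result

statement = random.choice(["A after B", "C before A", "A between B and C",
                           "B first", "B last", "C before A and B", "A not after B"])
arrangement = tuple(random.sample(LETTERS, 3))
answer = "TRUE" if holds(statement, arrangement) else "FALSE"
print(statement, "|", " ".join(arrangement), "->", answer)
```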

 

-

 

There are many other kinds of simple tasks that do not resemble the contents of conventional psychometric tests but that have significant correlations with IQ. Many studies have confirmed Spearman's finding that pitch discrimination is g-loaded, and other musical discriminations, in duration, timbre, rhythmic pattern, pitch interval, and harmony, are correlated with IQ, independently of musical training.[28] The strength of certain optical illusions is also significantly related to IQ.[29] Surprisingly, higher-IQ subjects experience certain illusions more strongly than subjects with lower IQ, probably because seeing the illusion implies a greater amount of mental transformation of the stimulus, and tasks that involve transformation of information (e.g., backward digit span) are typically more g loaded than tasks involving less transformation of the input (e.g., forward digit span). The positive correlation between IQ and susceptibility to illusions is consistent with the fact that susceptibility to optical illusions also increases with age, from childhood to maturity, and then decreases in old age, the same trajectory we see for raw-score performance on IQ tests and for speed and intraindividual consistency of RT in ECTs. The speed and consistency of information processing generally show an inverted U curve across the life span.

 

interesting.

 

-

 

Jensen mentions the en.wikipedia.org/wiki/Yerkes-Dodson_law

interesting. i link to Wikipedia since i think its explanation of the law is better than Jensens, who just briefly mentions it.

 

-

 

[... Localized damage to the brain areas that normally subserve one of these group factors can leave the person severely impaired in the expression of the abilities loaded on the group factor, but with little or no impairment of abilities that are loaded on other group factors or on g.]

A classic example of this is females who are born with a chromosomal anomaly known as Turner's syndrome.[70] Instead of having the two normal female sex chromosomes (designated XX), they lack one X chromosome (hence are designated XO). Provided no spatial visualization tests are included in the IQ battery, the IQs of these women (and presumably their levels of g) are normally distributed and virtually indistinguishable from that of the general population. Yet their performance on all tests that are highly loaded on the spatial-visualization factor is extremely low, typically borderline retarded, even in Turner's syndrome women with verbal IQs above 130. It is as if their level of g is almost totally unreflected in their level of performance on spatial tasks.

It is much harder to imagine the behavior of persons who are especially deficient in all abilities involving g and all of the major group factors, but have only one group factor that remains intact. In our everyday experience, persons who are highly verbal, fluent, articulate, and use a highly varied vocabulary, speaking with perfect syntax and appropriate expression, are judged to be of at least average or probably superior IQ. But there is a rare and, until recently, little-known genetic anomaly, Williams syndrome,[71] in which the above-listed characteristics of high verbal ability are present in persons who are otherwise severely mentally deficient, with IQs averaging about 50. In most ways, Williams syndrome persons appear to behave with no more general capability of getting along in the world than most other persons with similarly low IQs. As adults, they display only the most rudimentary scholastic skills and must live under supervision. Only their spoken verbal ability has been spared by this genetic defect. But their verbal ability appears to be "hollow" with respect to g. They speak in complete, often complex, sentences, with good syntax, and even use unusual words appropriately. (They do surprisingly well on the Peabody Picture Vocabulary Test.) In response to a series of pictures, they can tell a connected and fully elaborated story, accompanied by appropriate, if somewhat exaggerated, emotional expression. Yet they have exceedingly little ability to reason, or to explain or summarize the meaning of what they say. On most spatial ability tests they generally perform on a par with Down syndrome persons of comparable IQ, but they also differ markedly from Down persons in peculiar ways. Williams syndrome subjects are more handicapped than IQ-matched Down subjects in figure copying and block designs.

Comparing Turner's syndrome with Williams syndrome obviously suggests the generalization that a severe deficiency of one group factor in the presence of an average level of g is far less a handicap than an intact group factor in the presence of a very low level of g.

 

never heard of Williams syndrome befor.

 

en.wikipedia.org/wiki/Williams_syndrome

 

-

 

The correlation of IQ with grades and achievement test scores is highest (.60 to .70) in elementary school, which includes virtually the entire child population and hence the full range of mental ability. At each more advanced educational level, more and more pupils from the lower end of the IQ distribution drop out, thereby restricting the range of IQs. The average validity coefficients decrease accordingly: high school (.50 to .60), college (.40 to .50), graduate school (.30 to .40). All of these are quite high, as validity coefficients go, but they permit far less than accurate prediction of a specific individual. (The standard error of estimate is quite large for validity coefficients in this range.)
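
to see just how large that standard error of estimate is, here is a small sketch of my own; the IQ-style scaling (SD = 15) is only for concreteness, since the formula SEE = SD * sqrt(1 - r^2) applies to whatever the criterion scale is.

```python
# How fuzzy is individual prediction at these validities?
# Standard error of estimate: SEE = SD_y * sqrt(1 - r**2), here in SD-15 units.
from math import sqrt

for r in (0.7, 0.6, 0.5, 0.4, 0.3):
    see = 15 * sqrt(1 - r**2)
    print(f"r = {r:.1f}  ->  standard error of estimate ~ {see:.1f} points")
# even at r = .7 the error band is ~10.7 points, about two-thirds of an SD
```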

 

interesting. one thing that i hav been thinking about is that my GPA thruout my life has always been a bit abov average, but not close to the top. given that the intelligence requirement increases with each new step thru the school system, one wud hav expected a drop in GPA relativ to the average, but no such thing happened. in fact, its the other way around. my GPA in the danish elementary school (9th grade) was 9.3, against an average of ~8.1. this includes grades from non-intellectual subjects such as the 'subject' of having nice hand-writing (yes, seriusly). in 10th grade my average was 8.7, against an average of ~6.6. the max is 13 in all cases, altho grades abov 11 wer normally not given.

 

in gymnasiet (roughly high school equivalent), my GPA was 7.8 against an average of 7.0. the lower numbers ar becus the system was changed from a 13-step to a 7-step scale. for comparison, one can also note that i went to HTX, which has lower grades on average. my percentile level is about the 65th.

 

my university grades befor dropping out of filosofy were rather good, lots of 10′s, but i dont know the average, so cant compare. i suspect they were abov average again.

 

-

 

Unless an individual has made the transition from word reading to reading comprehension of sentences and paragraphs, reading is neither pleasurable nor practically useful. Few adults with an IQ of eighty (the tenth percentile of the overall population norm) ever make the transition from word reading skill to reading comprehension. The problem of adult illiteracy (defined as less than a fourth-grade level of reading comprehension) in a society that provides an elementary school education to virtually its entire population is therefore largely a problem of the lower segment of the population distribution of g. In the vast majority of people with low reading comprehension, the problem is not word reading per se, but lack of comprehension. These individuals score about the same on tests of reading comprehension even if the test paragraphs are read aloud to them by the examiner. In other words, individual differences in oral comprehension and in reading comprehension are highly correlated.[21]

 

IQ 80… but the american black average is only about 85. is it really true that ~37% of them ar too dull to learn to read properly, compared with ~10% of whites?
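
those percentages are just the normal curve; a quick check, assuming IQ is normally distributed with SD 15 and the usual group means:

```python
# Share of each group below IQ 80, assuming normality with SD 15.
from statistics import NormalDist

for group, mean in [("white (mean 100)", 100), ("black (mean 85)", 85)]:
    share = NormalDist(mu=mean, sigma=15).cdf(80)
    print(f"{group}: {share:.0%} below IQ 80")
# prints roughly 9% and 37%
```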

 

-

 

Virtually every type of work calls for behavior that is guided by cognitive processes. As all such processes reflect g to some extent, work proficiency is g loaded. The degree depends on the level of novelty and cognitive complexity the job demands. No job is so simple as to be totally without a cognitive component. Several decades of empirical studies have shown thousands of correlations of various mental tests with work proficiency. One of the most important conclusions that can be drawn from all this research is that mental ability tests in general have a higher success rate in predicting job performance than any other variables that have been researched in this context, including (in descending order of average predictive validity) skill testing, reference checks, class rank or grade-point average, experience, interview, education, and interest measures.[22] In recent years, one personality constellation, characterized as "conscientiousness," has emerged near the top of the list (just after general mental ability) as a predictor of occupational success.

 

reminds me that i ought to look into this field of psychology. its called I/O psychology. som time back i talked with a phd (i think) on 4chan who studied that area. he said that if he had his way, he wud just rely on g alone to predict job performance, training etc. he recommended me a textbook, which i found on the internet.

 

Psychology Applied to Work, An Introduction to Industrial and Organizational Psychology – Paul M. Muchinsky

 

it seems decent.

 

-

 

A person cannot perform a job successfully without the specific knowledge required by the job. Possibly such job knowledge could be acquired on the job after a long period of trial-and-error learning. For all but the very simplest jobs, however, trial-and-error learning is simply too costly, both in time and in errors. Job training inculcates the basic knowledge much more efficiently, provided that later on-the-job experience further enhances the knowledge or skills acquired in prior job training. Because knowledge and skill acquisition depend on learning, and because the rate of learning is related to g, it is a reasonable hypothesis that g should be an effective predictor of individuals' relative success in any specific training program.

The best studies for testing this hypothesis have been performed in the armed forces. Many thousands of recruits have been selected for entering different training programs for dozens of highly specialized jobs based on their performance on a variety of mental tests. As the amount of time for training is limited, efficiency dictates assigning military personnel to the various training schools so as to maximize the number who can complete the training successfully and minimize the number who fail in any given specialized school. When a failed trainee must be rerouted to a different training school better suited to his aptitude, it wastes time and money. Because the various schools make quite differing demands on cognitive abilities, the armed services employ psychometric researchers to develop and validate tests to best predict an individual's probability of success in one or another of the various specialized schools.

 

 

one is tempted to say ”common sense”, but apparently, only the military dares to do such things.

 

-

 

A rough analogy may help to make the essential point. Suppose that for some reason it was impossible to measure persons' heights directly in the usual way, with a measuring stick. However, we still could accurately measure the length of the shadow cast by each person when the person is standing outdoors in the sunlight. Provided everyone's shadow is measured at the same time of day, at the same day of the year, and at the same latitude on the earth's surface, the shadow measurements would show exactly the same correlations with persons' weight, shoe size, suit or dress size, as if we had measured everyone directly with a yardstick; and the shadow measurements could be used to predict perfectly whether or not a given person had to stoop when walking through a door that is only 5½ feet high. However, if one group of persons' shadows were measured at 9:00 a.m. and another group's at 10:00 a.m., the pooled measurements would show a much smaller correlation with weight and other factors than if they were all measured at the same time, date, and place, and the measurements would have poor validity for predicting which persons could walk through a 5½-foot door without stooping. We would say, correctly, that these measurements are biased. In order to make them usefully accurate as predictors of a person's weight and so forth, we would have to know the time the person's shadow was measured and could then add or subtract a value that would adjust the measurement so as to make it commensurate with measurements obtained at some other specific time, date, and location. This procedure would permit the standardized shadow measurements of height, which in principle would be as good as the measurements obtained directly with a measuring stick.

Standardized IQs are somewhat analogous to the standardized shadow measurements of height, while the raw scores on IQ tests are more analogous to the raw measurements of the shadows themselves. If we naively remain unaware that the shadow measurements vary with the time of day, the day of the year, and the degrees of latitude, our raw measurements would prove practically worthless for comparing individuals or groups tested at different times, dates, or places. Correlations and predictions could be accurate only within each unique group of persons whose shadows were measured at the same time, date, and place. Since psychologists do not yet have the equivalent of a yardstick for measuring mental ability directly, their vehicles of mental measurement, IQ scores, are necessarily "shadow" measurements, as in our height analogy, albeit with amply demonstrated practical predictive validity and construct validity within certain temporal and cultural limits.

 

 

interesting. however, biologically based tests shud allow for absolut measurement: say, tests based on RT in ECTs, on the amount of myelination in the brain, on brain pH levels, or on brain size via imaging scans, if we can make these better measurements of g.

 

-

 

Many possible factors determine whether a person passes or fails a particular test item. Does the person understand the item at all (e.g., "What is the sum of all the latent roots of a 7 × 7 R matrix?")? Has the person acquired the specific knowledge called for by the item (e.g., "Who wrote Faust?"), or perhaps has he acquired it in the past and has since forgotten it? Did the person really know the answer, but just couldn't recall it at the moment of being tested? Does the item call for a cognitive skill the person either never acquired or has forgotten through disuse (e.g., "How much of a whole apple is two-thirds of one-half of the apple?")? Does the person understand the problem and know how to solve it, but is unable to do it within the allotted time limit (e.g., substituting the corresponding letter of the alphabet for each of the numbers from one to twenty-six listed in a random order in one minute)? Or even when there is a liberal time limit does the person give up on the item or just guess at the answer prematurely, perhaps because the item looks too complicated at first glance (e.g., "If it takes six garden hoses, all running for three hours and thirty minutes to fill a tank, how many additional hoses would be needed to fill the tank in thirty minutes?")?

 

1) dunno

2) Goethe

3) 2/3 * 1/2 = 2/6 = 1/3

4) #hoses * time = tank size. 6 * 3.5 = 21, so the tank holds 21 hose-hours. to fill it in 0.5 hours: 21 = 0.5 * #hoses, so #hoses = 42. 42 − 6 = 36 more hoses.

 

-

 

The only study I have found that investigated whether there has been a secular change (over thirty years) in the heritability of g-loaded test scores concluded that "the results revealed no unambiguous evidence for secular trends in the heritability of intelligence test scores."[35] However, the heritability coefficients (based on twenty-two same-age cohort samples of MZ and DZ male twins born in Norway between 1930 and 1960) showed some statistically reliable nonlinear trends over the thirty-year period, as shown in Figure 10.2. The overall trend line goes equally down-up-down-up with heritability coefficients ranging from slightly above .80 to slightly below .40. The heritability coefficient was the same for the cohort born in 1930 as for the cohort born in 1960 (for both, h² = .80). The authors offer only weak ad hoc speculations about possible causes of this erratic fluctuation of h² across 22 points in time.

 

the hole in the data is the german occupation of norway. the data from the 30s make sense to me: the depression wud result in civil unrest and a shaking up of society, and after a period of that, heritabilities shud stabilize again, as seen in the post-war period. i dont understand the 50s down swing in heritability, tho.

 

so, i thought it might be somthing economic. i gathered GDP data, and looked at the data. nope, not true.

 

www.norges-bank.no/pages/77409/p1_c6.xlsx

 

data from 1901 to 2000 looks like this:

[figure: norwegian GDP, 1901–2000]

 

doesnt fit with the GDP hypothesis at all, except for missing data in the war.
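
for anyone who wants to redo the comparison, the structure of it is trivial. the sketch below is hedged: the per-cohort h2 values are not given numerically in the text (they would hav to be read off Figure 10.2), and i hav not checked the layout of the norges-bank spreadsheet here, so all the inputs are dummy placeholders only.

```python
# Hedged sketch: correlate GDP growth around each birth cohort with the cohort
# heritabilities. ALL numbers below are dummy placeholders, not real data; the
# real h2 values come from Figure 10.2 of the twin study and the GDP series
# from the norges-bank.no spreadsheet linked above.
from statistics import correlation   # Python 3.10+

cohort_years = [1931, 1935, 1940, 1945, 1950, 1955, 1960]
h2           = [0.80, 0.55, 0.45, 0.70, 0.60, 0.45, 0.80]   # placeholders
gdp_growth   = [0.02, 0.03, -0.04, 0.05, 0.04, 0.03, 0.04]  # placeholders

print("r(h2, GDP growth) =", round(correlation(h2, gdp_growth), 2))
```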

 

i dunno, perhaps www.newsinenglish.no/2010/06/16/the-50s-in-norway-werent-so-nifty/

 

the authors of the study that found the drop in heritability also dont know: "We are, however, quite at a loss in explaining the dip from about 1950 to 1954. Thus, we feel that the best strategy at present is to leave the issue of secular trends open."

On the question of secular trends in the heritability of intelligence test scores: A study of Norwegian twins

-

 

Head Start. The federal preschool intervention known as Head Start, which has been in continual existence now since 1964, is undoubtedly the largest-scale, though not the most intensive, educational intervention program ever undertaken, with an annual expenditure over $2 billion. The program is aimed at improving the health status and the learning and social skills of preschoolers from poor backgrounds so they can begin regular school more on a par with children from more privileged backgrounds. The intervention is typically short-term, with various programs lasting anywhere from a few months to two years.

The general conclusion of the hundreds of studies based on Head Start data is that the program has little, if any, effect on IQ or scholastic achievement that endures beyond more than two to three years after exposure to Head Start. The program does, however, have some potential health benefits, such as inoculations of enrollees against common childhood diseases and improved nutrition (by school-provided breakfast or lunch). The documented behavioral effects are less retention-in-grade and lower dropout rates. The cause(s) of these effects are uncertain. Because eligible children were not randomly enrolled in Head Start, but were selected by parents and program administrators, these scholastic correlates of Head Start are uninterpretable from a causal standpoint. Selection, rather than direct causation by the educational intervention itself, could be the explanation of Head Start's beneficial outcomes.

 

crazy amount of money spent for som slight health benefits. perhaps ther is a cheaper way to get such benefits.

 

-

 

The Milwaukee Project. Aside from Head Start, this is the most highly publicized of all intervention experiments. It was the most intensive and extensive educational intervention ever conducted for which the final results have been published.[55] It was also the most costly single experiment in the history of psychology and education—over $14 million. In terms of the highest peak of IQ gains for the seventeen children in the treatment condition (before the gains began to vanish), the cost was an estimated $23,000 per IQ point per child.

 

holy shit. even tho i think iv seen this figur befor (in The g Factor by Chris Brand).
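
a quick back-of-the-envelope check of that figure (my arithmetic, not Jensen's):

```python
# Rough consistency check of the cost-per-IQ-point figure, attributing the
# whole budget to the 17 treatment children.
total_cost = 14_000_000                 # dollars, "over $14 million"
children = 17
cost_per_point_per_child = 23_000
implied_peak_gain = total_cost / (children * cost_per_point_per_child)
print(f"implied peak IQ gain if the whole budget is counted: ~{implied_peak_gain:.0f} points")
# ~36 points
```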

 

Jensen also doesnt mention the end of the project, but Wikipedia does:

en.wikipedia.org/wiki/Milwaukee_Project

 

The Milwaukee Project’s claimed success was celebrated in the popular media and by famous psychologists. However, later in the project Rick Heber, the principal investigator, was discharged from the University of Wisconsin–Madison and convicted and imprisoned for large-scale abuse of federal funding for private gain. Two of Heber’s colleagues in the project were also convicted for similar abuses. The project’s results were not published in any refereed scientific journals, and Heber did not respond to requests from colleagues for raw data and technical details of the study. Consequently, even the existence of the project as described by Heber has been called into question. Nevertheless, many college textbooks in psychology and education have uncritically reported the project’s results.[3][4]

 

this reminds me why open data is necessary in science.

 

-

 

[The Abecedarian Early Intervention Project.]

Both the T and C groups (each with about fifty subjects) were given age-appropriate mental tests (Bayley, Stanford-Binet, McCarthy, WPPSI) at six-month intervals from age six months to sixty months. The important comparisons here are the mean T-C differences at each testing. (Because the test scores do not have the same factor composition across this wide age range, the absolute scores of the T group alone are not as informative of the efficacy of the intervention as are the mean T-C differences.) At every testing from six months to five years of age, the T group outperformed the C group, and the overall average T-C difference (103.3 − 95.5 = 7.8 IQ points) was highly significant (p < .001). Peculiarly, however, the largest T-C differences (averaging fifteen IQ points) occurred between eighteen and thirty-six months of age and then declined during the last two years of intervention. At sixty months, the average T-C difference was 7.5 IQ points. This decrease might simply reflect the fact that with the children's increasing age the tests become increasingly more g-loaded. The tests used before two or three years of age measure mainly perceptual-motor functions that have relatively little g saturation. Only later does g become the predominant component of variance in IQ. In follow-up studies at eight and twelve years of age, the T-C difference on the WISC-R was about five IQ points,[57] a difference that has remained up to age fifteen. At the last reported testing, the T-C difference was 4.6 IQ points, or a difference of 0.35σ. Scholastic achievement test scores showed a somewhat larger effect of the intervention up to age fifteen.[57] The intervention effect on other criteria of the project's success was demonstrated by the decreased percentage of children who repeated at least one grade by age twelve (T = 28 percent, C = 55 percent) and the percentage of children with borderline or retarded intelligence (IQ < 85) (T = 12.8 percent, C = 44.2 percent).[56]

Thus this five-year program of intensive intervention beginning in early infancy increased IQ (at age fifteen years) by about five points. Judging from a comparable gain in scholastic achievement, the effect had broad transfer, suggesting that it probably raised the level of g to some extent. The finding that the T subjects did better than the C subjects on a battery of Piaget's tests of conservation, which reflect important stages in mental development, is further evidence. The Piagetian tests are not only very different in task demands from anything in the conventional IQ tests used in the conventional assessments, but are also highly g loaded.[57] The mean T-C difference on the Piagetian conservation tests was equal to 0.33σ (equivalent to five IQ points). Assuming that the instructional materials in the intervention program did not closely resemble Piaget's tests, it is a warranted conclusion that the intervention appreciably raised the level of g.

 

im still skeptical as to the g effects. id like to see the data about them as adults, and a larger sample size.

 

again, Wikipedia has mor on the issue, both positiv and negativ:

en.wikipedia.org/wiki/Abecedarian_Early_Intervention_Project

Significant findings

Follow-up assessment of the participants involved in the project has been ongoing. So far, outcomes have been measured at ages 3, 4, 5, 6.5, 8, 12, 15, 21, and 30.[5] The areas covered were cognitive functioning, academic skills, educational attainment, employment, parenthood, and social adjustment. The significant findings of the experiment were as follows:[6][7]

Impact of child care/preschool on reading and math achievement, and cognitive ability, at age 21:

  • An increase of 1.8 grade levels in reading achievement
  • An increase of 1.3 grade levels in math achievement
  • A modest increase in Full-Scale IQ (4.4 points), and in Verbal IQ (4.2 points).

Impact of child care/preschool on life outcomes at age 21:

  • Completion of a half-year more of education
  • Much higher percentage enrolled in school at age 21 (42 percent vs. 20 percent)
  • Much higher percentage attended, or still attending, a 4-year college (36 percent vs. 14 percent)
  • Much higher percentage engaged in skilled jobs (47 percent vs. 27 percent)
  • Much lower percentage of teen-aged parents (26 percent vs. 45 percent)
  • Reduction of criminal activity

Statistically significant outcomes at age 30:

  • Four times more likely to have graduated from a four-year college (23 percent vs. 6 percent)
  • More likely to have been employed consistently over the previous two years (74 percent vs. 53 percent)
  • Five times less likely to have used public assistance in the previous seven years (4 percent vs. 20 percent)
  • Delayed becoming parents by average of almost two years

(Most recent information from Developmental Psychology, January 18, 2012, cited in uncnews.unc.edu, January 19, 2012)

The project concluded that high quality, educational child care from early infancy was therefore of utmost importance.

Other, less intensive programs, notably the Head Start Program, but also others, have not been as successful. It may be that they provided too little too late compared with the Abecedarian program.[4]

Criticisms

Some researchers have advised caution about the reported positive results of the project. Among other things, they have pointed out analytical discrepancies in published reports, including unexplained changes in sample sizes between different assessments and publications. It has also been noted that the intervention group’s reported 4.6 point advantage in mean IQ at age 15 was not statistically significant. Herman Spitz has noted that a mean IQ difference of similar magnitude to the final difference between the intervention and control groups was apparent already at age six months, indicating that “4 1/2 years of massive intervention ended with virtually no effect.” Spitz has suggested that the IQ difference between the intervention and control groups may have been present from the outset due to faulty randomization.[8]

 

not quite sure what to think. the sample sizes ar still kinda small, and if Spitz is right in his criticism, the studies hav not shown much.

 

the reason that im skeptical to begin with is that the modern twin studies show that shared environment, which is what these interventions mostly change, has no effect on adult IQ.

 

in any case, if it requires such expensiv spending to get slightly less dumb kids, its hard to justify as public policy. at the very least, id like to see the calculation showing that this has a net positiv benefit for society. it is possible, for instance, becus crime rates ar (supposedly) down and job retention up, which leads to mor taxes being paid, and so on.

 

-

 

Error distractors in multiple-choice answers are of interest as a method of discovering bias. When a person fails to select the correct answer but instead chooses one of the alternative erroneous responses (called "distractors") offered for an item in a multiple-choice test, the person's incorrect choice is not random, but is about as reliable as is the choice of the correct answer. In other words, error responses, like correct responses, are not just a matter of chance, but reflect certain information processes (or the failure of certain crucial steps in information processing) that lead the person to choose not just any distractor, but a particular one. Some types of errors result from a solution strategy that is more naive or less sophisticated than other types of errors. For example, consider the following test item:

If you mix a pint of water at 50° temperature with two pints of water at 80° measured on the same thermometer, what will be the temperature of the mixture? (a) 65°, (b) 70°, (c) 90°, (d) 130°, (e) Can't say without knowing whether the temperatures are Centigrade or Fahrenheit.

We see that the four distractors differ in the level of sophistication in mental processing that would lead to their choice. The most naive distractor, for example, is D, which is arrived at by simple addition of 50° and 80°. The answer A at least shows that the subject realized the necessity for averaging the temperatures. The answer 90° is the most sophisticated distractor, as it reveals that the subject had a glimmer of the necessity for a weighted average (i.e., 50° + 80°/2 = 90°) but didn't know how to go about calculating it. (The correct answer, of course, is B, because the weighted average is [1 pint × 50° + 2 pints × 80°]/3 pints = 70°.) Preference for selecting different distractors changes across age groups, with younger children being attracted to the less sophisticated type of distractor, as indicated by comparing the percentage of children in different age groups that select each distractor. The kinds of errors made, therefore, appear to reflect something about the children's level of cognitive development.

 

interesting.
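
the same arithmetic, just spelled out so the logic behind each distractor is visible:

```python
# The four response strategies behind the temperature item (same numbers as in the text).
pints = [1, 2]
temps = [50, 80]

naive_sum      = sum(temps)                                             # distractor D: 130
simple_average = sum(temps) / 2                                         # distractor A: 65
half_weighted  = temps[0] + temps[1] / 2                                # distractor C: 90
weighted_avg   = sum(p * t for p, t in zip(pints, temps)) / sum(pints)  # correct B: 70

print(naive_sum, simple_average, half_weighted, weighted_avg)
```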

 

-

 

What is termed a cline results where groups overlap at their fuzzy boundaries in some characteristic, with intermediate gradations of the phenotypic characteristic, often making the classification of many individuals ambiguous or even impossible, unless they are classified by some arbitrary rule that ignores biology. The fact that there are intermediate gradations or blends between racial groups, however, does not contradict the genetic and statistical concept of race. The different colors of a rainbow do not consist of discrete bands but are a perfect continuum, yet we readily distinguish different regions of this continuum as blue, green, yellow, and red, and we effectively classify many things according to these colors. The validity of such distinctions and of the categories based on them obviously need not require that they form perfectly discrete Platonic categories.

 

while the rainbow analogy works to som extent, it is not that good. the reason is that in a rainbow the colors (groups) lie on a single continuum, so ther ar only blends between adjacent colors, not between every pair of colors. this is not how races work, as ther is always the possibility of a blend between any two groups, even odd pairs such as amerindians and aboriginals.

 

-

 

Of the approximately 100,000 human polymorphic genes, about 50,000 are functional in the brain and about 30,000 are unique to brain functions.[12] The brain is by far the structurally and functionally most complex organ in the human body and the greater part of this complexity resides in the neural structures of the cerebral hemispheres, which, in humans, are much larger relative to total brain size than in any other species. A general principle of neural organization states that, within a given species, the size and complexity of a structure reflect the behavioral importance of that structure. The reason, again, is that structure and function have evolved conjointly as an integrated adaptive mechanism. But as there are only some 50,000 genes involved in the brain's development and there are at least 200 billion neurons and trillions of synaptic connections in the brain, it is clear that any single gene must influence some huge number of neurons, not just any neurons selected at random, but complex systems of neurons organized to serve special functions related to behavioral capacities.

It is extremely improbable that the evolution of racial differences since the advent of Homo sapiens excluded allelic changes only in those 50,000 genes that are involved with the brain.

 

the same point was made, altho less technically, in Hjernevask. ther is no good a priori reason to think that natural selection for som reason only worked on non-brain, non-behavioral genes. it simply makes no sense to suppose that.

 

-

 

Bear in mind that, from the standpoint of natural selection, a larger brain size (and its corresponding larger head size) is in many ways decidedly disadvantageous. A large brain is metabolically very expensive, requiring a high-calorie diet. Though the human brain is less than 2 percent of total body weight, it accounts for some 20 percent of the body's basal metabolic rate (BMR). In other primates, the brain accounts for about 10 percent of the BMR, and for most carnivores, less than 5 percent. A larger head also greatly increases the difficulty of giving birth and incurs much greater risk of perinatal trauma or even fetal death, which are much more frequent in humans than in any other animal species. A larger head also puts a greater strain on the skeletal and muscular support. Further, it increases the chances of being fatally hit by an enemy's club or missile. Despite such disadvantages of larger head size, the human brain, in fact, evolved markedly in size, with its cortical layer accommodating to a relatively lesser increase in head size by becoming highly convoluted in the endocranial vault. In the evolution of the brain, the effects of natural selection had to have reflected the net selective pressures that made an increase in brain size disadvantageous versus those that were advantageous. The advantages obviously outweighed the disadvantages to some degree or the increase in hominid brain size would not have occurred.

 

this brain must hav been very useful for somthing. if som of this use has to do with non-social things, like environment, one wud expect to see different levels of ‘brain adaptation’ due to the relative differences in selection pressure in populations that evolved in different environments.

 

-

 

How then can the default hypothesis be tested empirically? It is tested exactly as is any other scientific hypothesis; no hypothesis is regarded as scientific unless predictions derived from it are capable of risking refutation by an empirical test. Certain predictions can be made from the default hypothesis that are capable of empirical test. If the observed result differs significantly from the prediction, the hypothesis is considered disproved, unless it can be shown that the tested prediction was an incorrect deduction from the hypothesis, or that there are artifacts in the data or methodological flaws in their analysis that could account for the observed result. If the observed result does in fact accord with the prediction, the hypothesis survives, although it cannot be said to be proven. This is because it is logically impossible to prove the null hypothesis, which states that there is no difference between the predicted and the observed result. If there is an alternative hypothesis, it can also be tested against the same observed result.

 

For example, if we hypothesize that no tiger is living in the Sherwood Forest and a hundred people searching the forest fail to find a tiger, we have not proved the null hypothesis, because the searchers might have failed to look in the right places. If someone actually found a tiger in the forest, however, the hypothesis is absolutely disproved. The alternative hypothesis is that a tiger does live in the forest; finding a tiger clearly proves the hypothesis. The failure of searchers to find the tiger decreases the probability of its existence, and the more searching, the lower is the probability, but it can never prove the tiger’s nonexistence.

 

Similarly, the default hypothesis predicts certain outcomes under specified conditions. If the observed outcome does not differ significantly from the predicted outcomes, the default hypothesis is upheld but not proved. If the prediction differs significantly from the observed result, the hypothesis must be rejected. Typically, it is modified to accord better with the existing evidence, and then its modified predictions are empirically tested with new data. If it survives numerous tests, it conventionally becomes a “fact.” In this sense, for example, it is a “fact” that the earth revolves around the sun, and it is a “fact” that all present-day organisms have evolved from primitive forms.

 

meh, mediocre or bad filosofy of science.

 

-

 

 

 

the problem with this data is that the women were not done having children. the data is from women aged 34. since especially smart women (and so mor whites) hav children later than that age, their fertility estimates ar spuriusly low. see also the data in Intelligence: A Unifying Construct for the Social Sciences (Richard Lynn and Tatu Vanhanen, 2012).

 

-

 

Whites perform significantly better than blacks on the subtests called Comprehension, Block Design, Object Assembly, and Mazes. The latter three tests are loaded on the spatial visualization factor of the WISC-R. Blacks perform significantly better than whites on Arithmetic and Digit Span. Both of these tests are loaded on the short-term memory factor of the WISC-R. (As the test of arithmetic reasoning is given orally, the subject must remember the key elements of the problem long enough to solve it.) It is noteworthy that Vocabulary is the one test that shows zero W-B difference when g is removed. Along with Information and Similarities, which even show a slight (but nonsignificant) advantage for blacks, these are the subtests most often claimed to be culturally biased against blacks. The same profile differences on the WISC-R were found in another study based on 270 whites and 270 blacks who were perfectly matched on Full Scale IQ.

 

seems inconsistent with typical environment-only theories.

 

-

 

 

Do Bad Things Happen When Works Enter the Public Domain? Empirical Tests of Copyright Term Extension

papers.ssrn.com/sol3/papers.cfm?abstract_id=2130008

 

The most interesting thing about this paper was the arguments put forward by the supporters of copyright extension. They are so distressingly bad that it seems pointless to empirically test them. Theoretical arguments are sufficient to show them to be faulty. Nevertheless, the authors carried out some experiments that show the obvious to be true.

Abstract:

According to the current copyright statute, in 2018, copyrighted works of music,
film, and literature will begin to transition into the public domain. While this will
prove a boon for users and creators, it could be disastrous for the owners of these
valuable copyrights. Accordingly, the next few years will witness another round of
aggressive lobbying by the film, music, and publishing industries to extend the
terms of already-existing works. These industries, and a number of prominent
scholars, claim that when works enter the public domain bad things will happen
to them. They worry that works in the public domain will be underused, overused,
or tarnished in ways that will undermine the works’ cultural and economic value.
Although the validity of their assertions turns on empirically testable hypotheses,
very little effort has been made to study them.  
 
This Article attempts to fill that gap by studying the market for audiobook
recordings of bestselling novels. Data from our research, including a novel
human subjects experiment, suggest that the claims about the public domain are
suspect. Our data indicate that audio books made from public domain bestsellers
(1913-22) are significantly more available than those made from copyrighted
bestsellers (1923-32). In addition, our experimental protocol suggests that
professionally made recordings of public domain and copyrighted books are of
similar quality. Finally, while a low quality recording seems to lower a listener’s
valuation of the underlying work, our data do not suggest any correlation
between that valuation and legal status of the underlying work. Accordingly, our
research indicates that the significant costs of additional copyright protection for
already-existing works are not justified by the benefits claimed for it.  These
findings will be crucially important to the inevitable congressional and judicial
debate over copyright term extension in the next few years.

Richard Lynn was so kind as to send me a signed copy of his latest book. i immediately paused the reading of another book to read this one. some comments and quotes are below. quotes are from the ebook version of the book, which i found on the internet.

Richard Lynn, Tatu Vanhanen – Intelligence: A Unifying Construct for the Social Sciences, 2012

Review

Some general conclusions about the book. All in all this is a typical Richard Lynn book. It has a very dry style, and is somewhat repetitive. On the other hand, it is not overly long at 400 pages. Many of these are long lists of tables, so they are not normally read except when one wants to look up specific countries. It would perhaps have been a good idea to just publish them on the internet for the curious and other researchers. The book contains a wealth of citations revealing a very impressive scholarship. The areas investigated on a global level are many, and the results interesting. The people who think that national IQs are “meaningless” and that human races do not exist or are social constructions (whatever that means, if anything) have the difficult job of explaining why, if these numbers are meaningless, they fare so well in predicting things on a global level. In other words, why do they have such high validity for a multitude of things? One cannot just regard IQ as “academic intelligence” or some such thing if one can effectively use national IQs to predict things like the lack of proper sanitation. Most often national IQs are found to be better predictors than various non-IQ variables, although on some occasions I would have liked the authors to use more variables to see whether they made an impact. I think the authors are sometimes a bit too pessimistic about the possibilities of changing the situation for the low-IQ countries, but I agree with them that one should not expect many of these correlations to change drastically in the near future.

 

Thoughts and comments to various things

The introduction of the book neatly and briefly explains what the book is about:

The physical sciences are unified by a few common theoretical
constructs, such as mass, energy, pressure, atoms, molecules and
momentum, that are defined and measured in the same ways and
explain a wide range of phenomena in physics, astrophysics,
chemistry and biochemistry. This has been beneficial for the
development of the physical sciences, because it has allowed the
transfer of concepts from one field to others. It has allowed
interface subjects like chemical physics and biochemistry to
develop their own insights and concepts on the basis of those
already developed in their parent fields. Physics is the most basic
of the natural sciences, because the phenomena of the others can
be explained by the laws of physics. For this reason, physics has
been called the queen of the physical sciences.

Hitherto, the social sciences have lacked common unifying
constructs of this kind. The disciplines of the social sciences,
comprising psychology, economics, political science,
demography, sociology, criminology, anthropology and
epidemiology are largely isolated from one another, each with
their own vocabulary and theoretical constructs.
Psychology can be considered the most basic of the social
sciences because it is concerned with differences between
individuals, while the other social sciences are principally
concerned with differences between groups such as socio-
economic classes, ethnic and racial populations, regions within
countries, and nations. These groups are aggregates of
individuals, so the laws that have been established in psychology
should be applicable to the group phenomena that are the concern
of the other social sciences.
Our objective in this book is to develop the case that the
psychological construct of intelligence can be a unifying
explanatory construct for the social sciences. Intelligence is
measured by the intelligence test that was constructed by Alfred
Binet in 1905. During the succeeding century it has been shown
that intelligence, measured as the IQ (the intelligence quotient),
is a determinant of many important social phenomena,
including educational attainment, earnings, socio-economic
status, crime and health. Our theme is that the explanatory value
of intelligence that has been established for individuals can be
extended to the explanation of the differences between groups,
that have been found in the other social sciences, and in
particular to the explanation of the differences between nations.
Thus, we propose that psychology is potentially the queen of
the social sciences, analogous to the position of physics as the
queen of the physical sciences. (p. 1-2)

It is difficult to disagree with this.

-

one of the things that bothers me about the Health chapter is that it doesnt try to compare with and connect to the data from The Spirit Level. The authors of SPL contend that many of the things that Lynn & Vanhanen (LV) think are due to intelligence are really due to economic (in)equality. unfortunately, LV do not try to control for this. it wud be interesting to see if the effects of high econ. equality go away if one controls for intelligence. in other words, whether the effects of econ. equality are really just intelligence working thru it.
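a minimal sketch of the kind of check i mean: partial out national IQ from both health and inequality and see what is left of their correlation. the file and column names below are made up for illustration, this is not LV's or SPL's actual data.

import numpy as np
import pandas as pd

# hypothetical country-level dataset with a health outcome, an income-inequality
# measure (e.g. a Gini-type index) and national IQ
df = pd.read_csv("national_data.csv").dropna(subset=["health", "inequality", "iq"])

def residualize(y, x):
    """Return the residuals of y after regressing it on x (with an intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# zero-order correlation between inequality and health
r_raw = np.corrcoef(df["inequality"], df["health"])[0, 1]

# partial correlation: correlate the parts of both variables not explained by IQ
iq = df["iq"].to_numpy(float)
r_partial = np.corrcoef(
    residualize(df["health"].to_numpy(float), iq),
    residualize(df["inequality"].to_numpy(float), iq),
)[0, 1]

print("inequality-health r = %.2f, controlling for IQ: %.2f" % (r_raw, r_partial))

if the partial correlation collapses toward zero, the equality effect is plausibly just intelligence working thru it; if it survives, SPL have a point.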

For a video introduction to the SPL, see this:

-

one annoying thing about this book is that it is full of data tables, and the data from these cannot easily be copied into something useful. at least, i have failed to do it in any easy way. it requires a lot of fiddling to get the formatting right in calc/excel. hopefully, LV will make the data tables available on their websites where they can easily be downloaded so that others can test out other hypotheses.
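a possible workaround, assuming one has the PDF and the tabula-py package (a python wrapper around the Java tabula tool); the file name and page range below are just placeholders:

import tabula  # tabula-py, requires Java

# pull whatever tables sit on the given pages and dump them as CSV for calc/excel
tables = tabula.read_pdf("lynn_vanhanen_2012.pdf", pages="83-84", multiple_tables=True)
for i, df in enumerate(tables):
    df.to_csv("table_4_5_part%d.csv" % i, index=False)

the extraction wont be perfect for tables that span two pages, but it beats retyping them or fiddling with screenshots.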

many of the tables span two pages but are not that big and cud easily fit into a single table on one page. unfortunately, using them as images now requires that one either zooms out a lot to fit it all on one screen before taking a screenshot, which makes the text small, or takes two screenshots and edits them together in an image editor. it wud be very nice if they were made available on the website for free use.

-

a recurrent thing about the book is that the editor did quite a poor job. there are a lot of easily visible typografical mistakes that are a bit annoying. they dont distract too much from the reading of the book, except in the rare cases where a missing word forces one to guess the intended meaning. for instance, on p. 83-84 table 4.5, the 10th line is missing the prefix “in”, which makes it appear as if the data presented varies wildly from a positive 0.61 correlation to three other strong negative correlations between -0.52 and -0.60.

there was also another place where a “not” was missing and this left me confused for a few seconds.

as for formatting, look at table 7.1, line 1: the word “All” is strangely located in a line below the other information. look also at lines 10-11 and notice how the two “F”s are floating to the left.

these mistakes shud be fixed and a new online edition released. this cant be too difficult to do.

-

notice how low the dysgenic effects are. i was under the impression that they were stronger. also keep in mind that lines 14-17 are those with the best data. the reason is that:

Rows 2, 3 and 4 give negative correlations between
intelligence and fertility based on a nationally representative
American sample showing that the negative correlation is higher
for white women than for white men, and higher for white
women than for black women. This study is not wholly
satisfactory because the age of the sample was 25 to 34 years and
many of them would not have completed their fertility.

To overcome this problem, Vining (1995) published data on
the fertility of his female sample of the ages between 35 and 44,
which can be regarded as close to completed fertility. The results
are given in rows 4 and 5 for white and black women and show
that the correlations between intelligence and fertility are still
significantly negative and are higher for black women (-0.226)
than for white women (-0.062). These correlations are probably
underestimates because the samples excluded high-school
dropouts, who were about 14 per cent of whites and 26 per cent
of blacks at this time, and who likely had low IQs and high
average fertility. (p. 201-2)

which is to say that if one gathers the data before women are done having children, one will miss out on some older women who have children late. since such women are especially likely to be well-educated (and hence smart), this is an important bias.
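a quick simulation of the bias, with made-up parameters, just to show the direction of the effect:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
iq = rng.normal(100, 15, n)

# completed fertility: a weak true negative relation to IQ plus noise (invented numbers)
completed = np.clip(2.0 - 0.01 * (iq - 100) + rng.normal(0, 1, n), 0, None)

# the share of children born after age 34 rises with IQ (smarter women have them later)
late_share = np.clip(0.1 + 0.006 * (iq - 100), 0, 0.9)
observed_at_34 = completed * (1 - late_share)

print("r(IQ, completed fertility): %.3f" % np.corrcoef(iq, completed)[0, 1])
print("r(IQ, fertility by age 34): %.3f" % np.corrcoef(iq, observed_at_34)[0, 1])

the correlation measured at age 34 comes out more negative than the one based on completed fertility, which is exactly the inflation of the dysgenic effect discussed above.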

still, given that there are some consistent negative correlations, there is a dysgenic effect – its just smaller than i had imagined, at least on a within-population basis.

-

It would be interesting to explore to what extent differences
in geographical circumstances and water resources affect the
access to clean water, but unfortunately it is difficult to find
appropriate indicators of geographical factors. However, there is
one indicator for this purpose. WDI-09 (Table 3.5) includes data
on renewable internal freshwater resources per capita in cubic
metres in 2007 (Freshwater). It measures internal renewable
resources (internal river flows and groundwater from rainfall) in
the country. It is noted that these “estimates are based on different
sources and refer to different years, so cross-country
comparisons should be made with caution” (WDI-09, p. 153). It
could be assumed that freshwater resources per capita are
negatively correlated with Water-08, but in fact there is no
correlation between these variables (0.050, N=139). The
correlation between national IQ and Freshwater is also zero
(0.014, N=147). Access to clean water seems to be completely
independent from freshwater resources, whereas it is
significantly dependent on national IQ (39%) and several
environmental variables. Therefore, it is interesting to see how
well national IQ explains the variation in Water-08 at the level of
single countries and what kinds of countries deviate most from
the regression line. Figure 8.1 summarizes the results of the
regression analysis of Water-08 on national IQ in the group of
166 countries. Detailed results for single countries are reported in
Table 8.3. (p. 246)

Very interesting! Is this a direct disproof of Jared Diamond (1997)‘s environment theory regarding access to water?

Figure 8.1 shows that the relationship between national IQ
and Water-08 is linear as hypothesized, but many highly
deviating countries weaken the relationship. In the countries
above the regression line, the percentage of people without
access to improved water services is higher than expected on the
basis of the regression equation, and in the countries below the
regression line it is lower than expected. In all countries above
the national IQ level of 90, the percentage of the population
without access to clean water is zero or near zero, except in
Cambodia, China and Mongolia, whereas this percentage varies
greatly in the countries below the national IQ level of 85.
National IQ is not able to explain the great variation in Water-08
in the group of countries with low national IQs. Most of that
variation seems to be due to some environmental and local
factors, perhaps also to measurement errors. (p. 247-8)

in the case of China it seems very unhelpful to categorize it as one country. it is a HUGE place. it wud be better to split it up into provinces, and calculate these instead. en.wikipedia.org/wiki/Provinces_of_the_People%27s_Republic_of_China altho this will result in many of them having no data. i doubt that there is IQ data for all the regions of China. perhaps those in the regions away from the ocean are not quite as clever as those near the ocean, and near Japan. but surely there is data for Hong Kong, Macau, and some other city-states or city-like regions.

-

one thing that bothers me a bit is that when LV discuss outliers from their regressions, they use some seemingly arbitrarily picked cutoff. heres a random example (p. 258):

Table 8.3 shows the countries which deviate most from the
regression line and for which positive or negative residuals are
large. An interesting question is whether some systematic
differences between large positive and negative outliers could
help to explain their deviations from the regression line. Let us
regard as large outliers countries whose residuals are ±15 or
higher (one standard deviation is 13).

they note that the sd is 13, but instead opt to use 15 without an explanation. this is the same every time they adopt such an analysis, which they do in every chapter. normally, they choose some number slightly larger than 1 sd. on p. 155 sd = 1.7, and they use 2. on p. 146 they use 11 while sd = 10.1. on p. 103 they use 12 while the sd is 12.017. the general rule seems to be: choose an arbitrary but nice-looking number just a bit larger than the sd. i dont think this skews the analysis much, but i wud have prefered if they had just used 1 sd as the cutoff for counting as an outlier.
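the rule i wud have used instead, as a sketch (file and column names are hypothetical):

import numpy as np
import pandas as pd

df = pd.read_csv("water_iq.csv")           # columns: country, iq, water08
y = df["water08"].to_numpy(float)
X = np.column_stack([np.ones(len(df)), df["iq"].to_numpy(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

cutoff = resid.std(ddof=1)                 # 1 sd of the residuals, not a hand-picked round number
outliers = df.loc[np.abs(resid) > cutoff, "country"]
print(outliers.to_list())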

-

one odd thing is that when LV find that a relationship between national IQs and some other variable is curvilinear, they still go on to use the linear model in their explanation. they do this time and time again. it results in some bad points of analysis, for instance:

It is remarkable that this group does not include any
economically highly developed countries, Caribbean tourist
countries, Latin American countries, or oil exporting countries.
Most of them are poor sub-Saharan African countries (17). China
is not really a large positive outlier for the reason that its
predicted value of Water-08 is negative -6. The other eight
positive outliers are poor Asian and Oceanian countries. Most of
them (especially Afghanistan, Cambodia, Myanmar and Timor-
Leste) have suffered from serious civil wars, which have
hampered socio-economic development. (p.259)

if they had made a proper model, one where negative values are impossible, then they wud have avoided such nonsensical predicted values. its not that LV dont know this, as they discuss on p. 79:

Rows 13 through 18 give six correlations between national
IQs and various measures of per capita income reported. The
author analyzed further the relationship by fitting linear, quadratic
and exponential curves to the data for 81 and 185 nations and
found that fitting exponential curves gave the best results. His
interpretation was that “a given increment in IQ, anywhere along
the IQ scale, results in a given percentage increase in GDP, rather than a
given dollar increase as linear fitting would predict” (Dickerson,
2006, p. 291). He suggests that

exponential fitting of GDP to IQ is logically
meaningful as well as mathematically valid. It is
inherently reasonable that a given increment of IQ
should improve GDP by the same proportional ratio,
not the same number of dollars. An increase of GDP
from $500 to $600 is a much more significant change
than is a linear increase from $20,000 to $20,100. The
same proportional change would increase $20,000 to
$24,000. These data tell us that the influence of
increasing IQ is a proportional effect, not an absolute
one (p. 294).

heres an example of a plot where LV acknowledge that it is curvilinear:

i wud replicate this plot myself and fit an exponential function to it, and then look for outliers, but i wud need the raw data for that in a usable form. see the previous point about how it is difficult to extract the data from the PDF and the need to publish it in some other format, preferably excel/calc.
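for the record, this is roughly what i wud do once the data is available in a usable format: fit the log-linear (exponential) model Dickerson describes and look for outliers on that scale. file and column names are placeholders.

import numpy as np
import pandas as pd

df = pd.read_csv("iq_gdp.csv")                     # columns: country, iq, gdp_pc
log_gdp = np.log(df["gdp_pc"].to_numpy(float))

slope, intercept = np.polyfit(df["iq"].to_numpy(float), log_gdp, 1)
resid = log_gdp - (intercept + slope * df["iq"].to_numpy(float))

# on this model, each extra IQ point multiplies predicted GDP per capita by a constant factor
print("multiplier per IQ point: %.3f" % np.exp(slope))
print(df.loc[np.abs(resid) > resid.std(ddof=1), "country"].to_list())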

-

Some systematic differences in the characteristics of large
positive and negative outliers provide partial explanations for
their large residuals. Most countries with large negative residuals
have benefitted from investments, technologies, and
management from countries of higher national IQs, whereas
most countries with large positive residuals have received much
less such foreign help. (p.260)

tourism is not the only way to receive money from the rich countries. it wud be interesting to look at the effects of foreign aid to poor countries. is there any discernible effect of it? perhaps it has had effects on water supply, for instance.

-

Table 8.4 shows that the indicators of sanitation are a little
more strongly correlated with national IQ than the indicators of
water (cf. Table 8.2). The explained part of variation varies from
41 to 60 percent. Differences between the three groups of
countries are relatively small, although the correlations are
strongest in the group of countries with more than one million
inhabitants. It should be noted that the correlations between
national IQ and Sanitation-08 are negative because Sanitation-08
concerns the percentage of the population without access to
improved sanitation services (see section 2). (p. 261)

i understand their wish to stay true to the source’s numbers, but i wud have prefered if they had multiplied the numbers by -1 to make them fit with the direction of the other numbers.

-

Row 7 gives a low but statistically significant positive
correlation of 0.18 between national IQ and son preference. This
may be a surprising result, because it might be expected that
liberal and more modern populations would not have such a
strong preference for sons as more traditional peoples. (p. 273)

surprising indeed.

-

Consistent with Frazer’s analysis, it has been found in a
number of studies of individuals within nations that there is a
negative relationship between intelligence and religious belief.
This negative relationship was first reported in the United States
in the 1920s by Howells (1928) and Sinclair (1928), who both
reported studies showing negative correlations between
intelligence and religious belief among college students of -0.27
to -0.36 (using different measures of religious belief). A number
of subsequent studies confirmed these early results, and a review
of 43 of these studies by Bell (2002) found that all but four found
a negative correlation. To these can be added a study in the
Netherlands of a nationally representative sample (total N=1,538)
that reported that agnostics scored 4 IQs higher than believers
(Verhage, 1964). In a more recent study Kanazawa (2010) has
analyzed the data of the American National Longitudinal Study of
Adolescent Health, a national sample initially tested for
intelligence with the PPVT (Peabody Picture Vocabulary Test) as
adolescents and interviewed as young adults in 2001-2
(N=14,277). At this interview they were asked: “To what extent
are you a religious person?” The responses were coded “not
religious at all”, “slightly religious”, “moderately religious”, and
“very religious”. The results showed that the “not religious at all”
group had the highest IQ (103.09), followed in descending order
by the other three groups (IQs = 99.34, 98.28, 97.14). The
negative relationship between IQ and religious belief is highly
statistically significant. (p. 278)

the Bell article sounds interesting, but after spending some time trying to locate it, i failed. it seems that im not the only one having such problems.

regardless of that, there was a similar article: “The Effect of Intelligence on Religious Faith,” Free Inquiry, Spring 1986: (1). There is an online parafrase of it here.

-

this is one of the interesting datasets that id love to see a nonlinear function fitted to. i want to know how much we need to boost intelligence to almost remove religiousness. perhaps one can discover this from using high-IQ samples. at which IQ are there <5% religious people?
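as a sketch of what such a fit could look like: a logistic curve for the share of religious people as a function of IQ. the coefficients below are invented placeholders, not estimates from any dataset; they only show how one wud read off the <5% point once a real fit is in hand.

from scipy.special import expit, logit

a, b = 14.0, -0.13          # hypothetical intercept and slope of a fitted logistic model

def p_religious(iq):
    """Predicted share of religious people at a given IQ (toy model)."""
    return expit(a + b * iq)

# solve expit(a + b*iq) = 0.05  =>  iq = (logit(0.05) - a) / b
iq_5pct = (logit(0.05) - a) / b
print("predicted share religious at IQ 100: %.2f" % p_religious(100))
print("IQ at which <5%% are predicted to be religious: %.1f" % iq_5pct)

with these made-up numbers the answer comes out around IQ 130; the real answer obviously depends on the coefficients one actually estimates.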

-

another of those tables that have problems with the direction. Legatum and Newsweek shud correlate positively with each other, right? since they are measuring in the same direction, that is, the one opposite of HDI and IHC (which correlate positively).

-

LV mention the 2008 study by Kanazawa: Temperature and evolutionary novelty as forces behind the evolution of general intelligence. The interesting thing about this study is that it sort of tests the idea that i wrote about earlier. Kanazawa goes on with his novelty hypothesis using distance from Africa to predict national IQs. However, compared with the Ashraf and Galor (2012) paper, he just uses bird distance instead of actual travel distance (humans are not birds, after all, nor did they just sail straight from Africa to populate America). So im not really sure what his computed r’s are useful for. It wud be interesting to add the distance and genetic diversity data from the Ashraf and Galor (2012) paper to the climate model. LV do mention at one point that lack of genetic diversity makes evolution slower:

A further
anomaly is that the Australian Aborigines inhabit a relatively
warm region but have small brain sizes and low IQs. The
explanation for this anomaly is that these were a small isolated
population numbering only around 300,000 at the time of
European colonization, so the mutant alleles for higher IQs did
not appear in them. (p. 381)

consider also the criticism of Kanazawa’s paper in Why national IQs do not support evolutionary theories of intelligence, Wicherts et al (2009):

5. Migration and geographic distance

Kanazawa (2008) was concerned with the relation between levels of general intelligence, as they were distributed geographically thousands of years ago, and the degree of “evolutionary novelty” of the relevant geographic locations. Lacking data regarding evolutionary novelty, Kanazawa proposed, as a measure of evolutionary novelty, the geographic distance to the EEA, i.e., a large region of sub-Saharan Africa. The idea is that the greater the distance from the EEA, the more evolutionarily novel the corresponding environment. There are several problems with this operationalization.

First, Kanazawa operationalized geographic distance using Pythagoras’ first theorem (a² + b² = c²). However, Pythagoras’ theorem applies to Euclidian space, not to the surface of a sphere. Second, even if these calculations were accurate, distances as traveled on foot do not in general correspond to distances “as the crow flies” (Kanazawa 2008, p. 102). According to most theories, ancestors of the indigenous people in Australia (i.e., the Aborigines) moved out of Africa on foot. They probably crossed the Red Sea from Africa to present day Saudi Arabia, went on to India, and then through Indonesia to Australia. Thus the distance covered on foot must have been much larger than the distances computed by Kanazawa. This suggests that the real distances covered by humans to reach a given location, i.e., data of central interest to Kanazawa, are likely to differ appreciably from the distances as the crow flies. One can avoid this problem by using maps that exist of the probable routes that humans followed in their exodus from Africa, and estimating the distances between the cradle of humankind and various other locations accordingly (Relethford, 2004).

Third, it is not obvious that locations farther removed from the African Savannah are geographically and ecologically more dissimilar than locations closer to the African Savannah. For instance, the rainforests of central Africa or the mountain ranges of Morocco are relatively close to the Savannah, but arguably are more dissimilar to it than the great plains of North America or the steppes of Mongolia. In addition, some parts of the world were quite similar to the African savannas during the relevant period of evolution (e.g., Ray & Adams, 2001). Clearly, there is no strict correspondence between evolutionary novelty and geographic distance. This leaves the use of distances in need of theoretical justification. It is also noteworthy that given the time span of evolutionary theories, it is hardly useful to speak of environmental effects as if these were fixed at a certain geographical location.

People migrate, and have done so extensively in the time since the evolutionary period relevant to the evolutionary theories by Kanazawa and others. A simple, yet imperfect, solution to this problem is to use data solely from countries that have predominantly indigenous inhabitants (Templer, 2008; Templer & Arikawa, 2006). However, Kanazawa used national IQs of all countries in Lynn and Vanhanen’s survey, including Australia and the United States. This casts further doubt on the relevance of Kanazawa’s data vis-à-vis the evolutionary theories that he set out to test. Given persistent migration, it is likely that many of the people, whose test scores Lynn and Vanhanen used to calculate national IQs, are genetically unrelated to the original inhabitants of their respective countries. In at least 50 of the 192 countries in Kanazawa’s (2008) study, the indigenous people represent the ethnic minority.
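a rough illustration of the route point, with my own crude waypoint coordinates (not Relethford's route data): the crow-flies great-circle distance from East Africa to Australia understates a plausible walking route via Arabia, India and Indonesia, and the flat-plane Pythagoras formula the paper criticizes does not even respect the curvature of the earth.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

eea = (1.0, 37.0)                       # roughly Kenya
route = [eea,
         (15.0, 44.0),                  # across the Red Sea into Yemen
         (22.0, 78.0),                  # India
         (-2.0, 110.0),                 # Indonesia
         (-25.0, 135.0)]                # central Australia

direct = haversine_km(*eea, *route[-1])
walked = sum(haversine_km(*a, *b) for a, b in zip(route, route[1:]))
print("crow-flies EEA -> Australia: %.0f km" % direct)
print("route via Arabia/India/Indonesia: %.0f km" % walked)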

Via Steve Sailer.

The Out of Africa Hypothesis, Human Genetic Diversity, and Comparative Economic Development

Quamrul Ashraf and Oded Galor

Abstract
This research advances and empirically establishes the hypothesis that, in the course of the prehistoric exodus of Homo sapiens out of Africa, variation in migratory distance to various settlements across the globe affected genetic diversity and has had a long-lasting hump-shaped effect on comparative economic development, reflecting the trade-offs between the beneficial and the detrimental effects of diversity on productivity. While intermediate levels of genetic diversity prevalent among Asian and European populations have been conducive for development, the high diversity of African populations and the low diversity of Native American populations have been detrimental for the development of these regions.

A very interesting paper. As can be seen in the link, it receives the usual backlash of dumbness.

The interesting thing about this that wasnt explored – even tho it screamed to be explored – is how it works together with Lynn’s worldwide IQ data. Lynn’s theory of cold climate has difficulties explaining why the arctic people are not smarter than they are. They are by no means dumb like africans, but they shud be smarter than they are going by the latitude and climate theories. I suggested that this might be due to inbreeding in small populations. Perhaps. Perhaps its due to less genetic variation. It shud be possible to run a multiple regression analysis, and see how these two together explain IQ and income per capita.
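the regression i have in mind, sketched with hypothetical file and column names; the squared diversity term is there so a hump shape of the kind Ashraf & Galor find for income can show up, if it is there.

import numpy as np
import pandas as pd

df = pd.read_csv("iq_climate_diversity.csv")   # columns: country, iq, temp, diversity
y = df["iq"].to_numpy(float)
temp = df["temp"].to_numpy(float)              # e.g. mean winter temperature (Lynn's cold-climate variable)
div = df["diversity"].to_numpy(float)          # expected heterozygosity as in Ashraf & Galor

X = np.column_stack([np.ones(len(df)), temp, div, div ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print("coefficients (intercept, temp, diversity, diversity^2):", np.round(beta, 3))
print("R^2 = %.2f" % r2)

one cud rerun the same thing with log income per capita as the outcome to see whether IQ, climate and diversity tell the same story there.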

The theory behind this is: first, humans were in Africa, then they migrated out to live in other places. These other places differed in environment, by coldness among other things. Those that lived in colder places were under (stronger) selection pressure for intelligence. How fast this adaptation happens is controlled by population size and genetic variation in the populations.

Very strange that the paper does not even cite Lynn, or mention IQ anywhere. These seem like obvious candidates for explaining differences in income per capita.

Against Intellectual Monopoly

In general, this is an interesting book about patents and copyright. It is at times combative in its language use, other times more neutral. I think it wud have been wiser to use less loaded terms, but it didnt bother me too much. The criticism of IPR is generally sensible, and their case persuasive and plausible, but not as plausible as the case in Patent Failure. References are sometimes missing for questionable claims, but in general there are lots of references. The reference system is annoying, as the notes are at the end of chapters and not in links (it was intended to be published as an ebook, after all) or footnotes or something of that sort.

 

Below are some more comments and a lot of quotes.

 

As usual: colored text is a quote; colored+italic text is a quote which is also a quote in the source; black text is my comments; blue text is also mine, i.e. links.

-

Why, however, should creators have the right to control

how purchasers make use of an idea or creation? This gives

creators a monopoly over the idea. We refer to this right as

“intellectual monopoly,” to emphasize that it is this monopoly over

all copies of an idea that is controversial, not the right to buy and

sell copies. The government does not ordinarily enforce

monopolies for producers of other goods. This is because it is

widely recognized that monopoly creates many social costs.

Intellectual monopoly is no different in this respect. The question

we address is whether it also creates social benefits commensurate

with these social costs.

-

Even on the desktop – open source is spreading and not

shrinking. Ten years ago there were two major word processing

packages, Word and Wordperfect. Today the only significant

competitor to Microsoft for a package of office software including

word-processing is the open source program Openoffice.

 

Or rather LibreOffice now. But there is also Google Docs, which isnt open source. It is, however, free.

-

Start with English authors selling books in the United

States in the nineteenth century. “During the nineteenth century

anyone was free in the United States to reprint a foreign

publication”10 without making any payment to the author, besides

purchasing a legally sold copy of the book. This was a fact that

greatly upset Charles Dickens whose works, along with those of

many other English authors, were widely distributed in the U.S.,

and

 

yet American publishers found it profitable to make

arrangements with English authors. Evidence before the

1876-8 Commission shows that English authors sometimes

received more from the sale of their books by American

publishers, where they had no copyright, than from their

royalties in [England]11

 

where they did have copyright. In short without copyright, authors

still got paid, sometimes more without copyright than with it.12

How did it work? Then, as now, there is a great deal of

impatience in the demand for books, especially good books.

English authors would sell American publishers the manuscripts of

their new books before their publication in Britain. The American

publisher who bought the manuscript had every incentive to

saturate the market for that particular novel as soon as possible, to

avoid cheap imitators to come in soon after. This led to mass

publication at fairly low prices. The amount of revenues British

authors received up front from American publishers often

exceeded the amount they were able to collect over a number of

years from royalties in the UK. Notice that, at the time, the US

market was comparable in size to the UK market.13

 

More broadly, the lack of copyright protection, which

permitted the United States publishers’ “pirating” of English

writers, was a good economic policy of great social value for the

people of United States, and of no significant detriment, as the

Commission report and other evidence confirm, for English

authors. Not only did it enable the establishment and rapid growth

of a large and successful publishing business in the United States;

also, and more importantly, it increased literacy and benefited the

cultural development of the American people by flooding the

market with cheap copies of great books. As an example: Dickens’

A Christmas Carol sold for six cents in the US, while it was priced

at roughly two dollars and fifty cents in England. This dramatic

increase in literacy was probably instrumental for the emergence of

a great number of United States writers and scientists toward the

end of the nineteenth century.

 

But how relevant for the modern era are copyright

arrangements from the nineteenth century? Books, which had to be

moved from England to the United States by clipper ship, can now

be transmitted over the internet at nearly the speed of light.

Furthermore, while the data show that some English authors were

paid more by their U.S. publishers than they earned in England –

we may wonder how many, and if they were paid enough to

compensate them for the cost of their creative efforts. What would

happen to an author today without copyright?

 

This question is not easy to answer – since today virtually

everything written is copyrighted, whether or not intended by the

author. There is, however, one important exception – documents

produced by the U.S. government. Not, you might think, the stuff

of best sellers – and hopefully not fiction. But it does turn out that

some government documents have been best sellers. This makes it

possible to ask in a straightforward way – how much can be earned

in the absence of copyright? The answer may surprise you as much

as it surprised us.

 

The most significant government best seller of recent years

has the rather off-putting title of The Final Report of the National

Commission on Terrorist Attacks Upon the United States, but it is

better known simply as the 9/11 Commission Report.14 The report

was released to the public at noon on Thursday July 22, 2004. At

that time, it was freely available for downloading from a

government website. A printed version of the report published by

W.W. Norton simultaneously went on sale in bookstores. Norton

had signed an interesting agreement with the government.

 

The 81-year-old publisher struck an unusual publishing

deal with the 9/11 commission back in May: Norton agreed

to issue the paperback version of the report on the day of

its public release.…Norton did not pay for the publishing

rights, but had to foot the bill for a rush printing and

shipping job; the commission did not hand over the

manuscript until the last possible moment, in order to

prevent leaks. The company will not reveal how much this

cost, or when precisely it obtained the report. But expedited

printings always cost extra, making it that much more

difficult for Norton to realize a profit.

 

In addition, the commission and Norton agreed in May on

the 568-page tome’s rather low cover price of $10, making

it that much harder for the publisher to recoup its costs.

(Amazon.com is currently selling copies for $8 plus

shipping, while visitors to the Government Printing Office

bookstore in Washington, D.C. can purchase its version of

the report for $8.50.) There is also competition from the

commission’s Web site, which is offering a downloadable

copy of the report for free. And Norton also agreed to

provide one free copy to the family of every 9/11 victim.15

 

This might sound like Norton struck a rather bad deal – one

imagines that other publishers were congratulating themselves on

not having been taken advantage of by sharp government

negotiators. It turns out, however, that Norton’s rivals were in fact

envious of this deal. One competitor in particular – the New York

Times – described the deal as a “royalty-free windfall,”16 which

does not sound like a bad thing to have.

 

Thats pretty cool!

-

Literature and a market for literary works emerged and

thrived for centuries in the complete absence of copyright. Most of

what is considered “great literature” and is taught and studied in

universities around the world comes from authors who never

received a penny of copyright royalties. Apparently the

commercial quality of the many works produced without copyright

has been sufficiently great that Disney, the greatest champion of

intellectual monopoly for itself, has made enormous use of the

public domain. Such great Disney productions as Snow White,

Sleeping Beauty, Pinocchio and Hiawatha are, of course, all taken

from the public domain. Quite sensibly, from its monopolistic

viewpoint, Disney is reluctant to put anything back. However, the

economic argument that these great works would not have been

produced without an intellectual monopoly is greatly weakened by

the fact that they were.

 

Hah! :D

-

At least in the case of sheet music, the police campaign did

not work. After a few months, police stations were filled with tons

of paper on which various musical pieces were printed. Being

unable to bring to court what was a de-facto army of “illegal”

music reproducers, the police itself stopped enforcing the

copyright law.

 

Pretty much what i suggested earlier today that we shud do with DMCA notices: just send them en masse and overwhelm the system from within. After all, companies already send out a massive amount of DMCA notices, and lots of them are bogus auto-generated ones, even tho they are liable for perjury if they are caught lying!

 

Surely, there is no intent to deceive if we do the same, since there is no intent at all involved in generating them.

-

The authors mention some obscure Catholic principle in passing. Their reference for it is to AiG. But that makes no sense: AiG is a YEC organisation, not Catholic. Catholics are theistic evolutionists, not creationists.

-

Effective price discrimination is costly to implement and

this cost represents pure waste. For example, music producers love

Digital Rights Management (DRM) because it enables them to

price discriminate. The reason that DVDs have country codes, for

example, is to prevent cheap DVDs sold in one country from being

resold in another country where they have a higher price. Yet the

effect of DRM is to reduce the usefulness of the product. One of

the reasons the black market in MP3s is not threatened by legal

electronic sales is that the unprotected MP3 is a superior product to

the DRM protected legal product. Similarly, producers of computer

software sell constrained products to consumers in an effort to

price discriminate and preserve their more lucrative corporate

market. One consequence of price discrimination by monopolists,

especially intellectual monopolists, is that they artificially degrade

their products in certain markets so as not to compete with other

more lucrative markets.

-

In recent years there have been innovative efforts to extend

the use of patents to block competitors. For example we find

 

A federal trade agency might impose $13 million in

sanctions against a New Jersey company that rebuilds used

disposable cameras made by the Fuji Photo Film Company

and sells them without brand names at a discount. Fuji said

yesterday that the International Trade Commission found

that the Jazz Photo Corporation infringed Fuji’s patent

rights by taking used Fuji cameras and refurbishing them

for resale. The agency said Jazz sold more than 25 million

cameras since August 2001 in violation of a 1999 order to

stop and will consider sanctions. Fuji, based in Tokyo, has

been fighting makers of rebuilt cameras for seven years.

Jazz takes used shells of disposable cameras, puts in new

film and batteries and then sells them. Jazz’s founder, Jack

Benun, said the company would appeal. “It’s unbelievable

that the recycling of two plastic pieces developed into such

a long case,” Mr. Benun said. “There’s a benefit to the

customer. The prices have come down over the years. And

recycling is a good program. Our friends at Fuji do not like

it.20

 

Sigh.

-

One annoying thing about this book is that it sometimes uses the misleading loaded terms that IP maximalists use, i.e. “steal an idea” instead of “copy an idea”, etc.

-

Another astounding example of American intellectual imperialism

is in – not so surprising – Iraq

 

The American Administrator of [Iraq] Paul Bremer,

updated Iraq’s intellectual property law to ‘meet current

internationally-recognized standards of protection.’ The

updated law makes saving seeds for next year’s harvest,

practiced by 97% of Iraqi farmers in 2002, the standard

farming practice for thousands of years across human

civilizations, newly illegal. Instead, farmers will have to

obtain a yearly license for genetically modified seeds from

American corporations. These GM seeds have typically

been modified from IP developed over thousands of

generations by indigenous farmers like the Iraqis, shared

freely like agricultural ‘open source.’ Other IP provisions

for technology in the law further integrate Iraq into the

American IP economy.24

 

Fucking derp.

-

The private sector has no monopoly on inadequacy.

Government bureaucrats are notorious for their inefficiency. The

U.S. Patent office is no exception. Their questionable competence

increases the cost of getting patents, but this is a small effect, and,

perhaps a good thing, rather than bad. They also issue many

patents of dubious merit. Since the legal presumption is that a

patent is legitimate unless proven otherwise, there is a substantial

legal advantage to the patent holder, who may use it for blackmail,

or other purposes. Moreover, while some bad patents may be

turned down, an obvious strategy is simply to file a great many bad

patents in hopes that a few will get through. Here is a sampling of

some of the ideas the US Patent office thought worthy of patenting

in recent years.41

 

# U.S. Patent 6,080,436: toasting bread in a toaster operating

between 2500 and 4500 degrees.

# U.S. Patent 6,004,596: the sealed crustless peanut butter and

jelly sandwich.

# U.S. Patent 5,616,089: a “putting method in which the golfer

controls the speed of the putt and the direction of the putt

primarily with the golfer’s dominant throwing hand, yet uses

the golfer’s nondominant hand to maintain the blade of the

putter stable.”

# U.S. Patent 6,368,227: “A method of swing on a swing is

disclosed, in which a user positioned on a standard swing

suspended by two chains from a substantially horizontal tree

branch induces side to side motion by pulling alternately on

one chain and then the other.”

# U.S. Patent 6,219,045, from the press release by Worlds.com:

[The patent was awarded] for its scalable 3D server

technology … [by] the United States Patent Office. The

Company believes the patent may apply to currently, in use,

multi-user games, e-Commerce, web design, advertising and

entertainment areas of the Internet.” This is a refreshing

admission that instead of inventing something new,

Worlds.com simply patented something already widely used.

# U.S. Patent 6,025,810: “The present invention takes a

transmission of energy, and instead of sending it through

normal time and space, it pokes a small hole into another

dimension, thus, sending the energy through a place which

allows transmission of energy to exceed the speed of light.”

The mirror image of patenting stuff already in use: patent stuff

that can’t possibly work.

 

I had thought of the same shotgun style idea.

-

That monopoly is generally bad for society is well

accepted. It is not surprising that the same should be true of

intellectual monopoly: the evidence presented here is no more than

the tip of the iceberg. Many other inefficiencies, bad business

practices, technological regressions, etc. are documented daily by

the press. These are a consequence of the especially strong form of

monopoly power that current IP legislation bestows upon patent

and copyright holders. We insist on documenting and discussing a

subset of these facts for the simple reason that we have become so

accustomed to them that we are inclined to take them for granted. Yet

these inefficiencies are not natural – they are manmade, and we

need not choose to tolerate them. We argue in later chapters that

neither patents nor copyright succeed in fostering innovation and

creativity. So we must ask: what is the point of keeping institutions

that provide so little good while inflicting so much harm?

-

Examples of individual creativity abound. An astounding

example of the impact of copyright law on individual creativity is

the story of Tarnation.120

 

Tarnation, a powerful autobiographical documentary by

director Jonathan Caouette, has been one of the surprise

hits of the Cannes Film Festival – despite costing just $218

(£124) to make. After Tarnation screened for the second

time in Cannes, Caouette – its director, editor and main

character – stood up. […] A Texan child whose mother was

in shock therapy, Caouette, 31, was abused in foster care

and saw his mother’s condition worsen as a result of her

treatment.” He began filming himself and his family aged

11, and created movie fantasies as an escape. For

Tarnation, he has spliced his home movie footage together

to create a moving and uncomfortable self-portrait. And

using a home computer with basic editing software,

Caouette did it all for a fraction of the price of a

Hollywood blockbuster like Troy. […] As for the budget,

which has attracted as much attention as the subject

matter, Caouette said he had added up how much he spent

on video tapes – plus a set of angel wings – over the years.

But the total spent will rise to about $400,000 (£230,000),

he said, once rights for music and video clips he used to

illustrate a mood or era have been paid for.9

 

Yes, you read this right. If he did not have to pay the copyright

royalties for the short clips he used, Caouette’s movie would have

cost a thousand times less.

-

The most disturbing feature of the DMCA is section 1201,

the anti-circumvention provision. This makes it a criminal offense

to reverse engineer or decrypt copyrighted material, or to distribute

tools that make it possible to do so. On July 27, 2001, Russian

cryptographer Dmitri Sklyarov had the dubious honor of being the

first person imprisoned under the DMCA. Arrested while giving a

seminar publicizing cryptographical weaknesses in Adobe’s

Acrobat Ebook format, Sklyarov was eventually acquitted on

December 17, 2002.

The DMCA has had a chilling effect on both freedom of

speech, and on cryptographical research. The Electronic Frontier

Foundation (EFF) reports on the case of Edward Felten and his

Princeton team of researchers

 

In September 2000, a multi-industry group known as the

Secure Digital Music Initiative (SDMI) issued a public

challenge encouraging skilled technologists to try to defeat

certain watermarking technologies intended to protect

digital music. Princeton Professor Edward Felten and a

team of researchers at Princeton, Rice, and Xerox took up

the challenge and succeeded in removing the watermarks.

 

When the team tried to present their results at an academic

conference, however, SDMI representatives threatened the

researchers with liability under the DMCA. The threat

letter was also delivered to the researchers’ employers and

the conference organizers. After extensive discussions with

counsel, the researchers grudgingly withdrew their paper

from the conference. The threat was ultimately withdrawn

and a portion of the research was published at a

subsequent conference, but only after the researchers filed

a lawsuit.

 

After enduring this experience, at least one of the

researchers involved has decided to forgo further research

efforts in this field.13

 

Disgusting!

-

The DMCA is not just a threat to economic prosperity and

creativity, it is also a threat to our freedom. The best illustration is

the recent case of Diebold, which makes computerized voting

machines now used in various local, state and national elections.

Unfortunately, it appears from internal corporate documents that

these machines are highly insecure and may easily be hacked.

Those documents were leaked, and posted at various sites on the

Internet. Rather than acknowledge or fix the security problem,

Diebold elected to send “takedown” notices in an effort to have the

embarrassing “copyrighted” material removed from the Internet.

Something more central to political discourse than the

susceptibility of voting machines to fraud is hard to imagine. To

allow this speech to be repressed in the name of “copyright” is

frightening.

 

Perhaps this sounds cliched and exaggerated – a kind of

“leftist college kids” over-reactive propaganda. In keeping with

this tone here is a college story about the leaked documents, and

how the Diebold and the DMCA helped to teach our future

generations about the first amendment.

 

Last fall, a group of civic-minded students at Swarthmore

[... came] into possession of some 15,000 e-mail messages

and memos – presumably leaked or stolen – from Diebold

Election Systems, the largest maker of electronic voting

machines in the country. The memos featured Diebold

employees’ candid discussion of flaws in the company’s

software and warnings that the computer network was

poorly protected from hackers. In light of the chaotic 2000

presidential election, the Swarthmore students decided that

this information shouldn’t be kept from the public. Like

aspiring Daniel Ellsbergs with their would-be Pentagon

Papers, they posted the files on the Internet, declaring the

act a form of electronic whistle-blowing. Unfortunately for

the students, their actions ran afoul of the 1998 Digital

Millennium Copyright Act (D.M.C.A.), [...] Under the law,

if an aggrieved party (Diebold, say) threatens to sue an

Internet service provider over the content of a subscriber’s

Web site, the provider can avoid liability simply by

removing the offending material. Since the mere threat of a

lawsuit is usually enough to scare most providers into

submission, the law effectively gives private parties veto

power over much of the information published online — as

the Swarthmore students would soon learn.

 

Not long after the students posted the memos, Diebold sent

letters to Swarthmore charging the students with copyright

infringement and demanding that the material be removed

from the students’ Web page, which was hosted on the

college’s server. Swarthmore complied. [...]19

 

The story did not end there, nor did it end too badly. The

controversy went on for a while. The Swarthmore students held

their ground and bravely fought against both Diebold and

Swarthmore. They managed to create enough negative publicity

for Diebold and for their liberal arts college, that Diebold

eventually had to back down and promise not to sue for copyright

infringement. Eventually the memos went back on the net.

All’s well that ends well? When the wise man points at the

moon, the dumb man looks at the finger.

-

Economists refer to the net benefit to society from an

exchange as “social surplus.” With intellectual property the

innovator collects a share of the social surplus she generates,

without intellectual property the innovator collects a smaller share:

this is the competitive value of an innovation. When such

competitive value is enough to compensate the innovator for the

cost of creation the allocation of resources is efficient, neither too

few nor too many innovations are brought about, and social surplus

is maximized. One can show mathematically that, under a variety

of competitive mechanisms, the private value accruing to an

innovator increases with the social surplus: inventors of better

gadgets make more money. This is true even when the private

value becomes a smaller share of the social surplus as the latter

increases.

 

Notice that we insist on “a share of the social surplus”, not

the entire surplus. Contrary to what many pundits repeat over and

over, there is nothing terrifying about this: even under intellectual

monopoly innovators receive a less than 100% share of the social

surplus from innovation, the rest going to consumers. Under

competition, for those innovations that are produced, both

consumers and imitators receive a portion of the social surplus an

innovation generates, and such portion is strictly larger than in the

previous case. These pundits use the jargon “uncompensated

spillovers” to refer to the social surplus accruing to those besides

the original innovator. There is nothing wrong with such

spillovers, however. That competitive markets do allow for social

surplus to accrue to people other than producers is, indeed, one of

their most valuable features, at least from a social perspective; it is

what makes capitalism a good system also for the not-so-

successful among us. The goal of economic efficiency is not that of

making monopolists as rich as possible, in fact: it is almost the

opposite. The goal of economic efficiency is that of making us all

as well off as possible. To accomplish this producers must be

compensated for their costs, thereby providing them with the

economic incentive of doing what they are best at doing. But they

do not need to be compensated more than this. If, by selling her

original copy of the idea in a competitive market and thereby

establishing the root of the tree from which copies will come, the

innovator earns her opportunity cost, that is: she earns as much or

more than she could have earned while doing the second best thing

she knows how to do, then efficient innovation is achieved, and we

should all be happy.

 

This no-copyright-at-all position is interesting. Notice how it instantly solves all problems with sampling. Under a for-profit-only copyright, sampling is difficult to deal with.
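To make the surplus-sharing point above concrete, here is a minimal numerical sketch (my own made-up numbers, not the authors'): linear demand, constant marginal cost, and the innovator's competitive return collapsed into a single hypothetical first-mover rent. It only illustrates the bookkeeping: under monopoly the innovator's share is bigger but total surplus is smaller; under competition most of a larger surplus goes to consumers.

```python
# Toy surplus accounting under monopoly vs. competition.
# All numbers are invented for illustration; this is not the authors' model.

A, c = 100.0, 20.0                 # linear demand P = A - Q, constant marginal cost c

# Monopoly: set marginal revenue equal to marginal cost, A - 2Q = c
Q_m = (A - c) / 2                  # quantity sold by the monopolist
P_m = A - Q_m                      # monopoly price
profit_m = (P_m - c) * Q_m         # innovator's share of the surplus
cs_m = 0.5 * Q_m * (A - P_m)       # consumer surplus under monopoly

# Competition: price driven to marginal cost, so total surplus is the full triangle
Q_c = A - c
total_c = 0.5 * Q_c * (A - c)      # total social surplus under competition
rent_c = 200.0                     # hypothetical first-mover rent the innovator still collects

print(f"Monopoly:    innovator {profit_m:.0f}, consumers {cs_m:.0f}, total {profit_m + cs_m:.0f}")
print(f"Competition: innovator {rent_c:.0f}, consumers {total_c - rent_c:.0f}, total {total_c:.0f}")
# Monopoly:    innovator 1600, consumers 800, total 2400
# Competition: innovator 200, consumers 3000, total 3200
```

If the hypothetical rent of 200 covers the innovator's cost of creation, the competitive outcome is the efficient one described in the quote, even though the innovator keeps a much smaller share.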

-

Consider the problem of automobiles and air pollution.

When I drive my car, I do not have to pay you for the harm the

poison in my exhaust does to your health. So naturally, people

drive more than is socially desirable and there is too much air

pollution. Economists refer to this as a negative externality, and we

all agree it is a problem. Even conservative economists usually

agree that government intervention of some sort is required.

 

We propose the following solution to the problem of

automobile pollution: the government should grant us the exclusive

right to sell automobiles. Naturally, as a monopolist, we will insist

on charging a high price for automobiles, fewer automobiles will

be sold, there will be less driving, and so less pollution. The fact

that this will make us unspeakably rich is of course beside the

point; the sole purpose of this policy is to reduce air pollution. This

is of course all logically correct – but so far we don’t think anyone

has had the chutzpah to suggest that this is a good solution to the

problem of air pollution.

 

If someone were to make a serious suggestion along these

lines, we would simply point out that this “solution” has actually

been tried. In Eastern Europe, under the old communist

governments, each country did in fact have a government

monopoly over the production of automobiles. As the theory

predicts, this did indeed result in expensive automobiles, fewer

automobiles sold, and less driving. It is not so clear, however, that

it actually resulted in less pollution. Sadly, the automobiles

produced by the Eastern European monopolists were of such

miserably bad quality that for each mile they were driven they

created vastly more pollution than the automobiles driven in the

competitive West. And, despite their absolute power, the

monopolies of Eastern Europe managed to produce a lot more

pollution per capita than the West.

 

Arguments in favor of intellectual monopoly often have a

similar flavor. They may be logically correct, but they tend to defy

common sense. Ed Felten suggests applying what he calls the

“pizzaright” test. The pizzaright is the exclusive right to sell pizza

and makes it illegal to make or serve pizza without a license from

the pizzaright owner.1 We all recognize, of course, that this would

be a foolhardy policy and that we should allow the market to

decide who can make and sell pizza. The pizzaright test says that

when evaluating an argument in favor of intellectual monopoly, if

your argument serves equally well as an argument for a pizzaright,

then your argument is defective – it proves too much. Whatever

your argument is, it had better not apply to pizza.

 

Heh

-

While replacing secrecy with legal monopoly may have

some impact on the direction of innovation, there is little reason to

believe that it actually succeeds in making important secrets public

and easily accessible to other innovators. For most innovations, it

is the details that matter, not the rather vague descriptions required

in patent applications. Take for example, the controversial Amazon

one-click patent, U.S. Patent 5,960,411. The actual idea is rather

trivial, and there are a variety of ways in which one-click purchase

can be implemented by computer, any one of which can be coded

by a competent programmer given a modest investment of time

and effort. For the record, here is the detailed description of the

invention from the patent application:

 

The present invention provides a method and system for

single-action ordering of items in a client/server

environment. The single-action ordering system of the

present invention reduces the number of purchaser

interactions needed to place an order and reduces the

amount of sensitive information that is transmitted between

a client system and a server system. In one embodiment, the

server system assigns a unique client identifier to each

client system. The server system also stores purchaser-

specific order information for various potential purchasers.

The purchaser-specific order information may have been

collected from a previous order placed by the purchaser.

The server system maps each client identifier to a

purchaser that may use that client system to place an order.

The server system may map the client identifiers to the

purchaser who last placed an order using that client

system. When a purchaser wants to place an order, the

purchaser uses a client system to send the request for

information describing the item to be ordered along with its

client identifier. The server system determines whether the

client identifier for that client system is mapped to a

purchaser. If so mapped, the server system determines

whether single-action ordering is enabled for that

purchaser at that client system. If enabled, the server

system sends the requested information (e.g., via a Web

page) to the client computer system along with an

indication of the single action to perform to place the order

for the item. When single-action ordering is enabled, the

purchaser need only perform a single action (e.g., click a

mouse button) to order the item. When the purchaser

performs that single action, the client system notifies the

server system. The server system then completes the order

by adding the purchaser-specific order information for the

purchaser that is mapped to that client identifier to the item

order information (e.g., product identifier and quantity).

Thus, once the description of an item is displayed, the

purchaser need only take a single action to place the order

to purchase that item. Also, since the client identifier

identifies purchaser-specific order information already

stored at the server system, there is no need for such

sensitive information to be transmitted via the Internet or

other communications medium.28

 

As can be seen, the “secret” that is revealed is, if anything, less

informative than the simple observation that the purchaser buys

something by means of a single click. Information that might

actually be of use to a computer programmer – for example the

source code to the specific implementation used by Amazon – is

not provided as part of the patent, nor is it required to be. In fact,

the actual implementation of the one-click procedure consists of a

complicated system of subcomponents and modules requiring a

substantial amount of human capital and of specialized working

time to be assembled. The generic idea revealed in the patent is

easy to understand and “copy,” but of no practical value

whatsoever. The useful ideas are neither revealed in the patent nor

easy to imitate without reinventing them from scratch, which is what

lots of other people beside Amazon’s direct competitors (books are

not the only thing sold on the web, after all) would have done to

everybody else's benefit, had U.S. Patent 5,960,411 not

prevented them from actually doing so. Certainly it is hard to argue

that the social cost of giving Amazon a monopoly over purchasing

by clicking a single button is somehow offset by the social benefit

of the information revealed in the patent application.
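To underline the point, here is a minimal sketch of the generic mechanism the patent describes: store purchaser details once, map them to a client identifier, and let a single action complete an order. This is toy code of my own (names and structure invented), not Amazon's implementation and not derived from anything beyond the generic idea quoted above.

```python
# Minimal "single-action ordering" sketch: a client identifier is mapped to
# previously stored purchaser details, so one call places an order without
# re-sending sensitive information. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class PurchaserInfo:
    name: str
    shipping_address: str
    payment_token: str                      # stands in for stored payment details

@dataclass
class Store:
    purchasers: dict = field(default_factory=dict)   # client id -> purchaser info
    orders: list = field(default_factory=list)

    def register_client(self, client_id: str, info: PurchaserInfo) -> None:
        """Information collected during an earlier, ordinary order."""
        self.purchasers[client_id] = info

    def one_click_order(self, client_id: str, product_id: str, qty: int = 1) -> dict:
        """The single action: look up stored details and complete the order."""
        info = self.purchasers.get(client_id)
        if info is None:
            raise KeyError("single-action ordering not enabled for this client")
        order = {"product": product_id, "qty": qty,
                 "ship_to": info.shipping_address, "charge": info.payment_token}
        self.orders.append(order)
        return order

# Usage: register once, then order with one call.
store = Store()
store.register_client("client-42", PurchaserInfo("Ada", "1 Example St", "tok_abc"))
print(store.one_click_order("client-42", "book-123"))
```

A competent programmer can write something of this shape in an afternoon; the hard, valuable part is the production plumbing around it, which the patent does not disclose.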

-

What we have argued so far may not sound altogether

incredible to the alert observer of the economics of innovation.

Theory aside, what have we shown, after all? That thriving

innovation has been and still is commonplace in the absence of

intellectual monopoly and that intellectual monopoly leads to

substantial and well-documented reductions in economic freedom

and general prosperity. However, while expounding the theory of

competitive innovation, we also recognized that under perfect

competition some socially desirable innovations will not be

produced because the indivisibility involved with introducing the

first copy or implementation of the new idea is too large, relative

to the size of the underlying market. When this is the case,

monopoly power may generate the necessary incentive for the

putative innovator to introduce socially valuable goods. And the

value for society of these goods could dwarf the social losses we

have documented. In fact, were standard theory correct so that

most innovators gave up innovating in a world without intellectual

property, the gains from patents and copyright would certainly

dwarf those losses. Alas, as we noted, standard theory is not even

internally coherent, and its predictions are flatly violated by the

facts reported in chapters 2 and 3.

 

Nevertheless, when in the previous chapter we argued

against all kinds of theoretical reasons brought forward to justify

intellectual monopoly on “scientific grounds”, we carefully

avoided stating that it is never the case that the fixed cost of innovation

is too large to be paid for by competitive rents. We did not argue it

as a matter of theory because, as a matter of theory, fixed costs can

be so large as to prevent almost anything from being invented. So, by

our own admission, it is a theoretical possibility that intellectual

monopoly could, at the end of the day, be better than competition.

But does intellectual monopoly actually lead to greater innovation

than competition?

 

From a theoretical point of view the answer is murky. In

the long-run, intellectual monopoly provides increased revenues to

those that innovate, but also makes innovation more costly.

Innovations generally build on existing innovations. While each

individual innovator may earn more revenue from innovating if he

has an intellectual monopoly, he also faces a higher cost of

innovating: he must pay off all those other monopolists owning

rights to existing innovations. Indeed, in the extreme case when

each new innovation requires the use of lots of previous ideas, the

presence of intellectual monopoly may bring innovation to a

screeching halt.1

 

Difficult indeed to say on theoretical grounds alone. Only empirical data can tell.

-

On the problem of measuring innovation.

 

One important difficulty is in determining the level of

innovative activity. One measure is the number of patents, of

course, but this is meaningless in a country that has no patents, or

when patent laws change. Petra Moser gets around this problem by

examining the catalogs of innovations from 19th century World

Fairs. Of the catalogued innovations, some are patented, some are

not, some are from countries with patent systems, and some are

from countries without. Moser catalogues over 30,000 innovations

from a variety of industries.

 

Mid-nineteenth century Switzerland [a country without

patents], for example, had the second highest number of

exhibits per capita among all countries that visited the Crystal

Palace Exhibition. Moreover, exhibits from countries without

patent laws received disproportionate shares of medals for

outstanding innovations.7

 

Moser does, however, find a significant impact of patent law on

the direction of innovation

 

The analysis of exhibition data suggests that patent laws may

be an important factor in determining the direction of

innovative activity. Exhibition data show that countries without

patents share an exceptionally strong focus on innovations in

two industries: scientific instruments and food processing. At

the Crystal Palace, every fourth exhibit from a country without

patent laws is a scientific instrument, while no more than one

seventh of other countries' innovations belong to this category.

At the same time, the patentless countries have significantly

smaller shares of innovation in machinery, especially in

machinery for manufacturing and agricultural machinery.

After the Netherlands abolished her patent system in 1869 for

political reasons, the share of Dutch innovations that were

devoted to food processing increased from 11 to 37 percent.8

 

Moser then goes on to say that

 

Nineteenth-century sources report that secrecy was

particularly effective at protecting innovations in scientific

instruments and in food processing. On the other hand,

patenting was essential to protect and motivate innovations in

machinery, especially for large-scale manufacturing.9

 

Evidence that secrecy was important for scientific instruments

and food processing is provided, but no evidence is given that

patenting was actually essential to protect and motivate

innovations in machinery. Notice that in an environment in which

some countries provide patent protection, and others do not, bias

caused by the existence of patent laws will be exaggerated.

Countries with patent laws will tend to specialize in innovations

for which secrecy is difficult, while those without will tend to

specialize in innovations for which secrecy is easy. This means

that variations of patent protection would have different effects in

different countries.

 

It is interesting also that patent laws may reflect the state of

industry and innovation in a country

 

Anecdotal evidence for the late nineteenth and for the twentieth

century suggests that a country’s choice of patent laws was

often influenced by the nature of her technologies. In the

1880s, for example, two of Switzerland’s most important

industries, chemicals and textiles, were strongly opposed to the

introduction of a patent system, as it would restrict their use of

processes developed abroad.10

 

The 19th century type of innovation – small process innovations

– is of the type for which patents may be most socially beneficial.

Despite this and the careful study of economic historians, it is

difficult to conclude that patents played an important role in

increasing the rate of 19th and early 20th century innovation.

 

More recent work by Moser,11 exploiting the same data set

from two different angles, strengthens this finding – that is, that

patents did not increase the level of innovation. In her words:

“Comparisons between Britain and the United States suggest that

even the most fundamental differences in patent laws failed to raise

the proportion of patented innovations.”12 Her work appears to

confirm two of the stylized facts we have often repeated in this

book. First that, as we just mentioned in discussing the work of

Sokoloff, Lamoreaux and Khan, innovations that are patented tend

to be traded more than those that are not, and therefore to disperse

geographically farther away from the original area of invention.

Based on data for the period 1841-1901, innovation for industries

in which patents are widely used is not higher but more dispersed

geographically than innovation in industries in which patents are

not or scarcely used. Second, when the “defensive patenting”

motive is absent, as it was in 1851, an extremely small percentage

of inventors (less than one in five) choose patents as a method for

maximizing revenues and protecting intellectual property.

 

Summing up: careful statistical analyses of the 19th century’s

available data, carried out by distinguished economic historians,

uniformly show two things. Patents neither increase the rate of

innovation, nor are they the best instrument to maximize inventors'

revenue. Patents create a market in patents and in the legal and

technical services required to trade and enforce them.

 

Very interesting data.

-

Quoting this for linguistic reasons…

Nevertheless, the core idea of a unified European patent

system was not abandoned and continued to be pursued in various

forms, first under the leadership of the European Commission, and

then under the European Union. In 2000 a Community Patent

Regulation proposal was approved, which was considered a major

step toward the final establishment of a European Patent. Things,

nevertheless, did not proceed as expeditiously as the supporters of

an E.U. Patent had expected. As of 2007 the project is still, in the

words of E.U. Commissioner Charlie McCreevy, “stuck in the

mud”13 and far from being finalized. Interestingly the obstacles are

neither technical nor due to a particularly strong political

opposition to the establishment of a continent-wide form of

intellectual monopoly. The obstacles are purely due to rent-seeking

by interest groups in the various countries involved, the number of

which notoriously keeps growing. Current intellectual monopolists

(and their national lawyers) would rather remain monopolists

(legal specialists) for a bit longer in their own smaller markets than

risk the chance of loosing everything to a more powerful

monopolist (or to a foreign firm with more skilled lawyers) in the

bigger continental market.

 

That feel when reading academic books in revised editions… and they still fail to make the lose vs. loose distinction. Useless distinction. At least, they chose the most sensible spelling. The spelling loose still has a pointless and silent e at the end.

-

It could be, and sometimes is, argued that the modern

pharmaceutical industry is substantially different from the

chemical industry of the last century. In particular, it is argued that

the most significant cost of developing new drugs lies in testing

numerous compounds to see which ones work. Insofar as this is

true, it would seem that the development of new drugs is not so

dependent on the usage and knowledge of old drugs. However, this

is not the case according to the chief scientific officer at Bristol-Myers

Squibb, Peter Ringrose, who

 

told The New York Times that there were ‘more than 50

proteins possibly involved in cancer that the company was

not working on because the patent holders either would not

allow it or were demanding unreasonable royalties.'18

 

Truth-telling remarks by pharmaceutical executives aside,

there is a deeper reason why the pharmaceutical industry of the

future will be more and more characterized by complex innovation

chains: biotechnology. As of 2004, already more than half of the

research projects carried out in the pharmaceutical industry had

some biomedical foundation. In biomedical research gene

fragments are, in more than a metaphorical sense, the initial link of

any valuable innovation chain. Successful innovation chains depart

from, and then combine, very many gene fragments, and cannot do

without at least some of them. As gene fragments are in finite

number, patenting them is equivalent to artificially fabricating

what scientists in this area have labeled an “anticommons”

problem. So it seems that the impact of patent law in either

promoting or inhibiting research remains, even in the modern

pharmaceutical industry.19

-

A few additional facts may help the reader get a better

understanding of why, at the end, we reach the conclusion we do.

Sales are growing fast: at about 12% a year for most of the 1990s,

and still now at around 8% a year; R&D expenditure during the

same period has been rising at only 6%. A company such as

Novartis (a big R&D player, relative to industry’s averages) spends

about 33% of sales on promotion, and 19% on R&D. The industry

average for R&D/sales seems to be around 16-17%, while

according to the CBO [1998] report the same percentage was

approximately 18% for American pharmaceuticals in 1994;

according to PhRMA [2007] it was 19% in 2006. The point here is

not that the pharmaceutical companies are spending “too little” in

R&D – no one has managed (and we doubt anyone could manage)

to calculate what the socially optimal amount of pharmaceutical

R&D is. The point here is that the top 30 firms spend about twice

as much in promotion and advertising as they do in R&D; and the

top 30 are where private R&D expenditure is carried out, in the

industry.

 

Next we note that no more than 1/3 – more likely 1/4 – of

new drug approvals are considered by the FDA to have therapeutic

benefit over existing treatments, implying that, under the most

generous hypotheses, only 25-30% of the total R&D expenditure

goes toward new drugs. The rest, as we will see better in a

moment, goes toward the so-called "me-too" drugs. Related to this

is the more and more obvious fact that the amount of price

discrimination carried out by the top 30 firms between North

America, Europe and Japan is dramatically increasing, with price

ratios for identical drugs reaching values as high as two or three.

The designated victims, in this particular scheme, are apparently

the U.S. consumers and, to a lesser extent, the Northern European

and the Swiss. At the same time, operating margins in the

pharmaceutical industry run at about 25% against 15% or less for

other consumer goods, with peaks, for US market-based firms, as

high as 35%. The U.S. pharmaceutical industry has been topping

the list of the most profitable sectors in the U.S. economy for

almost two decades, never dropping below third place; an

accomplishment unmatched by any other manufacturing sector.

Price discrimination, made possible by monopoly power, does

have its rewards.

 

Summing up and moving forward, here are the symptoms

of the malaise we should investigate further.

• There is innovation, but not as much as one might think

there is, given what we spend.

• Pharmaceutical innovation seems to cost a lot and

marketing new drugs even more, which makes the final

price for consumers very high and increasing.

• Some consumers are hurt more than others, even after the

worldwide extension of patent protection.

 

Very interesting data. Perhaps some kind of government sponsorship could do better?
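As a back-of-the-envelope check on the growth figures quoted above (sales growing about 12% a year, R&D only about 6%), here is a small sketch of how quickly the R&D-to-sales ratio erodes if those rates persist. The starting ratio of 19% is taken from the passage; the ten-year horizon and the base index are my own arbitrary choices.

```python
# Compound the two growth rates from the passage (sales +12%/yr, R&D +6%/yr)
# and track the R&D/sales ratio. Starting ratio of 19% is from the text;
# horizon and base level are arbitrary illustration choices.

sales, rnd = 100.0, 19.0            # index sales at 100, R&D at 19% of sales
for year in range(1, 11):
    sales *= 1.12
    rnd *= 1.06
    print(f"year {year:2d}: R&D/sales = {rnd / sales:.1%}")
# After ten years the ratio has drifted from 19% down to roughly 11%:
# R&D grows much more slowly than the revenues it is supposed to justify.
```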

-

Where do Useful Drugs Come From?

Useful new drugs seem to come in a growing percentage

from small firms, startups and university laboratories. But this is

not an indictment of the patent system as, probably, such small

firms and university labs would not have put in all the effort they

did without the prospect of a patent to be sold to a big

pharmaceutical company.

 

Next there is the not so small detail that most of those

university laboratories are actually financed by public money,

mostly federal money flowing through the NIH. The

pharmaceutical industry is much less essential to medical research

than their lobbyists might have you believe. In 1995, according to

a study by two well reputed University of Chicago economists, the

U.S. spent about $25 billion on biomedical research. About $11.5

billion came from the Federal government, with another $3.6

billion of academic research not funded by the feds. Industry spent

about $10 billion.26 However, industry R&D is eligible for a tax

credit of about 20%, so the government also picked up about $2

billion of the cost of “industry” research. That was then, but are

things different now? They do not appear to be. According to

industry's own sources,27 total research expenditure by the industry

was, in 2006, about $57 billion while the NIH budget in the same

year (the largest but by no means the only source of public funding

for biomedical research) reached $28.5 bn. So, it seems, things are

not changing: private industry pays for only about 1/3rd of

biomedical R&D. By way of contrast, outside of the biomedical

area, private industry pays for more than 2/3rds of R&D.

Many infected with HIV can still recall the 1980s when no

effective treatment for AIDS was available, and being HIV

positive was a slow death sentence. Not unnaturally many of these

individuals are grateful to the pharmaceutical industry for bringing

to market drugs that – if they do not eliminate HIV – make life

livable.

 

the “evil” pharmaceutical companies are, in fact, among

the most beneficent organizations in the history of mankind

and their research in the last couple of decades will one

day be recognized as the revolution it truly is. Yes, they’re

motivated by profits. Duh. That’s the genius of capitalism -

to harness human improvement to the always-reliable yoke

of human greed. Long may those companies prosper. I owe

them literally my life.28

 

But it is wise to remember that the modern “cocktail” that is used

to treat HIV was not invented by a large pharmaceutical company.

It was invented by an academic researcher: Dr. David Ho.

-

The bottom line is rather simple: even today, more than

thirty years after Germany, Italy and Switzerland adopted patents

on drugs and a good half a century after pharmaceutical companies

adopted the policy of patenting anything they could develop, more

than half of the top selling medicines around the world do not owe

their existence to pharmaceutical patents. Are we still so certain

that valuable medicines would stop being invented if drug patents

were either abolished or drastically curtailed?

 

This is not particularly original news, though. Older

American readers may remember the Kefauver Committee of

1961, which investigated monopolistic practices in the

pharmaceutical industry.33 Among the many interesting findings

reported, the study showed that 10 times as many basic drug

inventions were made in countries without product patents as were

made in nations with them. It also found that countries that did

grant product patents had higher prices than those who did not,

again something we seem to be well aware of.

 

The next question then is, if not in fundamental new

medical discoveries, where does all that pharmaceutical R&D

money go?

Rent-Seeking and Redundancy

There is much evidence of redundant research on

pharmaceuticals. The National Institute for Health Care

Management reveals that over the period 1989-2000, 54% of FDA-

approved drug applications involved drugs that contained active

ingredients already in the market. Hence, the novelty was in

dosage form, route of administration, or combination with other

ingredients. Of the new drug approvals, 35% were products with

new active ingredients, but only a portion of these drugs were

judged to have sufficient clinical improvements over existing

treatments to be granted priority status. In fact, only 238 out of

1035 drugs approved by the FDA contained new active ingredients

and were given priority ratings on the base of their clinical

performances. In other words, about 77% of what the FDA

approves is “redundant” from the strictly medical point of view.34

The New Republic, commenting on these facts, pointedly

continues

 

If the report doesn’t convince you, just turn on your

television and note which drugs are being marketed most

aggressively. Ads for Celebrex may imply that it will enable

arthritics to jump rope, but the drug actually relieves pain

no better than basic ibuprofen; its principal supposed

benefit is causing fewer ulcers, but the FDA recently

rejected even that claim. Clarinex is a differently packaged

version of Claritin, which is of questionable efficacy in the

first place and is sold over the counter abroad for vastly

less. Promoted as though it must be some sort of elixir, the

ubiquitous “purple pill,” Nexium, is essentially

AstraZeneca’s old heartburn drug Prilosec with a minor

chemical twist that allowed the company to extend its

patent. (Perhaps not coincidentally researchers have found

that purple is a particularly good pill color for inducing

placebo effects.)35

 

Sad but ironically true, me-too or copycat drugs are largely

the only available tool capable of inducing some kind of

competition in an otherwise monopolized market. Because of

patent protection lasting long enough to make future entry by

generics nearly irrelevant, the limited degree of substitutability and

price competition that copycat drugs bring about is actually

valuable. We are not kidding here, and this is a point that many

commentators often miss in their “anti Big Pharma” crusade.

Given the institutional environment pharmaceutical companies are

currently operating in, me-too drugs are the obvious profit

maximizing tools, and there is nothing wrong with firms

maximizing profits. They also increase the welfare of consumers,

if ever so slightly, by offering more variety of choice and a bit

lower prices. Again, they are an anemic and pathetic version of the

market competition that would take place without patents, but

competition they are. The ironic aspect of me-too drugs, obviously,

is that they are very expensive because of patent protection, and

this cost we have brought upon ourselves for no good reason.

 

Very interesting. One thing I want to point out, tho, is that it may be worth it to develop drugs that work via a different route or in a slightly different form. Even tho to many people these differences make no difference medically, they can increase comfort by being administered by a different route. Compare orally taking a pill vs. getting a shot vs. suppositories. It might also be the case that some patients cannot use, for medical reasons, a given route of delivery. In such cases it is medically useful to use another route, ofc. Finally, some patients may be allergic to a drug, and in that case having a slightly different form may help.

 

But in general, I agree with the authors.

-

The Bad

Despite the fact that our system of intellectual property is

badly broken, there are those who seek to break it even further.

The first priority must be to stem the tide of rent-seekers

demanding ever greater privilege. Within the United States and

Europe, there is a continued effort to expand the scope of

innovations subject to patent, to extend the length of copyright, and

to impose ever more draconian penalties for intellectual property

violation. Internationally, the United States – as a net exporter of

ideas – has been negotiating dramatic increases in protection of

U.S. intellectual monopolists as part of free trade agreements; the

recent Central American Free Trade Agreement (CAFTA) is an

outstanding example of this bad practice.

 

There seems to be no end to the list of bad proposals for

strengthening intellectual monopoly. To give a partial list starting

with the least significant

 

# Extend the scope of patent to include sports moves and plays.2

# Extend the scope of copyright to include news clips, press

releases and so forth.3

# Allow for patenting of story lines – something the U.S. Patent

Office just did by awarding a patent to Andrew Knight for his

“The Zombie Stare” invention.4

# Extend the level of protection copyright offers to databases,

along the lines of the 1996 E.U. Database Directive, and of the

subsequent WIPO’s Treaty proposal.5

# Extend the scope of copyright and patents to the results of

scientific research, including that financed by public funds;

something already partially achieved with the Bayh-Dole Act.6

# Extend the length of copyright in Europe to match that in the

U.S. – which is most ironic, as the sponsors of the CTEA and

the DMCA in the USA claimed they were necessary to match

… new and longer European copyright terms.7

# Extend the set of circumstances in which “refusal to license” is

allowed and enforced by anti-trust authorities. More generally,

turn around the 1970's Antitrust Division wisdom that led to

the so-called "Nine No-No's" to licensing practices. Previous

wisdom correctly saw such practices as anticompetitive

restraints of trade in the licensing business. Persistent and

successful lobbying from the beneficiaries of intellectual

monopoly has managed to turn the table around, portraying

such monopolistic practices as “necessary” or even “vital”

ingredients for a well functioning patents’ licensing market.8

# Establish, as a relatively recent U.S. Supreme Court ruling in

the case of Verizon vs Trinko did, that legally acquired

monopoly power and its use to charge higher prices is not only

admissible, it “is an important element of the free-market

system" because "it induces innovation and economic growth."9

# Impose legal restrictions on the design of computers forcing

them to “protect” intellectual property.10

# Make producers of software used in P2P exchanges directly

liable for any copyright violation carried out with the use of

their software, something that may well be in the making after

the Supreme Court ruling in the Grokster case.11

# Allow the patenting of computer software in Europe – this we

escaped, momentarily, due to a sudden spark of rationality by

the European Parliament.12

# Allow the patenting of any kind of plant variety outside of the

United States, where it is already allowed.13

# Allow for generalized patenting of genomic products outside of

the United States, where it is already allowed.14

# Force other countries, especially developing countries, to

impose the same draconian intellectual property laws as the

U.S., the E.U. and Japan.15

 

-

Pharmaceuticals

Handling properly the pharmaceutical industry constitutes

the litmus test for the reform process we are advocating. Simple

abolition, or even a progressive scaling down of patent term, would

not work in this sector for the reasons outlined earlier. Reforming

the system of intellectual property in the pharmaceutical industry is

a daunting task that involves multiple dimensions of government

intervention and regulation of the medical sector. While we are

perfectly aware that knowledgeable readers and practitioners of the

pharmaceutical and medical industry will probably find the

statements that follow utterly simplistic, when not arrogantly

preposterous, we will try nevertheless. In sequential order, here is

our list of desiderata.

 

• Free the pharmaceutical industry of the stage II and III

clinical trials’ costs, which are the cost-intensive ones.

Have them financed by the NIH, on a competitive basis:

pharmaceutical companies that have completed stage I

trials, submit applications to the NIH for having stages II

and III financed. In parallel, medical clinics and university

hospitals submit competitive bids to the NIH to have the

approved trials assigned to them. Match the winning drugs

to the best bids, and use public choice common sense to

minimize the most obvious risks of capture. Clinical trial

results become public goods and are available, possibly for

a fee covering administrative and maintenance costs, to all

that request them. This would not prevent drug companies

from deciding that, for whatever reason, they carry out their

clinical trials privately and pay for them; that is their

choice. Nevertheless, allowing the public financing of

stages II and III of clinical trials – by far the largest

component of the private fixed cost associated with the

development of new drugs – would remove the biggest

(nay, the only) rationale for allowing drugs’ patents longer

than a handful of years.

 

• Begin reducing the term of pharmaceutical patents

proportionally. Should we take pharmaceuticals’ claims at

their face value, our reform eliminates between 70% and

80% of the private fixed cost. Hence, patent length should

be lowered to 4 years, instead of the current 20, without

extension. Recall that, again according to the industry,

effective patent terms are currently around 12 years from

the first day the drug is commercialized, hence we are

proposing to cut them down by 2/3, which is less than the

proportional cost reduction. To compensate for the fact that

NIH-related inefficiencies may slow down the clinical trial

process, start patent terms from the first day in which

commercialization of the drug is authorized. A ten year

transition period would allow enough time to prepare for

the new regulatory environment.

 

• Sizably reduce the number of drugs that cannot be sold

without medical prescription. For many drugs this is less a

protection of otherwise well informed consumers than a

way of enforcing monopolistic control over doctors’

prescription patterns, and to artificially increase distribution

costs, with rents accruing partly to pharmaceutical

companies and partly to the inefficient local monopolies

called pharmacies.

 

• Allow for simultaneous or independent discovery, along the

lines of Gallini and Scotchmer.29 Further, because patent

terms should be running from the start of

commercialization, applications should be filed (but not

disclosed) earlier, and mandatory licensing of “idle” or

unused active chemical components and drugs should be

introduced. In other words, make certain the following

monopolistic tactic becomes unfeasible: file a patent

application for entire families of compounds, and then

develop them sequentially over a long period of time,

postponing clinical trials and production of some

compounds until patents on earlier members of the same

family have been fully exploited.

-

 

Certainly every person with an interest in patents should read this book. It is rather clearly written, it is not overly long (260 pages), and it makes good use of illustrations. The authors take an admirably clear-headed, disinterested, empirical look at the patent system. I definitely recommend this book.

James Bessen and Michael J. Meurer – Patent Failure: How Judges, Bureaucrats, and Lawyers Put Innovators at Risk

Below are some quotes and comments on the book.

Chapter 2

Claims to veins of minerals create the third, hybrid case, where surface claims can not entirely avoid costly disputes and the tragedy of the commons might occur, even when miners hold fairly broad rights. A remarkable example is the so-called War of the Copper Kings in Butte, Montana (Glasscock 1935). The mountain standing outside of Butte was once known as the Richest Hill on Earth. It was mined for gold, silver, and, most notably, copper. The early miners at Butte exhausted the relatively small supplies of gold and silver in the 1860s and 1870s. At that point four large mining interests began to buy old claims in a search for copper ore. By the mid-1880s it was becoming clear that the mountain was laced with a rich tangle of copper veins that penetrated deep into the mountain. It was very difficult to trace the copper veins to the surface of the mountain. As a result, it did not become clear until about twenty years later who owned what copper.

Glasscock explains the source of uncertainty:

The federal mining laws . . . protect[ed] the prospector who first located an outcropping mineral vein. Such surface indication of valuable ore was known as the apex of the vein. The owner was guaranteed the right to follow that vein downward, even when it led under the holdings of claims located behind it. That would have been fine if veins were always continuous from the surface down, but too frequently they are not. They are broken or faulted, cut off here and elsewhere by worthless rock. If a vein leading down from the surface is lost near the vertical side wall of a claim, and a similar vein of identical ore is found below it or to one side in the adjoining claim, who is to decide whether the second discovery is a geological continuation of the first? Who but the courts, basing decision on the expert testimony of geologists and engineers?

The interlaced veins meant that different mining companies often dug tunnels beneath or beside the tunnels of their rivals. Occasionally, miners would break through into a neighboring tunnel. Sales (1964) reports that gun fights and chemical warfare occurred in the mines. Sales and Glasscock both suggest malicious blasting by one mining company injured miners in other mines. Glasscock reports that one company would develop its claims so that the water in its mines would drain into rivals' mines. And both writers relate that the mining companies would use inefficient extraction methods in their race to mine a contested vein before their rival was able to. Legal control over these socially harmful tactics was difficult to achieve because ownership was unclear and litigation was protracted and costly.17

-

Chapter 3

In some cases, when tangible property is taken from nature, the scope of the property rights is not so clear. In these cases, simple physical characteristics are not so useful for establishing legal boundaries because the relevant characteristics change over time or are not fully known initially (that is, they are revealed over time). The mining disputes discussed in the previous chapter make this point. Another example comes from water law. In certain jurisdictions, the right to use water from a stream running through a property depends on the consumption of others elsewhere on the streamcourse. Hence, a newcomer will need to investigate her neighbors' water use to determine whether and to what extent property rights already exist for the stream flow.

In the case of migratory wild animals, property law follows the "rule of capture": you can own what you capture, but not the stock from which it came. Thus, when someone shoots a wild duck, she does not gain rights to the flock. It is easy to see how the rule of capture promotes clear notice. Suppose the first hunter to shoot a duck in a flock actually gained ownership over the flock. It would be virtually impossible for hunters in the next county to recognize the flock was owned. Furthermore, the counterfactual property rule would invite endless disputes about who was the true owner of the flock, and which ducks belong to which flock.31

Similarly, the possession rule in patent law is designed to mitigate notice problems. Paragraph 1, Section 112 of the patent statute, United States Code Title 35, requires that the patent describe how to make and use the invention in sufficient detail so that others can do so. This "enablement" requirement makes the patentee demonstrate the practical knowledge needed to usefully own the claimed invention.32

This possession requirement allows courts to invalidate patent claims that are "too broad" insofar as the inventor did not really possess all the claimed technology. A famous example concerns patents on the light bulb. Thomas Edison was not the first inventor of the incandescent light bulb. He had many competitors, and his light bulb built on many earlier contributions.33 William Sawyer and Albon Man together obtained a light bulb patent before Edison achieved his famous invention and they sued Edison. Their patent claimed a light bulb with a "conductor of carbon, made of fibrous or textile material." Edison made a light bulb with a bamboo filament that fell within the language of the broad Sawyer and Man claim. The court ruled in favor of Edison because Sawyer and Man had actually only made a light bulb using carbonized paper as a filament. They did not make light bulbs with other filaments drawn from the wide range of fibrous and textile carbon-based filaments—in fact, most of those filaments would not work. Edison labored mightily to find a bamboo filament, which worked very well—he tried over six thousand different substances before settling on bamboo. But the Sawyer and Man patent did not describe this important detail. They possessed the specific invention of a light bulb using carbonized paper, but they did not possess the claimed knowledge to make and use all "fibrous or textile" forms of carbon, including the bamboo later discovered by Edison. Therefore, the court invalidated Sawyer and Man's claim because it claimed more than they actually possessed—they claimed technology that had not yet been invented.34

Ideally, enablement restricts patent scope so that inventors' property rights do not stray far from the invention they actually possess. In the past, inventors had to demonstrate a working prototype or scale model of the invention in order to demonstrate possession. Inventors no longer need to provide a working prototype in order to obtain a patent; the general possession requirement, however, remains central to patent law.

Thus, we are troubled by the many recent examples of patent claims that have been read broadly to cover infringing technologies that are distant from the invention actually possessed by the patent owner. Many of these infringers have arrived at significant inventions independent of any information contained in the patent at issue. Consider, for example, the following two cases.

-

Chapter 4

Perhaps one of the clearest lessons of the Cold War was that private-property and market economies can be powerful engines of economic growth and innovation. While centralized economies have mustered impressive economic efforts, especially during times of war, they have generally failed to provide a high and rapidly growing standard of living. Moreover, what they have achieved has sometimes come at a horrible human cost.

The experience of the Cold War seems to lend force to arguments that intellectual property, too, promotes economic growth and innovation. Indeed, it is now often argued that the institutions responsible for the success of Western economies are "the rule of law and private property rights, including intellectual property."1 Similarly the Intellectual Property Owners Association suggests that property-based incentives explain U.S. technological leadership: "The possibility of patent rights gives incentives to inventors and their employers to create new technology and to invest in commercializing technology. Policy makers have generally agreed that the American tradition of strong patent laws has contributed to making this country the world's technological leader, a position it has held for more than a century."2 This is a seductive argument. There is solid empirical evidence that secure property rights are conducive to economic growth. So it might seem to follow that "strong" patent laws should also promote innovation and economic growth. But what is the actual empirical evidence that patents and other forms of intellectual property are responsible for the technological leadership of the United States in particular and the West generally?

Casual observation suggests that the United States and other Western nations share both technologically advanced economies and well-developed patent systems. But this is a correlation, not evidence of causation. That is, well-developed patent systems might cause economic growth in these nations. Or it might be, instead, that successful technology companies or other groups, such as the patent bar, have lobbied for patent protection. In this latter case, economic success promotes the expansion of the patent system, not the other way around. Indeed, the patent systems in advanced nations today consist of highly sophisticated institutions supported by substantial funding. These institutions were not simply legislated, but rather developed, along with a wide variety of other legal and social institutions. Their evolution required both extensive experience and a large allocation of resources and they would seem as out of place in nineteenth-century America as they would in many of today's less-developed nations. Thus the correlation between the sophistication of a nation's technology and the sophistication of its patent system does not provide evidence of a causal link in and of itself; a more advanced analysis is required.

It might well be true, as the Intellectual Property Owners maintain, that most policymakers see a link between "strong" patent laws and U.S. technology leadership.3 But as James Boyle acerbically notes, policymakers have too often ignored empirical evidence, basing policy, instead, on "faith-based" reasoning about property rights with regard to such matters as software patents, broadcast rights, copyright term, and database rights.4

-

Of course, the economic effectiveness of all forms of property depends on details of the supporting institutions—this is evident from the disparate growth-paths of Soviet Bloc economies. But the economic effectiveness of patents might be much more sensitive to the details of the relevant institutions than are general property rights. Perhaps this is because patent law might be much more specialized, complex, and sophisticated than, say, real property law, and thus effective institutions might be more difficult to develop and maintain.

In any case, the empirical economic evidence strongly rejects simplistic arguments that patents universally spur innovation and economic growth. "Property" is not a ritual incantation that blesses the anointed with the fruits of innovation; legislation of "stronger" patent rights does not automatically mean greater innovation. Instead, the effectiveness of patents as a form of property depends critically on the institutions that implement patent law. And there appear to be important differences in the effectiveness of the implementation across different technologies and industries.

On the other hand, we can also reject the view that patents uniformly stifle innovation. In the pharmaceutical industry and in the nineteenth-century United States, we see definite evidence that patents do and did sometimes provide positive private incentives for innovation.

Of course, we have asked and answered an intentionally narrow question here. We have not asked whether the patent system is the best way to encourage innovation. Nor have we even asked whether the total net effect of the patent system is positive. Some argue, for instance, that mechanisms such as rewards or purchase contracts would be more socially efficient ways of encouraging pharmaceutical research. Others, such as Boldrin and Levine (2005), argue that even though patents provide some individuals with rewards, they are not necessary to encourage innovation and that they are socially wasteful because they make subsequent innovations more difficult. These are interesting and important questions, but we doubt that they can be answered very well at this time based strictly on the empirical evidence. That is, the evidence is inconclusive with regard to these questions.

Our approach in the following chapters is to focus on the narrower questions of whether and where today patents do function effectively as a property system, what factors affect this performance, and what institutional changes might improve the effectiveness of the patent system. We limit our inquiry to the extent that we seek to obtain definitive answers. We do, however, think that the effectiveness of patents as a property system is central in any case to some of the other considerations noted above. If the patent system can be made more effective, then this necessarily affects any comparison to alternative policies. It also affects any assessment regarding the balance between private incentives for initial innovation and those for follow-on innovations. If patents can be made to work like property, then this constitutes a powerful argument in favor of the patent system.

-

Chapter 5

Moreover, even after controlling for a wide range of variables, the more a firm spends on R&D, all else being equal, the more likely it is to be sued for infringement. This is inconsistent with the notion that infringers cheat to avoid R&D. We would expect cheaters to spend less on R&D, all things again being equal. And to the extent that R&D expenditures can be used to hide infringing technology, we would also expect greater R&D spending to be associated with a lower risk of detection. Instead, this pattern is entirely consistent with the inadvertent-infringement explanation—the more a firm invests in technology, the more it inadvertently exposes itself to patents of which it is not aware.3

The idea that patent infringers are large R&D spenders also seems to be at odds with the picture of pirates we hold from other areas of law. Copyright and trademark pirates are often small-time operators such as street vendors. They hope to "fly under the radar" of the property owners' monitoring efforts. Large retailers, on the other hand, take great pains to make sure that they are not selling counterfeit goods because any infractions would likely come to the notice of the property owners and their customers. We would expect large technology companies to take great pains to avoid infringement (as Kodak did) precisely because they are so visible.

This raises yet another point: if RIM consciously stole NTP's property, then one would expect RIM to at least make some effort to hide its crime. Instead, RIM publicized its allegedly infringing technology. RIM came to NTP's attention because of a press release that RIM put out—the functional description of RIM's product in the press release was sufficient for NTP to determine that an infringement lawsuit could be filed.

It would appear that actual evidence of hiding seems rather limited. Alleged infringers often act like RIM. For example, in lawsuits involving software, the alleged infringer typically has a publicly available product or service. Quite frequently patent holders claim that certain publicly observable product features are infringing. Moreover, the powerful reverse-engineering tools available for software mean that publicly available products can easily be checked for infringement. If most alleged infringers were cheaters, then we would expect relatively few lawsuits over publicly observable products—cheaters would avoid technologies where they could not hide their theft. But, in fact, most patent lawsuits involving software appear to involve publicly observable features and litigation rates on software patents are relatively high (Allison et al. [2004]; see also chapter 9 in the present volume). And in general, firms report that they can detect infringement in most products, but not in most processes.4 This does not seem to inhibit patent lawsuits over products relative to processes.

-

Even simple delay can impose large business costs. Consider, for example, litigation against Cyrix, a start-up firm that introduced Intel-compatible microprocessors. Intel, the dominant maker of microprocessors, sued Cyrix and the litigation lasted four years (there were multiple suits). During much of that time Cyrix had difficulty selling microprocessors to computer manufacturers because most of them were also customers of Intel and they were reluctant to buy a product that might infringe. Cyrix also had difficulty finding fabricators willing to manufacture their chips—again, for fear of being sued themselves. In the meantime, Intel responded by accelerating its development of chips that would compete against Cyrix's offerings. In the end, Cyrix won the lawsuit, but lost the war, having lost much of its competitive advantage. In effect, Cyrix lost the window of opportunity to establish itself in the marketplace. Litigation exacted a heavy toll, indeed.

Never heard of that. Fuck you Intel!

en.wikipedia.org/wiki/Cyrix

-

Chapter 8

More notable still is that some of the most successful individual inventors succeeded not because of their inventive contribution but because of their patents. Jerome Lemelson, a prolific inventor with close to 600 patents, is renowned among patent lawyers as the master of "submarine" patents—patents kept hidden for many years. Lemelson slowed the prosecution of his patents, sometimes for over twenty years.3 He waited until his technologies were independently invented and commercialized, and then he brought his patent to the surface and negotiated royalties after the potential licensees were locked into the patented technology.4 Although his patents covered breakthrough technologies such as bar-code scanning, he did not contribute these breakthroughs to society.

-

Chapter 9

In sum, patents on software are not just like other patents. The evidence shows that software patents are particularly prone to litigation and to disputes over patent boundaries, a concern that has been raised about them since the 1960s. We attribute these problems to the abstract nature of software technology; too many software patents claim all technologies with similar form or all means of achieving a result, when the actual invention is much more limited and often trivial.

Patent law has developed a number of doctrines to circumscribe abstract patent claims. Unfortunately, the Federal Circuit has set software-specific precedents that essentially remove most restrictions on abstract claims in software. Perhaps the court acted out of a desire to promote patents in a field of technology that historically had not used them. The result has been a proliferation of both software patents and lawsuits.

Software patents are not the only patents to suffer problems of abstract claims. Any technology can be claimed abstractly and, to make matters worse, the Federal Circuit has recently eroded limits on abstract patents for nonsoftware business processes and even basic scientific ideas (for example, Laboratory Corp. of America v. Metabolite Laboratories). But overall, software patents likely have a far greater influence on the performance of the patent system than do nonsoftware business processes.

Software patents are, in fact, responsible for a major share of patent lawsuits. They thus play a central role in the failure of the patent system as a whole. Any serious effort at patent reform must address these problems; a failure to deal with software patents, whether through software-specific measures or general reforms, will likely doom any reform effort. We turn to possible changes in patent policy in the next chapter.

-

Chapter 12

Real property rights, as opposed to abstract conceptions of property, have limits, however. The messy, practical details of defining boundaries, providing public notice, facilitating clearance, and so forth, place real constraints on where property can be effective. A reasonable property system recognizes such limits. A landowner gets no rights to untapped oil flowing beneath her land nor to migratory ducks who put down on it nor to the airplanes that fly over it. Property rights should be granted only when property owners can manage them efficiently, and only if third parties can effectively cope with them.

The same is true with property rights in inventions. Economics research confirms that the effectiveness of patents varies by type of invention. For example, patents have worked best where boundaries can be staked in verifiable physical characteristics, like small molecules. With many chemical patents, third parties can test alternative substances and unambiguously determine whether they fall within the patent claims or not. In this case, the boundaries are clear, disputes and litigation are relatively infrequent, and the economic benefits of patents are high.2

On the other hand, patents work poorly when they are highly abstract, claiming technologies that are not known to the patentee or not even developed at the time of application. As was seen in chapter 9 with respect to software, it is sometimes difficult, or even impossible, to distinguish which technologies are covered by abstract patent claims; not surprisingly, software patents have high litigation rates and high costs, as do patents on financial and other business inventions.

And so we return to our theme of abstraction in another guise. As with limitations on other property, the law has long recognized that there are substantive limits on which inventions can be patented, including limitations on abstract patents. Yet implementing this limitation is one of the most intractable problems facing any property rights system for inventions. Since the eighteenth century, patent law has attempted to proscribe abstract patents, but the doctrines used and their application have not always been successful or uncontroversial. It bears repeating that we do not claim to know how to craft the best policy regarding abstract patents. Yet the empirical evidence convinces us that allowing patents on “everything under the sun,” while simultaneously encouraging such patenting by relaxing non-obviousness and enablement standards for key technologies, constitutes a major departure from the policy of the past. And although this departure might sound good in the abstract, its record, like the record regarding claim construction, has been one of failure.

The problem with mistaking abstract conceptions of property for the real thing is that this substitutes rhetoric for reasoned policy, where performance can be measured, evaluated, and adjusted. The result is policy that loses touch with reality. In the worst case, abstract rhetoric about property rights or about the sanctity of the patent statute simply provides cover for special interests.3 The antidote is empirical evidence, and the evidence we have assembled unequivocally shows an all-too-real patent system far removed from the ideal found in so much of the rhetoric. But the picture we paint is also far removed from what the patent system could be.

-

We think the historical record is clear—the patent system can perform well, and it can perform badly. The legal and institutional details are critical. So is the economic and technological environment. Like other times in American history, we face a challenge today to improve the performance of the patent system. Yet the data in figure 12.1 give us pause. The challenge facing the patent system today might be more difficult and the stakes might be higher than they have been in the past. A unitary patent system simply cannot survive if it works well in some industries, but fails critically in others. If patent institutions prove inflexible, then perhaps we will be left with a patent system for chemicals and pharmaceuticals and little else. In any case, the future of the patent system will depend on getting beyond rhetoric and abstract thinking to build institutions that improve patent notice, even if this comes with realistic limits on what can be patented and how it can be claimed. Then, perhaps, the patent system can deliver on its promise as a property system for inventions.

These are the closing words.

A friend of mine ‘told’ me (by letting the book lie on his table when i came to visit him!) about this book, Fooled by Randomness. If i weren’t busy reading another book, i wud definitely read this one right away. It’s short (220 p.) and deals with things i find very interesting. The author also has a website with some further useful information and free papers. And, it seems that he has written another interesting book, The Black Swan.

I found an ebook version on torrent. Altho for convenience, here it is (i edited it for easier reading): Fooled by Randomness – Role of Chance in Markets and Life PROPER