Clear Language, Clear Mind

April 27, 2018

Review: Cognitive Capitalism (Heiner Rindermann)

Heiner was kind enough to send me a reviewer’s (paper) copy. Unfortunately, I lack the time to write up a formal book review, and so this blogpost will have to do. James Thompson already has his review up. The book description is:

Nations can vary greatly in their wealth, democratic rights and the wellbeing of their citizens. These gaps are often obvious, and by studying the flow of immigration one can easily predict people’s wants and needs. But why are there also large differences in the level of education indicating disparities in cognitive ability? How are they related to a country’s economic, political and cultural development? Researchers in the paradigms of economics, psychology, sociology, evolution and cultural studies have tried to find answers for these hotly debated issues. In this book, Heiner Rindermann establishes a new model: the emergence of a burgher-civic world, supported by long-term background factors, furthered education and thinking. The burgher-civic world initiated a reciprocal development changing society and culture, resulting in past and present cognitive capital and wealth differences. This is an important text for graduate students and researchers in a wide range of fields, including economics, psychology, sociology and political science, and those working on economic growth, human capital formation and cognitive development.

As you can surmise, it is a kind of more complicated version of Garett Jones's recent book Hive Mind. But Heiner spent years writing this book, so it is wildly comprehensive at 576 pages, of which 50 are references. The book aims to give a broad review of most intelligence research, but has a strong focus on national differences. Unlike Stuart Ritchie's recent textbook, Heiner does not shy away from the controversial matter of group differences. He is not happy with his politically correct colleagues who have been blocking scientific progress on these matters. In particular, he takes stabs at Robert Sternberg and Stephen Jay Gould. The latter case is obvious, but the words are harsher than normally seen. In a discussion of non-epistemic motivations (those not aimed at truth-seeking), he writes:

One example could be the book by Stephen Jay Gould (1981, pp. 50-69), The Mismeasure of Man. In this book, which is still credited to some extent by the public and by some ‘scientists’, Gould alleged different researchers had dishonest motives, particularly having cheated due to racist motives. One ‘case’ for him was Samuel George Morton (1799-1851), an American physician, natural scientist and anthropologist from Philadelphia. Using craniometry, Morton came to the result that Europeans have on average larger brains than Native Americans and Africans. Gould claimed that this result was based on an unconscious manipulation of the data due to ‘prejudices’ (‘finagling’). However, a student (Michael, 1988) has checked Morton’s data and found no systematic error, only deficits in precision. John Michael sent his results to Gould but he never reacted. But dealing with the results of others and dealing with questioning and critique is essential for an epistemic attitude (searching truth) and scientific progress (finding new truth). And not dealing with them, except for time and cognitive constraints, hints at a non-epistemic attitude (pursuing other aims than truth).

It is psychologically interesting that Gould alleged that others were biased in their research when he was himself biased. Projection is an indicator of a poorly integrated cognitive system. E.g., Blinkhorn (1982, p. 506) on Gould:

The theme of this [Gould’s] particular book is that since science is embedded in society, one must expect to find the prejudices of the age presented by scientists as fact. Most authors, given such a theme, would be content to document and catalogue instances in support of the proposition. Gould, however, goes one better by writing a book which exemplifies its own thesis. It’s a masterwork of propaganda, research in the service of a point of view rather than from a fund of knowledge.

Or Carroll (1995, p. 122), who sees Gould not only as prejudiced, but as producing prejudices:

His [Gould’s] account of the history of mental testing, however, may be regarded as badly biased, and crafted in such a way as to prejudice the general public and even some scientists against almost any research concerning human cognitive abilities. [p. 111-112]

In reference to Robert Sternberg’s abuses, someone came up with a triangle theory of Sternberg.

The criticism of Sternberg is particularly well-timed given that he is currently in trouble: it has been revealed that he has been engaging in massive self-citation, text recycling (copy-pasting his own words verbatim over and over), and abuse of editor roles to publish his own stuff (citing himself >40 times in short papers). The situation is still developing. After giving two examples of Sternberg giving a so-called moral reading of someone else's writing, Rindermann writes:

Sternberg’s comments attribute value judgments and motives to Hunt that he did not state, and they do so in a manner that leaves a disparaging impression of him (see also the critique by Coyle et al., 2013). Hereafter, an unethical and unsubstantiated criticism of scientists with a feeling of moral superiority should be dubbed ‘to sternberg’.

A fun fact of history comes in handy here. Sternberg loves things that come in threes: the triangular theory of love, the triarchic theory of intelligence. He really dislikes g factor models, immaturely referring to Jensen as a child who refuses to leave his house (of g). It turns out that, historically, there was another group of people who disliked so-called theoretical intelligence and were heavily into practical intelligence. These people were called Nazis, and they viewed the aforementioned things as Jewish intellectualizing (not entirely wrong: Wechsler was Jewish). It seems that no German-speaking intelligence researcher had bothered to do the homework and discover these inconvenient facts about history before now. It should be mentioned, as Rindermann does, that these historical facts about who was against this or that should matter nothing for our current beliefs about reality. One cannot use guilt-by-association reasoning to get at the truth.

Rindermann's main focus is, as mentioned, on national comparisons, and most of the book is about them. He gives a very broad introduction to thinking about these matters, covering macroeconomics, historical models, and Piagetian psychology (especially the work of Georg W. Oesterdiekhoff). He introduces the reader to the technical aspects of comparing scholastic ability scores to IQ studies, and to the role of the 95th centile (the intellectual class) in his thinking. Rindermann faults researchers like Richard Lynn for relying on mere correlations between variables; he prefers path models instead. These are usually also cross-sectional and so only somewhat more informative than bivariate associations, but they can be used to see the general patterns of relationships between variables if one is willing to make strong assumptions about the causal network. In my view, Rindermann overuses path models and overestimates their utility for getting at causality, but they are a step up from mere correlations. One can use cross-lagged path models to get a better view, but this is generally hindered by the lack of longitudinal data. Rindermann uses this method when data are available.
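For readers unfamiliar with the method, here is a minimal sketch of a cross-lagged path model in R using the lavaan package. The data frame and variable names (national cognitive ability and log GDP per capita at two time points) are hypothetical placeholders, not Rindermann's actual data or code.

```r
# Minimal cross-lagged panel model sketch (lavaan).
# iq_t1, iq_t2, gdp_t1, gdp_t2 are hypothetical placeholders:
# national cognitive ability and log GDP per capita at two time points.
library(lavaan)

model <- '
  # autoregressive paths (stability over time)
  iq_t2  ~ a1 * iq_t1
  gdp_t2 ~ a2 * gdp_t1

  # cross-lagged paths: does earlier IQ predict later wealth
  # better than earlier wealth predicts later IQ?
  gdp_t2 ~ c1 * iq_t1
  iq_t2  ~ c2 * gdp_t1

  # contemporaneous covariances
  iq_t1  ~~ gdp_t1
  iq_t2  ~~ gdp_t2
'

fit <- sem(model, data = nations)   # `nations` is an assumed data frame
summary(fit, standardized = TRUE, fit.measures = TRUE)
```

The causal reading comes from comparing the standardized cross-lagged coefficients: if c1 is substantial while c2 is near zero, earlier cognitive ability predicts later wealth better than the reverse. This is still assumption-laden, but more informative than a cross-sectional correlation.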

Concerning race, Rindermann writes:

Thus the entire race issue is less scientifically relevant. [because it is a crude description of the complex population genetics underlying human differences] But it is relevant as an indicator of epistemic rationality of a person and scientist, of a scientific field and an intellectual climate. Denying the subspecies concept without denying it for other species and without comparing the applied criteria for its refutation or acceptance with the ones applied for other living beings certainly fails in epistemic rationality. The race issue is the litmus test of a scientific attitude.

He also argues that confronting race differences is necessary for any discussion of national differences, because national differences are to a large extent race differences (complicated by the mixed populations of many countries). Rindermann discusses the various pieces of evidence concerning nutritional deficiencies, stunting etc., as well as the indirect genetic evidence from skin tone associations. He doesn't come to any firm conclusions on their relative importance. Indeed, Heiner takes a fairly middle-of-the-road position on most gaps: some genetics, some environmental effects, we don't really know the exact proportions yet, but we could find out if we wanted to.

Towards the end, he compares his pluralistic big-theory approach to others, such as institutional models in economics. Generally speaking, other researchers ignore the intelligence literature completely, and he faults them for that. It seems obvious that if one ignores the most detailed literature on the most important trait for human capital, one will have a lot of problems understanding the human social inequality caused by human capital.

Lastly, he considers a variety of things one can do to improve matters. His attitude is basically go-ahead for everything, but the evidence he cites is usually hopelessly confounded by genetic factors, so I was not convinced. This was also the case for his earlier review of environmental effects on intelligence, probably because we simply do not have much good-quality evidence of environmental effects on intelligence. Some exists, however, such as Almond et al. (2009), who found that fetuses exposed to radiation seem to perform worse on tests decades later. The finding was made possible by Swedish records and the freak accident at Chernobyl. For most other cases, one will have to rely on various natural experiments using behavioral genetic designs, but these generally fail to find much effect of environments on intelligence. For instance, a sibling control study failed to find any support for breastfeeding effects (Der et al. 2006).
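To illustrate the sibling-control logic (my sketch, not the code from Der et al.): comparing siblings within the same family differences out everything siblings share, family income and parental IQ as well as roughly half of genetic variation. A minimal version in R; the data frame kids and its columns are hypothetical.

```r
# Sibling-control sketch: within-family estimate of a breastfeeding effect.
# `kids` is an assumed data frame with one row per child, with columns
# family_id, iq, breastfed (0/1), birth_order, and sex.
library(fixest)

# Naive between-family estimate (confounded by family background):
naive <- feols(iq ~ breastfed + birth_order + sex, data = kids)

# Family fixed effects: identified only from siblings who differ in
# breastfeeding status, netting out all shared family factors.
within <- feols(iq ~ breastfed + birth_order + sex | family_id, data = kids)

etable(naive, within)
```

If the naive estimate is positive but the within-family estimate is near zero, the apparent breastfeeding benefit was family background in disguise, which is essentially the pattern the review describes.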

All in all though, this is a highly informative book for people interested in intelligence research and even experts will learn many things.

Review: The Censor’s Hand (Carl E. Schneider)

Filed under: Book review, Ethics, Medicine — Emil O. W. Kirkegaard @ 06:36

Medical and social progress depend on research with human subjects. When that research is done in institutions getting federal money, it is regulated (often minutely) by federally required and supervised bureaucracies called “institutional review boards” (IRBs). Do – can – these IRBs do more harm than good? In The Censor’s Hand, Schneider addresses this crucial but long-unasked question.
Schneider answers the question by consulting a critical but ignored experience – the law’s learning about regulation – and by amassing empirical evidence that is scattered around many literatures. He concludes that IRBs were fundamentally misconceived. Their usefulness to human subjects is doubtful, but they clearly delay, distort, and deter research that can save people’s lives, soothe their suffering, and enhance their welfare. IRBs demonstrably make decisions poorly. They cannot be expected to make decisions well, for they lack the expertise, ethical principles, legal rules, effective procedures, and accountability essential to good regulation. And IRBs are censors in the place censorship is most damaging – universities.
In sum, Schneider argues that IRBs are bad regulation that inescapably does more harm than good. They were an irreparable mistake that should be abandoned so that research can be conducted properly and regulated sensibly.

Did you read Scott Alexander's blogpost on his IRB horror story and wonder whether there's more of that kind? Well, there is, and Schneider has written an entire book about it. Schneider is a rare breed of polymath, being a professor of both medicine and law (University of Michigan). The book proceeds in fairly simple steps:

  1. Contrary to a few sensationalized stories about Nazi camp experiments and the Tuskegee experiment, harm to patients in research was actually very rare before the implementation of IRBs. In fact, it's safer to be in research than in regular medical care. So, for IRBs to make sense, they must further reduce the already low levels of harm while not creating additional harm by wasting researchers' time and money, delaying useful treatments etc.
  2. Based on the available evidence, IRBs as currently practiced completely fail the above. They are expensive, slow, and arbitrary. He cites experiments where the same protocols were sent to different IRBs only to get different judgments, sometimes even with contradictory revision requirements (one demanded that children's parents be told, another demanded that they not be told).
  3. He diagnoses the problems of IRBs as stemming, paradoxically, from a lack of clear regulations; IRBs instead rely on supposedly clear but vague principles like those in the Belmont Report: respect for persons, beneficence, and justice. Furthermore, members of IRBs don't have the necessary expertise to deal with the studies they are supposed to regulate, since they are rarely subject matter experts, and indeed, some are complete laymen.
  4. He further argues that the very nature of IRBs' event licensing — needing permission before doing anything, instead of the usual being punished if you do something wrong, i.e. a reversed burden of proof — results in the ever-creeping scope of IRBs, and the system is thus fundamentally broken and cannot be fixed with reforms. IRBs were originally meant for medical research, but now try to regulate pretty much everything in social science.

He illustrates the above with a number of disturbing case studies, similar to that from Alexander’s post. Let’s start with a not extremely egregious one:

Intensive care units try to keep desperately ill people alive long enough for their systems to recover. Crucial to an ICU’s technology is the plastic tube threaded through a major vein into the central circulatory system. This “central line” lets doctors give drugs and fluids more quickly and precisely and track the patient’s fluid status better.

Every tool has drawbacks. An infected IV in your arm is a nuisance, but the tip of a central line floats near your heart and can spread bacteria throughout your body. When antibiotics fail to stop these infections, patients die. Because there is one central-line infection for every 100 or 200 patient-days, a hospital like St. Luke’s in Houston, with about 100 ICU beds, will have a central-line infection every day or two. There are perhaps 10,000 or 20,000 central-line fatalities annually, and a 2004 study estimated 28,000.

There is a well-known sequence of steps to follow to reduce these infections: (1) wash your hands, (2) don cap, mask, and gown, (3) swab the site with antibiotic, (4) use a sterile full-length drape, and (5) dab on antibiotic ointment when the line is in. Simple enough. But doctors in one study took all five steps only 62% of the time. No surprise. Doctors might forget to wash their hands. Or use an inferior alternative if the right drape or ointment is missing.

Peter Pronovost is a Johns Hopkins anesthesiologist and intensivist who proposed three changes. First, have a nurse with a checklist watching. If the doctor forgets to wash his hands, the nurse says, “Excuse me, Doctor McCoy, did you remember to wash your hands?” Second, tell the doctor to accept the nurse’s reminder—to swallow hard and say, “I know I’m not perfect. I’ll do it right.” Third, have ICUs stock carts with everything needed for central lines.

It worked. Central-line infections at Johns Hopkins fell from 11 to about zero per thousand patient-days. This probably prevented 43 infections and 8 deaths and saved $2 million. In medical research, reducing a problem by 10% is ordinarily a triumph. Pronovost almost eliminated central-line infections. But would it work in other kinds of hospitals? Pronovost enlisted the Michigan Hospital Association in the Keystone Project. They tried the checklist in hospitals big and small, rich and poor. It worked again and probably saved 1,900 lives.

Then somebody complained to OHRP that Keystone was human-subject research conducted without informed consent. OHRP sent a harsh letter ordering Pronovost and the MHA to stop collecting data. OHRP did not say they had to stop trying to reduce infections with checklists; hospitals could use checklists to improve quality. But tracking and reporting the data was research and required the patients’, doctors’, and nurses’ consent. And what research risks did OHRP identify? Ivor Pritchard, OHRP’s Acting Director, argued that

“the quality of care could go down,” and that an IRB review makes sure such risks are minimized. For instance, in the case of Pronovost’s study, using the checklist could slow down care, or having nurses challenge physicians who were not following the checklist could stir animosity that interferes with care. “That’s not likely, but it’s possible,” he said.

Basically, experimenting is okay, as long as you don't collect any data to find out whether something worked or not. An obvious example of stifling useful research. Another one:

In adult respiratory distress syndrome (ARDS), lungs fail. Patients not ventilated die, as do a third to a half of those who are. Those who survive do well, so ventilating properly means life or death.

Respirators have multiple settings (for frequency of breathing, depth of breathing, oxygen percentage, and more). The optimal combination depends on factors like the patient’s age, sex, size, and sickness. More breathing might seem better but isn’t, since excessive ventilation can tax bodies without additional benefit. Respirator settings also affect fluid balance. Too much fluid floods the lungs and the patient drowns; too little means inadequate fluid for circulatory functions, so blood pressure drops, then disappears.

In 1999, a National Heart, Lung, and Blood Institute study was stopped early when lower ventilator settings led to about 25% fewer deaths. But that study did not show how low settings should be or how patients’ fluid status should be handled. So the NHLBI got eminent ARDS specialists to conduct a multisite randomized trial of ventilator settings and fluid management.

In November 2001, two pulmonologists and two statisticians at the NIH Clinical Center sent OHRP a letter criticizing the study design. Pressed by OHRP, the NHLBI suspended enrollment in July 2002: the federal institute with expertise in lung disease bowed to an agency with no such expertise. NHLBI convened a panel approved by OHRP. It found the study well-designed and vital. OHRP announced its “serious unresolved concerns” and demanded that the trials remain suspended. Meanwhile, clinicians had to struggle.

Eight months later, OHRP loosed its hold, without comment on the costs, misery, and death it had caused. Rather, it berated IRBs for approving the study without adequately evaluating its methodology, risks and benefits, and consent practices. It did not explain how an IRB could do better when OHRP and NHLBI had bitterly disagreed.

And the ridiculous:

Helene Cummins, a Canadian sociologist, knew that many farmers did not want their children to be farmers because the life was hard and the income poor. She wondered about “the meaning of farm life for farm children.” She wanted to interview seven- to twelve-year-olds about their parents’ farms, its importance to them, pleasant and unpleasant experiences, their use of farm machinery, whether they wanted to be farmers, and so on.

Cummins’ REB [same as IRB] first told her she needed consent from both parents. She eventually dissuaded them. They then wanted a neutral party at her interviews. A “family/child therapist” told the REB that “there would be an inability of young children to reply to some of the questions in a meaningful way,” that it was unlikely that children would be able to avoid answering a question, and that the neutral party was needed “to ensure [again] an ethical level of comfort for the child, and to act as a witness.” Cummins had no money for an observer, thought one might discomfit the children, and worried about the observers’ commitment to confidentiality. Nor could she find any basis for requiring an observer in regulations or practice. She gathered evidence and arguments and sought review by an institutional Appeal REB, which took a year. The Appeal REB eventually reversed the observer requirement.

Farm families were “overwhelmingly positive.” Many children were eager and excited; siblings asked to be interviewed too. Children showed Cummins “some of their favorite places on the farm. I toured barns, petted cows, walked to ponds, sat on porches, and watched the children play with newborn kittens.” Cummins concluded that perhaps “a humble researcher who respects the kids who host her as smart, sensible, and desirous of a good life” will treat them ethically.

There are some links to the growing snowflake craze, namely that IRBs are tasked with protecting so-called vulnerable groups. But who exactly counts as vulnerable? Well, because IRBs want more power, they continuously expand the category to include pretty much everybody. When dealing with vulnerable groups, extra rules apply, so this is basically a power grab to extend the extra rules to just about all cases.

Regulationists’ most common questions about vulnerability are expanding IRB authority, “with the answer usually being ‘yes.’” The regulations already say that subjects “likely to be vulnerable to coercion or undue influence, such as children, prisoners, pregnant women, mentally disabled persons, or economically or educationally disadvantaged persons,” require “additional safeguards” to protect their “rights and welfare.” IRBs can broaden their authority in two ways. First, “additional safeguards” and “rights and welfare” are undefined. Second, the list of vulnerable groups is open-ended and its criteria invitingly unspecified.

Who might not be vulnerable? Children are a quarter of the population. Most women become pregnant. Millions of people are mentally ill or disabled. “Economically and educationally disadvantaged” may comprise the half of the population below the mean, the three quarters of the adults who did not complete college, the quarter that is functionally illiterate, the other quarter that struggles with reading, or the huge majority who manage numbers badly. And it is easily argued, for example, that the sick and the dying are vulnerable.

Basically, you can expect to be persuaded somewhat towards libertarianism by reading this book. It’s a prime example of inept, slow, counter-productive, expensive regulation slowing down progress for everybody.

July 21, 2017

Workers are people too: review of We wanted workers (Borjas)

Filed under: Book review, Immigration — Emil O. W. Kirkegaard @ 05:01

Next up in my review series I picked something anti-libertarian. I ended up with We wanted workers, based on a recommendation by Heiner Rindermann. It turned out to be a great choice. Borjas is my type of researcher, in his words:

One underlying theme of this book is that viewing immigrants as purely a collection of labor inputs leads to a very misleading appraisal of what immigration is about, and gives an incomplete picture of the economic impact of immigration. Because immigrants are not just workers, but people as well, calculating the actual impact of immigration requires that we take into account that immigrants act in particular ways because some actions are more beneficial than others. Those choices, in turn, have repercussions and unintended consequences that can magnify or shrink the beneficial impact of immigration that comes from the contribution to widget production.

For instance, it is self-evident that not every person in a sending country wants to be an emigrant. People often choose to stay in their place of birth, despite the sizable economic gains to be had by moving from one place to another. The movers almost certainly differ in significant ways from the stayers; they have different motivations, different capabilities, and so on. To calculate the impact of immigration correctly, it is not just a matter of counting the number of bodies that filled the slots in the proverbial widget factory. We also need to worry about which types of persons the receiving country ended up attracting. It would not be surprising if a receiving country, channeling Max Frisch’s observation through Obi-Wan Kenobi, concluded that perhaps “those were not the workers we were looking for.”

We Wanted Workers summarizes what we learn about the economic impact of immigration on the United States once we view it from this broader perspective. Although I am myself an immigrant, this is not an ideological sermon on immigration; there is no attempt to moralize or to either canonize or demonize immigrants. Instead, a recurring refrain is that the economic consequences of immigration are not evenly distributed among the many people that immigration affects. Put simply, some people win and some people lose. Devoid of all the ideological trappings and all the deliberate obfuscations, immigration can be viewed for what it plainly is: another redistributive social policy.

Under some conditions, the grand total of the gains accruing to the natives who win will exceed the grand total of the losses suffered by the natives who lose, so that immigration (like international trade) increases national wealth. It is also entirely possible that these gains could be greatly reduced or even reversed by other real-world circumstances, such as the fiscal burden that may arise from excessive immigrant participation in public assistance programs or the social costs resulting from an unassimilated foreign-born population.

Instead of leading to grand universal statements, the broader and more realistic approach forces us to think about what determines the economic impact of immigration and to isolate the various factors that can make immigration either more beneficial or more costly. That approach also helps to identify the groups that win and the groups that lose. In the end, these insights could be used to formulate an immigration policy that, if the United States wanted to, would make immigration more advantageous and would more evenly spread the gains and losses.

Paul Collier, a renowned British public intellectual and a professor at Oxford University, published a book in 2013 entitled Exodus: How Migration Is Changing Our World. Collier himself had never conducted research on immigration issues in his academic work; instead, he had written a number of influential books on such diverse topics as the impact of government aid to poor countries and the politics of global warming. The main point of Exodus is that the presumed large benefits that immigration can impart to receiving countries may be greatly reduced as the number of immigrants increases substantially and the migration flow continues indefinitely. Large and persistent flows, Collier argued, could have many other (sometimes harmful) unintended consequences.

Regardless of how one feels about this conclusion, I found it particularly insightful to read Collier’s overall perception of the many social science studies that he reviewed as he prepared to write the book:

A rabid collection of xenophobes and racists who are hostile to immigrants lose no opportunity to argue that migration is bad for indigenous populations. Understandably, this has triggered a reaction: desperate not to give succor to these groups, social scientists have strained every muscle to show that migration is good for everyone.

This is as damning a statement about the value of social science research on immigration as one can find. As far as I know, Collier is the first distinguished academic to state publicly that social scientists have attempted to construct an intricate narrative that shows the measured impact of immigration to be “good for everyone.”

I have never made such an assertion in public. But I have long suspected that a lot of the research (particularly, but not exclusively, outside economics) was ideologically motivated, and was being censored or filtered to spin the evidence in a way that would exaggerate the benefits from immigration and downplay the costs.

Many conceptual assumptions and statistical manipulations can affect the nature of the evidence. A computer program that analyzes data from a survey of millions of persons can have hundreds, if not thousands, of lines of code, and a seemingly innocuous programming assumption here or sample selection there can lead to very different results. Moreover, dissecting a published study to isolate precisely which conceptual assumption or statistical manipulation may be responsible for a specific claim involves a lot of time and effort, and there is little professional reward for playing detective.

We Wanted Workers argues that it is crucial to carefully examine the nuts and bolts of the underlying research before one can trust the claims made about the impact of immigration. I will try to make the discussion of the data that are often used in immigration research, and how those data are manipulated, as transparent as it can possibly be. The book, in fact, will provide various examples in which arbitrary conceptual assumptions, questionable data manipulations, and a tendency to overlook inconvenient facts help build the not-so-subtle narrative that Collier detected.

There is a lot more like this, but you will have to read it yourself. Now, what kind of data does Borjas present? I repost his central (IMO) figures below. [Figures omitted.]

Borjas concludes:

Finally, it is wise to be skeptical of expert opinion in politically contentious topics like immigration. The strong influence of the narrative that immigration is “good for everyone” makes it imperative that we carefully examine the nuts and bolts of exactly how we learn certain things about its impact.

Unfortunately, the nuts and bolts are often hidden in obscure technical discussions, making them inaccessible to most nonspecialists. That is why I have repeatedly attempted to clarify the underlying details. A key lesson: the nuts and bolts matter. An assumption here or a data manipulation there makes a difference, and it can often make a big difference in determining the takeaway point. Estimates of the impact of immigration and promises of what will happen are intrinsically tied to the choice of conceptual assumptions and statistical manipulations. And we should treat those choices, particularly when there exists a temptation to further an ideological narrative, with all the suspicion they deserve.

Before I turn to the evidence, let me note that I began this book by describing some of the personal circumstances that led, in a very circuitous way, to my professional interest in immigration. My role in the trenches of immigration research raises a couple of paradoxes that many readers might have detected and be curious about. Let me address the first of these puzzles now. As Paul Collier observed, social scientists “have strained every muscle” to build the politically correct narrative that immigration is good for everyone. I never did that type of heavy lifting. Nevertheless, my career somehow progressed nicely.

I was able to get away with this because the issues addressed by the main research papers I wrote on immigration—such as how to measure assimilation or how to measure the labor market impact or how to measure the economic gains for the native population—skirted the ideology that increasingly suffocates the immigration debate. They were how-to papers—technical contributions that addressed very specific questions about how to measure a number of great interest.

Many of the answers implied by my how-to papers did not support the narrative, so they might seem easy to ignore. My solutions are difficult to discount, however, because they are the solutions that follow easily from the theory and statistical methods that are widely used in mainstream economics. For better or worse, little in my research departs from the standard ways in which economists think about and measure outcomes in the labor market.

It has been three decades since I began to think about immigration seriously. What do I take away from the evidence? Which insights do I find most useful when the time comes to think about the future of immigration policy?

• Not everyone wants to move to the United States, and those who choose to move are fundamentally different from those who choose to stay behind. The nature of the selection, however, can vary dramatically from place to place. The United States will attract high-skill workers when we offer a higher payoff for their abilities, but the high-skill workers will stay behind if they can get a better deal at home. The fact that different kinds of people will want to move out of different countries (and that the skills they bring are not always transferable to the American setting) creates considerable inequality in economic outcomes across immigrant groups at the time of their arrival.

• Assimilation is not inevitable. The speed of economic assimilation—the narrowing of the gap in economic outcomes between immigrants and natives—depends crucially on conditions on the ground. Sometimes those conditions speed up the process, and sometimes they slow it down. In fact, economic assimilation today is far slower than it was two or three decades ago. This trend, however, masks crucial differences in the assimilation of different immigrant groups. Some groups assimilate very rapidly and some do not. Typically, groups that are more skilled and that do not have access to large and vibrant ethnic enclaves assimilate faster.

• The experience of the descendants of the Ellis Island–era immigrants shows that the melting pot did indeed melt away the differences in economic outcomes across those ethnic groups, but it took nearly a century for the melting pot to do its job. The same process may be starting to take place with the current mass migration, as the children of today’s immigrants earn higher wages and exhibit less ethnic inequality than their parents did. But we truly do not know how things will pan out in the next few decades, because the economic and social conditions that kept the melting pot busy throughout the 1900s may not be reproducible in the next century.

• Immigrants affect the job opportunities of natives. The laws of supply and demand apply to the price of labor just as much as to the price of gas. The data suggest that a 10 percent increase in the number of workers in a particular skill group probably lowers the wage of that group by at least 3 percent. The temptation to play with assumptions and manipulate the data, however, is particularly strong when examining this very contentious issue, so the reported effects often depend on such assumptions and manipulations. Our look inside the black box of how research is done suggests one lesson: the more one aggregates skill groups, the more likely one hides away the specific group of affected workers—making it harder to document whether immigration made anyone worse off. The more laser-focused the group of native workers examined, the easier it is to detect that immigration affected the targeted group.

• Immigrant participation in the workforce redistributes wealth from those who compete with immigrants to those who use immigrants. But because the gains accruing to the winners exceed the losses suffered by the losers, immigrants create an “immigration surplus,” a net increase in the aggregate wealth of the native population. However, the surplus is small, about $50 billion annually. That calculation also suggests a half-trillion-dollar redistribution of wealth from workers to firms. The surplus could be much larger, if there are many exceptional immigrants and if some of the unique abilities brought by those immigrants rub off on the native workforce.

• The welfare state introduces the possibility that the gains measured by the immigration surplus might disappear if immigrants are net users of social assistance programs rather than net contributors. There is little doubt that immigrants receive assistance at higher rates than natives, creating a fiscal burden in the short run. In the long run, immigration may be fiscally beneficial because the unfunded liabilities in Social Security and Medicare are unsustainable and will require either a substantial increase in taxes or a substantial cut in benefits. Immigrants expand the taxpayer base, perhaps helping to spread out the burden. It is extremely difficult to accurately measure the fiscal benefit in the long run, however, because much depends on the assumptions made about the future path of taxes and government spending.

• It is probably not too far-fetched to conclude that, at least in the short run, the economic gains captured by the immigration surplus are offset by the fiscal burden of providing public services to immigrants. Given the scale and the skill mix of the immigrants who entered our country in the past few decades, the economic impact of immigration, on average, is at best a wash. This near-zero effect conceals a substantial redistribution of wealth from workers to firms.

• The argument that open borders would exponentially increase the economic gains from immigration depends crucially on the perspective of immigrants as workers rather than immigrants as people. The multi-trillion-dollar gains promised by the proponents of open borders could quickly disappear (and even become an economic debacle) if immigrants adversely influence the social, political, and economic fabric of receiving countries. In the end, the impact of open borders will depend not only on whether the movers bring along their raw labor and productive skills, but also on whether they bring the institutional, cultural, and political baggage that may have hampered development in the poor countries.
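A quick sanity check on the wage claim in the fourth bullet: the quoted rule of thumb (a 10 percent supply increase lowers wages by at least 3 percent) implies a wage elasticity of about -0.3. Here is the arithmetic in R; the dropout wage and the 20 percent shock are my hypothetical numbers, not Borjas's.

```r
# Back-of-the-envelope wage impact using the quoted rule of thumb:
# a 10% increase in a skill group's workforce lowers its wage ~3%,
# i.e. an elasticity of roughly -0.3.
elasticity   <- -0.3
supply_shock <- 0.10

elasticity * supply_shock        # -0.03, a 3% wage decline

# Hypothetical worked case: a $30,000/year wage in a skill group
# that receives a 20% immigration-driven supply increase.
30000 * (1 + elasticity * 0.20)  # 28200, i.e. an $1,800 annual loss
```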

Readers of this blog will not be surprised by his conclusions or results. My main complaint against Borjas is that he spends too much time on economist things like supply shocks and too little time on other important outcomes like the net fiscal contribution, crime and so on, and no time on how immigration changes the media discourse for the worse. Borjas is also quiet on the ultimate causes of social inequality: between countries, between groups of people within countries, and between individuals within groups within countries. Like other economists, he mostly discusses the vague concept of skills without discussing just what those skills are. Mostly, he takes education as a proxy, but in one passage he comes close to realizing an important point:

However, the reason that originating in a rich country matters is not just that people born in those countries are better educated. Even among immigrants who have the same education, those who come from richer countries still do better. It may be that the skills acquired in industrialized economies tend to be equally useful in other industrialized economies, while the skills acquired in developing countries are less useful in an industrialized setting. Put differently, a larger fraction of the skills acquired in a rich country “survive” the move to the United States, further improving the economic performance of those immigrants.

Overall, I highly recommend this book. It is well-written and not very technical, yet it covers a lot of ground.

Austrian economics: worse than expected — Review of Democracy, the God that failed (Hoppe)

Filed under: Book review, Economics, Government form, Politics, Science, Sociology — Emil O. W. Kirkegaard @ 04:25

After reading a book defending protectionism (limits on free trade), it was time for something completely different.

So I looked around for a current, well-regarded (in their circles) Austrian libertarian economist. Because I know many ancap people, I just picked one they often mentioned: Hans-Hermann Hoppe. Looking over his Wikipedia profile, you'll probably get the general idea. The recipe: combine a Kantian synthetic a priori approach to economics with a natural law theory of morality. As the foundational principle of the moral system, choose some kind of self-ownership/property principle (not the non-aggression one, apparently). Then try to derive everything in morality and economics from these principles by deduction. Sounds crazy? Sure. As mentioned in the previous post, humans really aren't consistent, self-interested, knowledgeable, or rational enough for these kinds of deductions to make correct predictions. What does Hoppe say when faced with incorrect predictions? Naturally, he correctly deduces that:

And in accordance with common sense, too, I would regard someone who wanted to “test” these propositions, or who reported “facts” contradicting or deviating from them, as confused. A priori theory trumps and corrects experience (and logic overrules observation), and not vice-versa.

So, from that perspective, things are quite simple. We establish (to our satisfaction) some principles of economics and morality, then we just deduce the rest from there. We don't need to care about any actual sciencing involving data. Data merely serve as illustrations of the deductive results (when in agreement) or as ??? when not.

Hoppe gives some examples of a priori propositions:

Examples of what I mean by a priori theory are: No material thing can be at two places at once. No two objects can occupy the same place. A straight line is the shortest line between two points. No two straight lines can enclose a space. Whatever object is red all over cannot be green (blue, yellow, etc.) all over. Whatever object is colored is also extended. Whatever object has shape has also size. If A is a part of B and B is a part of C, then A is a part of C. 4 = 3 + 1. 6 = 2 (33 − 30).

None of them concern politics, but we might already see some problems. Some material things are in two places at once, like galaxies, which are distributed objects kept together by gravity (in fact, atoms, molecules etc. are the same way). A straight line is not always the shortest between two points: it depends on the geometry in question. It just so happens that reality is not actually Euclidean per general relativity theory, so this statement is empirically false. The same is true for the next one, about straight lines enclosing a space. Black holes, which have no extension, might send out light of certain wavelengths (Hawking radiation). I'm not sure.

Hoppe then goes on to mention some social science ones:

More importantly, examples of a priori theory also abound in the social sciences, in particular in the fields of political economy and philosophy: Human action is an actor’s purposeful pursuit of valued ends with scarce means. No one can purposefully not act. Every action is aimed at improving the actor’s subjective well-being above what it otherwise would have been. A larger quantity of a good is valued more highly than a smaller quantity of the same good. Satisfaction earlier is preferred over satisfaction later. Production must precede consumption. What is consumed now cannot be consumed again in the future. If the price of a good is lowered, either the same quantity or more will be bought than otherwise. Prices fixed below market clearing prices will lead to lasting shortages. Without private property in factors of production there can be no factor prices, and without factor prices cost-accounting is impossible. Taxes are an imposition on producers and/ or wealth owners and reduce production and/ or wealth below what it otherwise would have been. Interpersonal conflict is possible only if and insofar as things are scarce. No thing or part of a thing can be owned exclusively by more than one person at a time. Democracy (majority rule) is incompatible with private property (individual ownership and rule). No form of taxation can be uniform (equal), but every taxation involves the creation of two distinct and unequal classes of taxpayers versus tax-receiver consumers. Property and property titles are distinct entities, and an increase of the latter without a corresponding increase of the former does not raise social wealth but leads to a redistribution of existing wealth.

For an empiricist, propositions such as these must be interpreted as either stating nothing empirical at all and being mere speech conventions, or as forever testable and tentative hypotheses. To us, as to common sense, they are neither. In fact, it strikes us as utterly disingenuous to portray these propositions as having no empirical content. Clearly, they state something about “real” things and events! And it seems similarly disingenuous to regard these propositions as hypotheses. Hypothetical propositions, as commonly understood, are statements such as these: Children prefer McDonald’s over Burger King. The worldwide ratio of beef to pork spending is 2:1. Germans prefer Spain over Greece as a vacation destination. Longer education in public schools will lead to higher wages. The volume of shopping shortly before Christmas exceeds that of shortly after Christmas. Catholics vote predominantly “Democratic.” Japanese save a quarter of their disposable income. Germans drink more beer than Frenchmen. The United States produces more computers than any other country. Most inhabitants of the U.S. are white and of European descent. Propositions such as these require the collection of historical data to be validated. And they must be continually reevaluated, because the asserted relationships are not necessary (but “contingent”) ones; that is, because there is nothing inherently impossible, inconceivable, or plain wrong in assuming the opposite of the above: e.g., that children prefer Burger King to McDonald’s, or Germans Greece to Spain, etc. This, however, is not the case with the former, theoretical propositions. To negate these propositions and assume, for instance, that a smaller quantity of a good might be preferred to a larger one of the same good, that what is being consumed now can possibly be consumed again in the future, or that cost-accounting could be accomplished also without factor prices, strikes one as absurd; and anyone engaged in “empirical research” and “testing” to determine which one of two contradictory propositions such as these does or does not hold appears to be either a fool or a fraud.

Where to begin? Actually, many or even most of these are open to question. Take 'Human action is an actor's purposeful pursuit of valued ends with scarce means.' Are we defining 'human action' this way, or trying to state something true about the world? Unless one has an odd definition of 'action' in mind, not all human actions are purposeful. There's an entire category of non-purposeful actions: sneezing, hiccups, most coughing, and knee jerks. The remaining statements have similar problems.

Anyway, Hoppe does not seem to get this. Instead, he treats his derivations from such principles as matters of fact, for morality and policy. Everyone who disagrees is then morally dubious, ignorant of the facts, or deluded into thinking empirical data can overrule logic. In fact, Hoppe never formally deduces anything (i.e. with symbols or rigorous prose), presumably because he lacks actual training in logic, so why should we trust his informal, hand-waving arguments?

The rest of the book is essentially his speculations and moral condemnations of non-anarchists. The culty nature of the enterprise is revealed by the extreme reliance on a select few authors. Searching the book for 'Rothbard' yields 170 mentions and 'von Mises' 144 in a book of 330 pages (I removed uses of the latter where it referred to the publisher).
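For the curious, this kind of count takes a few lines of R; the file name below is a hypothetical placeholder, and exact counts will depend on the edition searched.

```r
# Count mentions of key authors in a PDF of the book.
# "hoppe_democracy.pdf" is a hypothetical file name.
library(pdftools)
library(stringr)

text <- paste(pdf_text("hoppe_democracy.pdf"), collapse = " ")

str_count(text, fixed("Rothbard"))
str_count(text, fixed("Mises"))
# Mentions of the publisher ("Ludwig von Mises Institute") then have
# to be subtracted from the Mises count by hand.
```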

This book is not recommended unless you have a particular interest in this breed of pseudoscience (praxeology). Among pseudosciences, it is surprisingly rarely mentioned, especially considering how anti-left-wing it is (for exceptions, see here and here).

July 10, 2017

Free trade: yay or not? (Review of Free Trade Doesn’t Work: What Should Replace It and Why)

Filed under: Book review, Government form, Political science — Emil O. W. Kirkegaard @ 00:44

Recently, I decided it was time to catch up on my to-read list. I try to read >=30 books a year, and I was behind, owing to spending a lot of time on company work. I also wanted to avoid reading too much of the same stuff, for two reasons. First, I want to avoid the confirmation bias that inevitably results from reading a lot of material that is in high agreement with itself. Second, knowledge in general has strong diminishing returns. Knowing, say, 50% about physics is almost as practically useful as knowing 90%, but knowing 50% is a lot more useful than knowing 0%. Furthermore, there are diminishing returns to knowledge accumulation itself, because new material will inevitably cover some of the same ground, meaning that you aren't learning anything new.

Taken together, I wanted to try reading something new to me. I decided on Big Politics, a topic I normally avoid because it's full of feelings, and the relevant data to decide the issues generally do not exist and in many cases could not realistically be gathered even if we were determined to do so.

I generally lean towards freedom on questions of policy, but I'm not a principled libertarian. What I have is a kind of libertarian default policy, which can be overridden by reasonable evidence that regulation/less freedom works better to further our collective goals. I've never really considered free trade, tariffs etc. (i.e. between-country trading) in detail, so I generally leaned towards free trade being good. On the other hand, macroeconomists — whose opinions people copy — tend not to be my cup of tea, essentially basing their ideas on mathematical models with totally unrealistic assumptions: pure self-interest, consistent goals, perfect rationality, consistent time preference, substitutable humans (no individual differences), unrealistic beliefs in the causal power of education in itself, and so on.

So I looked around for an anti-free-trade book. There were several to pick from. I incidentally stumbled upon this article by the author of one such book: Free Trade Doesn't Work by Ian Fletcher. It has reasonable reviews: 4.4/5 (n=69) on Amazon, 4.2/5 (n=59) on Goodreads. Good enough for me.

The book isn't technical, but it gets the job done reasonably well. Since comparative advantage à la David Ricardo is the basic foundation for most claims about the benefits of free trade, the author naturally spends his time arguing against it. Because the argument is based on a lot of assumptions and some mathematical modeling, all one has to do is attack the assumptions. If they are shown to be very wrong, then the conclusion about free trade's benefits won't follow. This doesn't establish that free trade is bad or that protectionism is good, but it's a start.
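For readers who haven't seen the model spelled out, here is a minimal numeric sketch of Ricardo's argument in R, using toy numbers of my own rather than anything from the book. The point: even when Portugal is better at producing both goods, specializing by opportunity cost saves labor worldwide, provided the assumptions hold.

```r
# Ricardo's comparative advantage with toy numbers.
# Hours of labor needed to produce one unit of each good;
# Portugal is more efficient at BOTH goods (absolute advantage).
hours <- matrix(c(100, 120,    # England:  cloth, wine
                   90,  80),   # Portugal: cloth, wine
                nrow = 2, byrow = TRUE,
                dimnames = list(c("England", "Portugal"),
                                c("cloth", "wine")))

# Opportunity cost of one unit of cloth, measured in wine forgone:
hours[, "cloth"] / hours[, "wine"]
# England 0.83 < Portugal 1.12, so England has the comparative
# advantage in cloth even though Portugal is better at both goods.

# World labor cost of 2 cloth + 2 wine without trade
# (each country makes 1 unit of each good itself):
sum(hours)                                                     # 390 hours

# With specialization along comparative advantage
# (England makes both cloth units, Portugal both wine units):
2 * hours["England", "cloth"] + 2 * hours["Portugal", "wine"]  # 360 hours
```

The same bundle of goods is produced with 30 fewer hours of world labor. Fletcher's seven assumptions, listed below, are exactly the conditions needed for this arithmetic to carry over to the real world.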

The criticism of free trade

Fletcher lists seven dubious assumptions behind the case for free trade.

  1. Free trade is sustainable. One cannot keep a negative or positive trade balance forever. Keeping a negative one means importing more than one exports, which builds up debt to foreigners. Not even keeping a positive one is always a good idea, if it's based on selling off non-renewable assets such as natural resources to foreigners. When they run out, there is nothing left for one's descendants.
  2. There are no externalities. These come in two kinds: negative and positive. The textbook negative externality is environmental pollution. If a country has lax environmental standards, one can import goods from it cheaply. However, this causes accumulated pollution in the production country. In the unlucky scenario where pollution is global in scope, it means that the world is essentially relying on the weakest environmental protection of any country. Because there are 200 countries or so in the world, probably some country or another will have misinformed, low protections. Free trade exploiting this will cause global destruction of the environment. Positive externalities are the opposite: some sectors of the economy produce spillover effects, like making it easy to break into another sector. For instance, production of materials used to manufacture computers makes it easier to break into the computer production sector because one can source locally. These factors are ignored in free trade. In the worst case, countries can get stuck in an agriculture sector. Agriculture doesn't easily lead to other sectors, but free trade will push a country towards it if wages are low and the climate warm. Ring any bells?
  3. Factors of production move easily between industries. The simplest example here is workers' skills. While some workers are able to retrain, most are not, or retraining is very costly. When trade opportunities change, free trade forces a country to move its production into a new sector. However, since workers cannot adapt as fast as circumstances change, they will instead move into unemployment or underemployment.
  4. Trade does not raise income inequality. Comparative advantage, when it holds, implies that the economy as a whole will grow, not that it will grow equally. Fletcher gives a hypothetical example of importing clothing and exporting airplanes. Suppose we start with an economy that produces both. But then we find a trade partner with lower wages that is able to produce clothes, though not planes, more cheaply. Great, so we start exporting planes and importing clothes. All good so far. But what is the distribution of jobs required for producing planes and clothes? Maybe planes require 3 high-skill workers for every 7 low-skill ones, while clothes require 1 high-skill worker for every 9 low-skill ones. By making the switch, we have increased the demand for high-skill workers and decreased the demand for low-skill workers. Since workers can't just trade places (individual differences in basic traits plus acquired skills), this will cause lower wages or unemployment among the low-skilled.
  5. Capital is not internationally mobile. Basically, capitalists won't necessarily invest in creating jobs in your own country. Rather, if wages and free trade allow it, they will invest in other countries' infrastructure. Fletcher gives the example of British engineers building railroads in other countries instead of Britain. In 1914, 35% of British-owned railroads were not in Britain. Furthermore, when capitalists move jobs to other countries, this lowers production costs (per free trade), which is good for consumers. But if they move too many jobs away, that is problematic too. Consumers and workers are the same people, and they can't consume cheaper goods if they have no jobs, or can't afford them if they have lower-paying jobs. There is no theorem that says these forces will necessarily balance out in favor of your country.
  6. Short-term efficiency causes long-term growth. Comparative advantage theory is a static model about what would happen if things balanced out in an instant. It just so happens the world is not like that; things take time. Generally speaking, one wants growth and positive change, be it in skills, income or knowledge. Comparative advantage is about being the most efficient at what one currently does. If you work as a secretary, you don't want to become the most efficient secretary; you want to build skills so you can move into a better job. Burkina Faso doesn't want to be the most efficient Burkina Faso, it wants to become something like Denmark. Comparative advantage itself does not imply anything about how one would accomplish such goals.

    It goes back to Ricardo's own example of wine and wool production. Britain produced wool and Portugal produced wine. Then they traded and everybody was happy. However, wine production did not spawn any other useful sectors. Wine production has been essentially the same for hundreds or thousands of years; it has no node above it in the tech tree. Wool production, however, led to mechanical treatment of wool, then other mechanical parts, and eventually a whole mechanical industry that led to steamships and trains. Lots of nodes in the tech tree above. If you're a country, you want to move your production towards sectors with nodes in the tech tree above them.

    A personal example of this, as applied to science. I have many co-authors. They almost inevitably want me to write the analysis because I’m so much better at it and much faster. This is the more efficient division of labor given the present distribution of abilities, and it results in us getting done with the study faster. Comparative advantage, they say. However, as I keep telling them, if we keep splitting the work up like this, they will never learn to program, and this will have long-term consequences for their output. Programming ability is a force multiplier, allowing one person to do quickly what previously took one person a long time, took many persons, or was outright impossible.

  7. Trade does not induce adverse productivity growth abroad. Trading with others might cause them to attain high growth rates, which changes their opportunity costs. These can become so high that they stop producing the cheap products you were previously importing. But by now you have lost your own production in this sector, and it’s hard to start it up again, so you’ll have to keep importing the now more expensive products if you need them. Had you kept your own production ongoing, you would not have this problem, because you would have kept the know-how in the country all along.
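
To make point 4 concrete, here is a minimal R sketch using the hypothetical numbers above (the 3/7 and 1/9 skill splits are illustrative assumptions from the example, not real data):

```r
# Each unit of output employs 10 workers.
# Planes: 3 high-skill + 7 low-skill; clothes: 1 high-skill + 9 low-skill.
jobs <- function(planes, clothes) {
  c(high = 3 * planes + 1 * clothes,
    low  = 7 * planes + 9 * clothes)
}

jobs(planes = 10, clothes = 10)  # before trade: high = 40, low = 160
jobs(planes = 20, clothes = 0)   # specialized in planes: high = 60, low = 140
```

Total labor demand is unchanged at 200, but 20 low-skill jobs have turned into high-skill jobs that the displaced low-skill workers may not be able to fill; hence lower wages or unemployment at the bottom.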

The prior

Seems pretty convincing to me, but we should be skeptical. The book itself has tons of footnotes: about 700 in total, or roughly 2 per page. I did not generally check up on them, so maybe the references are not very convincing. Maybe they are like when Nisbett cites stuff.

What about the author? Is he a well-known economist, so that we can be reasonably sure he knows his stuff? Seems not. His page lists no academic publications. He says he was educated at Columbia and the University of Chicago, but not in what. Might be a ‘Doctor’ Laura case: she speaks as if she’s a doctor of psychology or psychotherapy (a really low bar to pass!), but actually she has a PhD in physiology. On diabetes. In rats. I am of course not very hostile to outsiders (my degree is in linguistics, bachelor only!), but not taking the prior into account would be foolish.

Searching for publications of his does reveal some, mostly in obscure outlets (associated with the post-autistic economics movement). Sound familiar? On the other hand, regular economists do cite his book, and not just to criticize it.

So, I’m not wholly convinced yet, but will read more. Naturally, next up I decided to read something in the totally opposite direction: Democracy: The God that Failed, an Austrian economics book.

June 22, 2017

Regression Modeling Strategies (2nd ed.) – Frank Harrell (review)


I heard some good things about this book, and some of it is good. Certainly, the general approach outlined in the introduction is pretty sound. He sets up the following principles:

  1. Satisfaction of model assumptions improves precision and increases statistical power.
  2. It is more productive to make a model fit step by step (e.g., transformation estimation) than to postulate a simple model and find out what went wrong.
  3. Graphical methods should be married to formal inference.
  4. Overfitting occurs frequently, so data reduction and model validation are important.
  5. In most research projects, the cost of data collection far outweighs the cost of data analysis, so it is important to use the most efficient and accurate modeling techniques, to avoid categorizing continuous variables, and to not remove data from the estimation sample just to be able to validate the model.
  6. The bootstrap is a breakthrough for statistical modeling, and the analyst should use it for many steps of the modeling strategy, including derivation of distribution-free confidence intervals and estimation of optimism in model fit that takes into account variations caused by the modeling strategy.
  7. Imputation of missing data is better than discarding incomplete observations.
  8. Variance often dominates bias, so biased methods such as penalized maximum likelihood estimation yield models that have a greater chance of accurately predicting future observations.
  9. Software without multiple facilities for assessing and fixing model fit may only seem to be user-friendly.
  10. Carefully fitting an improper model is better than badly fitting (and overfitting) a well-chosen one.
  11. Methods that work for all types of regression models are the most valuable.
  12. Using the data to guide the data analysis is almost as dangerous as not doing so.
  13. There are benefits to modeling by deciding how many degrees of freedom (i.e., number of regression parameters) can be “spent,” deciding where they should be spent, and then spending them.

Readers will recognize many of these from my writings. Not mentioned in the principles is that the book takes a somewhat anti-p-value stance (roughly ‘they have some uses but are widely misused, so beware!’) and a pro effect-size-estimation stance. Some of the chapters do follow these principles, but IMO the majority of the book does not. Mostly it is endless variations on testing for non-linear effects of predictors, whereas in real life a lot of predictors will be boringly linear. There’s some decent stuff about overfitting, bootstrapping and penalized regression, but these topics have been done better elsewhere (read An Introduction to Statistical Learning). I did learn some new things, including on the applied side (e.g. the ease of applying cubic splines, something that would have been useful for this study), and the book comes with a companion R package (rms), so one can apply the ideas to one’s own research immediately. On the other hand, most of the graphics in the book are terrible base-plot ones, and only some are ggplot2.
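
Since rms is freely available, here is a minimal sketch of the workflow the book teaches, on simulated data (the simulation and variable names are mine, not from the book): fit a restricted cubic spline rather than categorizing the continuous predictor, then bootstrap-validate to estimate optimism.

```r
# Fit a spline instead of categorizing x, then estimate how optimistic
# the apparent fit is via the bootstrap (principles 4-6 above in action).
library(rms)
library(ggplot2)

set.seed(1)
df <- data.frame(x = rnorm(300))
df$y <- sin(df$x) + rnorm(300, sd = 0.5)  # mildly non-linear truth

dd <- datadist(df); options(datadist = "dd")
fit <- ols(y ~ rcs(x, 4), data = df, x = TRUE, y = TRUE)  # 4-knot spline
validate(fit, B = 200)  # bootstrap optimism in R^2, slope, etc.
ggplot(Predict(fit))    # spline fit with confidence band, via ggplot2
```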

This edition (published 2015; first edition 2001) needed more work before it should have been published, but it is still worth reading for people with an interest in post-replication-crisis statistics. Frank Harrell should team up with Andy Field, who’s a much better writer, and with someone with good ggplot2 skills (throw in Shiny too for extra quality). Then they could write a really good stats book.

March 21, 2017

The neuroscience of intelligence: very preliminary because of power failure and lack of multivariate studies

I don’t have time to provide extensive citations for this post, so some things are cited from memory. You should be able to locate the relevant literature, but otherwise just ask.

  • Haier, R. J. (2016). The Neuroscience of Intelligence (1st edition). New York, NY: Cambridge University Press.

Because I’m writing a neuroscience-related paper or two, it seemed like a good idea to read the recent book by Richard Haier. Haier is a rare breed: a combined neuroscientist and intelligence researcher, and a past president of ISIR.

Haier is refreshingly honest about what the purpose of intelligence research is:

The challenge of neuroscience is to identify the brain processes necessary for intelligence and discover how they develop. Why is this important? The ultimate purpose of all intelligence research is to enhance intelligence.

While many talk about how important it is to understand something, understanding is arguably just a preliminary goal on the road to what we really want: control. In general, one can read the history of science as man’s attempt to control nature, and this requires having some rudimentary understanding of it. The understanding does not need to be causally deep, as long as one can make above-chance predictions. Newtonian physics is not the right model of how the world works, but it’s good enough to get to the Moon and build skyscrapers. Animal breeders historically had no good idea about how genetics worked, but they knew that when you breed stuff, you tend to get offspring similar to the stuff you bred, whether corn or dogs.

Criticism of intelligence boosting studies

Haier criticizes a number of studies that attempted to raise intelligence. However, his criticisms are not quite on target. For example, in reply to the n-back training paradigm, he spends about a page covering criticism of the administration of one outcome test:

The first devastating critique came quickly (Moody, 2009). Dr. Moody pointed out several serious flaws in the PNAS cover article that rendered the results uninterpretable. The most important was that the BOMAT used to assess fluid reasoning was administered in a flawed manner. The items are arranged from easy ones to very difficult ones. Normally, the test-taker is given 45 minutes to complete as many of the 29 problems as possible. This important fact was omitted from the PNAS report. The PNAS study allowed only 10 minutes to complete the test, so any improvement was limited to relatively easy items because the time limit precluded getting to the harder items that are most predictive of Gf, especially in a sample of college students with restricted range. This non-standard administration of the test transformed the BOMAT from a test of fluid intelligence to a test of easy visual analogies with, at best, an unknown relationship to fluid intelligence. Interestingly, the one training group that was tested on the RAPM showed no improvement. A crucial difference between the two tests is that the BOMAT requires the test-taker to keep 14 visual figures in working memory to solve each problem, whereas the RAPM requires holding only eight in working memory (one element in each matrix is blank until the problem is solved). Thus, performance on the BOMAT is more heavily dependent on working memory. This is the exact nature of the n-back task, especially as the version used for training included the spatial position of matrix elements quite similar to the format used in the BOMAT problems (see Textbox 5.1). As noted by Moody, “Rather than being ‘entirely different’ from the test items on the BOMAT, this [n-back] task seems well-designed to facilitate performance on that test.” When this flaw is considered along with the small samples and issues surrounding small change scores of single tests, it is hard to understand the peer review and editorial processes that led to a featured publication in PNAS which claimed an extraordinary finding that was contrary to the weight of evidence from hundreds of previous reports.

But he puts little emphasis on the fact that the original study had n = 69 and p = .01 or so (judging from the confidence intervals).

Given publication bias and methodological degrees of freedom, this is very poor evidence indeed. It requires no elaborate explanation of test scoring.
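
A back-of-envelope power calculation shows why (treating n = 69 as two groups of roughly 35 is my reading of the design, not a figure from the book):

```r
# What true effect could two groups of ~35 detect with 80% power?
power.t.test(n = 35, power = 0.80, sig.level = 0.05)
# -> delta ~ 0.68 SD: only quite large effects are reliably detectable,
# so 'significant' results from such samples are inflated once selected
# on p, and a single p ~ .01 is weak evidence under publication bias.
```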

Haier does not cover the general history of attempts to increase intelligence, and this is a mistake, because those who don’t read history make the same mistakes over and over again. History supplies ample evidence that can inform our prior. I can’t think of any particular reason not to cover this history briefly, given that Spitz wrote a pretty good book on the topic.

WM/brain training is just the latest fad in a long series of cheap tricks to improve intelligence or other test performance. The pattern is:

  1. Small study, usually with marginal effects by NHST standards (p close to .05)
  2. Widespread media and political attention
  3. Follow-up/replication research takes 1-3 decades and is always negative.
  4. The results from (3) are usually ignored, and the fad continues until it slowly dies because researchers get bored of it, and switch to the next fad. Sometimes this dying can take a very long time.

[See also: Hsu’s more general pattern.]

Some immediate examples I can think of:

  • Early interventions for low-intelligence children: endless variations. Proponents always cite early, tiny studies (the Perry Preschool Project (1962), the Abecedarian Project (1972), the Milwaukee Project (1960s)) and neglect large negative replications (e.g. the Head Start Impact Study).
  • Interventions targeted at autistic, otherwise impaired, and simply dull children. History is full of charlatans peddling miracle treatments to desperate parents (see Spitz’s review).
  • Pygmalion/self-fulfilling prophecy effects. Usually people only mention a single study from 1968. See the pattern here?
  • Stereotype threat. Actually this is not even a boosting effect, but it is widely understood that way. This one is still working through the stages.
  • Mozart effect.
  • Hard-to-read fonts.

I think readers would better appreciate the current studies if they knew the historical record: a 0% success rate despite >80 years of trying, with massive funding and political support. Haier does eventually make this point, but only after going over the papers, and not as explicitly as above:

Speaking of independent replication, none of the three studies discussed so far (the Mozart Effect, n-back training, and computer training) included any replication attempt in the original reports. There are other interesting commonalities among these studies. Each claimed a finding that overturned long-standing findings from many previous studies. Each study was based on small samples. Each study measured putative cognitive gains with single test scores rather than extracting a latent factor like g from multiple measures. Each study’s primary author was a young investigator and the more senior authors had few previous publications that depended on psychometric assessment of intelligence. In retrospect, is it surprising that numerous subsequent studies by independent, experienced investigators failed to replicate the original claims? There is a certain eagerness about showing that intelligence is malleable and can be increased with relatively simple interventions. This eagerness requires researchers to be extra cautious. Peer-reviewed publication of extraordinary claims requires extraordinary evidence, which is not apparent in Figures 5.1, 5.3, and 5.4. In my view, basic requirements for publication of “landmark” findings would start with replication data included along with original findings. This would save many years of effort and expense trying to replicate provocative claims based on fundamentally flawed studies and weak results. It is a modest proposal, but probably unrealistic given academic pressures to publish and obtain grants.

Meta-analyses are a good tool for estimating the entire body of evidence, but they feed on published studies, and when published studies produce biased effect size estimates due to researcher degrees of freedom and publication bias (+ small samples), the meta-analyses will tend to be biased too. One can adjust for publication bias to some degree, but this works best when there’s a large body of research, and one cannot adjust for researcher degrees of freedom at all. For a matter as important as increasing intelligence, there is only one truly convincing kind of evidence: large-scale, pre-registered trials with public data access. Preferably they should be announced in advance (see Registered Reports). This way it’s hard to just ‘forget’ to publish the results (common), swap outcomes (also common), or use creative statistics to get the precious p < .05 (probably common too).
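
To illustrate the limits of such adjustments, here is a small simulation using the metafor package; the selection rule is a crude stand-in for real publication bias, and all numbers are made up:

```r
# Simulate 40 studies with a true effect of zero, 'publish' mostly those
# with larger z-values, then compare a naive meta-analysis to trim-and-fill.
library(metafor)

set.seed(1)
k    <- 40
vi   <- runif(k, 0.01, 0.2)                 # sampling variances
yi   <- rnorm(k, mean = 0, sd = sqrt(vi))   # observed effects, true effect = 0
keep <- yi / sqrt(vi) > 1 | runif(k) < 0.3  # biased 'publication' filter

res <- rma(yi[keep], vi[keep])  # naive pooled estimate: biased upward
trimfill(res)                   # trim-and-fill only partially corrects it
```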

Statistics of (neuro)science

In general, I think Haier should have started the book with a brief introduction to the replication crisis and the relevant statistics: why we have it and how to fix it. This would of course also mean being a lot more cautious in describing many of the studies presented in the preceding chapters. Most of these studies were done on tiny samples, and we know they suffer publication bias because we have a huge meta-analysis of brain size × intelligence showing the decline effect. There is no reason to expect the other reported associations to hold up any better, or at all.

Haier spends too much time noting apparent sex differences in small studies. These claims are virtually always based on the NHST subgroup fallacy: the idea that if some association is p < alpha for one group and p > alpha for another, then we can conclude there’s a 1/0 interaction, i.e. an effect in one population and none in the other.
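
A minimal simulation of the fallacy (illustrative numbers: two subgroups of n = 30 drawn from populations with the same true r = .3):

```r
# One common effect, two small subgroups: how often is exactly one of them
# 'significant'? That outcome is routinely misread as a sex interaction.
set.seed(1)
one_sig <- function(n = 30, r = 0.3) {
  p <- replicate(2, {
    x <- rnorm(n)
    y <- r * x + rnorm(n, sd = sqrt(1 - r^2))
    cor.test(x, y)$p.value
  })
  xor(p[1] < 0.05, p[2] < 0.05)  # exactly one subgroup reaches p < .05?
}
mean(replicate(5000, one_sig()))  # ~.45, nearly a coin flip, no true interaction
```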

To be fair, Haier does lay out 3 laws:

  1. No story about the brain is simple.

  2. No one study is definitive.

  3. It takes many years to sort out conflicting and inconsistent findings and establish a compelling weight of evidence.

which kind of make the same point. Tiny samples make every study non-conclusive (2), and we have to wait for the eventual replications and meta-analyses (3). Tiny samples combined with inappropriate NHST statistics give rise to pseudo-interactions in the literature, which make (1) seem truer than it is (cf. the situational specificity hypothesis in I/O psychology). Not to say that the brain is not complicated, but there is no need to add spurious interactions to the mix. This is not hypothetical: many papers reported such sex ‘interactions’ for brain size × intelligence, but the large meta-analysis by Pietschnig et al. found no such moderator effect.

Beware meta-regression too. This is just regular regression and has the same problems: if you have few studies, say k = 15, use weights (study SEs), and try out many predictors (sex, age, subtest, country, year, ...), then it’s easy to find false-positive moderators. An early meta-analysis did in fact identify sex as a moderator (which Haier cites approvingly), and this turned out not to be so.
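
The same point in simulation form (k = 15 studies, no true effect, six junk moderators standing in for sex, age, year, and so on; all numbers invented):

```r
# With k = 15 studies and six candidate moderators tested one at a time,
# false-positive 'moderators' turn up often even when nothing is going on.
library(metafor)

set.seed(2)
k    <- 15
vi   <- runif(k, 0.02, 0.1)
yi   <- rnorm(k, mean = 0, sd = sqrt(vi))  # homogeneous, true effect = 0
mods <- replicate(6, rnorm(k))             # six junk moderators

pvals <- apply(mods, 2, function(m) rma(yi, vi, mods = ~ m)$pval[2])
any(pvals < 0.05)  # any one run may or may not hit, but across six tests...
1 - 0.95^6         # ...the familywise false-positive rate is already ~26%
```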

Going forward

I think the best approach to working out the neurobiology of intelligence is:

  1. Compile large, public-use datasets. We are getting there with some recent additions, but they are not public use: 1) the PING dataset (n = 1,500), 2) the Human Connectome Project (n = 1,200), 3) the Brain Genomics Superstruct Project (n = 1,570). Funding for neuroscience, indeed for almost any science, should be contingent on contributing the data to a large, public repository. Ideally, this should be a single, standardized dataset. Linguistics has a good precedent to follow: “The World Atlas of Language Structures (WALS) is a large database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials (such as reference grammars) by a team of 55 authors.”

  2. Include many diverse measures of neurobiology so we can do multivariate studies. Right now, the literature is extremely fragmented and no one knows the degree to which specific reported associations overlap or are merely proxies for each other. One can do meta-analytic modeling based on summary statistics, but this approach is very limited.

  3. Clean up the existing literature by first testing the measures already reported in it. While many of these may be false positives, a reported p < alpha finding is better evidence than nothing. Similarly, p > alpha findings in the literature may be false negatives. Only by testing them in large samples can we know. It would be bad to miss an important predictor just because some early study with 20 persons failed to find an association.

Actually boosting intelligence

After discussing nootropics (cognitive-enhancing drugs, in his terms), Haier writes:

If I had to bet, the most likely path toward enhancing intelligence would be a genetic one. In Chapter 2 we discussed Doogie mice, a strain bred to learn maze problem-solving faster than other mice. In Chapter 4 we enumerated a few specific genes that might qualify as relevant for intelligence and we reviewed some possible ways those genes might influence the brain. Even if hundreds of intelligence-relevant genes are discovered, each with a small influence, the best case for enhancement would be if many of the genes worked on the same neurobiological system. In other words, many genes may exert their influence through a final common neurobiological pathway. That pathway would be the target for enhancement efforts (see, for example, the Zhao et al. paper summarized in Section 2.6). Similar approaches are taken in genetic research on disorders like autism and schizophrenia and many other complex behavioral traits that are polygenetic. Finding specific genes, as difficult as it is, is only a first step. Learning how those genes function in complex neurobiological systems is even more challenging. But once there is some understanding at the functional system level, then ways to intervene can be tested. This is the step where epigenetic influences can best be explicated. If you think the hunt for intelligence genes is slow and complex, the hunt for the functional expression of those genes is a nightmare. Nonetheless, we are getting better at investigations at the molecular functional level and I am optimistic that, sooner or later, this kind of research applied to intelligence will pay off with actionable enhancement possibilities. The nightmares of neuroscientists are the driving forces of progress.

None of the findings reported so far are advanced enough to consider actual genetic engineering to produce highly intelligent children. There is a recent noteworthy development in genetic engineering technology, however, with implications for enhancement possibilities. A new method for editing the human genome is called CRISPR/Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats/Cas genes). I don’t understand the name either, but this method uses bacteria to edit the genome of living cells by making changes to targeted genes (Sander & Joung, 2014). It is noteworthy because many researchers can apply this method routinely so that editing the entire human genome is possible as a mainstream activity. Once genes for intelligence and how they function are identified, this kind of technology could provide the means for enhancement on a large scale. Perhaps that is why the name of the method was chosen to be incomprehensible to most of us. Keep this one on your radar too.

I think there may be gains to be had from nootropics, but to find them, we have to get serious: large-scale neuroscientific studies of intelligence must look for correlates that we know or think we can modify, such as the concentrations of certain molecules. Then large-scale, pre-registered RCTs must be done on plausible candidates. In general, however, it seems more plausible that we can find ways to improve non-intelligence traits that nevertheless help. For instance, stimulants (‘speedy’ drugs) generally enhance self-control at low doses, which shows up as higher educational attainment and fewer criminal convictions. These are very real gains to be had.

Haier discusses various ways of directly stimulating the brain. In my speculative model of neuro-g, this would constitute an enhancement of the activity or connectivity factors. It seems possible that one can make some small gains this way, but I think the gains are probably larger for non-intelligence traits such as sustained attention and resistance to tiredness. If we could find a way to sleep more effectively, this would have insanely high social payoffs, so I recommend research into this.

(Figure from: Nerve conduction velocity and cognitive ability: a large sample study.)

For larger gains, genetics is definitely the most certain route (the only other alternative is neurosurgery with implants). Since we know genetic variation is very important for variation in intelligence (high heritability), all we have to do is tinker with that variation. Haier makes the usual mistake of focusing on direct editing approaches. Direct editing is hard because one must know the causal variants and be able to change them (not always easy!). So far we know of very few confirmed causal variants, and those we do know are mostly bad: e.g. aneuploidies such as Down syndrome (trisomy 21). However, siblings show that it is quite possible to have the same parents and different genomes, which means that all we have to do is filter among possible children: embryo selection. Embryo selection does not require us to know the causal variants; it only requires predictive validity, which is much easier to attain. See Gwern’s excellent writings on the topic for more info.
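
A Monte Carlo sketch of the point (all numbers are illustrative assumptions: 10 viable embryos per family, a within-family genetic SD of about 7.5 IQ points, and a polygenic score with predictive validity r = .3):

```r
# Embryo selection needs only a predictor that correlates with the trait,
# not knowledge of causal variants. Simulate n embryos, pick the one with
# the highest score, and record its true genetic value.
set.seed(1)
gain <- function(n = 10, sd_g = 7.5, r = 0.3) {
  g     <- rnorm(n, mean = 0, sd = sd_g)                  # true genetic values
  score <- r * (g / sd_g) + rnorm(n, sd = sqrt(1 - r^2))  # noisy polygenic score
  g[which.max(score)]                                     # select on the score
}
mean(replicate(1e5, gain()))  # ~3.4 IQ points: r * sd_g * E[max of 10 normals]
```

Even with modest validity, selection among sibling embryos yields a real expected gain, and the gain scales with both the validity of the score and the number of embryos.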

Socialist neuroscience?

Haier takes a typical HBD-Rawlsian approach to social policy:

Here is my political bias. I believe government has a proper role, and a moral imperative, to provide resources for people who lack the cognitive capabilities required for education, jobs, and other opportunities that lead to economic success and increased SES. This goes beyond providing economic opportunities that might be unrealistic for individuals lacking the requisite mental abilities. It goes beyond demanding more complex thinking and higher expectations for every student irrespective of their capabilities (a demand that is likely to accentuate cognitive gaps). It even goes beyond supporting programs for early childhood education, jobs training, affordable childcare, food assistance, and access to higher education. There is no compelling evidence that any of these things increase intelligence, but I support all these efforts because they will help many people advance in other ways and because they are the right thing to do. However, even if this support becomes widely available, there will be many people at the lower end of the g-distribution who do not benefit very much, despite best efforts. Recall from Chapter 1 that the normal distribution of IQ scores with a mean of 100 and a standard deviation of 15 estimates that 16% of people will score below an IQ of 85 (the minimum for military service in the USA). In the USA, about 51 million people have IQs lower than 85 through no fault of their own. There are many useful, affirming jobs available for these individuals, usually at low wages, but generally they are not strong candidates for college or for technical training in many vocational areas. Sometimes they are referred to as a permanent underclass, although this term is hardly ever explicitly defined by low intelligence. Poverty and near-poverty for them is a condition that may have some roots in the neurobiology of intelligence beyond anyone’s control.

The sentence you just read is the most provocative sentence in this book. It may be a profoundly inconvenient truth or profoundly wrong. But if scientific data support the concept, is that not a jarring reason to fund supportive programs that do not stigmatize people as lazy or unworthy? Is that not a reason to prioritize neuroscience research on intelligence and how to enhance it? The term “neuro-poverty” is meant to focus on those aspects of poverty that result mostly from the genetic aspects of intelligence. The term may overstate the case. It is a hard and uncomfortable concept, but I hope it gets your attention. This book argues that intelligence is strongly rooted in neurobiology. To the extent that intelligence is a major contributing factor for managing daily life and increasing the probability of life success, neuro-poverty is a concept to consider when thinking about how to ameliorate the serious problems associated with tangible cognitive limitations that characterize many individuals through no fault of their own.

Public policy and social justice debates might be more informed if what we know about intelligence, especially with respect to genetics, is part of the conversation. In the past, attempts to do this were met mostly with acrimony, as evidenced by the fierce criticisms of Arthur Jensen (Jensen, 1969; Snyderman & Rothman, 1988), Richard Herrnstein (1973), and Charles Murray (Herrnstein & Murray, 1994; Murray, 1995). After Jensen’s 1969 article, both IQ in the Meritocracy and The Bell Curve raised this prospect in considerable detail. Advances in neuroscience research on intelligence now offer a different starting point for discussion. Given that approaches devoid of neuroscience input have failed for 50 years to minimize the root causes of poverty and the problems that go with it, is it not time to consider another perspective?

Here is the second most provocative sentence in this book: The uncomfortable concept of “treating” neuro-poverty by enhancing intelligence based on neurobiology, in my view, affords an alternative, optimistic concept for positive change as neuroscience research advances. This is in contrast to the view that programs which target only social/cultural influences on intelligence can diminish cognitive gaps and overcome biological/genetic influences. The weight of evidence suggests a neuroscience approach might be even more effective as we learn more about the roots of intelligence. I am not arguing that neurobiology alone is the only approach, but it should not be ignored any longer in favor of SES-only approaches. What works best is an empirical question, although political context cannot be ignored. On the political level, the idea of treating neuro-poverty like it is a neurological disorder is supremely naive. This might change in the long run if neuroscience research ever leads to ways to enhance intelligence, as I believe it will. For now, epigenetics is one concept that might bridge both neuroscience and social science approaches. Nothing will advance epigenetic research faster than identifying specific genes related to intelligence so that the ways environmental factors influence those genes can be determined. There is common ground to discuss and that includes what we know about the neuroscience of intelligence from the weight of empirical evidence. It is time to bring “intelligence” back from a 45-year exile and into reasonable discussions about education and social policies without acrimony.

It’s a little odd that he ignores genetic interventions here, given his earlier mention of them. Aside from that, the focus on neurobiology is eminently sensible. If typical S approaches (e.g. even more income redistribution (theft)) do have causal effects on intelligence, this must run through some pretty long causal pathway, so we cannot expect large effects relative to the change we make in the income distribution. Neurobiology, however, is the direct biological substrate of intelligence, and thus one can expect much larger gains from interventions targeted at this domain, for the simple reason that it’s the direct causal antecedent of the thing we’re trying to manipulate – provided, of course, that any non-genetic intervention can work.

From a utilitarian/consequentialist perspective, government action to increase intelligence, if it works, is likely to have huge payoffs at many levels, so it is definitely something I can get behind – with the caveat that we get serious about it: open data, large-scale, preregistered RCTs.

Chronometric measures and the elusive ratio scale of intelligence

Haier quotes Jensen on chronometric measures:

At the end of his book, Jensen concluded, “… chronometry provides the behavioral and brain sciences with a universal absolute [ratio] scale for obtaining highly sensitive and frequently repeatable measurements of an individual’s performance on specially devised cognitive tasks. Its time has come. Let us get to work!” (p. 246). This method of assessing intelligence could establish actual changes due to any kind of proposed enhancement in a before and after research design. The sophistication of this method for measuring intelligence would diminish the gap with sophisticated genetic and neuroimaging methods.

But it does not follow. Performance on any given chronometric test is a function of multiple independent factors, some of which might change without the others doing so. According to a large literature, specific abilities generally have zero or near-zero predictive validity, so boosting them is not of much use. Chronometric tests measure intelligence, but also other factors. When one repeats chronometric tests, there are gains (people’s reaction times do become faster), but this does not mean that intelligence increased; some other factor did.

Besides, it is easy enough to have a ratio scale. Most tests that are part of batteries are indeed on a ratio scale: if you get 0 items right you have no ability on that test. The trouble is aligning the ratio scale of a given test with the presumed ratio scale of intelligence. Anyone can make a test sufficiently hard so that no one can get any items right, but that doesn’t mean people who took the test have 0 intelligence.

Besides, chronometric measures are reversed: a 0 on a chronometric test is the best possible performance, not the worst. So while they are ratio scale (0 ms is a true zero, and one can sensibly apply multiplicative operations), they are clearly not aligned with the hypothetical ratio scale of intelligence itself.

With this criticism in mind, chronometric tests are interesting for multiple reasons and deserve much further study:

  1. They are resistant to training gains. One cannot just memorize the items beforehand, which is important for high-stakes testing. There are practice gains on them, but these show diminishing returns, so if we want testing that is not much confounded with practice gains, we can give people lots of practice trials first.

  2. They are resistant to motivation effects. Studies have so far not produced any associations between subjects’ ratings of how hard they tried on a given trial and their actual performance on that trial.

  3. They are much closer to the neurobiology of intelligence, and thus their use is likely to lead to advances in our understanding of that area too. I recommend taking brain measurements of people as they complete chronometric tests and seeing if one can link the two. EEG and similar methods are getting quite cheap, so this is possible to do at large scale.

  4. Chronometric tests are resistant to ceiling problems. While one can max out on simple reaction time tests (approach the floor of decision time), it is easy to simply add some harder tests, which will effectively raise the ceiling again. One can do this endlessly. It is quite likely that one can find ways to measure into the extreme high end of ability using simple predictive equations, something that’s not generally possible with ordinary tests. This could be researched using e.g. the SMPY sample.

Unfortunately, I do not have the resources right now to pursue this area of research. It would require having a laboratory and some way to get people to participate in large numbers, Galton style.

Conclusion

All in all, this is a very readable book that summarizes research into intelligence for the intelligent layman, while also giving an overview of the neuroscientific findings. It is similar in spirit to Ritchie’s recent book. The main flaws are statistical, but they are not critical to the book in my opinion.
