I had heard good things about this book, sort of. It has been cited a lot, enough that I was willing to read it, given that the author has written at least one interesting paper (Political Diversity Will Improve Social Psychological Science). It is written in popsci style, with very few statistics, which makes it impossible to easily judge how much certainty to assign to the various studies mentioned in the text. Overall, I was not impressed and did not learn much, tho not all of it was necessarily bad. Clearly, he wrote this book in an attempt to appeal to many different people. Perhaps he succeeded, but appeals that work well on large parts of the population rarely work well on me.

In any case, there are some parts worth quoting and commenting on:

The results were as clear as could be in support of Shweder. First, all four of my Philadelphia groups confirmed Turiel’s finding that Americans make a big distinction between moral and conventional violations. I used two stories taken directly from Turiel’s research: a girl pushes a boy off a swing (that’s a clear moral violation) and a boy refuses to wear a school uniform (that’s a conventional violation). This validated my methods. It meant that any differences I found on the harmless taboo stories could not be attributed to some quirk about the way I phrased the probe questions or trained my interviewers. The upper-class Brazilians looked just like the Americans on these stories. But the working-class Brazilian kids usually thought that it was wrong, and universally wrong, to break the social convention and not wear the uniform. In Recife in particular, the working-class kids judged the uniform rebel in exactly the same way they judged the swing-pusher. This pattern supported Shweder: the size of the moral-conventional distinction varied across cultural groups.

Emil’s law: whenever a study reports that socioeconomic status correlates with X, the correlation is mostly due to socioeconomic status’s relationship with intelligence, and often socioeconomic status is only non-causally related to X.

Wilson used ethics to illustrate his point. He was a professor at Harvard, along with Lawrence Kohlberg and the philosopher John Rawls, so he was well acquainted with their brand of rationalist theorizing about rights and justice.15 It seemed clear to Wilson that what the rationalists were really doing was generating clever justifications for moral intuitions that were best explained by evolution. Do people believe in human rights because such rights actually exist, like mathematical truths, sitting on a cosmic shelf next to the Pythagorean theorem just waiting to be discovered by Platonic reasoners? Or do people feel revulsion and sympathy when they read accounts of torture, and then invent a story about universal rights to help justify their feelings?
Wilson sided with Hume. He charged that what moral philosophers were really doing was fabricating justifications after “consulting the emotive centers” of their own brains.16 He predicted that the study of ethics would soon be taken out of the hands of philosophers and “biologicized,” or made to fit with the emerging science of human nature. Such a linkage of philosophy, biology, and evolution would be an example of the “new synthesis” that Wilson dreamed of, and that he later referred to as consilience—the “jumping together” of ideas to create a unified body of knowledge.17
Prophets challenge the status quo, often earning the hatred of those in power. Wilson therefore deserves to be called a prophet of moral psychology. He was harassed and excoriated in print and in public.18 He was called a fascist, which justified (for some) the charge that he was a racist, which justified (for some) the attempt to stop him from speaking in public. Protesters who tried to disrupt one of his scientific talks rushed the stage and chanted, “Racist Wilson, you can’t hide, we charge you with genocide.”19

For more on the history of sociobiology, see: www.goodreads.com/book/show/786131.Defenders_of_the_Truth?ac=1

But yes, human rights bug me. There is no such thing as an ethical right ‘out there’. Human rights are completely made up. While some of them are useful as models for civil rights, they are nothing more than that. Worse, human rights keep getting added, and the additions are inconsistent, vague and redundant. See e.g. www.foreignaffairs.com/articles/139598/jacob-mchangama-and-guglielmo-verdirame/the-danger-of-human-rights-proliferation

I say this as someone who strongly believes in having strong civil rights, especially regarding freedom of expression, assembly, due process and the like. However, since pushing for new human rights attracts social justice warriors, the new rights not only conflict with previous rights (e.g. freedom of expression), but also concern matters that should be a matter of national policy (e.g. resource redistribution), not of supra-national courts and their creeping interpretations. See e.g. this document: www.europarl.europa.eu/charter/pdf/text_en.pdf or, even worse, this one: A EUROPEAN FRAMEWORK NATIONAL STATUTE FOR THE PROMOTION OF TOLERANCE. The irony is that tolerance consists precisely in letting others do what they want. Wiktionary: The ability or practice of tolerating; an acceptance or patience with the beliefs, opinions or practices of others; a lack of bigotry.

Psychopaths do have some emotions. When Hare asked one man if he ever felt his heart pound or stomach churn, he responded: “Of course! I’m not a robot. I really get pumped up when I have sex or when I get into a fight.”29 But psychopaths don’t show emotions that indicate that they care about other people. Psychopaths seem to live in a world of objects, some of which happen to walk around on two legs. One psychopath told Hare about a murder he committed while burglarizing an elderly man’s home:
I was rummaging around when this old geezer comes down stairs and … uh … he starts yelling and having a fucking fit … so I pop him one in the, uh, head and he still doesn’t shut up. So I give him a chop to the throat and he … like … staggers back and falls on the floor. He’s gurgling and making sounds like a stuck pig! [laughs] and he’s really getting on my fucking nerves so I … uh … boot him a few times in the head. That shut him up … I’m pretty tired by now so I grab a few beers from the fridge and turn on the TV and fall asleep. The cops woke me up [laughs].30


This is the sort of bad thinking that a good education should correct, right? Well, consider the findings of another eminent reasoning researcher, David Perkins.21 Perkins brought people of various ages and education levels into the lab and asked them to think about social issues, such as whether giving schools more money would improve the quality of teaching and learning. He first asked subjects to write down their initial judgment. Then he asked them to think about the issue and write down all the reasons they could think of—on either side—that were relevant to reaching a final answer. After they were done, Perkins scored each reason subjects wrote as either a “my-side” argument or an “other-side” argument.
Not surprisingly, people came up with many more “my-side” arguments than “other-side” arguments. Also not surprisingly, the more education subjects had, the more reasons they came up with. But when Perkins compared fourth-year students in high school, college, or graduate school to first-year students in those same schools, he found barely any improvement within each school. Rather, the high school students who generate a lot of arguments are the ones who are more likely to go on to college, and the college students who generate a lot of arguments are the ones who are more likely to go on to graduate school. Schools don’t teach people to reason thoroughly; they select the applicants with higher IQs, and people with higher IQs are able to generate more reasons.
The findings get more disturbing. Perkins found that IQ was by far the biggest predictor of how well people argued, but it predicted only the number of my-side arguments. Smart people make really good lawyers and press secretaries, but they are no better than others at finding reasons on the other side. Perkins concluded that “people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”22

Cite is: Perkins, D. N., M. Farady, and B. Bushey. 1991. “Everyday Reasoning and the Roots of Intelligence.” In Informal Reasoning and Education, ed. J. F. Voss, D. N. Perkins, and J. W. Segal, 83–105. Hillsdale, NJ: Lawrence Erlbaum.

From Plato through Kant and Kohlberg, many rationalists have asserted that the ability to reason well about ethical issues causes good behavior. They believe that reasoning is the royal road to moral truth, and they believe that people who reason well are more likely to act morally.
But if that were the case, then moral philosophers—who reason about ethical principles all day long—should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students.48 And in none of these ways are moral philosophers better than other philosophers or professors in other fields.
Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably borrowed mostly by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy.49 In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.

Oh dear.

The anthropologists Pete Richerson and Rob Boyd have argued that cultural innovations (such as spears, cooking techniques, and religions) evolve in much the same way that biological innovations evolve, and the two streams of evolution are so intertwined that you can’t study one without studying both.65 For example, one of the best-understood cases of gene-culture coevolution occurred among the first people who domesticated cattle. In humans, as in all other mammals, the ability to digest lactose (the sugar in milk) is lost during childhood. The gene that makes lactase (the enzyme that breaks down lactose) shuts off after a few years of service, because mammals don’t drink milk after they are weaned. But those first cattle keepers, in northern Europe and in a few parts of Africa, had a vast new supply of fresh milk, which could be given to their children but not to adults. Any individual whose mutated genes delayed the shutdown of lactase production had an advantage. Over time, such people left more milk-drinking descendants than did their lactose-intolerant cousins. (The gene itself has been identified.)66 Genetic changes then drove cultural innovations as well: groups with the new lactase gene then kept even larger herds, and found more ways to use and process milk, such as turning it into cheese. These cultural innovations then drove further genetic changes, and on and on it went.

Why is this anyway? Why don’t we just keep expressing this gene? Is there any reason?

In an interview in 2000, the paleontologist Stephen Jay Gould said that “natural selection has almost become irrelevant in human evolution” because cultural change works “orders of magnitude” faster than genetic change. He next asserted that “there’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.”77

I wonder: was Gould right about anything? Another thing: did Gould invent his punctuated equilibrium theory because it postulates change-free periods, which can conveniently be claimed for humans over the last 100k years or so, in order to keep his denial of racial differences consistent with evolution?

Religion is therefore well suited to be the handmaiden of groupishness, tribalism, and nationalism. To take one example, religion does not seem to be the cause of suicide bombing. According to Robert Pape, who has created a database of every suicide terrorist attack in the last hundred years, suicide bombing is a nationalist response to military occupation by a culturally alien democratic power.62 It’s a response to boots and tanks on the ground—never to bombs dropped from the air. It’s a response to contamination of the sacred homeland. (Imagine a fist punched into a beehive, and left in for a long time.)

This sounds interesting. en.wikipedia.org/wiki/Suicide_Attack_Database

The problem is not just limited to politicians. Technology and changing residential patterns have allowed each of us to isolate ourselves within cocoons of like-minded individuals. In 1976, only 27 percent of Americans lived in “landslide counties”—counties that voted either Democratic or Republican by a margin of 20 percent or more. But the number has risen steadily; in 2008, 48 percent of Americans lived in a landslide county.77 Our counties and towns are becoming increasingly segregated into “lifestyle enclaves,” in which ways of voting, eating, working, and worshipping are increasingly aligned. If you find yourself in a Whole Foods store, there’s an 89 percent chance that the county surrounding you voted for Barack Obama. If you want to find Republicans, go to a county that contains a Cracker Barrel restaurant (62 percent of these counties went for McCain).78

This sounds more like assortative relocation plus a greater amount of relocation. Now, if only there were more local democratic power, people living in these different areas could self-govern and stop arguing over conflicts that would never arise. E.g. healthcare systems: each smaller area could decide on its own system. I like to quote from Uncontrolled:

This leads then to a call for “states as laboratories of democracy” federalism in matters of social policy, or in a more formal sense, a call for subsidiarity—the principle that matters ought to be handled by the smallest competent authority. After all, the typical American lives in a state that is a huge political entity governing millions of people. As many decisions as possible ought to be made by counties, towns, neighborhoods, and families (in which parents have significant coer­cive rights over children). In this way, not only can different prefer­ences be met, but we can learn from experience how various social arrangements perform.

Nuclear energy often gets bad press. However, journalists are mostly very leftist, scientifically ill-educated women, so perhaps they are not quite the right demographic to inform us about this issue. So far I have not found any published studies on the relationship between attitudes towards nuclear energy and general intelligence, but there is good reason to believe it is positive. The EU is nice enough to survey the opinions of EU citizens on nuclear energy every few years, and these surveys measure various variables, but not general intelligence or general science knowledge.

Three surveys of European attitudes towards nuclear power and correlates


Generally, these find:

  1. Men are more positive about nuclear power
  2. The better educated are more positive about nuclear power
  3. The better educated are less in doubt about nuclear power
  4. Education has an inconsistent relationship to opposition to nuclear power
  5. Self-rated knowledge is related to more positive attitude to nuclear power
  6. Those with more experience with nuclear power are more positive about it

(4) might seem inconsistent with (1-3), but it is not. Usually the questions have three broad categories: positive, negative, don’t know. As the education level increases, the negative share stays about the same (sometimes increasing, sometimes decreasing), while don’t know always decreases and positive nearly always increases. Thus, the simplest explanation is that higher education moves people from the don’t know category to the positive category, while it has no clear effect on those who are negative about nuclear power.
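The compositional point can be illustrated with made-up numbers (none of these are from the actual surveys):

```python
# Hypothetical shares by education level, illustrating the pattern described above
low_ed  = {"positive": 0.35, "negative": 0.30, "dont_know": 0.35}
high_ed = {"positive": 0.50, "negative": 0.30, "dont_know": 0.20}

# Opposition is flat across education levels; the entire growth of the
# positive share is matched by the shrinkage of the don't-know share.
assert high_ed["negative"] == low_ed["negative"]
assert abs((high_ed["positive"] - low_ed["positive"])
           - (low_ed["dont_know"] - high_ed["dont_know"])) < 1e-9
```

So education can correlate with being positive while having no consistent relationship with being negative.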

Survey in the United Kingdom

Most of the questions were too specific to be of use. However, one was useful; it is post-Fukushima and still found the usual results:


What do experts think?

I found an old review (1984) of expert opinion. They write:

In contrast to the public, most “opinion leaders,” particularly energy experts, support further development of nuclear power. This support is revealed both in opinion polls and in technical studies of the risks of nuclear power. A March 1982 poll of Congress found 76 percent of members supported expanded use of nuclear power (50). In a survey conducted for Connecticut Mutual Life Insurance Co. in 1980, leaders in religion, business, the military, government, science, education, and law perceived the benefits of nuclear power as greater than the risks (19). Among the categories of leaders surveyed, scientists were particularly supportive of nuclear power. Seventy-four percent of scientists viewed the benefits of nuclear power as greater than risks, compared with only 55 percent of the rest of the public.

In a recent study, a random sample of scientists was asked about nuclear power (62). Of those polled, 53 percent said development should proceed rapidly, 36 percent said development should proceed slowly, and 10 percent would halt development or dismantle plants. When a second group of scientists with particular expertise in energy issues was given the same choices, 70 percent favored proceeding rapidly and 25 percent favored proceeding slowly with the technology. This second sample included approximately equal numbers of scientists from 71 disciplines, ranging from air pollution to energy policy to thermodynamics. About 10 percent of those polled in this group worked in disciplines directly related to nuclear energy, so that the results might be somewhat biased. Support among both groups of scientists was found to result from concern about the energy crisis and the belief that nuclear power can make a major contribution to national energy needs over the next 20 years. Like scientists, a majority of engineers continued to support nuclear power after the accident at Three Mile Island (69).

Of course, not all opinion leaders are in favor of the current U.S. program of nuclear development. Leaders of the environmental movement have played a major role in the debate about reactor safety and prominent scientists are found on both sides of the debate. A few critics of nuclear power have come from the NRC and the nuclear industry, including three nuclear engineers who left General Electric in order to demonstrate their concerns about safety in 1976. However, the majority of those with the greatest expertise in nuclear energy support its further development.

Analysis of public opinion polls indicates that people’s acceptance or rejection of nuclear power is more influenced by their view of reactor safety than by any other issue (57). As discussed above, accidents and events at operating plants have greatly increased public concern about the possibility of a catastrophic accident. Partially in response to that concern, technical experts have conducted a number of studies of the likelihood and consequences of such an accident. However, rather than reassuring the public about nuclear safety, these studies appear to have had the opposite effect. By painting a picture of the possible consequences of an accident, the studies have contributed to people’s view of the technology as exceptionally risky, and the debate within the scientific community about the study methodologies and findings has increased public uncertainty.

And recently, much publicity was given to a study showing the discrepancy between public opinion and scientific opinion on various topics, and it included nuclear power:


A 20 percentage point difference is no small matter and is similar to the older studies described above.

General intelligence and nuclear power

The OKCupid dataset contains only one question related to nuclear power among the first 2400 questions or so (those in the dataset). The question is the 2216th most commonly answered question, that is, not very commonly answered at all. Since people who answer >2000 questions on a dating site are a very self-selected group, there is likely to be some heavy range restriction.


Aside from the “I don’t know”-category, the differences are quite small:

Question ID: q59519
“How do you feel about nuclear energy?”

“I’m not sure, there are pros and cons.”: n = 1015, mean = 2.3
“I don’t care, whatever keeps my light bulbs lit.”: n = 52, mean = 1.37
“No. It is a danger to public safety.”: n = 335, mean = 2.31
“Yes. It is efficient, safe, and clean.”: n = 591, mean = 2.49

However, the samples are quite large. The 99% confidence interval is -0.323 to -0.037, which gets close to 0, so we need more data to be quite certain, but the difference is unlikely to have happened by chance. How large is the difference in some more useful unit? It is .18 points on a 3-point scale (based on 3 IQ-type questions). The overall SD in the total dataset for this 3-point scale is .96 (mean = 2.12, N = 28k). However, for this question it is only .88 due to range restriction (mean = 2.34, N = 1993). In other words, in SD units the difference is .18/.88 = 0.20, which is 3 IQ points, not correcting for anything. Perhaps after corrections this would be something like 5 IQ points between pro- and anti-nuclear power people in this dataset.
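As a quick sketch, the conversion to IQ points can be written out explicitly (the numbers are those quoted above; the 15-points-per-SD convention for IQ is assumed):

```python
# Figures quoted above from the OKCupid dataset
mean_pro, mean_con = 2.49, 2.31  # "Yes" vs. "No" group means on the 3-item scale
sd_restricted = 0.88             # SD among answerers of this question (range-restricted)

diff_raw = mean_pro - mean_con        # 0.18 points on the 3-point scale
d = diff_raw / sd_restricted          # ~0.20 standard deviations
iq_points = d * 15                    # IQ convention: 1 SD = 15 points
print(round(d, 2), round(iq_points, 1))  # ~0.2 SD, ~3.1 IQ points
```

The restricted SD (.88) rather than the full-sample SD (.96) is used here because the comparison is within people who answered this question; using the larger SD would shrink the estimate further.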

Future research

I’d like more data with measures of scientific knowledge, or probability reasoning or some such and various energy policy opinions including nuclear. Of course, there should also be general intelligence data.


Europeans and Nuclear Safety – 2010 – PDF

Europeans and Nuclear Safety – 2007 – PDF

Attitudes towards radioactive waste (EU) – 2008 – PDF

Public Attitudes to Nuclear Power (OECD summary) – 2010 – PDF

New Nuclear Watch Europe – Nuclear Energy Survey (United Kingdom) – 2014 – PDF

Public Attitudes Toward Nuclear Power – 1984 – PDF (part of Nuclear Power in an Age of Uncertainty. Washington, D.C.: U.S. Congress, Office of Technology Assessment, OTA-E-216, February 1984.)


This is not as straightforward as one might think. On the one hand, one might argue:

Facebook is a private company. They offer a service to users (free of charge but with ads), and since they own it, they should be able to control it as they see fit. If users don’t like it, they can go elsewhere.

We can call this the market argument. The counter argument (as phrased by Falkvinge) is this:

“At the end of the day, this is about the fact that the public square, where freedom of speech used to be enforced, has moved in under the terms-and-services umbrella of a private corporation, where they enforce their own arbitrary limits of what may be expressed and not. That means our fundamental rights have effectively moved into the hands of private interests. I welcome a challenge to this doctrine and an enforcement of freedom of speech, once a public discussion forum – like Facebook – has grown large enough to be a de-facto public location, if not the de-facto public location.”

Or, to put it my way:

The point above about it being a private company is valid. However, the “users can go elsewhere” part is not very realistic given Facebook’s near monopoly. It just so happens that certain types of services, such as social networks, function better the more users they have, and thus tend to produce near monopolies with one or only a few dominant players on the market. When this happens, users face a choice between a service that is useful (where the other users are) and one that strongly protects freedom of speech (if such a service exists). When such services also become a very important part of life (as measured in percent of the total population who are users; e.g. about 48% of the US population has a Facebook account) for important matters such as communication, there is reason to enforce freedom of expression (FoE) on them despite their being privately owned. Not doing so would in practice mean that private companies decide the limits of FoE, which could have negative social consequences because certain topics could not be discussed.

By now, unless you are some kind of libertarian/anarcho-capitalist, you should be convinced that the issue is not as straightforward as it might appear.

Enforcing freedom of expression in practice

Suppose we go ahead and say that freedom of speech must be protected on Facebook to the extent it is normally protected in a given country (i.e. not that much for most countries; even most Western countries limit freedom of speech in important ways). How would this happen exactly?

Suppose a French native based in France creates an account located in France. Suppose that France’s FoE law is pretty broad: it allows nudity, hardcore porn of any type as long as it involves consenting adults, hate speech, blasphemy, racism, sexism, wrong claims of convictions, etc. Now, suppose the French user starts posting porn and Facebook doesn’t like porn. Facebook might want to delete it and perhaps block the user (the current practice, in fact). However, if FoE were enforced here, this would not be legal. Facebook has some options:

  1. Make a filter option that by default is turned on (somewhere hidden in advanced settings) which hides any kind of content, including porn, that Facebook does not like.
  2. Show this content only to users from France.
  3. Show this content only to users from countries with protection for porn expressions.
  4. Disallow people based in France from creating profiles.

Now, (4) seems like an unlikely option. It would cause Facebook to lose a lot of ad revenue. (1) could potentially be struck down by a court as de facto limiting FoE too much, but it may work for most purposes. If the purpose of blocking porn is that some users are (thought to be) sensitive to it, then this option would work fine. Choosing (2) or (3) would limit FoE for cross-national purposes (if the user has friends based in other countries). Whether this could be struck down by any national court is a good question. Who has jurisdiction over cross-national speech? Each country by itself? Both countries jointly?

In any case, (1) seems like a realistic choice. If some material is reported as being over the limit, it can be put into the ‘dangerous stuff’ stream not shown to most users (presumably; unless it becomes very trendy to disable the filter). One could also combine (2-3) with (1). But it is certain that were Facebook to use one of these methods, it would add a considerable cost: maintaining an updated database of the laws of each country and classifying content into the correct categories that can and cannot be shown in this or that country.
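A hypothetical sketch of how options (1)-(3) might combine in code; the country codes, content categories, and legal rules here are all invented for illustration:

```python
# Invented mapping: country -> content categories protected by local FoE law
LEGAL = {
    "FR": {"porn", "hate_speech", "blasphemy"},
    "DE": {"porn"},
}

def visible(category, poster_country, viewer_country, viewer_filter_on=True):
    # Options (2)/(3): show content only where both the poster's and the
    # viewer's law protect that category of expression
    lawful = (category in LEGAL.get(poster_country, set())
              and category in LEGAL.get(viewer_country, set()))
    # Option (1): a default-on sensitivity filter hides the content anyway,
    # regardless of legality, until the user opts out
    return lawful and not viewer_filter_on

print(visible("porn", "FR", "FR", viewer_filter_on=False))         # True
print(visible("hate_speech", "FR", "DE", viewer_filter_on=False))  # False
print(visible("porn", "FR", "FR"))                                 # False (filter on)
```

Even this toy version shows where the cost lies: the `LEGAL` table has to track every country’s law and every piece of content has to be classified into categories.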

Other civil liberties?

But why stop at FoE? What about other civil liberties? Should we also enforce them on Facebook where relevant? If a country’s basic law/constitution includes due process requirements, does this mean that any decision on Facebook regarding citizens of that country must adhere to local due process laws? This can be problematic. Suppose two citizens are involved in a case, and they are from different countries with different due process laws. Which country’s law should be adhered to? Both? Just one of them? What if the laws are inconsistent, so that enforcing both is impossible?

Given the recent NSA spying scandals, one might wonder about right to privacy laws. What if adhering to country X’s laws means that some user data must be protected, while adhering to country Y’s laws means that they must be openly available to secret agencies without a court order (even a secret one)? It is not clear.

I am a major proponent of drug legalization, and I have also been following the research on drugs’ influence on driving skills. In media discourse, it is taken for granted that driving under the influence (DUI) is bad because it causes crashes. This is usually assumed to be true for drugs at large, but THC (cannabis) and alcohol especially get attention. Unfortunately, most of the research on the topic is non-experimental, and so open to multiple causal interpretations. I will focus on the recently published report, Drug and Alcohol Crash Risk (US Dept. of Transportation), which I found via The Washington Post.

The study is a case-control design where they try to adjust for potential correlates and causal factors both by statistical means and by data-collection means. Specifically:

The case control crash risk study reported here is the first large-scale study in the United States to include drugs other than alcohol. It was designed to estimate the risk associated with alcohol- and drug-positive driving. Virginia Beach, Virginia, was selected for this study because of the outstanding cooperation of the Virginia Beach Police Department and other local agencies with our stringent research protocol. Another reason for selection was that Virginia Beach is large enough to provide a sufficient number of crashes for meaningful analysis. Data was collected from more than 3,000 crash-involved drivers and 6,000 control drivers (not involved in crashes). Breath alcohol measurements were obtained from a total of 10,221 drivers, oral fluid samples from 9,285 drivers, and blood samples from 1,764 drivers.

Research teams responded to crashes 24 hours a day, 7 days a week over a 20-month period. In order to maximize comparability, efforts were made to match control drivers to each crash-involved driver. One week after a driver involved in a crash provided data for the study, control drivers were selected at the same location, day of week, time of day, and direction of travel as the original crash. This allowed a comparison to be made between use of alcohol and other drugs by drivers involved in a crash with drivers not in a crash, resulting in an estimation of the relative risk of crash involvement associated with alcohol or drug use. In this study, the term marijuana is used to refer to drivers who tested positive for delta-9-tetrahydrocannabinal (THC). THC is associated with the psychoactive effects of ingesting marijuana. Drivers who tested positive for inactive cannabinoids were not considered positive for marijuana. More information on the methodology of this study and other methods of estimating crash risk is presented later in this Research Note.

So, by design, they control for: location, day of week, time of day, and direction of travel. It is also good that they don’t conflate inactive metabolites with THC, as is commonly done.

The basic results are shown in Tables 1 and 3.


The first shows the raw data, so to speak. It can be seen that drug use while driving is fairly common, at about 15% among both crash drivers and normal drivers. Since their testing probably didn’t detect all possible drugs, these are underestimates (assuming the testing does not bias the comparison with uneven false positive/false negative rates).

Now, the authors write:

These unadjusted odds ratios must be interpreted with caution as they do not account for other factors that may contribute to increased crash risk. Other factors, such as demographic variables, have been shown to have a significant effect on crash risk. For example, male drivers have a higher crash rate than female drivers. Likewise, young drivers have a higher crash rate than older drivers. To the extent that these demographic variables are correlated with specific types of drug use, they may account for some of the increased crash risk associated with drug use.

Table 4 examines the odds ratios for the same categories and classes of drugs, adjusted for the demographic variables of age, gender, and race/ethnicity. This analysis shows that the significant increased risk of crash involvement associated with THC and illegal drugs shown in Table 3 is not found after adjusting for these demographic variables. This finding suggests that these demographic variables may have co-varied with drug use and accounted for most of the increased crash risk. For example, if the THC-positive drivers were predominantly young males, their apparent crash risk may have been related to age and gender rather than use of THC.

Table 4 looks like this, and for comparison, Table 6 for alcohol:

[Images: Table 4 and Table 6]

The authors do not state anything outright false. But they mention only one causal model that fits the data: the one where THC’s role is non-causal. However, it is more proper to show both models openly:

Causal models of driving, drug use and demographic variables

The first model is the one discussed by the authors. Here demographic variables cause THC use and crashing, but THC use has no effect on crashing. THC use and crashing are statistically associated because they have a common cause. In the second model, demographic variables cause both THC use and crashing, and THC use also causes crashing. In both models, if one controls for demographic variables, the statistical association of THC use and crashing disappears. Hence, controlling for demographic variables cannot distinguish between these two important models.
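The first model, where a common cause produces a spurious THC-crash association, is easy to demonstrate by simulation. A quick sketch (variable names and effect sizes are my own illustration, not from the study):

```python
# Simulating the first (non-causal) model: demographics cause both
# THC use and crashing; THC itself has no effect on crashing.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
demo = rng.normal(size=n)                  # demographic risk factor (e.g. young male)
thc = 0.7 * demo + rng.normal(size=n)      # THC use, caused by demographics
crash = 0.7 * demo + rng.normal(size=n)    # crashing, caused by demographics; no THC term

# Raw correlation: positive, despite no causal link from THC to crashing.
raw_r = np.corrcoef(thc, crash)[0, 1]

# Partial correlation: regress demographics out of both, correlate residuals.
thc_res = thc - np.polyval(np.polyfit(demo, thc, 1), demo)
crash_res = crash - np.polyval(np.polyfit(demo, crash, 1), demo)
adj_r = np.corrcoef(thc_res, crash_res)[0, 1]

print(round(raw_r, 2), round(adj_r, 2))    # raw association positive, adjusted near 0
```

This reproduces the study’s pattern: a sizable unadjusted association that vanishes after adjusting for demographics, exactly as expected if THC has no causal effect.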

However, one can test the second model by controlling for THC use and seeing whether demographic variables are still associated with crashing. If they are not, the second model above is falsified (assuming adequate statistical power, i.e. no false negative).

Alcohol was still associated with crashing even controlling for demographic variables, which strengthens the case for its causal effect.

How common is alcohol-positive driving?

Incidentally, some interesting statistics on DUI for alcohol:

The differences between the two studies in the proportion of drivers found to be alcohol-positive are likely to have resulted from the concentration of Roadside Survey data collection on weekend nighttime hours, while this study included data from all days of the week and all hours of the day. For example, in the 2007 Roadside Survey the percentage of alcohol-positive weekday daytime drivers was only 1.0 percent, while on weekend nights 12.4 percent of the drivers were alcohol-positive. In this study, 1.9 percent of weekday daytime drivers were alcohol-positive, while 9.4 percent of weekend nighttime drivers were alcohol-positive.

Assuming the causal model of alcohol on crashing is correct, this must result in quite a lot of extra deaths in traffic. Another reason to fund more research into safer vehicles:

Mandatory follow-up:

I had my first Twitter controversy. So:

I pointed out in my reply that they don’t actually charge that much normally. The comparison is here. The prices are around 500-3000 USD, with an average (eyeballed) of around 2500 USD.

Now, this is just a factual error, so not so bad. However…

If anyone is wondering why he is so emotional, he gave the answer himself:

A very brief history of journals and science

  • Science starts out involving few individuals.
  • They need a way to communicate ideas.
  • They set up journals to distribute the ideas on paper.
  • Printing costs money, so they cost money to buy.
  • Due to limitations of paper space, there needs to be some selection in what gets printed, which falls on the editor.
  • Fast forward to perhaps the 1950’s: now there are too many papers for the editors to handle, so they delegate the job of deciding what to accept to other academics (reviewers). In this system, academics write the papers, edit them, and review them. All for free.
  • Fast forward to perhaps 1990, and big business takes over the running of the journals so academics can focus on science. As it does, prices rise becus of monetary interests.
  • Academics are reluctant to give up publishing in and buying journals becus their reputation system is built on publishing in said journals. I.e. the system is inherently conservatively biased (Status quo bias). It is perfect for business to make money from.
  • Now along comes the internet which means that publishing does not need to rely on paper. This means that marginal printing cost is very close to 0. Yet the journals keep demanding high prices becus academia is reliant on them becus they are the source of the reputation system.
  • There is a growing movement in academia that this is a bad situation for science, and that publications shud be openly available (open access movement). New OA journals are set up. However, since they are also either for-profit or crypto for-profit, in order to make money they charge outrageous amounts of money (say, anything above 100 USD) to publish some text+figures on a website. Academics still provide nearly all the work for free, yet they have to pay enormous amounts of money to publish, while the publisher provides a mere website (and perhaps some copyediting etc.).

Who thinks that is a good solution? It is clearly a smart business move. For instance, the popular OA metajournal Frontiers is owned by Nature Publishing Group. This company thus very neatly makes money off both their legacy journals and the new challenger journals.

The solution is to set up journals run by academics again now that the internet makes this rather easy and cheap. The profit motive is bad for science and just results in even worse journals.

As for my claim, I stand by it. Altho in retrospect, the more correct term is parasitic. Publishers are a middleman exploiting the fact that academia relies on established journals for reputation.



Someone posted a nice collection of books dealing with the ongoing revolution in science:

So I decided to read some of them. Ironically, many of them are not available for free (contrary to the general idea of openness in them).

The book is short at 200 pages, with 14 chapters covering most aspects of the changing educational system. It is at times long-winded. It shud probably have been 20-50 pages shorter. However, it seems fine as a general introduction to the area. The author shud have used more grafs, figures etc. to make his points. There are plenty of good figures for these things (e.g. journal revenue increases).



So I kept finding references to this book in papers, so I decided to read it. It is a quick read introducing behavior genetics and its results to lay readers and perhaps policy makers. The book is overly long (200 pages) for its content; it cud easily have been cut by 30 pages. The book itself contains not much new for people familiar with the field (i.e. me), however there are some references that were interesting and unknown to me. It may pay for the expert to simply skim the reference lists for each chapter and read those papers instead.

The main thrust of the book is what policies we shud implement becus of our ‘new’ behavioral genetic knowledge. Basically the authors think that we need to add more choice to schools becus everybody is different and we want to use the gene-environment correlations to improve results. It is hard to disagree with this. They go on about how labeling is bad, but obviously labeling is useful for talking about things.

If one is interested in school policy then reading this book may be worth it, especially if one is a layman. If one is interested in learning behavior genetics, read something else (e.g. Plomin’s 2012 textbook).

I recently published a paper in Open Differential Psychology. After it was published, I decided to tell some colleagues about it so that they would not miss it, because it is not published in either of the two primary journals in the field: Intell or PAID (Intelligence, Personality and Individual Differences). My email is this:

Dear colleagues,

I wish to inform you about my paper which has just been published in Open Differential Psychology.

Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

You can read the full paper at the journal website: openpsych.net/ODP/2014/09/the-international-general-socioeconomic-factor-factor-analyzing-international-rankings/


One researcher responded with:

Dear Emil,
Thanks for your paper.
Why not publishing in standard well established well recognized journals listed in Scopus and Web of Science benefiting from review and
increasing your reputation after publishing there?
Go this way!

This concerns the decision of choosing where to publish. I discussed this in a blog post back in March before setting up OpenPsych. To be very short, the benefits of publishing in legacy journals is 1) recognition, 2) indexing in proprietary indexes (SCOPUS, WoS, etc.), 3) perhaps better peer review, 4) perhaps fancier appearance of the final paper. The first is very important if one is an up-and-coming researcher (like me) because one will need recognition from university people to get hired.

I nevertheless decided NOT to publish (much) in legacy journals. In fact, the reason I got into publishing studies so late is that I dislike the legacy journals in this field (and most other fields). Why dislike legacy journals? I made an overview here, but to sum it up: 1) Either not open access or extremely pricey, 2) no data sharing, 3) non-transparent peer review system, 4) very slow peer review (~200 days on average in the case of Intell and PAID), 5) you’re supporting companies that add little value to science and charge insane amounts of money for it (for Elsevier, see e.g. Wikipedia; TechDirt has a large number of posts concerning that company alone).

As a person who strongly believes in open science (data, code, review, access), there is no way I can defend a decision to publish in Elsevier journals. Their practices are clearly antithetical to science. I also signed The Cost of Knowledge petition not to publish or review for them. Elsevier has a strong economic interest in keeping up their practices and I’m sure they will. The only way to change science for the better is to publish in other journals.

Non-Elsevier journals

Aside from Elsevier journals, one could publish in PLoS or Frontiers journals. They are open access, right? Yes, and that’s a good improvement. However, they are also predatory because they charge exorbitant fees to publish: 1600 € (Frontiers), 1350 US$ (PLoS). One might as well publish in Elsevier as open access, for which they charge 1800 US$.

So are there any open access journals without publication fees in this field? There is only one as far as I know, the newly established Journal of Intelligence. However, the journal site states that the lack of a publication fee is a temporary state of affairs, so there seems to be no reason to help them get established by publishing in their journal. After realizing this, I began work on starting a new journal. I knew that there was a lot of talent in the blogosphere with a similar mindset to me who could probably be convinced to review for and publish in the new journal.


But what about indexing? Web of Science and SCOPUS are both proprietary; not freely available to anyone with an internet connection. But there is a fast-growing alternative: Google Scholar. Scholar is improving rapidly compared to the legacy indexers and is arguably already better since it indexes a host of grey literature sources that the legacy indexers don’t cover. A recent article compared Scholar to WOS. I quote:

Abstract Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: ‘‘The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed’’. GS’s coverage of WoS could in principle become complete in which case WoS could become a subset of GS that could be selected via a GS option ‘‘Select WoS-indexed journals and conferences only’’. 2 Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews. [source]

In other words, Scholar covers almost all the articles that WoS covers already and is quickly catching up on the older studies too. In a few years Scholar will cover close to 100% of the articles in legacy indexers and they will be nearly obsolete.

Getting noticed

One thing related to the above is getting noticed by other researchers. Since many researchers read legacy journals, simply being published in them is likely sufficient to get some attention (and citations!). It is however not the only way. The internet has changed the situation here completely in that there are now lots of different ways to get noticed: 1) Twitter, 2) ResearchGate, 3) Facebook/Google+, 4) Reddit, 5) Google Scholar, which will inform you about any new research by anyone you have cited previously, 6) blogs (own or others’) and 7) emails to colleagues (as above).

Peer review

Peer review in OpenPsych is innovative in two ways: 1) it is forum-style instead of email-based which is better suited for communication between more than 2 persons, 2) it is openly visible which works against biased reviewing. Aside from this, it is also much faster, currently averaging 20 days in review.

Reputation and career

There is clearly a drawback here for publishing in OpenPsych journals compared with legacy journals. Any new journal is likely to be viewed as not serious by many researchers. Most people dislike change, academics included (perhaps especially?). Publishing there will not improve chances of getting hired as much as publishing in primary journals will. So one must weigh what is most important: science or career?


Venice seems to be tired of Italy. It’s a bad economic trade-off for them. They want to return to their former glory. Good! We need more power decentralization.

There was a vote:

Last week, in a move overshadowed by the international outcry over Russia’s annexation of Crimea, Plebiscito.eu, an organization representing a coalition of Venetian nationalist groups, held an unofficial referendum on breaking with Rome. Voters were first asked the main question—”Do you want Veneto to become an independent and sovereign federal republic?”—followed by three sub-questions on membership in the European Union, NATO, and the eurozone. The region’s 3.7 million eligible voters used a unique digital ID number to cast ballots online, and organizers estimate that more than 2 million voters ultimately participated in the poll.

On Friday night, people waving red-and-gold flags emblazoned with the Lion of St. Mark filled the square of Treviso, a city in the Veneto region, as the referendum’s organizers announced the results: 2,102,969 votes in favor of independence—a whopping 89 percent of all ballots cast—to 257,266 votes against. Venetians also said yes to joining NATO, the EU, and the eurozone. The overwhelming victory surprised even ardent supporters of the initiative, as most polls before the referendum estimated only about 65 percent of the region’s voters supported independence.

Someone in the comments makes the following argument:

I don’t understand why it’s so surprising that 89% of respondents in an online, unofficial poll organized by Venetian nationalist groups voted that way. As a proportion of all eligible voters, that comes out to 55-60%, much closer to what you’d expect from neutral sampling.
Self-selection bias is a huge problem with online polling, and I expect that given the methodology of the referendum, that would explain a large part of the discrepancy between the predicted and observed outcomes.

My response:

You are assuming that the entire set of nonvoting citizens would be against it. While there is likely some self-selection, it is NOT likely to be 100%.

I did the math in 10% increments. If everybody had voted either “yes” or “no”, the total outcome range is [56.84%-93.05%], a clear majority in any case.

Even given a very strong self-selection effect such that nonvoters are 70% against, the outcome is 67.7% “yes”.

I did the math, and it is here: docs.google.com/spreadsheet/ccc?key=0AoYWmgpqFzdsdDZUSWhOOEctRnFhakVLUjFsbFpWUHc#gid=0
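The arithmetic behind the spreadsheet can be sketched as follows (vote totals from the article; the nonvoter "yes" fractions are the assumptions being varied):

```python
# Final "yes" share under different assumptions about how nonvoters
# would have voted. Vote totals and the 3.7M eligible-voter figure
# are from the article.
yes, no = 2_102_969, 257_266
eligible = 3_700_000
nonvoters = eligible - yes - no

def yes_share(frac_nonvoters_yes):
    """Overall 'yes' share if every nonvoter had voted,
    with the given fraction of them voting yes."""
    return (yes + frac_nonvoters_yes * nonvoters) / eligible

print(round(yes_share(0.0) * 100, 2))  # all nonvoters against: 56.84
print(round(yes_share(1.0) * 100, 2))  # all nonvoters in favor: 93.05
print(round(yes_share(0.3) * 100, 2))  # nonvoters 70% against: 67.7
```

Even under the most hostile assumption (every single nonvoter against), "yes" keeps a clear majority.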

Here’s the takeaway. Venice wants to be independent and it is not a narrow decision, even assuming implausible self-selection.

I was asked to comment on this Reddit thread: www.reddit.com/r/netsec/comments/s1t2c/netsec_how_would_you_design_an_electronic_voting/


This post is written with the assumption that a bitcoin-like system is used.


Nirvana / perfect solution fallacy

I agree. I don’t think an electronic system needs to solve every problem present in a paper system, it just needs to be better. Right now, for example, one could buy an absentee ballot and be done with it. I think a system that makes it less practical to do something similar is an improvement.


As always when considering options, one should choose the best solution, not stubbornly refuse any change that will not give a perfect situation. Paper voting is not perfect either.



Threatening scenarios

The instant you let people vote from remote locations, everything else is up in the air. It doesn’t matter if the endpoints are secure.
Say you can vote by phone. I have my goons “canvass” the area knocking on doors. “Hey, have you voted for Smith yet? You haven’t? Well, go get your phone, we will help you do it right now.”
If you are trying to do secure voting over the Internet, you have already lost.


While one cannot bring goons right into the voting booths, it is quite clearly possible to threaten people to vote in a particular way right now. The reason it is not generally done is that every single vote has very little power, and the costs are therefore absurdly high for anyone attempting scare tactics.


It is also easy to solve by making it possible to change votes after they have been cast. This is clearly possible with computer technology but hard with paper.



Viruses that target voting software

This is clearly an issue. However, people can easily check that their votes are correct in the votechain (blockchain analogy). A sophisticated virus might wait until the last minute and then vote, but this can easily be prevented by turning off the computers used.


Furthermore, I imagine that one would use specialized software for voting, especially a Linux system designed specifically for safety and voting, rigorously tested by thousands of independent coders. One might also create specialized hardware for voting, i.e. special computers. Specifically, one can have read-only memory, which makes it impossible to install malicious software on the system. For instance, the hardware might have built-in software for voting and a camera for scanning a QR code with one’s private key(s).


Lastly, one can use 2FA to enhance security, just as one does everywhere else on the web where extra safety is needed.



Anonymous and verifiable voting

You can either have a system where people can verify their vote and take some type of receipt to prove the system recorded their vote wrong, or you can have anonymous voting. You cannot have verifiable voting AND anonymous voting. Someone somewhere has to be able to decrypt or access whatever keys or pins or you are holding a meaningless or login or hash that can’t prove you aren’t lying or didn’t change your vote etc.


Yes you can, with pseudonymous voting in a bitcoin-like system. Everybody can verify that no more votes are used than there are eligible voters. But the individuals who control the addresses are not identifiable from the code alone. They can choose to announce their address publicly so that people can connect the two. This will ofc be done by public figures.
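To make the idea concrete, here is a toy sketch (my own illustration, not an actual bitcoin-style protocol: no cryptography, mining, or blockchain is modeled). Addresses are opaque tokens, the ledger is public, and anyone can recompute the tally and check eligibility. Letting the latest vote per address win also makes votes changeable after casting:

```python
# Toy pseudonymous-but-verifiable voting: the public ledger records
# (address, choice) pairs; addresses reveal nothing about identity.
import secrets

# Eligible addresses are issued to voters off-ledger (hypothetical step).
eligible_addresses = {secrets.token_hex(16) for _ in range(5)}

ledger = []  # public: anyone can inspect and recount it

def cast_vote(address, choice):
    if address not in eligible_addresses:
        raise ValueError("address not eligible")
    ledger.append((address, choice))

def tally():
    """Recomputable by anyone from the public ledger. Only the last vote
    per address counts, which makes votes changeable after casting."""
    latest = {}
    for address, choice in ledger:
        latest[address] = choice
    counts = {}
    for choice in latest.values():
        counts[choice] = counts.get(choice, 0) + 1
    return counts

addrs = sorted(eligible_addresses)
cast_vote(addrs[0], "yes")
cast_vote(addrs[1], "no")
cast_vote(addrs[0], "no")  # voter changes their mind; latest vote wins
print(tally())
```

A real system would replace the opaque tokens with public keys and signed transactions, but the verifiability property is the same: the count is public and auditable while the address-to-person mapping stays private unless a voter discloses it.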



Selling votes

This is already possible today, and it is already possible to verify it as well, since one can easily film the process of voting. This is not generally illegal either.


The reason why people do not generally buy or sell votes is that single votes have basically no power and hence are worth nothing.


As pointed out in the thread, this is already possible with mail-in voting.


Lastly, buying and selling votes is generally thought to be evil or wrong, but only when done directly. It is clearly legal indirectly, and even if not de jure legal, it is de facto legal. In every modern democracy, it is common for politicians to offer certain wealth or income redistribution policies. If people who would benefit from these policies vote for those politicians, they are indirectly receiving money for voting for a given politician/party. For this reason, the buying and selling of votes is a non-issue.



The ease of digital attacks

It seems to me that the real problem is the scalability of the attacks in the digital sphere. Changing votes in our regular system of several thousand human ballot counters looking a pieces of paper is rather costly. A well-planned digital attack can be virtually free of cost (not counting the time it takes to figure out the attack).


This is a concern, and that is why one will need tough security and verification technologies. I have suggested several above.



Interceptions of the signal

Whatever, VPN, custom software, browser. It’s the same thing. Malware or even an ISP could intercept and manipulate what is displayed or recorded. The software on the receiving end can also be manipulated but more likely to have some controls of the hardware and software, but again, who inspects this?


This could be a problem. It can be reduced by having a nationally free, encrypted VPN/proxy for voting purposes.



Others who were faster than me

Voting could not be more further from any of the simplest banking. The idea behind banking or any “secure” online transaction is that it is not anonymous. Bitcoin might be the only viable anonymous type online voting.



The bitcoin protocol would actually be fantastic for this. I should explain for those unaware: Bitcoin is actually two different things. One: A protocol, and Two: A software implementing the protocol to send ‘coins’ like money to others. I’ll do a writeup a little later, but the gist of it is: the votes would be public for anyone to view, impossible to fake/forge, and still anonymous. This would be done by embedding the voting information into the blockchain.



Strong encryption with distributed verification a la bitcoin. You don’t have to trust the clients; you trust the math. I’m by no means a crypto expert, so don’t look to me for design tips, but I suspect you could map a private key to each valid voter’s SSN then generate a vote (hash) that could be verified by the voter pool.


These posts date to “1 year ago” according to Reddit. Clearly, I was not the first to think of the obvious.



Who is going to mine votecoins?

So unless you are actually piggy-backing voting ontop of another currency (like the main bitcoin blockchain), there’s no incentive for ordinary citizens to participate and validate/process the blockchain. What are they mining? More votes?? That seems weird/illegitimate. If you say “well, some government agency can just do all the mining and distribute coins to voters” this would seem to offer no improvement over a straightforward centralized system, and only introduces extra questions like


The government and the users who want to help out. Surely citizens have some self-interest in getting the election over with. This is a non-issue.


If the government started the block chain, mined the correct number of coins, and then put it in the “no more coins mode” then we would have the setup for it. If they could convince one of the major pools to do merged mining with them (i’m not sure what they would exchange for this, but it would only have to be for a week/month) if hiring a pool is out of the question then just realize that the govt spends millions routinely on elections, and $10M should be more than enough to beat most mafias (~9Thash/s which is roughly what the current bitcoin rate is). If someone like the coke brothers tried to overpower this it would be very obvious.


Yes, this is the same solution I suggested. Code the system so that the first block gives all votecoins.


Another option is making a dual currency system, such that one can help mine votecoins and only get rewarded in rewardcoins. That way the counting is distributed to whoever wants the job.



The prize for the least imagination

The simple answer is that I would not. The risks and downsides of such a system are inherently not worth the only benefit which I can think of (faster results). This should also answer your last question. This hasn’t been done simply because there is no good reason to do it.


No other benefits? Like… an infinite variety of other voting systems???



The price of online voting

You’re assuming the cost of an electronic voting system and the time it will take for people to be comfortable using them will outpace paper and pen, which if you ask me is a pretty damn big assumption. Maybe someday, but until a grandma can easily understand and use electronic voting I am loathe to even think about implementing it. A voting system needs to be transparent and easy to understand.


In Denmark it costs about 100 million DKK to have a vote. Is he really suggesting this cannot be done cheaper with computers? I can’t take it seriously.