See also: Rationality and bias test results

I stumbled upon another political bias test:

Basically, you are given 10 questions: 5 about why US-style conservatives favor/oppose a given policy, and 5 about why US-style liberals favor/oppose the same policies. It is a multiple-choice format. Your score is then calculated by comparing your answers with those given to the same survey by people who identify as conservative/liberal.

Does it work? I guess. It has a sampling problem in that people who take political surveys on the internet are not going to be representative of these political groups, so if one answers about the actual political group, one might get it slightly wrong. Another problem is that the score is based on only 5 items, which must mean fairly large sampling error. Other problems include its use of a 1-dimensional model of political opinions, despite the evidence that this is not a good model of the data, and its US-centrism.

Despite that:

You identified yourself as slightly liberal. Your score in correctly choosing conservatives’ reasoning (based on answers given by self-identified conservatives to date) was 4 out of 5 questions.
Your score in identifying conservatives’ reasoning is as good as or better than 97 percent of others who are slightly liberal.

The result is fairly in line with my prior results. I picked slightly liberal because by US standards I’m slightly towards less freedom on the economic distribution dimension, but by the Danish standard, I’m somewhat towards more freedom.

I had the impression that, since recognition of [problem] dates back at least to [person from a long time ago], there was a voluminous literature and [statistics to deal with the problem] was a solved problem, so I’m a little troubled that you seem to be trying to invent your own methods and aren’t citing much in the way of prior work.

This anonymous critique is saying that I’m not building on top of what is already there but instead re-inventing the wheel, perhaps even a square one. Underlying the criticism is a view of scientific progress as accumulating knowledge over time. We know more now than we used to (though some things we think we know, we still get wrong!), and this is because new scientists don’t start finding out how everything works (the goal of science) from scratch, but instead read what research has already been done and try to build on top of that. At least, that is the general idea. However, we know from actual scientific practice that scientists often don’t build on top of prior work, perhaps because the body of prior work is already so large that having an overview of it is beyond current human cognitive capacity, or because the prior work is often inaccessible, badly structured, not searchable, etc. Other times scientists are just lazy.

The first problem is in principle unsolvable, because improving human cognitive ability/capacity will also accelerate the accumulation of knowledge, so the literature will keep outgrowing our ability to survey it. However, we will (very soon) improve upon the present situation (Shulman & Bostrom, 2014).

The second problem is faster to fix, requiring either ubiquitous open access or guerrilla open access. The first option is coming along fast for new material, but won’t solve it for old material already locked down by copyright. Probably Big Copyright is going to lobby for extending copyright protection further, which means that even just waiting for copyright to expire is not a legal option.

A delicious example of scientists not building on top of relevant prior works is the concept of construct proliferation (Reeve & Basalik, 2014), which is when we invent a new word/concept to cover the same region in conceptual space as previous concepts already covered. This is itself a redundant copy of the earlier term construct redundancy. This meta-problem is fairly obvious, so my guess is that there is a long list of terms for it, thus illustrating itself.

Yet I argue the opposite…

Given the above, why would one willingly want to not read the earlier literature/build on top of prior work on a topic before trying to find solutions? There are some possible reasons:

One reason is personal. Perhaps one just really likes the experience of finding an interesting problem and coming up with solutions. This is closely related to a couple of concepts: openness to experience, typical intellectual engagement, need for cognition, epistemic curiosity (and more); see (Mussel, 2010) and (Stumm, Hell, & Chamorro-Premuzic, 2011). Incidentally, these also show strong concept overlap (yet another term for the situation where multiple concepts cover some of the same area in conceptual space; it differs in being explicitly continuous rather than categorical).

A career reason to invent new constructs is a desire to make a name for yourself and get a good job. A well-tested way to do that is to introduce a new concept and an accompanying questionnaire that others then hopefully use. This can result in hundreds or thousands of citations. For instance, the original paper for need for cognition has 5063 citations on Scholar since 1982 (153 per year), the original paper for typical intellectual engagement has 410 citations since 1992 (18 per year), and that for epistemic curiosity has 156 since 2003 (13 per year). The later papers do have lower citation counts per year, perhaps indicating some conceptual satiation, but the papers are still way above the norm. To put it another way, since it is clearly unnecessary to read much of the relevant prior work to get published, one may as well skip this.
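As a quick check on those per-year figures (a minimal sketch; I am assuming the counts were taken in 2015, the year of this post):

    # Citation counts and publication years from the text above.
    papers = {
        "need for cognition": (5063, 1982),
        "typical intellectual engagement": (410, 1992),
        "epistemic curiosity": (156, 2003),
    }
    for name, (cites, year) in papers.items():
        # Assumes the counts are as of 2015.
        print(f"{name}: {cites / (2015 - year):.0f} per year")  # 153, 18, 13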

Scientifically speaking, neither of the above two reasons is relevant. The first has more to do with personality dispositions towards solving new problems, whereas the second is due, to some degree, to perverse incentives.

Exploratory bridge building

Are there any good scientific reasons to sometimes start from scratch? I think so. Think of it this way: Many scientific questions can be approached in multiple ways. We can build a large analogy out of that idea.

Imagine a many-dimensional space where some regions are impassable or slow to pass, and where there are one or more regions or points from which useful resources can be extracted. We, the bridge engineers, start somewhere in this space (all in the same place) and have to find resources, but we don’t know exactly where they will be found, so we don’t know exactly which directions to move in. Furthermore, imagine that we can build bridges (vectors) in this space by adding new segments to the ends of existing ones, and that we can only move on (or in) the bridges. This means that one can now travel in a particular direction, at least slowly. If the resources are far away from the starting position, it is easy to see that one could never reach them without adding vectors together. This forms the basis of the general preference for building on prior work.

How do we know which direction to build bridges in if we don’t know where the resources are? We can expand the analogy further by saying that no one can see further than a short distance. Instead, what engineers have is a noisy measure of how close their current position is to the nearest resource, and their measures don’t even agree perfectly with each other. Noisy here means that the measure is only roughly correct, to varying degrees and with different biases. Sometimes what appears to be a good general direction towards a resource ends up in a resource-poor dead end, i.e. all directions that would move one closer to the nearby resources pass through impassable or difficult-to-pass regions.

Those familiar with evolutionary biology should see where I’m going with this. If we reduce my analogy, we can say that approaches to answers in science can end up in local maxima in the science fitness landscape. When this happens, one has to go back and move in a new direction.

Still, this leaves us with the question of how far back we should move. Often it may be necessary to go back only some of the way and start a new branch of the same root bridge from that point. Sometimes, however, a very early part of the bridge moved into a region that can only result in slow progress or even a dead end. When this happens, one has to start over entirely.
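To make the trap concrete, here is a minimal Python sketch of the analogy (the landscape and all numbers are invented): a greedy climber working only from its current position gets stuck on the nearer, lower peak, while occasionally starting over from a fresh position sometimes finds the higher one.

    import random

    # Toy 1-D "science landscape": a low peak near the start, a higher one farther out.
    def fitness(x):
        return max(0.0, 4 - abs(x - 4)) + max(0.0, 10 - abs(x - 20))

    def climb(start, steps=200, step_size=0.5, noise=1.0):
        """Greedy hill-climbing with a noisy fitness signal (the engineers'
        imperfect sense of how close the nearest resource is)."""
        x = start
        for _ in range(steps):
            candidate = x + random.choice([-step_size, step_size])
            # Only noisy measurements are available, so moves are sometimes wrong.
            if fitness(candidate) + random.gauss(0, noise) > fitness(x) + random.gauss(0, noise):
                x = candidate
        return x, fitness(x)

    random.seed(1)
    # Always building on prior work: one long climb from the origin.
    print("single climb:", climb(0.0))
    # Occasionally starting over: ten independent climbs, keep the best.
    print("with restarts:", max((climb(random.uniform(-10, 30)) for _ in range(10)),
                                key=lambda r: r[1]))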

Decision making

Because all engineers are short-sighted, it is impossible for them to know when it is time to start over. Worse, engineers have a kind of tunnel vision such that once they have traveled out on a given bridge from their homeland, they are less capable of spotting good directions in which to build other root bridges. In other words, once one has learned a particular approach to a problem, it can be difficult to go “back to basics” and start over with new ideas. One needs a pair of fresh eyes. The only way to get this is to find an engineer who has never been to this space before, avoid informing him of the already-built bridges, let him choose where to build his first bridge, and let him work on it for some time to see whether he ends up in a dead end or in a previously unknown resource-rich area. Even if the engineers have already found one good resource region, they might wonder whether there are more. Finding more resources probably requires moving in a new direction from the beginning, or at least from an early part of the bridge.


It is clear that for a large team project, neither extreme solution is optimal: 1) always building on prior work, or 2) never building on prior work. Instead, some balance must be found where some, probably most, engineers are dedicated to building on top of fairly recent prior work, while some engineers try to backtrack and see if they can find a better route to a currently known resource area or identify new areas.

Who should start new bridges? We may posit that the engineers vary in their psychological attributes in ways that affect their efficiency at building on prior bridges or starting their own root bridges/branches. In that case, engineers who are particularly good at spotting new directions and at working on their own bridge alone would be good for the role of pioneer/Rambo engineers. Even if there are no differences between engineers in their efficiency at building new branches/roots versus building on top of prior work, if a few engineers are inclined to work alone, perhaps finding new resources (reason #1), the team ends up in the optimal situation where most build on fairly recent prior work but some don’t.
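A small extension of the earlier sketch (same invented landscape) compares teams with different fractions of pioneers. With no pioneers, the team converges on the local peak; with a modest fraction, it tends to find the distant, higher one.

    import random

    def fitness(x):
        # Same toy landscape: a low peak near the start, a higher one farther out.
        return max(0.0, 4 - abs(x - 4)) + max(0.0, 10 - abs(x - 20))

    def team_run(n_engineers=20, pioneer_fraction=0.2, rounds=50, seed=0):
        rng = random.Random(seed)
        positions = [0.0] * n_engineers
        for _ in range(rounds):
            best = max(positions, key=fitness)
            for i in range(n_engineers):
                if rng.random() < pioneer_fraction:
                    positions[i] = rng.uniform(-10, 30)    # strike out on a new root bridge
                else:
                    positions[i] = best + rng.gauss(0, 1)  # build near the best prior work
        return fitness(max(positions, key=fitness))

    for frac in (0.0, 0.2, 1.0):
        print(f"pioneer fraction {frac}: best fitness {team_run(pioneer_fraction=frac):.2f}")

In this easy 1-D toy, the all-pioneer team also does fine; the cost of never building on prior work only shows up when the space is high-dimensional and resource regions are too small to hit by chance.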


Given the abstractness of the space bridge engineer analogy, one should probably make a visualization, or maybe even a small computer game. The latter is beyond my coding ability at the moment, and the former requires more time than I have.


I’ve always considered myself a very rational and fairly unbiased person. Being aware of the general tendency for people to overestimate themselves (see also the visualization of the Dunning-Kruger effect), this of course reduces my confidence in my own estimates of these traits. So what better to do than take some actual tests? I have previously taken the short test of estimation ability found in What intelligence tests miss and got 5/5 right. This is actually slight evidence of underconfidence, since I was supposed to give 80% confidence intervals, which means I should have had about 1 error, not 0. Still, with 5 items the precision is too low to say with much certainty whether I’m actually underconfident, but it does show that I’m unlikely to be strongly overconfident. Underconfidence is expected for smarter people. A project of mine is to make a much longer test of this kind, so as to give more precise estimates. It should be fairly easy to find a lot of numbers and have people give 80% confidence intervals for them: the depth of the deepest ocean, the height of the tallest mountain, the age of the oldest living organism, the age of the Earth/universe, dates of various historical events such as the end of WW2 or the beginning of the American Civil War, and so on.
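A quick check of that arithmetic (a sketch, assuming the five intervals are independent and perfectly calibrated at 80%):

    from math import comb

    p, n = 0.8, 5  # nominal coverage of each interval, number of items
    # Probability of getting exactly k of the n intervals right.
    for k in range(n + 1):
        print(k, round(comb(n, k) * p**k * (1 - p)**(n - k), 3))
    # 5/5 happens with probability 0.8**5 ~ 0.33, so a perfect score is
    # only weak evidence of underconfidence, as noted above.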

However, I recently saw an article about a political bias test. I think I’m fairly unbiased; as a result, my beliefs don’t really fit into any mainstream political theory. This is as expected, because the major political ideologies were invented before we understood much about anything, making it unlikely that they would get everything right. More likely, they get some things right and some things wrong.

Here’s my test results for political bias:

[Two screenshots of the test results, 2015-09-16]

So in centiles: >= 99th for knowledge of American politics. This is higher than I expected (around 95th). Since I’m not a US citizen, presumably the test has some bias against me. For bias, the centile is <= 20th. This result did not surprise me. However, since there is a huge floor effect, this test needs more items to be more useful.

Next up, I looked at the website and saw that they had a number of other potentially useful tests. One is about common misconceptions. Since I consider myself a scientific rationalist, I should do fairly well on this, also because I have read somewhat extensively on the issue (the Wikipedia list, Snopes and 50 myths of pop psych).

Unfortunately, they present the results in a verbose form, and pasting 8 images would be excessive, so I will paste some of the relevant text:

1. Brier score

Your total performance across all quiz and confidence questions:

This measures your overall ability on this test. This number above, known as a “Brier” score, is a combination of two data points:

How many answers you got correct
Whether you were confident at the right times. That means being more confident on questions you were more likely to be right about, and less confident on questions you were less likely to get right.

The higher this score is, the more answers you got correct AND the more you were appropriately confident at the right times and appropriately uncertain at the right times.

Your score is above average. Most people’s Brier scores fall in the range of 65-80%. About 5% of people got a score higher than yours. That’s a really good score!
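For reference, the classic Brier score is just the mean squared error between stated probabilities and outcomes (lower is better); the site apparently rescales it so that higher is better, by some transformation it does not state. A minimal sketch with made-up numbers:

    def brier(forecasts, outcomes):
        """Classic Brier score: mean squared error between stated
        probabilities and what actually happened (0 = perfect)."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical answers: confident and right, confident and wrong, unsure, etc.
    forecasts = [0.9, 0.9, 0.5, 0.7]  # stated probability that the answer is true
    outcomes  = [1,   0,   1,   1]    # 1 = the answer was in fact true
    print(brier(forecasts, outcomes))  # 0.29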

2. Overall accuracy

Answers you got correct: 80%

Out of 30 questions, you got 24 correct. Great work B.S. detector! You performed above average. Looks like you’re pretty good at sorting fact from fiction. Most people correctly guess between 16 and 21 answers, a little better than chance.

Out of the common misconceptions we asked you about, you correctly said that 12/15 were actually B.S. That’s pretty good!

No centiles are provided, so it is not evident how this compares to others.

3. Total points

[Screenshot: total points chart, 2015-09-16]

As for your points, this is another way of showing your ability to detect Fact vs. B.S. and your confidence accuracy. The larger the score, the better you are at doing both! Most people score between 120 and 200 points. Looks like you did very well, ending at 204 points.

4. Reliability of confidence intervals

Reliability of your confidence levels: 89.34%

Were you confident at the right times? To find out, we took a portion of your earlier Brier score to determine just how reliable your level of confidence was. It looks like your score is above average. About 10% of people got a score higher than yours.

This score measures the link between the size of your bet and the chance you got the correct answer. If you were appropriately confident at the right times, we’d expect you to bet a lot of points more often when you got the answer correct than when you didn’t. If you were appropriately uncertain at the right times, we’d expect you to typically bet only a few points when you got the answer wrong.

You can interpret this score as measuring the ability of your gut to distinguish between things that are very likely true, versus only somewhat likely true. Or in other words, this score tries to answer the question, “When you feel more confident in something, does that actually make it more likely to be true?”

5. Confidence and accuracy

[Screenshot: confidence vs. accuracy chart, 2015-09-16]

When you bet 1-3 points your confidence was accurate. You were a little confident in your answers and got the answer correct 69.23% of the time. Nice work!

When you bet 4-7 points you were underconfident. You were fairly confident in your answers, but you should have been even more confident because you got the answer correct 100% of the time!

When you bet 8-10 points your confidence was accurate. You were extremely confident in your answer and indeed got the answer correct 100% of the time. Great work!

So, again there is some evidence of underconfidence. E.g. for the questions where I bet 0 points, I still had 60% accuracy, though it should have been 50%.

6. Overall confidence

Your confidence: very underconfident

You tended to be very underconfident in your answers overall. Let’s explore what that means.

In the chart above, your betting average has been translated into a new score called your “average confidence.” This represents roughly how confident you were in each of your answers.

People who typically bet close to 0 points would have an average confidence near 50% (i.e. they aren’t confident at all and don’t think they’ll do much better than chance).
People who typically bet about 5 points would have an average confidence near 75% (i.e. they’re fairly confident; they might’ve thought there was a ¼ chance of being wrong).
People who typically bet 10 points would have an average confidence near 100% (i.e. they are extremely confident; they thought there was almost no chance of being wrong).

The second bar is the average number of questions you got correct. You got 24 questions correct, or 80%.

If you are a highly accurate bettor, then the two bars above should be about equal. That is, if you got an 80% confidence score, then you should have gotten about 80% of the questions correct.

We said you were underconfident because on average you bet at a confidence level of 69.67% (i.e. you bet 3.93 on average), but in reality you did better than that, getting the answer right 80% of the time.
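The conversion the site seems to use is linear in the bet (0 points = 50%, 5 = 75%, 10 = 100%); a quick sketch reproduces the numbers above up to rounding:

    def confidence(bet):
        # 0 points -> 50%, 5 points -> 75%, 10 points -> 100% (from the text above)
        return 50 + 5 * bet

    avg_bet = 3.93
    print(confidence(avg_bet))   # 69.65, vs. the reported 69.67 (rounding)
    accuracy = 80.0              # 24/30 correct
    print(accuracy - confidence(avg_bet))  # a ~10-point gap = underconfidence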

In general, the results were in line with my predictions: high ability + general overestimation + imperfect correlation between self-rated and actual ability results in underconfidence. My earlier result indicated some underconfidence as well, and the longer test gave the same result. Apparently, I need to be more confident in myself. This is despite the fact that I scored 98 and 99 on the assertiveness facet of the OCEAN test in two different test-taking sessions some months apart.

I did take their additional rationality test, but since it was just based on pop-psych Kahneman-style points, it doesn’t seem very useful. It also uses typological thinking, classifying people into 16 classes, which is clearly wrong-headed. It found my weakest side to be the planning fallacy, but this isn’t actually the case, because I’m pretty good at getting papers and projects done on time.

I had heard good things about this book, sort of. It has been cited a lot, enough that I was willing to read it, given that the author has written at least one interesting paper (Political Diversity Will Improve Social Psychological Science). Generally, it is written in pop-sci style, with very few statistics, making it impossible to easily judge how much certainty to assign to the different studies mentioned in the text. Generally, I was not impressed and did not learn much, though not all of it was necessarily bad. Clearly, he wrote this book in an attempt to appeal to many different kinds of people. Perhaps he succeeded, but appeals that work well on large parts of the population rarely work well on me.

In any case, there are some parts worth quoting and commenting on:

The results were as clear as could be in support of Shweder. First, all four of my Philadelphia groups confirmed Turiel’s finding that Americans make a big distinction between moral and conventional violations. I used two stories taken directly from Turiel’s research: a girl pushes a boy off a swing (that’s a clear moral violation) and a boy refuses to wear a school uniform (that’s a conventional violation). This validated my methods. It meant that any differences I found on the harmless taboo stories could not be attributed to some quirk about the way I phrased the probe questions or trained my interviewers. The upper-class Brazilians looked just like the Americans on these stories. But the working-class Brazilian kids usually thought that it was wrong, and universally wrong, to break the social convention and not wear the uniform. In Recife in particular, the working-class kids judged the uniform rebel in exactly the same way they judged the swing-pusher. This pattern supported Shweder: the size of the moral-conventional distinction varied across cultural groups.

Emil’s law: Whenever a study reports that socioeconomic status correlates with X, it is mostly due to its relationship to intelligence, and often socioeconomic status is non-causally related to X.

Wilson used ethics to illustrate his point. He was a professor at Harvard, along with Lawrence Kohlberg and the philosopher John Rawls, so he was well acquainted with their brand of rationalist theorizing about rights and justice.15 It seemed clear to Wilson that what the rationalists were really doing was generating clever justifications for moral intuitions that were best explained by evolution. Do people believe in human rights because such rights actually exist, like mathematical truths, sitting on a cosmic shelf next to the Pythagorean theorem just waiting to be discovered by Platonic reasoners? Or do people feel revulsion and sympathy when they read accounts of torture, and then invent a story about universal rights to help justify their feelings?
Wilson sided with Hume. He charged that what moral philosophers were really doing was fabricating justifications after “consulting the emotive centers” of their own brains.16 He predicted that the study of ethics would soon be taken out of the hands of philosophers and “biologicized,” or made to fit with the emerging science of human nature. Such a linkage of philosophy, biology, and evolution would be an example of the “new synthesis” that Wilson dreamed of, and that he later referred to as consilience—the “jumping together” of ideas to create a unified body of knowledge.17
Prophets challenge the status quo, often earning the hatred of those in power. Wilson therefore deserves to be called a prophet of moral psychology. He was harassed and excoriated in print and in public.18 He was called a fascist, which justified (for some) the charge that he was a racist, which justified (for some) the attempt to stop him from speaking in public. Protesters who tried to disrupt one of his scientific talks rushed the stage and chanted, “Racist Wilson, you can’t hide, we charge you with genocide.”19

For more on the history of sociobiology, see:

But yes, human rights bug me. There is no such thing as an ethical right ‘out there’. Human rights are completely made up. While some of them are useful as models for civil rights, they are nothing more. Worse, human rights keep getting added which are inconsistent, vague and redundant. See e.g.

I say this as someone who strongly believes in having strong civil rights, especially regarding freedom of expression, assembly, due process and the like. However, since pushing for new human rights attracts social justice warriors, the new rights not only conflict with previous rights (e.g. freedom of expression) but also concern issues that should obviously be matters of national policy (e.g. resource redistribution), not of super-national courts and their creeping interpretations. See e.g. this document: or even worse, this one: A EUROPEAN FRAMEWORK NATIONAL STATUTE FOR THE PROMOTION OF TOLERANCE. The irony is that tolerance consists exactly in letting others do what they want. Wiktionary: The ability or practice of tolerating; an acceptance or patience with the beliefs, opinions or practices of others; a lack of bigotry.

Psychopaths do have some emotions. When Hare asked one man if he ever felt his heart pound or stomach churn, he responded: “Of course! I’m not a robot. I really get pumped up when I have sex or when I get into a fight.”29 But psychopaths don’t show emotions that indicate that they care about other people. Psychopaths seem to live in a world of objects, some of which happen to walk around on two legs. One psychopath told Hare about a murder he committed while burglarizing an elderly man’s home:
I was rummaging around when this old geezer comes down stairs and … uh … he starts yelling and having a fucking fit … so I pop him one in the, uh, head and he still doesn’t shut up. So I give him a chop to the throat and he … like … staggers back and falls on the floor. He’s gurgling and making sounds like a stuck pig! [laughs] and he’s really getting on my fucking nerves so I … uh … boot him a few times in the head. That shut him up … I’m pretty tired by now so I grab a few beers from the fridge and turn on the TV and fall asleep. The cops woke me up [laughs].30


This is the sort of bad thinking that a good education should correct, right? Well, consider the findings of another eminent reasoning researcher, David Perkins.21 Perkins brought people of various ages and education levels into the lab and asked them to think about social issues, such as whether giving schools more money would improve the quality of teaching and learning. He first asked subjects to write down their initial judgment. Then he asked them to think about the issue and write down all the reasons they could think of—on either side—that were relevant to reaching a final answer. After they were done, Perkins scored each reason subjects wrote as either a “my-side” argument or an “other-side” argument.
Not surprisingly, people came up with many more “my-side” arguments than “other-side” arguments. Also not surprisingly, the more education subjects had, the more reasons they came up with. But when Perkins compared fourth-year students in high school, college, or graduate school to first-year students in those same schools, he found barely any improvement within each school. Rather, the high school students who generate a lot of arguments are the ones who are more likely to go on to college, and the college students who generate a lot of arguments are the ones who are more likely to go on to graduate school. Schools don’t teach people to reason thoroughly; they select the applicants with higher IQs, and people with higher IQs are able to generate more reasons.
The findings get more disturbing. Perkins found that IQ was by far the biggest predictor of how well people argued, but it predicted only the number of my-side arguments. Smart people make really good lawyers and press secretaries, but they are no better than others at finding reasons on the other side. Perkins concluded that “people invest their IQ in buttressing their own case rather than in exploring the entire issue more fully and evenhandedly.”22

Cite is: Perkins, D. N., M. Farady, and B. Bushey. 1991. “Everyday Reasoning and the Roots of Intelligence.” In Informal Reasoning and Education, ed. J. F. Voss, D. N. Perkins, and J. W. Segal, 83–105. Hillsdale, NJ: Lawrence Erlbaum.

From Plato through Kant and Kohlberg, many rationalists have asserted that the ability to reason well about ethical issues causes good behavior. They believe that reasoning is the royal road to moral truth, and they believe that people who reason well are more likely to act morally.
But if that were the case, then moral philosophers—who reason about ethical principles all day long—should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students.48 And in none of these ways are moral philosophers better than other philosophers or professors in other fields.
Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably borrowed mostly by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy.49 In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.

Oh dear.

The anthropologists Pete Richerson and Rob Boyd have argued that cultural innovations (such as spears, cooking techniques, and religions) evolve in much the same way that biological innovations evolve, and the two streams of evolution are so intertwined that you can’t study one without studying both.65 For example, one of the best-understood cases of gene-culture coevolution occurred among the first people who domesticated cattle. In humans, as in all other mammals, the ability to digest lactose (the sugar in milk) is lost during childhood. The gene that makes lactase (the enzyme that breaks down lactose) shuts off after a few years of service, because mammals don’t drink milk after they are weaned. But those first cattle keepers, in northern Europe and in a few parts of Africa, had a vast new supply of fresh milk, which could be given to their children but not to adults. Any individual whose mutated genes delayed the shutdown of lactase production had an advantage. Over time, such people left more milk-drinking descendants than did their lactose-intolerant cousins. (The gene itself has been identified.)66 Genetic changes then drove cultural innovations as well: groups with the new lactase gene then kept even larger herds, and found more ways to use and process milk, such as turning it into cheese. These cultural innovations then drove further genetic changes, and on and on it went.

Why is this anyway? Why don’t we just keep expressing this gene? Is there any reason?

In an interview in 2000, the paleontologist Stephen Jay Gould said that “natural selection has almost become irrelevant in human evolution” because cultural change works “orders of magnitude” faster than genetic change. He next asserted that “there’s been no biological change in humans in 40,000 or 50,000 years. Everything we call culture and civilization we’ve built with the same body and brain.”77

I wonder, was Gould right about anything? Another thing: did Gould invent his punctuated equilibrium theory because it postulates change-free periods, which can then conveniently be claimed for humans over the last 100k years or so, in order to keep his denial of racial differences consistent with evolution?

Religion is therefore well suited to be the handmaiden of groupishness, tribalism, and nationalism. To take one example, religion does not seem to be the cause of suicide bombing. According to Robert Pape, who has created a database of every suicide terrorist attack in the last hundred years, suicide bombing is a nationalist response to military occupation by a culturally alien democratic power.62 It’s a response to boots and tanks on the ground—never to bombs dropped from the air. It’s a response to contamination of the sacred homeland. (Imagine a fist punched into a beehive, and left in for a long time.)

This sounds interesting.

The problem is not just limited to politicians. Technology and changing residential patterns have allowed each of us to isolate ourselves within cocoons of like-minded individuals. In 1976, only 27 percent of Americans lived in “landslide counties”—counties that voted either Democratic or Republican by a margin of 20 percent or more. But the number has risen steadily; in 2008, 48 percent of Americans lived in a landslide county.77 Our counties and towns are becoming increasingly segregated into “lifestyle enclaves,” in which ways of voting, eating, working, and worshipping are increasingly aligned. If you find yourself in a Whole Foods store, there’s an 89 percent chance that the county surrounding you voted for Barack Obama. If you want to find Republicans, go to a county that contains a Cracker Barrel restaurant (62 percent of these counties went for McCain).78

This sounds more like assortative relocation plus a greater overall amount of relocation. Now, if only there were more local democratic power, then people living in these different areas could self-govern and stop arguing about conflicts that would otherwise never arise. E.g. healthcare systems: each smaller area could decide on its own system. I like to quote from Uncontrolled:

This leads then to a call for “states as laboratories of democracy” federalism in matters of social policy, or in a more formal sense, a call for subsidiarity—the principle that matters ought to be handled by the smallest competent authority. After all, the typical American lives in a state that is a huge political entity governing millions of people. As many decisions as possible ought to be made by counties, towns, neighborhoods, and families (in which parents have significant coer­cive rights over children). In this way, not only can different prefer­ences be met, but we can learn from experience how various social arrangements perform.

Nuclear energy often gets bad press. However, journalists are mostly very leftist, scientifically ill-educated women, so perhaps they are not quite the right demographic to tell us about this issue. So far I have not found any published studies on the relationship between attitudes towards nuclear energy and general intelligence, but there is good reason to believe it is positive. The EU is nice enough to survey the opinions of EU citizens on nuclear energy every few years, and they include measurements of various variables, but not general intelligence or general science knowledge.

Three surveys of European attitudes towards nuclear power and correlates

[Figures nuc1-nuc18: charts from the surveys]

Generally, these find:

  1. Men are more positive about nuclear power
  2. The better educated are more positive about nuclear power
  3. The better educated are less in doubt about nuclear power
  4. Education has an inconsistent relationship to opposition to nuclear power
  5. Self-rated knowledge is related to a more positive attitude towards nuclear power
  6. Those with more experience with nuclear power are more positive about it

(4) might seem inconsistent with (1-3), but it is not. Usually the questions have three broad response categories: positive, negative, don’t know. As the education level increases, negative stays about the same (sometimes increasing, sometimes decreasing), while don’t know always decreases and positive nearly always increases. Thus, the simplest explanation is that higher education moves people from the don’t know category to the positive category, while having no clear effect on those who are negative about nuclear power.
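A tiny illustration with invented shares (percent of respondents) shows how this produces pattern (4):

    # Hypothetical response shares for low- vs. high-education respondents,
    # invented to illustrate the pattern described above.
    low  = {"positive": 35, "negative": 30, "don't know": 35}
    high = {"positive": 55, "negative": 30, "don't know": 15}
    # "Negative" is flat across education, so education vs. opposition looks
    # inconsistent, even though "positive" rises exactly as "don't know" falls.
    for k in low:
        print(k, low[k], "->", high[k])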

Survey in the United Kingdom

Most of the questions were too specific to be of use; however, one was useful: it is post-Fukushima and still found the usual results:


What do experts think?

I found an old review (1984) of expert opinion. They write:

In contrast to the public, most “opinion leaders,” particularly energy experts, support further development of nuclear power. This support is revealed both in opinion polls and in technical studies of the risks of nuclear power. A March 1982 poll of Congress found 76 percent of members supported expanded use of nuclear power (50). In a survey conducted for Connecticut Mutual Life Insurance Co. in 1980, leaders in religion, business, the military, government, science, education, and law perceived the benefits of nuclear power as greater than the risks (19). Among the categories of leaders surveyed, scientists were particularly supportive of nuclear power. Seventy-four percent of scientists viewed the benefits of nuclear power as greater than risks, compared with only 55 percent of the rest of the public.

In a recent study, a random sample of scientists was asked about nuclear power (62). Of those polled, 53 percent said development should proceed rapidly, 36 percent said development should proceed slowly, and 10 percent would halt development or dismantle plants. When a second group of scientists with particular expertise in energy issues was given the same choices, 70 percent favored proceeding rapidly and 25 percent favored proceeding slowly with the technology. This second sample included approximately equal numbers of scientists from 71 disciplines, ranging from air pollution to energy policy to thermodynamics. About 10 percent of those polled in this group worked in disciplines directly related to nuclear energy, so that the results might be somewhat biased. Support among both groups of scientists was found to result from concern about the energy crisis and the belief that nuclear power can make a major contribution to national energy needs over the next 20 years. Like scientists, a majority of engineers continued to support nuclear power after the accident at Three Mile Island (69).

Of course, not all opinion leaders are in favor of the current U.S. program of nuclear development. Leaders of the environmental movement have played a major role in the debate about reactor safety and prominent scientists are found on both sides of the debate. A few critics of nuclear power have come from the NRC and the nuclear industry, including three nuclear engineers who left General Electric in order to demonstrate their concerns about safety in 1976. However, the majority of those with the greatest expertise in nuclear energy support its further development.

Analysis of public opinion polls indicates that people’s acceptance or rejection of nuclear power is more influenced by their view of reactor safety than by any other issue (57). As discussed above, accidents and events at operating plants have greatly increased public concern about the possibility of a catastrophic accident. Partially in response to that concern, technical experts have conducted a number of studies of the likelihood and consequences of such an accident. However, rather than reassuring the public about nuclear safety, these studies appear to have had the opposite effect. By painting a picture of the possible consequences of an accident, the studies have contributed to people’s view of the technology as exceptionally risky, and the debate within the scientific community about the study methodologies and findings has increased public uncertainty.

And recently, much publicity was given to a study showing the discrepancy between public opinion and scientific opinion on various topics, and it included nuclear power:


A 20 percentage point difference is no small matter and is similar to the older studies described above.

General intelligence and nuclear power

The OKCupid dataset contains only one question related to nuclear power among the first 2400 questions or so (those in the dataset). The question is the 2216th most commonly answered question, i.e., not very commonly answered at all. Since people who answer >2000 questions on a dating site are a very self-selected group, there is likely some heavy range restriction.


Aside from the “I don’t know”-category, the differences are quite small:

[1] "Question ID: q59519"
[1] "How do you feel about nuclear energy?"
[1] "I'm not sure, there are pros and cons."
[1] "n's =" "1015" 
[1] "means =" "2.3"    
[1] "I don't care, whatever keeps my light bulbs lit."
[1] "n's =" "52"   
[1] "means =" "1.37"   
[1] "No.  It is a danger to public safety."
[1] "n's =" "335"  
[1] "means =" "2.31"   
[1] "Yes.  It is efficient, safe, and clean."
[1] "n's =" "591"  
[1] "means =" "2.49"

However, the samples are quite large. The 99% confidence interval for the anti-pro difference is -0.323 to -0.037, which gets close to 0, so we need more data to be quite certain, but the difference is unlikely to have happened by chance. How large is it in some more useful unit? It is .18 points on a 3-point scale (3 IQ-type questions). The overall SD for this 3-point scale in the total dataset is .96 (mean = 2.12, N = 28k), but for this question it is only .88 due to range restriction (mean = 2.34, N = 1993). In other words, in SD units it is .18/.88 = 0.20, which is about 3 IQ points, not correcting for anything. Perhaps after corrections this would be something like 5 IQ points between pro- and anti-nuclear power people in this dataset.
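The arithmetic, as a sketch (the correction at the end is my guess at what is meant: scaling by the SD ratio; getting to “something like 5” presumably also involves correcting for the 3-item scale’s measurement error):

    d_raw  = 2.49 - 2.31  # pro- minus anti-nuclear means on the 3-item scale
    sd_q   = 0.88         # SD among those who answered this question
    sd_all = 0.96         # SD in the whole dataset

    d = d_raw / sd_q
    print(round(d * 15, 1))                    # ~3.1 IQ points, as in the text

    # Crude range-restriction correction: scale up by the SD ratio.
    print(round(d * (sd_all / sd_q) * 15, 1))  # ~3.3 IQ points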

Future research

I’d like more data with measures of scientific knowledge, or probability reasoning or some such and various energy policy opinions including nuclear. Of course, there should also be general intelligence data.


Europeans and Nuclear Safety – 2010 – PDF

Europeans and Nuclear Safety – 2007 – PDF

Attitudes towards radioactive waste (EU) – 2008 – PDF

Public Attitudes to Nuclear Power (OECD summary) – 2010 – PDF

New Nuclear Watch Europe – Nuclear Energy Survey (United Kingdom) – 2014 – PDF

Public Attitudes Toward Nuclear Power – 1984 – PDF (part of Nuclear Power in an Age of Uncertainty. Washington, D.C.: U.S. Congress, Office of Technology Assessment, OTA-E-216, February 1984)

This is not as straightforward as one might think. On the one hand, one might argue:

Facebook is a private company. They offer a service to users (free of charge but with ads), and since they own it, they should be able to control it as they see fit. If users don’t like it, they can go elsewhere.

We can call this the market argument. The counterargument (as phrased by Falkvinge) is this:

“At the end of the day, this is about the fact that the public square, where freedom of speech used to be enforced, has moved in under the terms-and-services umbrella of a private corporation, where they enforce their own arbitrary limits of what may be expressed and not. That means our fundamental rights have effectively moved into the hands of private interests. I welcome a challenge to this doctrine and an enforcement of freedom of speech, once a public discussion forum – like Facebook – has grown large enough to be a de-facto public location, if not the de-facto public location.”

Or, to put it my way:

The point above about it being a private company is valid. However, the “users can go elsewhere” part is not very realistic given Facebook’s near monopoly. It just so happens that certain types of services, such as social networks, work better the more users they have and thus tend towards near monopolies with one or only a few dominant players on the market. When this happens, users face a choice between using a service that is useful (where the other users are) and one that strongly protects freedom of speech (if it exists). When such services also become a very important part of life (as measured in percent of the total population who are users; for the US, about 48% of the population has a Facebook account) for important matters such as communication, there is reason to enforce freedom of expression (FoE) on them despite their being privately owned, because not doing so would in practice mean that private companies decide the limits of FoE, which could have negative social consequences because certain topics could not be discussed.

By now, unless you are some kind of libertarian/anarcho-capitalist, you should be convinced that the issue is not as straightforward as it might appear.

Enforcing freedom of expression in practice

Suppose we go ahead and say that freedom of speech must be protected on Facebook to the same extent it is protected in a given country normally (i.e., not that much for most countries; even most Western countries limit freedom of speech in important ways). How exactly would this happen?

Suppose a French native based in France creates an account located in France. Suppose that France’s FoE law is pretty broad: it allows nudity, hardcore porn of any type as long as it involves consenting adults, hate speech, blasphemy, racism, sexism, false claims of convictions, etc. Now, suppose the French user starts posting porn and Facebook doesn’t like porn. Facebook might want to delete it and perhaps block the user (current practice, in fact). However, if FoE were enforced here, this would not be legal. Facebook has some options:

  1. Make a filter option that by default is turned on (somewhere hidden in advanced settings) which hides any kind of content, including porn, that Facebook does not like.
  2. Show this content only to users from France.
  3. Show this content only to users from countries with protection for porn expressions.
  4. Disallow people based in France from creating profiles.

Now, (4) seems like an unlikely option; it would cause Facebook to lose a lot of ad revenue. (1) could potentially be struck down by a court as de facto limiting FoE too much as well, but may work for most purposes. If the reason for blocking porn is that some users are (thought to be) sensitive to it, then this option would work fine. Choosing (2) or (3) would limit FoE across borders (if the user has friends based in other countries). Whether this could be struck down by any national court is a good question. Who has jurisdiction over cross-national speech? Each country by itself? Both countries jointly?

In any case, (1) seems like a realistic choice. If some material is reported as being over the limit, it can be put into the ‘dangerous stuff’ stream not shown to most users (presumably, unless it becomes very trendy to disable the filter). One could also combine (2-3) with (1). But it is certain that using one of these methods would add a considerable cost, since Facebook would have to maintain an updated database of each country’s laws and classify content into the categories that can and cannot be shown in each country.
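As a toy illustration of what option (1) combined with (2-3) might look like, here is a sketch in Python; everything in it, the country rules included, is made up:

    # Hypothetical per-country rules: which content categories are legal to show.
    LEGAL = {
        "FR": {"porn", "hate_speech", "blasphemy"},  # broad FoE, per the example
        "DE": {"blasphemy"},                         # invented narrower rules
    }

    def visible(category, viewer_country, filter_on=True):
        """Options (2)/(3): respect the viewer country's rules.
        Option (1): a default-on filter hides disliked-but-legal content."""
        if category not in LEGAL.get(viewer_country, set()):
            return False  # not legal to show in the viewer's country
        if filter_on and category == "porn":
            return False  # hidden by the default filter; the user can opt out
        return True

    print(visible("porn", "FR"))                   # False: filtered by default
    print(visible("porn", "FR", filter_on=False))  # True: French user opted out
    print(visible("porn", "DE", filter_on=False))  # False: not legal in DE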

Other civil liberties?

But why stop at FoE, what about other civil liberties? Should we also enforce them on Facebook if relevant? If basic law/constitution/ground law/etc. includes due process requirements, does this mean that any decision on Facebook regarding citizens from that country must adhere to local due process laws? This can be problematic. Suppose two citizens are involved in a case, and they are from different countries with different due process laws. Which country’s law should be adhered to? Both? Just one of them? What if the laws are inconsistent so that enforcing both is impossible?

Given the recent NSA spying scandals, one might wonder about right to privacy laws. What if adhering to country X’s laws means that some user data must be protected, while adhering to country Y’s laws means that they must be openly available to secret agencies without a court order (even a secret one)? It is not clear.

I am a major proponent of drug legalization and have been following the research on drugs’ influence on driving skills. In media discourse, it is taken for granted that driving under the influence (DUI) is bad because it causes crashes. This is usually assumed to be true for drugs at large, but THC (cannabis) and alcohol especially get attention. Unfortunately, most of the research on the topic is non-experimental and so open to multiple causal interpretations. I will focus on the recently published report, Drug and Alcohol Crash Risk (US Dept. of Transportation), which I found via The Washington Post.

The study is a case-control design where they try to adjust for potential correlates and causal factors both by statistical means and by data-collection means. Specifically:

The case control crash risk study reported here is the first large-scale study in the United States to include drugs other than alcohol. It was designed to estimate the risk associated with alcohol- and drug-positive driving. Virginia Beach, Virginia, was selected for this study because of the outstanding cooperation of the Virginia Beach Police Department and other local agencies with our stringent research protocol. Another reason for selection was that Virginia Beach is large enough to provide a sufficient number of crashes for meaningful analysis. Data was collected from more than 3,000 crash-involved drivers and 6,000 control drivers (not involved in crashes). Breath alcohol measurements were obtained from a total of 10,221 drivers, oral fluid samples from 9,285 drivers, and blood samples from 1,764 drivers.

Research teams responded to crashes 24 hours a day, 7 days a week over a 20-month period. In order to maximize comparability, efforts were made to match control drivers to each crash-involved driver. One week after a driver involved in a crash provided data for the study, control drivers were selected at the same location, day of week, time of day, and direction of travel as the original crash. This allowed a comparison to be made between use of alcohol and other drugs by drivers involved in a crash with drivers not in a crash, resulting in an estimation of the relative risk of crash involvement associated with alcohol or drug use. In this study, the term marijuana is used to refer to drivers who tested positive for delta-9-tetrahydrocannabinal (THC). THC is associated with the psychoactive effects of ingesting marijuana. Drivers who tested positive for inactive cannabinoids were not considered positive for marijuana. More information on the methodology of this study and other methods of estimating crash risk is presented later in this Research Note.

So, by design, they control for: location, day of week, time of day, and direction of travel. It is also good that they don’t conflate inactive metabolites with THC, as is commonly done.

The basic results are shown in Tables 1 and 3.


The first shows the raw data, so to speak. Drug use while driving is fairly common, at about 15% in both crash drivers and control drivers. Since their testing probably didn’t detect all possible drugs, these are underestimates (assuming the testing doesn’t bias the comparison with uneven false positive/false negative rates).

Now, the authors write:

These unadjusted odds ratios must be interpreted with caution as they do not account for other factors that may contribute to increased crash risk. Other factors, such as demographic variables, have been shown to have a significant effect on crash risk. For example, male drivers have a higher crash rate than female drivers. Likewise, young drivers have a higher crash rate than older drivers. To the extent that these demographic variables are correlated with specific types of drug use, they may account for some of the increased crash risk associated with drug use.

Table 4 examines the odds ratios for the same categories and classes of drugs, adjusted for the demographic variables of age, gender, and race/ethnicity. This analysis shows that the significant increased risk of crash involvement associated with THC and illegal drugs shown in Table 3 is not found after adjusting for these demographic variables. This finding suggests that these demographic variables may have co-varied with drug use and accounted for most of the increased crash risk. For example, if the THC-positive drivers were predominantly young males, their apparent crash risk may have been related to age and gender rather than use of THC.

Table 4 looks like this, and for comparison, Table 6 for alcohol:

[Tables 4 and 6 from the report]

The authors do not state anything outright false. But they mention only one causal model that fits the data, the one where THC’s role is non-causal. It is more proper to show both models openly:

Causal models of driving, drug use and demographic variables

The first model is the one discussed by the authors. Here demographic variables cause THC use and crashing, but THC use has no effect on crashing; THC use and crashing are statistically associated because they have a common cause. In the second model, demographic variables cause both THC use and crashing, and THC use also causes crashing. In both models, controlling for demographic variables can make the statistical association between THC use and crashing disappear (in the second model, this happens when THC use is closely tied to the demographic variables, so that the adjustment also soaks up most of the THC effect, given limited data). Hence, controlling for demographic variables cannot distinguish between these two important models.

However, one can test the second model by controlling for THC use and seeing whether the demographic variables are still associated with crashing. If they are not, the second model above is falsified (assuming adequate statistical power, i.e. no false negative).
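A toy simulation (all parameters invented) shows what the two models predict for such stratified comparisons, given plenty of data:

    import random

    def simulate(thc_effect, n=200_000, seed=0):
        """Toy data: demographics -> THC use, demographics -> crash,
        and optionally THC -> crash (the second model)."""
        rng = random.Random(seed)
        data = []
        for _ in range(n):
            demo = rng.random() < 0.5                      # high-risk demographic, e.g. young male
            thc = rng.random() < (0.25 if demo else 0.05)  # demographics predict THC use
            p = 0.05 + 0.05 * demo + thc_effect * thc      # crash probability
            data.append((demo, thc, rng.random() < p))
        return data

    def crash_rate(data, demo, thc):
        sub = [crash for d, t, crash in data if d == demo and t == thc]
        return sum(sub) / len(sub)

    for name, effect in [("model 1 (THC not causal)", 0.0),
                         ("model 2 (THC causal)", 0.05)]:
        data = simulate(effect)
        print(name)
        print("  THC effect within the high-risk stratum:",
              round(crash_rate(data, True, True) - crash_rate(data, True, False), 3))
        print("  demo effect among THC-negative drivers: ",
              round(crash_rate(data, True, False) - crash_rate(data, False, False), 3))

In this idealized setup, the within-stratum THC contrast also separates the models; the trouble with real data is that THC use is strongly tied to the demographic variables and samples are limited, so an adjusted estimate near zero can be compatible with both.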

Alcohol was still associated with crashing even controlling for demographic variables, which strengthens the case for its causal effect.

How common is alcohol-positive driving?

Incidentally, some interesting statistics on DUI for alcohol:

The differences between the two studies in the proportion of drivers found to be alcohol-positive are likely to have resulted from the concentration of Roadside Survey data collection on weekend nighttime hours, while this study included data from all days of the week and all hours of the day. For example, in the 2007 Roadside Survey the percentage of alcohol-positive weekday daytime drivers was only 1.0 percent, while on weekend nights 12.4 percent of the drivers were alcohol-positive. In this study, 1.9 percent of weekday daytime drivers were alcohol-positive, while 9.4 percent of weekend nighttime drivers were alcohol-positive.

Assuming the causal model of alcohol on crashing is correct, this must result in quite a lot of extra deaths in traffic. Another reason to fund more research into safer vehicles:

Mandatory follow-up:

I had my first Twitter controversy. So:

I pointed out in the reply to this that they don’t actually charge that much normally. The comparison is here. The prices are around 500-3000 USD, with an eyeballed average around 2500 USD.

Now, this is just a factual error, so not so bad. However…

If anyone is wondering why he is so emotional, he gave the answer himself:

A very brief history of journals and science

  • Science starts out involving few individuals.
  • They need a way to communicate ideas.
  • They set up journals to distribute the ideas on paper.
  • Printing costs money, so the journals cost money to buy.
  • Due to limitations of paper space, there needs to be some selection in what gets printed, which falls on the editor.
  • Fast forward to perhaps the 1950s: now there are too many papers for the editors to handle, so they delegate the job of deciding what to accept to other academics (reviewers). In this system, academics write the papers, edit them, and review them, all for free.
  • Fast forward to perhaps 1990, and big business takes over the running of the journals so academics can focus on science. As it does, prices rise because of monetary interests.
  • Academics are reluctant to give up publishing in and buying journals because their reputation system is built on publishing in said journals. I.e. the system is inherently conservatively biased (status quo bias). It is perfect for business to make money from.
  • Now along comes the internet, which means that publishing no longer needs to rely on paper, so the marginal printing cost is very close to 0. Yet the journals keep demanding high prices, because academia relies on them as the source of its reputation system.
  • There is a growing movement in academia holding that this is a bad situation for science and that publications should be openly available (the open access movement). New OA journals are set up. However, since they are also either for-profit or crypto-for-profit, they charge outrageous amounts of money (say, anything above 100 USD) to publish some text+figures on a website. Academics still provide nearly all the work for free, yet they have to pay enormous amounts to publish, while the publisher provides a mere website (and perhaps some copyediting etc.).

Who thinks that is a good solution? It is clearly a smart business move. For instance, the popular OA metajournal Frontiers is owned by Nature Publishing Group. This company thus very neatly makes money off both its legacy journals and the new challenger journals.

The solution is to set up journals run by academics again now that the internet makes this rather easy and cheap. The profit motive is bad for science and just results in even worse journals.

As for my claim, I stand by it. Although in retrospect, the more correct term is parasitic. Publishers are a middleman exploiting the fact that academia relies on established journals for reputation.

Someone posted a nice collection of books dealing with the on-going revolution in science:

So I decided to read some of them. Ironically, many of them are not available for free (contrary to the general idea of openness in them).

The book is short at 200 pages, with 14 chapters covering most aspects of the changing educational system. It is at times long-winded; it should probably have been 20-50 pages shorter. However, it seems fine as a general introduction to the area. The author should have used more graphs, figures, etc. to make his points. There are plenty of good figures for these things (e.g. journal revenue increases).

I kept finding references to this book in papers, so I decided to read it. It is a quick read introducing behavior genetics and its results to lay readers and perhaps policy makers. The book is overly long (200 pages) for its content; it could easily have been cut by 30 pages. The book itself contains little that is new to people familiar with the field (i.e. me); however, there are some references that were interesting and unknown to me. It may pay for the expert simply to skim the reference lists for each chapter and read those papers instead.

The main thrust of the book is which policies we should implement because of our ‘new’ behavioral genetic knowledge. Basically, the authors think that we need to add more choice to schools, because everybody is different and we want to use gene-environment correlations to improve results. It is hard to disagree with this. They go on about how labeling is bad, but obviously labeling is useful for talking about things.

If one is interested in school policy, then reading this book may be worth it, especially if one is a layman. If one is interested in learning behavior genetics, read something else (e.g. Plomin’s 2012 textbook).