THC and driving: Causal modeling and statistical controls

I am a major proponent of drug legalization, and I have also been following the research on drugs’ influence on driving skills. In media discourse, it is taken for granted that driving under the influence (DUI) is bad because it causes crashes. This is usually assumed to be true for drugs in general, but THC (cannabis) and alcohol get special attention. Unfortunately, most of the research on the topic is non-experimental, and so open to multiple causal interpretations. I will focus on the recently published report, Drug and Alcohol Crash Risk (US Dept. of Transportation), which I found via The Washington Post.

The study is a case-control design where they try to adjust for potential correlates and causal factors both by statistical means and by data-collection means. Specifically:

The case control crash risk study reported here is the first large-scale study in the United States to include drugs other than alcohol. It was designed to estimate the risk associated with alcohol- and drug-positive driving. Virginia Beach, Virginia, was selected for this study because of the outstanding cooperation of the Virginia Beach Police Department and other local agencies with our stringent research protocol. Another reason for selection was that Virginia Beach is large enough to provide a sufficient number of crashes for meaningful analysis. Data was collected from more than 3,000 crash-involved drivers and 6,000 control drivers (not involved in crashes). Breath alcohol measurements were obtained from a total of 10,221 drivers, oral fluid samples from 9,285 drivers, and blood samples from 1,764 drivers.

Research teams responded to crashes 24 hours a day, 7 days a week over a 20-month period. In order to maximize comparability, efforts were made to match control drivers to each crash-involved driver. One week after a driver involved in a crash provided data for the study, control drivers were selected at the same location, day of week, time of day, and direction of travel as the original crash. This allowed a comparison to be made between use of alcohol and other drugs by drivers involved in a crash with drivers not in a crash, resulting in an estimation of the relative risk of crash involvement associated with alcohol or drug use. In this study, the term marijuana is used to refer to drivers who tested positive for delta-9-tetrahydrocannabinol (THC). THC is associated with the psychoactive effects of ingesting marijuana. Drivers who tested positive for inactive cannabinoids were not considered positive for marijuana. More information on the methodology of this study and other methods of estimating crash risk is presented later in this Research Note.

So, by design, they control for location, day of week, time of day, and direction of travel. It is also good that they don’t conflate inactive metabolites with THC, as is commonly done.

The basic results are shown in Tables 1 and 3.

[Table 1 and Table 3 from the report]

The first shows the raw data, so to speak. It can be seen that drug use while driving is fairly common, at about 15% in both crash-involved and control drivers. Since their testing probably didn’t detect all possible drugs, these are underestimates (assuming that the testing does not introduce bias through uneven false positive/false negative rates).

Now, the authors write:

These unadjusted odds ratios must be interpreted with caution as they do not account for other factors that may contribute to increased crash risk. Other factors, such as demographic variables, have been shown to have a significant effect on crash risk. For example, male drivers have a higher crash rate than female drivers. Likewise, young drivers have a higher crash rate than older drivers. To the extent that these demographic variables are correlated with specific types of drug use, they may account for some of the increased crash risk associated with drug use.

Table 4 examines the odds ratios for the same categories and classes of drugs, adjusted for the demographic variables of age, gender, and race/ethnicity. This analysis shows that the significant increased risk of crash involvement associated with THC and illegal drugs shown in Table 3 is not found after adjusting for these demographic variables. This finding suggests that these demographic variables may have co-varied with drug use and accounted for most of the increased crash risk. For example, if the THC-positive drivers were predominantly young males, their apparent crash risk may have been related to age and gender rather than use of THC.

Table 4 looks like this, and for comparison, Table 6 for alcohol:

[Table 4 and Table 6 from the report]

The authors do not state anything outright false. But they mention only one causal model that fits the data, the one where THC’s role is non-causal. It is more proper to show both models openly:

[Figure: causal models of driving, drug use, and demographic variables]

The first model is the one discussed by the authors. Here demographic variables cause THC use and crashing, but THC use has no effect on crashing; THC use and crashing are statistically associated because they have a common cause. In the second model, demographic variables cause both THC use and crashing, and THC use also causes crashing. In both models, if one controls for demographic variables, the statistical association of THC use and crashing disappears. Hence, controlling for demographic variables cannot distinguish between these two important models. (A small simulation of the first model is sketched below.)

However, one can test the second model by controlling for THC use and seeing whether demographic variables are still associated with crashing. If they are not, the second model above is falsified (assuming adequate statistical power, i.e., no false negative).
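To make the confounding story concrete, here is a minimal simulation sketch of the first model (all rates are invented for illustration; only numpy is required). A crude THC–crash odds ratio well above 1 appears even though THC has no causal effect, and it vanishes once we stratify on the demographic variable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model 1: a demographic variable D (e.g. young male) raises both the
# THC-positive rate and the crash rate; THC itself does nothing.
d = rng.random(n) < 0.3                          # high-risk demographic group
thc = rng.random(n) < np.where(d, 0.25, 0.05)    # D raises the THC-positive rate
crash = rng.random(n) < np.where(d, 0.06, 0.02)  # D raises crash risk; THC does not

def odds_ratio(exposed, case):
    a = np.sum(exposed & case)        # exposed cases
    b = np.sum(exposed & ~case)       # exposed non-cases
    c = np.sum(~exposed & case)       # unexposed cases
    e = np.sum(~exposed & ~case)      # unexposed non-cases
    return a * e / (b * c)

# Crude OR is well above 1 despite THC having no causal effect...
print("crude THC-crash OR:", round(odds_ratio(thc, crash), 2))
# ...but within each demographic stratum the association is gone (~1.0).
for g in (False, True):
    m = d == g
    print(f"OR within D={g}:", round(odds_ratio(thc[m], crash[m]), 2))
```

Stratifying like this is the simplest analogue of the demographic adjustment behind Table 4; the point is that an adjusted OR near 1 is exactly what the first model predicts.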

Alcohol was still associated with crashing even after controlling for demographic variables, which strengthens the case for its causal effect.

How common is alcohol-positive driving?

Incidentally, some interesting statistics on DUI for alcohol:

The differences between the two studies in the proportion of drivers found to be alcohol-positive are likely to have resulted from the concentration of Roadside Survey data collection on weekend nighttime hours, while this study included data from all days of the week and all hours of the day. For example, in the 2007 Roadside Survey the percentage of alcohol-positive weekday daytime drivers was only 1.0 percent, while on weekend nights 12.4 percent of the drivers were alcohol-positive. In this study, 1.9 percent of weekday daytime drivers were alcohol-positive, while 9.4 percent of weekend nighttime drivers were alcohol-positive.

Assuming the causal model of alcohol on crashing is correct, this must result in quite a lot of extra deaths in traffic. Another reason to fund more research into safer vehicles.

Mandatory follow-up:

Predatory journals

I had my first Twitter controversy. So:

I pointed out in my reply that they don’t actually charge that much normally. The comparison is here. The prices are around 500-3,000 USD, with an eyeballed average around 2,500 USD.

Now, this is just a factual error, so not so bad. However…

If anyone is wondering why he is so emotional, he gave the answer himself:

A very brief history of journals and science

  • Science starts out involving few individuals.
  • They need a way to communicate ideas.
  • They set up journals to distribute the ideas on paper.
  • Printing costs money, so they cost money to buy.
  • Due to the limitations of paper space, there needs to be some selection in what gets printed, a job which falls on the editor.
  • Fast forward to perhaps the 1950s: now there are too many papers for the editors to handle, so they delegate the job of deciding what to accept to other academics (reviewers). In this system, academics write the papers, edit them, and review them. All for free.
  • Fast forward to perhaps 1990: big business takes over the running of the journals so academics can focus on science. As it does, prices rise because of monetary interests.
  • Academics are reluctant to give up publishing in and buying journals because their reputation system is built on publishing in said journals. I.e., the system is inherently conservatively biased (status quo bias). It is perfect for business to make money from.
  • Now along comes the internet, which means that publishing no longer needs to rely on paper. The marginal cost of distribution is very close to 0. Yet the journals keep demanding high prices, because academia relies on them as the source of its reputation system.
  • There is a growing movement in academia that holds this to be a bad situation for science, and that publications should be openly available (the open access movement). New OA journals are set up. However, since they are also either for-profit or covertly for-profit, in order to make money they charge outrageous amounts (say, anything above 100 USD) to publish some text and figures on a website. Academics still provide nearly all the work for free, yet they have to pay enormous amounts of money to publish, while the publisher provides a mere website (and perhaps some copyediting etc.).

Who thinks that is a good solution? It is clearly a smart business move. For instance, the popular OA metajournal Frontiers is owned by Nature Publishing Group. That company thus very neatly makes money off both its legacy journals and the new challenger journals.

The solution is to set up journals run by academics again, now that the internet makes this rather easy and cheap. The profit motive is bad for science and just results in even worse journals.

As for my claim, I stand by it. Although in retrospect, the more correct term is parasitic. Publishers are a middleman exploiting the fact that academia relies on established journals for reputation.

Review: The Digital Scholar: How Technology is Transforming Academic Practice (Martin Weller)

www.goodreads.com/book/show/12582388-the-digital-scholar

gen.lib.rus.ec/book/index.php?md5=5343D586EEEFB7BB24DE5B71FBD07C32

Someone posted a nice collection of books dealing with the on-going revolution in science:

So I decided to read some of them. Ironically, many of them are not available for free (contrary to the general idea of openness in them).

The book is short at 200 pages, with 14 chapters covering most aspects of the changing educational system. It is at times long-winded; it should probably have been 20-50 pages shorter. However, it seems fine as a general introduction to the area. The author should have used more graphs, figures, etc. to make his points. There are plenty of good figures for these things (e.g., journal revenue increases).

Review: G Is for Genes: The Impact of Genetics on Education and Achievement (Kathryn Asbury, Robert Plomin)

www.goodreads.com/book/show/17015094-g-is-for-genes

gen.lib.rus.ec/book/index.php?md5=97ac0ec914522d3c888679e9c02291c6

I kept finding references to this book in papers, so I decided to read it. It is a quick read introducing behavior genetics and its results to lay readers and perhaps policy makers. The book is overly long (200 pages) for its content; it could easily have been cut by 30 pages. The book itself contains little that is new to people familiar with the field (i.e., me); however, there are some references that were interesting and unknown to me. It may pay for the expert to simply skim the reference lists for each chapter and read those papers instead.

The main thrust of the book is what policies we should implement because of our ‘new’ behavioral genetic knowledge. Basically, the authors think that we need to add more choice to schools, because everybody is different and we want to use gene-environment correlations to improve results. It is hard to disagree with this. They go on about how labeling is bad, but obviously labeling is useful for talking about things.

If one is interested in school policy, then reading this book may be worth it, especially if one is a layman. If one is interested in learning behavior genetics, read something else (e.g., Plomin’s 2012 textbook).

Costs and benefits of publishing in legacy journals vs. new journals

I recently published a paper in Open Differential Psychology. After it was published, I decided to tell some colleagues about it so that they would not miss it, because it is not published in either of the two primary journals in the field: Intelligence or PAID (Personality and Individual Differences). My email was this:

Dear colleagues,

I wish to inform you about my paper which has just been published in Open Differential Psychology.

Abstract
Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

You can read the full paper at the journal website: openpsych.net/ODP/2014/09/the-international-general-socioeconomic-factor-factor-analyzing-international-rankings/

Regards,
Emil

One researcher responded with:

Dear Emil,
Thanks for your paper.
Why not publishing in standard well established well recognized journals listed in Scopus and Web of Science benefiting from review and increasing your reputation after publishing there?
Go this way!
Best,
NAME

This concerns the decision of where to publish. I discussed this in a blog post back in March, before setting up OpenPsych. To be very brief, the benefits of publishing in legacy journals are: 1) recognition, 2) indexing in proprietary indexes (SCOPUS, WoS, etc.), 3) perhaps better peer review, 4) perhaps a fancier appearance of the final paper. The first is very important if one is an up-and-coming researcher (like me), because one will need recognition from university people to get hired.

I nevertheless decided NOT to publish (much) in legacy journals. In fact, the reason I got into publishing studies so late is that I dislike the legacy journals in this field (and most other fields). Why dislike legacy journals? I made an overview here, but to sum up: 1) either not open access or extremely pricey, 2) no data sharing, 3) an opaque peer review system, 4) very slow peer review (~200 days on average in the case of Intelligence and PAID), 5) you’re supporting companies that add little value to science and charge insane amounts of money for it (for Elsevier, see e.g. Wikipedia; TechDirt has a large number of posts concerning that company alone).

As a person who strongly believes in open science (data, code, review, access), there is no way I can defend a decision to publish in Elsevier journals. Their practices are clearly antithetical to science. I also signed The Cost of Knowledge petition not to publish or review for them. Elsevier has a strong economic interest in keeping up their practices and I’m sure they will. The only way to change science for the better is to publish in other journals.

Non-Elsevier journals

Aside from Elsevier journals, one could publish in PLoS or Frontiers journals. They are open access, right? Yes, and that’s a good improvement. However, they are also predatory, because they charge exorbitant fees to publish: 1,600 € (Frontiers), 1,350 US$ (PLoS). One might as well publish open access with Elsevier, which charges 1,800 US$.

So are there any open access journals without publication fees in this field? There is only one as far as I know, the newly established Journal of Intelligence. However, the journal site states that the lack of a publication fee is a temporary state of affairs, so there seems to be no reason to help them get established by publishing in their journal. After realizing this, I began work on starting a new journal. I knew that there was a lot of talent in the blogosphere with a mindset similar to mine, who could probably be convinced to review for and publish in the new journal.

Indexing

But what about indexing? Web of Science and SCOPUS are both proprietary and not freely available to anyone with an internet connection. But there is a fast-growing alternative: Google Scholar. Scholar is improving rapidly compared to the legacy indexers and is arguably already better, since it indexes a host of grey literature sources that the legacy indexers don’t cover. A recent article compared Scholar to WoS. I quote:

Abstract Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: “The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed”. GS’s coverage of WoS could in principle become complete, in which case WoS could become a subset of GS that could be selected via a GS option “Select WoS-indexed journals and conferences only”. Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews. [source]

In other words, Scholar already covers almost all the articles that WoS covers and is quickly catching up on the older studies too. In a few years Scholar will cover close to 100% of the articles in the legacy indexers, and they will be nearly obsolete.

Getting noticed

One thing related to the above is getting noticed by other researchers. Since many researchers read legacy journals, simply being published in them is likely sufficient to get some attention (and citations!). It is however not the only way. The internet has changed the situation here completely, in that there are now lots of different ways to get noticed: 1) Twitter, 2) ResearchGate, 3) Facebook/Google+, 4) Reddit, 5) Google Scholar, which will inform you about any new research by anyone you have cited previously, 6) blogs (your own or others’), and 7) emails to colleagues (as above).

Peer review

Peer review in OpenPsych is innovative in two ways: 1) it is forum-style instead of email-based, which is better suited for communication among more than two people; 2) it is openly visible, which works against biased reviewing. Aside from this, it is also much faster, currently averaging 20 days in review.

Reputation and career

There is clearly a drawback here in publishing in OpenPsych journals compared with legacy journals. Any new journal is likely to be viewed as not serious by many researchers. Most people dislike change, academics included (perhaps especially so). Publishing there will not improve one’s chances of getting hired as much as publishing in the primary journals will. So one must weigh what is most important: science or career?

Venice wants to be independent: analysis of possible outcomes given self-selection of the voting population

www.theatlantic.com/international/archive/2014/03/europes-latest-secession-movement-venice/284562/

Venice seems to be tired of Italy. It’s a bad economic trade-off for them. They want to return to their former glory. Good! We need more decentralization of power.

There was a vote:

Last week, in a move overshadowed by the international outcry over Russia’s annexation of Crimea, Plebiscito.eu, an organization representing a coalition of Venetian nationalist groups, held an unofficial referendum on breaking with Rome. Voters were first asked the main question—”Do you want Veneto to become an independent and sovereign federal republic?”—followed by three sub-questions on membership in the European Union, NATO, and the eurozone. The region’s 3.7 million eligible voters used a unique digital ID number to cast ballots online, and organizers estimate that more than 2 million voters ultimately participated in the poll.

On Friday night, people waving red-and-gold flags emblazoned with the Lion of St. Mark filled the square of Treviso, a city in the Veneto region, as the referendum’s organizers announced the results: 2,102,969 votes in favor of independence—a whopping 89 percent of all ballots cast—to 257,266 votes against. Venetians also said yes to joining NATO, the EU, and the eurozone. The overwhelming victory surprised even ardent supporters of the initiative, as most polls before the referendum estimated only about 65 percent of the region’s voters supported independence.

Someone in the comments makes the following argument:

I don’t understand why it’s so surprising that 89% of respondents in an online, unofficial poll organized by Venetian nationalist groups voted that way. As a proportion of all eligible voters, that comes out to 55-60%, much closer to what you’d expect from neutral sampling.
Self-selection bias is a huge problem with online polling, and I expect that given the methodology of the referendum, that would explain a large part of the discrepancy between the predicted and observed outcomes.

My response:

You are assuming that the entire set of nonvoting citizens would be against it. While there is likely some self-selection, it is NOT likely to be 100%.

I did the math for every 10% increment. If the nonvoters had all voted “yes” or all voted “no”, the total outcome range would be [56.84%, 93.05%] – a clear majority in either case.

Even given a very strong self-selection effect such that nonvoters are 70% against, the outcome is 67.7% “yes”.

I did the math, and it is here: docs.google.com/spreadsheet/ccc?key=0AoYWmgpqFzdsdDZUSWhOOEctRnFhakVLUjFsbFpWUHc#gid=0
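For readers who want to check the arithmetic without opening the spreadsheet, here is a short script reproducing it (the vote counts are from the quoted article; 3.7 million is the article’s eligible-voter figure):

```python
# Yes share among all eligible voters, as the nonvoters split "no" vs. "yes".
yes_votes, no_votes, eligible = 2_102_969, 257_266, 3_700_000
nonvoters = eligible - yes_votes - no_votes        # ~1.34 million who stayed home

for pct_no in range(0, 101, 10):                   # % of nonvoters assumed against
    total_yes = yes_votes + nonvoters * (100 - pct_no) / 100
    print(f"nonvoters {pct_no:3d}% against -> yes share {total_yes / eligible:6.2%}")
```

At 100% of nonvoters against, the yes share is 56.84%; at 0% against, 93.05%; at 70% against, 67.7% – matching the figures above.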

Here’s the takeaway: Venice wants to be independent, and it is not a narrow decision, even assuming an implausibly strong self-selection effect.

Some thoughts on online voting

I was asked to comment on this Reddit thread: www.reddit.com/r/netsec/comments/s1t2c/netsec_how_would_you_design_an_electronic_voting/

 

This post is written with the assumption that a bitcoin-like system is used.

 

Nirvana / perfect solution fallacy

I agree. I don’t think an electronic system needs to solve every problem present in a paper system, it just needs to be better. Right now, for example, one could buy an absentee ballot and be done with it. I think a system that makes it less practical to do something similar is an improvement.

 

As always when considering options, one should choose the best available solution, not stubbornly refuse any change that fails to yield a perfect situation. Paper voting is not perfect either.

 

 

Threatening scenarios

The instant you let people vote from remote locations, everything else is up in the air. It doesn’t matter if the endpoints are secure.
Say you can vote by phone. I have my goons “canvass” the area knocking on doors. “Hey, have you voted for Smith yet? You haven’t? Well, go get your phone, we will help you do it right now.”
If you are trying to do secure voting over the Internet, you have already lost.

 

While one cannot bring goons right into the voting booths, it is quite clearly possible to threaten people into voting a particular way right now. The reason it is not generally done is that every single vote has very little power, so the costs are absurdly high for anyone trying scare tactics.

 

It is also easy to solve by making it possible to change votes after they have been cast. This is clearly possible with computer technology but hard with paper.

 

 

Viruses that target voting software

This is clearly an issue. However, people can easily check that their votes are recorded correctly in the votechain (by analogy with the blockchain). A sophisticated virus might wait until the last minute and then vote, but this can easily be prevented by turning off the computers used for voting.
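As a toy illustration of what “checking your vote in the votechain” could mean, here is a minimal hash chain (the ballots are hypothetical pseudonyms; a real design would add signatures and distributed consensus):

```python
import hashlib

# Each block commits to the previous block's hash, so anyone can recompute
# the whole chain and confirm their own ballot is included unaltered.
ballots = ["a1f3->yes", "b7c2->no", "c9d4->yes"]

prev, chain = "0" * 64, []                 # genesis hash, then (ballot, hash) pairs
for b in ballots:
    h = hashlib.sha256((prev + b).encode()).hexdigest()
    chain.append((b, h))
    prev = h

# Voter a1f3 re-derives the chain: any tampered block breaks the assertion.
prev, found = "0" * 64, False
for b, h in chain:
    assert hashlib.sha256((prev + b).encode()).hexdigest() == h, "chain tampered"
    found = found or b == "a1f3->yes"
    prev = h
print("ballot present and chain consistent:", found)
```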

 

Furthermore, I imagine that one would use specialized software for voting, e.g. a Linux system designed specifically for safety and voting, rigorously tested by thousands of independent coders. One might also create specialized hardware for voting, i.e. special-purpose computers. Specifically, one can use read-only memory, which makes it impossible to install malicious software on the system. For instance, the hardware might have built-in voting software and a camera for scanning a QR code with one’s private key(s).

 

Lastly, one can use 2FA to enhance security, just as one does everywhere else on the web where extra safety is needed.

 

 

Anonymous and verifiable voting

You can either have a system where people can verify their vote and take some type of receipt to prove the system recorded their vote wrong, or you can have anonymous voting. You cannot have verifiable voting AND anonymous voting. Someone somewhere has to be able to decrypt or access whatever keys or pins or you are holding a meaningless or login or hash that can’t prove you aren’t lying or didn’t change your vote etc.

 

Yes you can, with pseudonymous voting in a bitcoin-like system. Everybody can verify that no more votes are cast than there are eligible voters, but the individuals who control the addresses are not identifiable from the code alone. They can choose to announce their address publicly so that people can connect the two; this option will of course be used by public persons.
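To illustrate the idea, here is a toy sketch in Python (standard library only; the registry, names, and tokens are all hypothetical, and this is not an actual cryptographic voting protocol). The public registry holds one hash per eligible voter; ballots reference the hash rather than the identity; anyone can then verify that every ballot comes from a registered token and that no token votes twice:

```python
import hashlib, secrets

# Registration: each eligible voter holds a secret token; only its hash is public.
secrets_by_voter = {name: secrets.token_hex(16) for name in ["alice", "bob", "carol"]}
public_registry = {hashlib.sha256(t.encode()).hexdigest()
                   for t in secrets_by_voter.values()}

# Voting: a ballot reveals the hash (a pseudonym) and the vote, not the identity.
ballots = [
    (hashlib.sha256(secrets_by_voter["alice"].encode()).hexdigest(), "yes"),
    (hashlib.sha256(secrets_by_voter["bob"].encode()).hexdigest(), "no"),
]

# Public verification: every pseudonym is registered and votes at most once.
seen = set()
for pseudonym, vote in ballots:
    assert pseudonym in public_registry, "unregistered voter"
    assert pseudonym not in seen, "double vote"
    seen.add(pseudonym)

print("tally:", {v: sum(1 for _, x in ballots if x == v) for v in ("yes", "no")})
```

A real system would need more than this – e.g. blind signatures or a blockchain, so that not even the registrar can link tokens to people – which is the stronger property the bitcoin-style proposals in the thread aim at.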

 

 

Selling votes

This is already possible today, and verifiable too, since one can easily film the process of voting. Nor is it generally illegal.

 

The reason why people do not generally buy or sell votes is that single votes have basically no power and hence are worth nothing.

 

As pointed out in the thread, this is already possible with mail-in voting.

 

Lastly, buying and selling votes is generally thought to be evil or wrong, but only when done directly. It is clearly legal indirectly, and even where not de jure legal, it is de facto legal. In every modern democracy, it is common for politicians to offer certain wealth or income redistribution policies. When people who would benefit from these policies vote for such politicians, they are indirectly receiving money for voting for a given politician/party. For this reason, the buying and selling of votes is a non-issue.

 

 

The ease of digital attacks

It seems to me that the real problem is the scalability of the attacks in the digital sphere. Changing votes in our regular system of several thousand human ballot counters looking at pieces of paper is rather costly. A well-planned digital attack can be virtually free of cost (not counting the time it takes to figure out the attack).

 

This is a concern, and that is why one will need tough security and verification technologies. I have suggested several above.

 

 

Interception of the signal

Whatever, VPN, custom software, browser. It’s the same thing. Malware or even an ISP could intercept and manipulate what is displayed or recorded. The software on the receiving end can also be manipulated but more likely to have some controls of the hardware and software, but again, who inspects this?

 

This could be a problem. It can be reduced by providing a free national encrypted VPN/proxy for voting purposes.

 

 

Others who were faster than me

Voting could not be more further from any of the simplest banking. The idea behind banking or any “secure” online transaction is that it is not anonymous. Bitcoin might be the only viable anonymous type online voting.

 

-

 

The bitcoin protocol would actually be fantastic for this. I should explain for those unaware: Bitcoin is actually two different things. One: A protocol, and Two: A software implementing the protocol to send ‘coins’ like money to others. I’ll do a writeup a little later, but the gist of it is: the votes would be public for anyone to view, impossible to fake/forge, and still anonymous. This would be done by embedding the voting information into the blockchain.

 

-

 

Strong encryption with distributed verification a la bitcoin. You don’t have to trust the clients; you trust the math. I’m by no means a crypto expert, so don’t look to me for design tips, but I suspect you could map a private key to each valid voter’s SSN then generate a vote (hash) that could be verified by the voter pool.

 

These posts date to “1 year ago” according to Reddit. Clearly, I was not the first to think of the obvious.

 

 

Who is going to mine votecoins?

So unless you are actually piggy-backing voting ontop of another currency (like the main bitcoin blockchain), there’s no incentive for ordinary citizens to participate and validate/process the blockchain. What are they mining? More votes?? That seems weird/illegitimate. If you say “well, some government agency can just do all the mining and distribute coins to voters” this would seem to offer no improvement over a straightforward centralized system, and only introduces extra questions like

 

The government, plus the users who want to help out. Surely citizens have some self-interest in getting the election over with. This is a non-issue.

 

If the government started the block chain, mined the correct number of coins, and then put it in the “no more coins mode” then we would have the setup for it. If they could convince one of the major pools to do merged mining with them (i’m not sure what they would exchange for this, but it would only have to be for a week/month) if hiring a pool is out of the question then just realize that the govt spends millions routinely on elections, and $10M should be more than enough to beat most mafias (~9Thash/s which is roughly what the current bitcoin rate is). If someone like the coke brothers tried to overpower this it would be very obvious.

 

Yes, this is the same solution I suggested. Code the system so that the first block gives all votecoins.

 

Another option is a dual-currency system, such that one can help mine votecoins and get rewarded only in rewardcoins. That way the counting is distributed to whoever wants the job.

 

 

The prize for the least imagination

The simple answer is that I would not. The risks and downsides of such a system are inherently not worth the only benefit which I can think of (faster results). This should also answer your last question. This hasn’t been done simply because there is no good reason to do it.

 

No other benefits? Like… an infinite variety of other voting systems???

 

 

The price of online voting

You’re assuming the cost of an electronic voting system and the time it will take for people to be comfortable using them will outpace paper and pen, which if you ask me is a pretty damn big assumption. Maybe someday, but until a grandma can easily understand and use electronic voting I am loathe to even think about implementing it. A voting system needs to be transparent and easy to understand.

 

In Denmark it costs about 100 million DKK to hold a vote. Is he really suggesting this cannot be done more cheaply with computers? I can’t take that seriously.

 

 

 

Review: Philosophy of Science: A Very Short Introduction

I had low expectations for this book. It was assigned for a humanities class I’m taking (Studium generale). However, the book is a quite decent introduction to the field. I was happily surprised.

 

libgen.org/book/index.php?md5=7d804c1413f8993654ecc933170a5141

 

 

The first two statements are called the premisses of the inference, while the third statement is called the conclusion. This is a deductive inference because it has the following property: if the premisses are true, then the conclusion must be true too. In other words, if it’s true that all Frenchmen like red wine, and if it’s true that Pierre is a Frenchman, it follows that Pierre does indeed like red wine. This is sometimes expressed by saying that the premisses of the inference entail the conclusion. Of course, the premisses of this inference are almost certainly not true – there are bound to be Frenchmen who do not like red wine. But that is not the point. What makes the inference deductive is the existence of an appropriate relation between premisses and conclusion, namely that if the premisses are true, the conclusion must be true too. Whether the premisses are actually true is a different matter, which doesn’t affect the status of the inference as deductive.

 

This distinction is not a good idea. On that definition, a deductive but invalid argument is impossible. I wrote about this area years ago, but apparently never finished my essay or published it. It is still on my desktop.

 

 

Philosophers of science are interested in probability for two main reasons. The first is that in many branches of science, especially physics and biology, we find laws and theories that are formulated using the notion of probability. Consider, for example, the theory known as Mendelian genetics, which deals with the transmission of genes from one generation to another in sexually reproducing populations. One of the most important principles of Mendelian genetics is that every gene in an organism has a 50% chance of making it into any one of the organism’s gametes (sperm or egg cells). Hence there is a 50% chance that any gene found in your mother will also be in you, and likewise for the genes in your father. Using this principle and others, geneticists can provide detailed explanations for why particular characteristics (e.g. eye colour) are distributed across the generations of a family in the way that they are. Now ‘chance’ is just another word for probability, so it is obvious that our Mendelian principle makes essential use of the concept of probability. Many other examples could be given of scientific laws and principles that are expressed in terms of probability. The need to understand these laws and principles is an important motivation for the philosophical study of probability.

 

The author forgot about sex-linked genes, which complicate matters.

 

 

Modern science can explain a great deal about the world we live in. But there are also numerous facts that have not been explained by science, or at least not explained fully. The origin of life is one such example. We know that about 4 billion years ago, molecules with the ability to make copies of themselves appeared in the primeval soup, and life evolved from there. But we do not understand how these self-replicating molecules got there in the first place. Another example is the fact that autistic children tend to have very good memories. Numerous studies of autistic children have confirmed this fact, but as yet nobody has succeeded in explaining it.

 

en.wikipedia.org/wiki/Autism_and_working_memory

 

Wikipedia seems to be of the exact opposite opinion.

 

 

Since the realism/anti-realism debate concerns the aim of science, one might think it could be resolved by simply asking the scientists themselves. Why not do a straw poll of scientists asking them about their aims? But this suggestion misses the point – it takes the expression ‘the aim of science’ too literally. When we ask what the aim of science is, we are not asking about the aims of individual scientists. Rather, we are asking how best to make sense of what scientists say and do – how to interpret the scientific enterprise. Realists think we should interpret all scientific theories as attempted descriptions of reality; anti-realists think this interpretation is inappropriate for theories that talk about unobservable entities and processes. While it would certainly be interesting to discover scientists’ own views on the realism/anti-realism debate, the issue is ultimately a philosophical one.

 

Good idea. Is that a case for experimental philosophy?

 

I couldn’t find any data from a quick Google search.

 

 

Cladists argue that their way of classifying is ‘objective’ while that of the pheneticists is not. There is certainly some truth in this charge. For pheneticists base their classifications on the similarities between species, and judgements of similarity are invariably partly subjective. Any two species are going to be similar to each other in some respects, but not in others. For example, two species of insect might be anatomically quite similar, but very diverse in their feeding habits. So which ‘respects’ do we single out, in order to make judgements of similarity? Pheneticists hoped to avoid this problem by defining a measure of ‘overall similarity’, which would take into account all of a species’ characteristics, thus permitting fully objective classifications to be constructed. But though this idea sounds nice, it did not work, not least because there is no obvious way to count characteristics. Most people today believe that the very idea of ‘overall similarity’ is philosophically suspect. Phenetic classifications do exist, and are used in practice, but they are not fully objective. Different similarity judgements lead to different phenetic classifications, and there is no obvious way to choose between them.

 

Surely someone has tried factor analysis to find this overall similarity factor, if there is one? It’s not that hard to find out. Make a huge list of traits to measure on species, measure them all in, say, 1,000 species, and then factor analyze the results. Is there an overall factor similar to g? If not, then the hypothesis is disconfirmed. (A sketch of this test on simulated data is given below.)
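Here is a minimal sketch of that test on simulated data (the trait matrix, loadings, and sample size are all invented; only numpy is needed). A dominant first eigenvalue of the trait correlation matrix is the signature of a general factor, analogous to g:

```python
import numpy as np

rng = np.random.default_rng(1)
n_species, n_traits = 1_000, 20

# Simulate trait measurements with one general "similarity" factor plus noise.
general = rng.normal(size=(n_species, 1))             # hypothetical overall factor
loadings = rng.uniform(0.4, 0.8, size=(1, n_traits))  # each trait loads on it
traits = general @ loadings + rng.normal(scale=0.7, size=(n_species, n_traits))

# First eigenvalue of the trait correlation matrix: its share of total variance
# measures how strong the general factor is (real data: species measurements).
corr = np.corrcoef(traits, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]               # descending order
print("variance explained by first factor:", round(eigvals[0] / n_traits, 2))
```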

 

I checked. Yes, someone did this. ib.berkeley.edu/courses/ib200a/lect/ib200a_lect09_Lindberg_phenetics.pdf

 

It seems to be common practice. So phenetics can avoid the charge of arbitrary classifications.

 

 

A similar issue arises regarding the relation between the natural sciences and the social sciences. Just as philosophers sometimes complain of ‘science worship’ in their discipline, so social scientists sometimes complain of ‘natural science worship’ in theirs. There is no denying that the natural sciences – physics, chemistry, biology, etc. – are in a more advanced state than the social sciences – economics, sociology, anthropology, etc. A number of people have wondered why this is so. It can hardly be because natural scientists are smarter than social scientists. One possible answer is that the methods of the natural sciences are superior to those of the social sciences. If this is correct, then what the social sciences need to do to catch up is to ape the methods of the natural sciences. And to some extent, this has actually happened. The increasing use of mathematics in the social sciences may be partly a result of this attitude. Physics made a great leap forward when Galileo took the step of applying mathematical language to the description of motion; so it is tempting to think that a comparable leap forward might be achievable in the social sciences, if a comparable way of ‘mathematicizing’ their subject matter can be found.

 

Of course it can! All data confirm this; see e.g. emilkirkegaard.dk/en/?p=3925

 

Social science has the triple disadvantage of having 1) less smart researchers, 2) a more complex field, 3) fewer experimental options (due to ethical and monetary problems).

 

 

To be fair to the creation scientists, they do offer arguments that are specific to the theory of evolution. One of their favourite arguments is that the fossil record is extremely patchy, particularly when it comes to the supposed ancestors of Homo sapiens. There is some truth in this charge. Evolutionists have long puzzled over the gaps in the fossil record. One persistent puzzle is why there are so few ‘transition fossils’ – fossils of creatures intermediate between two species. If later species evolved from earlier ones as Darwin’s theory asserts, surely we would expect transition fossils to be very common? Creationists take puzzles of this sort to show that Darwin’s theory is just wrong. But the creationist arguments are uncompelling, notwithstanding the real difficulties in understanding the fossil record. For fossils are not the only or even the main source of evidence for the theory of evolution, as creationists would know if they had read The Origin of Species. Comparative anatomy is another important source of evidence, as are embryology, biogeography, and genetics. Consider, for example, the fact that humans and chimpanzees share 98% of their DNA. This and thousands of similar facts make perfect sense if the theory of evolution is true, and thus constitute excellent evidence for the theory. Of course, creation scientists can explain such facts too. They can claim that God decided to make humans and chimpanzees genetically similar, for reasons of His own. But the possibility of giving ‘explanations’ of this sort really just points to the fact that Darwin’s theory is not logically entailed by the data. As we have seen, the same is true of every scientific theory. The creationists have merely highlighted the general methodological point that data can always be explained in a multitude of ways. This point is true, but shows nothing special about Darwinism.

 

The author is confused about transitional fossils. All fossils are transitional; there is no point at which one species discretely ends and the next begins.

 

 

Human sociobiologists (henceforth simply ‘sociobiologists’) believe that many behavioural traits in humans can be given adaptationist explanations. One of their favourite examples is incest-avoidance. Incest – or sexual relations between members of the same family – is regarded as taboo in virtually every human society, and subject to legal and moral sanctions in most. This fact is quite striking, given that sexual mores are otherwise quite diverse across human societies. Why the prohibition on incest? Sociobiologists offer the following explanation. Children born of incestuous relationships often have serious genetic defects. So in the past, those who practised incest would have tended to leave fewer viable offspring than those who didn’t. Assuming that the incest-avoiding behaviour was genetically based, and thus transmitted from parents to their offspring, over a number of generations it would have spread through the population. This explains why incest is so rarely found in human societies today.

 

See: en.wikipedia.org/wiki/Westermarck_effect

 

 

If this response is correct, it means we should sharply distinguish the ‘scientific’ objections to sociobiology from the ‘ideological’ objections. Reasonable though this sounds, there is one point it doesn’t address: advocates of sociobiology have tended to be politically right-wing, while its critics have tended to come from the political left. There are many exceptions to this generalization, especially to the first half of it, but few would deny the trend altogether. If sociobiology is simply an impartial enquiry into the facts, what explains the trend? Why should there be any correlation at all between political opinions and attitudes towards sociobiology? This is a tricky question to answer. For though some sociobiologists may have had hidden political agendas, and though some of sociobiology’s critics have had opposing agendas of their own, the correlation extends even to those who debate the issue in apparently scientific terms. This suggests, though does not prove, that the ‘ideological’ and ‘scientific’ issues may not be quite so easy to separate after all. So the question of whether sociobiology is a value-free science is less easy to answer than might have been supposed.

 

This oft-repeated claim has been found to be wrong. It also doesn’t fit with other facts, such as Wilson – the father of sociobiology! – being a socialist. Dawkins has also expressed leftist views.

 

link.springer.com/article/10.1007/s12110-007-9024-y/fulltext.html

 

Critics of evolutionary psychology and sociobiology have advanced an adaptationists-as-right-wing-conspirators (ARC) hypothesis, suggesting that adaptationists use their research to support a right-wing political agenda. We report the first quantitative test of the ARC hypothesis based on an online survey of political and scientific attitudes among 168 US psychology Ph.D. students, 31 of whom self-identified as adaptationists and 137 others who identified with another non-adaptationist meta-theory. Results indicate that adaptationists are much less politically conservative than typical US citizens and no more politically conservative than non-adaptationist graduate students. Also, contrary to the “adaptationists-as-pseudo-scientists” stereotype, adaptationists endorse more rigorous, progressive, quantitative scientific methods in the study of human behavior than non-adaptationists.

 

emilkirkegaard.dk/en/wp-content/uploads/Testing_the_Controversy.pdf

Review: Bad Pharma

www.goodreads.com/book/similar/19171192-bad-pharma-how-drug-companies-mislead-doctors-and-harm-patients

lib.free-college.org/view.php?id=864114

 

Having already read Peter Gøtzsche’s Dødelig medicin og organiseret kriminalitet: Hvordan medicinalindustrien har korrumperet sundhedsvæsenet (Art People, 2013; published in English as Deadly Medicines and Organised Crime), this book did not offer much new. However, it did present things better than Gøtzsche did. To be fair, he focused mostly on proving that the pharma industry consists of organized criminals. I agree, but the science is more interesting than reading about 100 different cases of pharma companies cheating and getting fined.

 

 

 

If you’re a nerd, you might think: these files are electronic; they’re PDFs, a type of file specifically designed to make sharing electronic documents convenient. Any nerd will know that if you want to find something in an electronic document, it’s easy: you just use the ‘find’ command: type in, say, ‘peripheral neuropathy’, and your computer will find the phrase straight off. But no: unlike almost any other serious government document in the world, the PDFs from the FDA are a series of photographs of pages of text, rather than the text itself. This means you cannot search for a phrase. Instead, you have to go through it, searching for that phrase, laboriously, by eye.

 

Easily solved by OCR software.

en.wikipedia.org/wiki/Optical_character_recognition
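For what it’s worth, a few lines of Python can turn such scanned PDFs into searchable text. A sketch, assuming the third-party pdf2image and pytesseract packages (plus the Tesseract and poppler system tools) are installed; the file name is hypothetical:

```python
# Sketch: make a scanned, image-only PDF searchable with OCR.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("fda_review.pdf", dpi=300)   # render each page as an image
text = "\n".join(pytesseract.image_to_string(p) for p in pages)

if "peripheral neuropathy" in text.lower():
    print("phrase found")
```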

 

 

Sharing data of individual patients’ outcomes in clinical trials, rather than just the final summary result, has several significant advantages. First, it’s a safeguard against dubious analytic practices. In the VIGOR trial on the painkiller Vioxx, for example, a bizarre reporting decision was made. The aim of the study was to compare Vioxx against an older, cheaper painkiller, to see if it was any less likely to cause stomach problems (this was the hope for Vioxx), and also if it caused more heart attacks (this was the fear). But the date cut-off for measuring heart attacks was much earlier than that for measuring stomach problems. This had the result of making the risks look less significant, relative to the benefits, but it was not declared clearly in the paper, resulting in a giant scandal when it was eventually noticed. If the raw data on patients was shared, games like these would be far easier to spot, and people might be less likely to play them in the first place.

 

Occasionally – with vanishing rarity – researchers are able to obtain raw data, and re-analyse studies that have already been conducted and published. Daniel Coyne, Professor of Medicine at Washington University, was lucky enough to get the data on a key trial for epoetin, a drug given to patients on kidney dialysis, after a four-year-long fight. The original academic publication on this study, ten years earlier, had switched the primary outcomes described in the protocol (we will see later how this exaggerates the benefits of treatments), and changed the main statistical analysis strategy (again, a huge source of bias). Coyne was able to analyse the study as the researchers had initially stated they were planning to in their protocol; and when he did, he found that they had dramatically overstated the benefits of the drug. It was a peculiar outcome, as he himself acknowledges: ‘As strange as it seems, I am now the sole author of the publication on the predefined primary and secondary results of the largest outcomes trial of epoetin in dialysis patients, and I didn’t even participate in the trial.’ There is room, in my view, for a small army of people doing the very same thing, re-analysing all the trials that were incorrectly analysed, in ways that deviated misleadingly from their original protocols.

 

This is the kind of second-order scientist that was described in the paper:

Nosek, Brian A., and Yoav Bar-Anan. “Scientific utopia: I. Opening scientific communication.” Psychological Inquiry 23.3 (2012): 217-243.

 

This paper is extremely interesting by the way. Read it. Yes, seriously!

Review: Making Sense of Heritability

Download: www.libgen.net/search.php?search_type=magic&search_text=making+sense+of+heritability&submit=Dig+for

 

This is a GREAT book, which goes down to the basics about heritability and the various claims people have made against it. Highly recommended. The best book of the 29 I have read this year.

 

The denial of genetically based psychological differences is the kind of sophisticated error normally accessible only to persons having Ph.D. degrees.

David Lykken

 

Quote checks out. edge.org/conversation/-how-can-educated-continue-to-be-radical-environmentalists

 

 

I was introduced to the nature–nurture debate by reading Ned Block and Gerald Dworkin’s well-known and widely cited anthology about the IQ controversy (Block & Dworkin 1976a). This collection of articles has long been the main source of information about the heredity–environment problem for a great number of scientists, philosophers, and other academics. It is not an exaggeration to say that the book has been the major influence on thinking about this question for many years. Like most readers, I also left the book with a feeling that hereditarianism (the view that IQ differences among individuals or groups are in substantial part due to genetic differences) is facing insuperable objections that strike at its very core.

 

There was something very satisfying, especially to philosophers, about the way hereditarianism was criticized there. A strong emphasis was on conceptual and methodological difficulties, and the central arguments against hereditarianism appeared to have full destructive force independently of empirical data, which are, as we know, both difficult to evaluate and inherently unpredictable.

 

So this looked like a philosopher’s dream come true: a scientific issue with potentially dangerous political implications was defused not through an arduous exploration of the messy empirical material but by using a distinctly philosophical method of conceptual analysis and methodological criticism. It was especially gratifying that the undermined position was often associated with politically unacceptable views like racism, toleration of social injustice, etc. Besides, the defeat of that doctrine had a certain air of finality. It seemed to be the result of very general, a priori considerations, which, if correct, could not be reversed by “unpleasant” discoveries in the future.

 

But very soon I started having second thoughts about Block and Dworkin’s collection. The reasons are worth explaining in some detail I think, because the book is still having a considerable impact, especially on discussions in philosophy of science.

 

First, some of the arguments against hereditarianism presented there were just too successful. The refutations looked so utterly simple, elegant, and conclusive that it made me wonder whether competent scientists could have really defended a position that was so manifestly indefensible. Something was very odd about the whole situation.

 

 

There is indeed something to this. This book is a premier case of what Weinberg meant with his comment: “…a knowledge of philosophy does not seem to be of use to physicists – always with the exception that the work of some philosophers helps us to avoid the errors of other philosophers.”

 

See: www.abstractdelights.com/no-respect

 

 

Of course, Bouchard would be justified in not worrying too much about these global methodological criticisms if the only people who made a fuss over them were philosophers of science. Even with this unfriendly stance becoming a consensus in philosophy of science, scientists might still remain unimpressed because many of them would probably be sympathetic to James Watson’s claim: “I do not like to suffer at all from what I call the German disease, an interest in philosophy” (Watson 1986: 19).

 

The source is: Watson, J. D. 1986, “Biology: A Necessarily Limitless Vista,” in S. Rose and L. Appignanesi (eds.), Science and Beyond, Oxford, Blackwell.

 

 

At this point I am afraid I may lose some of my scientific readers. Remembering Steven Weinberg’s statement that the insights of philosophers have occasionally benefited scientists, “but generally in a negative fashion – by protecting them from the preconceptions of other philosophers” (Weinberg 1993: 107), they might conclude that it is best just to avoid reading any philosophy (including this book), and that in this way they will neither contract preconceptions nor need protection from them. But the problem is that the preconceptions discussed here do not originate from a philosophical armchair. Scientists should be aware that to a great extent these preconceptions come from some of their own. Philosophers of science uncritically accepted these seductive but ultimately fallacious arguments from scientists, repackaged them a little, and then fed them back to the scientific community, which often took them very seriously. Bad science was mistaken for good philosophy.

 

Sesardic clearly saw the same connection to Weinberg’s comments as I did. :)

 

 

It may seem surprising that Jones dismissed the views of the founder of his own laboratory (Galton Laboratory, University College London) in such a manner. But then again this should perhaps not be so surprising. One can hardly be expected to study seriously the work of a man whom one happens to call publicly “Victorian racist swine” – the way Jones referred to Galton in an interview (Grove 1991). Also, in Jones’s book Genetics for Beginners (Jones & Van Loon 1993: 169), Galton is pictured in a Nazi uniform, with a swastika on his sleeve.

 

The virulent anti-Nazism among these lefties is extraordinary. It targets everybody having the least to do with ideas the Nazis also liked. It is a wonder no one attacks vegetarians, or people who campaign against smoking, for being Nazis…

 

 

Arthur Jensen once said that “a heritability study may be regarded

as a Geiger counter with which one scans the territory in order to find

the spot one can most profitably begin to dig for ore” (Jensen 1972b:

243). That Jensen’s advice as to how to look upon heritability is merely

an application of a standard general procedure in causal reasoning is

confirmed by the following observation from an introduction to causal

analysis: “the decomposition of statistical associations represents a first

step. The results indicate which effects are important and which may be

safely ignored, that is, where we ought to start digging in order to uncover

the nature of the causal mechanisms producing association between our

variables” (Hellevik 1984: 149). High heritability of a trait (in a given

population) often signals that it may be worthwhile to dig further, in the

sense that an important genetic mechanism controlling differences in this

trait may thus be uncovered.

 

Another great Jensen insight.

 

Citation is to: 1972b, “Discussion,” in L. Ehrman, G. S. Omenn, E. Caspari (eds.), Genetics,

Environment and Behavior, New York, Academic Press.

 

 

Second, even if a trait is shared by all organisms in a given population

it can still be heritable – if we take a broader perspective, and compare

that population with other populations. The critics of heritability are often

confused, and switch from one perspective to another without noticing it.

Consider the following “problem” for heritability:

 

the heritability of “walking on two legs” is zero. And yet walking on two legs

is clearly a fundamental property of being human, and is one of the more

obvious biological differences between humans and other great apes such

as chimpanzees or gorillas. It obviously depends heavily on genes, despite

having a heritability of zero. (Bateson 2001b: 565; cf. Bateson 2001a: 150–

151; 2002: 2212)

 

When Bateson speaks about the differences between humans and other

great apes, the heritability of walking on two legs in that population

(consisting of humans, chimpanzees, and gorillas) is certainly not zero.

On the other hand, within the human species itself the heritability may

well be zero. So, if it is just made entirely clear which population is

being discussed, no puzzling element remains. In the narrower popula-

tion (humans), the question “Do genetic differences explain why some

people walk on two legs and some don’t?” has a negative answer because

there are no such genetic differences. In the broader population (humans,

chimpanzees, and gorillas) the question “Do genetic differences explain

why some organisms walk on two legs and some don’t?” has an affirma-

tive answer. All this neatly accords with the logic of heritability, and cre-

ates no problem whatsoever. The critics of hereditarianism like to repeat

that heritability is a population-relative statistic, but when they raise this

kind of objection it seems that they themselves forget this important

truth.

 

Things like the number of fingers are also heritable within populations. There are rare genetic mutations that cause supernumerary body parts: en.wikipedia.org/wiki/Supernumerary_body_part

 

However, these are very rare, so to spot them one needs a huge sample size. Surely the heritability of having 6 fingers is high, while the heritability of having 4 fingers is low, but not zero. Of the people who have 4 fingers, most of the cases are probably caused by unique environment (i.e. accidents), but some are caused by genetics.

 

 

(4) It is often said that in individual cases it is meaningless to compare

the importance of interacting causes: “If an event is the result of the joint

operation of a number of causative chains and if these causes ‘interact’

in any generally accepted meaning of the word, it becomes conceptually

impossible to assign quantitative values to the causes of that individual

event” (Lewontin 1976a: 181). But this is in fact not true. Take, for example,

the rectangle with width 2 and length 1 (from Figure 2.3). Its area is 2,

which is considerably below the average area for all rectangles (around

100). Why is that particular rectangle smaller than most others? Is its

width or its length more responsible for that? Actually, this question is

not absurd at all. It has a straightforward and perfectly meaningful answer.

The rectangles with that width (2) have on average the area that is identical

to the mean area for all rectangles (100.66), so the explanation why the

area of that particular rectangle deviates so much from the mean value

cannot be in its width. It is its below-average length that is responsible.

 

Even the usually cautious David Lykken slips here by condemning

the measurement of causal influences in the individual case as inherently

absurd: “It is meaningless to ask whether Isaac Newton’s genius was due

more to his genes or his environment, as meaningless as asking whether

the area of a rectangle is due more to its length or its width” (Lykken

1998a: 24). Contrary to what he says, however, it makes perfect sense to

inquire whether Newton’s extraordinary contributions were more due to

his above-average inherited intellectual ability or to his being exposed

to an above-average stimulating intellectual environment (or to some

particular combination of the two). The Nuffield Council on Bioethics

makes a similar mistake in its report on genetics and human behavior:

“It is vital to understand that neither concept of heritability [broad or

narrow] allows us to conclude anything about the role of heredity in the

development of a characteristic in an individual” (Nuffield 2002: 40). On

the contrary, if the broad heritability of a trait is high, this does tell us

that any individual’s phenotypic divergence from the mean is probably

more caused by a non-standard genetic influence than by a non-typical

environment. For a characteristically clear explanation of why gauging

the contributions of heredity and environment is not meaningless even in

an individual case, see Sober 1994: 190–192.

 

This is a good point. The reason not to talk about the causes of a particular level of g in some person is not that it is a meaningless question; it is that it is difficult to know the answer. But in some cases it is clearly possible, cf. my number-of-fingers scenario above. The rectangle logic can also be made concrete, as in the sketch below.
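
The rectangle argument is just a comparison of conditional means. Here is a minimal Python sketch; the (width, length) pairs are hypothetical, since the book’s Figure 2.3 is not reproduced here, and are chosen only so that, as in Sesardic’s example, the width-2 rectangles average out near the grand mean.

from statistics import mean

# Hypothetical (width, length) pairs mimicking the structure of Figure 2.3.
rects = [(2, 1), (2, 100), (10, 10), (10, 12), (20, 5), (20, 6)]
areas = [w * l for w, l in rects]
grand_mean = mean(areas)  # 107

def mean_area_given(dim_index, value):
    # Mean area among rectangles sharing one dimension (width=0, length=1).
    return mean(w * l for (w, l) in rects if (w, l)[dim_index] == value)

print(grand_mean)             # 107
print(mean_area_given(0, 2))  # 101: width-2 rectangles are about average,
                              # so width does not explain why (2, 1) is small
print(mean_area_given(1, 1))  # 2: length-1 rectangles are far below average,
                              # so the below-average length is responsible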

 

 

Sesardic mentions two studies finding that physical attractiveness is not correlated with intelligence. That goes against what I believe(d?). He cites:

 

Feingold, A. 1992, “Good-looking People Are Not What We Think,” Psychological Bulletin 111: 304–341.

 

Langlois, J. H., Kalakanis, L., Rubenstein, A. J., Larson, A., Hallam, M., and Smoot, M. 2000, “Maxims or Myths of Beauty? A Meta-Analytic and Theoretical Review,” Psychological Bulletin 126: 390–423.

emilkirkegaard.dk/en/wp-content/uploads/Maxims-or-Myths-of-Beauty.pdf

 

But I apparently don’t have access to the first one. The second one I do have. In it one can read:

 

According to this maxim, there is no necessary correspondence

between external appearance and the behavior or personality of an

individual (Ammer, 1992). Two meta-analyses have examined the

relation between attractiveness and some behaviors and traits

(Feingold, 1992b; L. A. Jackson, Hunter, & Hodge, 1995). Fein-

gold (1992b) reported significant relations between attractiveness

and measures of mental health, social anxiety, popularity, and

sexual activity but nonsignificant relations between attractiveness

and sociability, internal locus of control, freedom from self-

absorption and manipulativeness, and sexual permissiveness in

adults. Feingold also found a nonsignificant relation between at-

tractiveness and intelligence (r = .04) for adults, whereas L. A.

Jackson et al. found a significant relation for both adults (d = .24

overall, d = .02 once selected studies were removed) and for

children (d = .41).

 

These meta-analyses suggest that there may be a relation be-

tween behavior and attractiveness, but the inconsistencies in re-

sults call for additional attention. Moreover, the vast majority of

dependent variables analyzed by Feingold (1992b) and L. A.

Jackson et al. (1995) assessed traits as defined by psychometric

tests (e.g., IQ) rather than behavior as defined by observations of

behaviors in actual interactions. Thus, to fully understand the

relations among appearance, behaviors, and traits, it is important to

broaden the conception of behavior beyond that used by Feingold

and L. A. Jackson et al. If beauty is only skin-deep, then a

comprehensive meta-analysis of the literature should find no sig-

nificant differences between attractive and unattractive people in

their behaviors, traits, or self-views.

 

So, maybe. It seems unlikely that g and PA (physical attractiveness) are NOT associated, if only through the effect of mating choices, since females prefer males with high SES and males prefer females with high PA. Then there is the mutational load hypothesis, and the fact that smarter people presumably are better at taking care of their bodies, which increases PA. I find it very difficult indeed to believe that they aren’t correlated. To compare the effect sizes quoted above on a common scale, the d values can be converted to correlations; see the sketch below.
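
A minimal sketch using the standard conversion from Cohen’s d to a point-biserial r, namely r = d / sqrt(d² + a) with a = 4 for equal group sizes. The assumption of roughly equal attractive/unattractive groups is mine, not something the quote states.

import math

def d_to_r(d, ratio=1.0):
    # Convert Cohen's d to a point-biserial correlation.
    # 'ratio' is n1/n2; a = (ratio + 1)**2 / ratio equals 4 for equal groups.
    a = (ratio + 1) ** 2 / ratio
    return d / math.sqrt(d ** 2 + a)

# Effect sizes reported in the Langlois et al. passage above:
for label, d in [("adults, overall", 0.24),
                 ("adults, selected studies removed", 0.02),
                 ("children", 0.41)]:
    print(f"{label}: d = {d:.2f} -> r = {d_to_r(d):.2f}")

So Jackson et al.’s significant effects correspond to correlations of roughly .12 (adults) and .20 (children): small, but clearly larger than Feingold’s r = .04.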

 

 

In my opinion, this kind of deliberate misrepresentation in attacks on

hereditarianism is less frequent than sheer ignorance. But why is it that a

number of people who publicly attack “Jensenism” are so poorly informed

about Jensen’s real views? Given the magnitude of their distortions and

the ease with which these misinterpretations spread, one is alerted to

the possibility that at least some of these anti-hereditarians did not get

their information about hereditarianism first hand, from primary sources,

but only indirectly, from the texts of unsympathetic and sometimes quite

biased critics. In this connection, it is interesting to note that several

authors who strongly disagree with Jensen (Longino 1990; Bowler 1989;

Allen 1990; Billings et al. 1992; McInerney 1996; Beckwith 1993; Kassim

2002) refer to his classic paper from 1969 by citing the volume of the

Harvard Educational Review incorrectly as “33” (instead of “39”). What

makes this mis-citation noteworthy is that the very same mistake is to

be found in Gould’s Mismeasure of Man (in both editions). Now the

fact that Gould’s idiosyncratic lapsus calami gets repeated in the later

sources is either an extremely unlikely coincidence or else it reveals that

these authors’ references to Jensen’s paper actually originate from their

contact with Gould’s text, not Jensen’s.

 

Gotcha. A nice illustrative case of the trick map makers used to use to prove plagiarism: en.wikipedia.org/wiki/Copyright_trap

 

Incidentally, in this case it ended up having another use! :)

 

 

Sesardic quotes:

 

In December 1986 our newly-born daughter was diagnosed to be suffering

from a genetically caused disease called Dystrophic Epidermolysis Bullosa

(EB). This is a disease in which the skin of the sufferer is lacking in certain

essential fibers. As a result, any contact with her skin caused large blisters

to form, which subsequently burst leaving raw open skin that healed only

slowly and left terrible scarring. As EB is a genetically caused disease it

is incurable and the form that our daughter suffered from usually causes

death within the first six months of life . . . Our daughter died after a painful

and short life at the age of only 12 weeks. (quoted in Glover 2001: 431 –

italics added)

 

from: Glover, J. 2001, “Future People, Disability, and Screening,” in J. Harris (ed.),

Bioethics, Oxford, Oxford University Press.

 

Nasty disease indeed. Only eugenics can avoid such atrocities.

 

 

On the contrary, empirical evidence suggests that for many important

psychological traits (particularly IQ), the environmental influences that

account for phenotypic variation among adults largely belong to the non-

shared variety. In particular, adoption studies of genetically unrelated

children raised in the same family show that for many traits the adult

phenotypic correlation among these children is very close to zero (Plomin

et al. 2001: 299–300). This very surprising but consistent result points

to the conclusion that we may have greatly overestimated the impact

of variation in shared environmental influences. The fact that variation

within a normal range does not have much effect was dramatized in the

following way by neuroscientist Steve Petersen:

 

At a minimum, development really wants to happen. It takes very impov-

erished environments to interfere with development because the biological

system has evolved so that the environment alone stimulates development.

What does this mean? Don’t raise your children in a closet, starve them, or

hit them in the head with a frying pan. (Quoted in Bruer 1999: 188)

 

But if social reforms are mainly directed at eliminating precisely these

between-family inequalities (economic, social, and educational), and if

these differences are not so consequential as we thought, then egalitar-

ianism will find a point of resistance not just in genes but also in the

non-heritable domain, i.e., in those uncontrollable and chaotically emerg-

ing environmental differences that by their very nature cannot be an easy

object for social manipulation.

 

All this shows that it is irresponsible to disregard constraints on mal-

leability and fan false hopes about what social or educational reforms can

do. As David Rowe said:

 

As social scientists, we should be wary of promising more than we are likely

to deliver. Physicists do not greet every new perpetual motion machine,

created by a basement inventor, with shouts of joy and claims of an endless

source of electrical or mechanical power; no, they know the laws of physics

would prevent it. (Rowe 1997: 154)

 

I will end this chapter with another qualification. Although heritability

puts constraints on malleability it is, strictly speaking, incorrect to say

that the heritable part of phenotypic variance cannot be decreased by

environmental manipulation. It is true that if heritability is, say, 80 percent

then at most 20 percent of the variation can be eliminated by equalizing

environments. But if we consider redistributing environments, without

necessarily equalizing them, a larger portion of variance than 20 percent

can be removed.

 

Table 5.5 gives an illustration of how this might work.

In this example with just two genotypes and two environments (equally

distributed in the population), the main effect of the genotype on the vari-

ation in the trait (say, IQ) is obviously stronger than the environmental

effect. Going from G2 to G1 increases IQ 20 points, while going from the

less favorable environment (E2) to the more favorable one (E1) leads

to an increase of only 10 points. Heritability is 80 percent, the genetic

variance being 100 and the environmental variance being 25. Now if we

expose everyone to the more favorable environment (E1) we will com-

pletely remove the environmental variance (25), and the variance in the

new population will be 100. The genetic variance survives environmental

manipulation unscathed.

 

Table:

emilkirkegaard.dk/en/wp-content/uploads/ScreenHunter_90-Sep.-23-13.57.png

 

But there is a way to make an incursion into the “genetic territory.”

Suppose we expose all those endowed with G1 to the less favorable

environment (E2) and those with G2 to the more favorable environment

(E1). In this way we would get rid of the highest and lowest score, and

we would be left only with scores of 95 and 105. In terms of variance, we

would have succeeded in eliminating 80 percent of variance by manipu-

lating environment, despite heritability being 80 percent.

 

How is this possible? The answer is in the formula for calculating vari-

ance in chapter 1 (see p. 21). One component of variance is genotype–

environment correlation, which can have a negative numerical value.

This is what has happened in our example. The phenotype-increasing

genotype was paired with the phenotype-decreasing environment, and

the phenotype-decreasing genotype was paired with the phenotype-

increasing environment. This move introduced the negative G–E corre-

lation and neutralized the main effects, bringing about a drastic drop in

variation.
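
In symbols, the decomposition is V_P = V_G + V_E + 2·Cov(G,E). Here is a minimal Python sketch of the whole example; the four phenotype values are my reconstruction from the description above (genotype main effect 20 IQ points, environment main effect 10 points), since Table 5.5 itself is only linked as an image.

from statistics import pvariance

# Reconstructed phenotype (IQ) values: G2->G1 adds 20 points, E2->E1 adds 10.
P = {("G1", "E1"): 115, ("G1", "E2"): 105,
     ("G2", "E1"): 95,  ("G2", "E2"): 85}

# 1. Genotypes and environments equally distributed and uncorrelated:
full = [P[g, e] for g in ("G1", "G2") for e in ("E1", "E2")]
print(pvariance(full))  # 125 = V_G (100) + V_E (25); heritability = 100/125 = 0.8

# 2. Equalize environments (everyone gets E1): V_E vanishes, V_G survives.
print(pvariance([P["G1", "E1"], P["G2", "E1"]]))  # 100

# 3. Redistribute instead: pair each genotype with the opposing environment.
print(pvariance([P["G1", "E2"], P["G2", "E1"]]))  # 25 -> 80 percent removed

In step 3 the genetic deviations (±10) are paired with opposing environmental deviations (∓5), so Cov(G,E) = −50 and V_P = 100 + 25 − 100 = 25, which is exactly the drastic drop in variation described above.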

 

The strategy calls to mind the famous Kurt Vonnegut story “Harrison

Bergeron,” where the society intervenes very early and suppresses the

mere expression of superior innate abilities by imposing artificial obsta-

cles on gifted individuals. Here is just one short passage from Vonnegut:

 

And George, while his intelligence was way above normal, had a little

mental-handicap radio in his ear – he was required by law to wear it at all

times. It was tuned to a government transmitter and, every twenty seconds

or so, the transmitter would send out some sharp noise to keep people like

George from taking unfair advantage of their brains. (Vonnegut 1970: 7)

 

We all get a chill from the nightmare world of “Harrison Bergeron.” But

in its milder forms the idea that if the less talented cannot be brought

up to the level of those better endowed, the latter should then be held

back in their development for the sake of equality, is not entirely with-

out adherents. In one of the most carefully argued sociological studies

on inequality there is an interesting proposal in that direction, about

how to reduce differences in cognitive abilities that are caused by genetic

differences:

 

A society committed to achieving full cognitive equality would, for example,

probably have to exclude genetically advantaged children from school. It

might also have to impose other handicaps on them, like denying them

access to books and television. Virtually no one thinks cognitive equality

worth such a price. Certainly we do not. But if our goal were simply to reduce

cognitive inequality to, say, half its present level, instead of eliminating it

entirely, the price might be much lower. (Jencks et al. 1972: 75–76 – emphasis

added)

 

So although Jencks and his associates concede that excluding geneti-

cally advantaged children from school and denying them access to books

may be too drastic, they appear to think that the price of equality could

become acceptable if the goal was lowered and measures made more mod-

erate. Are they suggesting that George keeps the little mental-handicap

radio in his ear but that the noise volume should be set only at half

volume?

 

I wonder if someone could make a good video based on this… Oh that’s right…

 

www.youtube.com/watch?v=F1eHkbmUJBQ

 

 

David Lykken had a good comment on this tendency of some

Darwinians (he had John Tooby and Leda Cosmides in mind) to pub-

licly dissociate themselves from behavior genetics, in the hope that this

move would make their own research less vulnerable to political criti-

cisms: “Are these folks just being politic, just claiming only the minimum

they need to pursue their own agenda while leaving the behavior geneti-

cists to contend with the main armies of political correctness?” (Lykken

1998b).

 

There are some obvious, and other less obvious, consequences of polit-

ically inspired, vituperative attacks on a given hypothesis H. On the obvi-

ous side, many scientists who believe that H is true will be reluctant to

say so, many will publicly condemn it in order to eliminate suspicion that

they might support it, anonymous polls of scientists’ opinions will give

a different picture from the most vocal and most frequent public pro-

nouncements (Snyderman & Rothman 1988), it will be difficult to get

funding for research on “sensitive” topics, the whole research area will

be avoided by many because one could not be sure to end up with the

“right” conclusion, texts insufficiently critical of “condemned” views

will not be accepted for publication, etc.

 

On the less obvious side, a nasty campaign against H could have the

unintended effect of strengthening H epistemically, and making the criti-

cism of H look less convincing. Simply, if you happen to believe that H is

true and if you also know that opponents of H will be strongly tempted

to “play dirty,” that they will be eager to seize upon your smallest mis-

take, blow it out of all proportion, and label you with Dennett’s “good

epithets,” with a number of personal attacks thrown in for good measure,

then if you still want to advocate H, you will surely take extreme care to

present your argument in the strongest possible form. In the inhospitable

environment for your views, you will be aware that any major error is a

liability that you can hardly afford, because it will more likely be regarded

as a reflection of your sinister political intentions than as a sign of your

fallibility. The last thing one wants in this situation is the disastrous combi-

nation of being politically denounced (say, as a “racist”) and being proved

to be seriously wrong about science. Therefore, in the attempt to make

themselves as little vulnerable as possible to attacks they can expect from

their uncharitable and strident critics, those who defend H will tread very

cautiously and try to build a very solid case before committing themselves

publicly. As a result, the quality of their argument will tend to rise, if the

subject matter allows it.

 

Interesting effects of the unpopularity of the views.

 

 

First of all, the issue about heritability is obviously a purely empirical

and factual one. So there is a strong case for denying that it can affect

our normative beliefs. But it is worth noting that the idea that a certain

heritability value could have political implications was not only criticized

for violating Hume’s law, but also for being politically dangerous. Bluntly,

if the high heritability of IQ differences between races really has racist

implications then it would seem that, after all, science could actually dis-

cover that racism is true.

 

The danger was clearly recognized by David Horowitz in his comments

on a statement on race that the Genetics Society of America (GSA)

wanted to issue in 1975. A committee preparing the statement took the

line that racism is best fought by demonstrating that racists’ belief in the

heritability of the black–white difference in IQ is disproved by science.

Horowitz objected:

 

The proposed statement is weak morally, for the following reason: Racists

assert that blacks are genetically inferior in I.Q. and therefore need not

be treated as equals. The proposed statement disputes the premise of the

assertion, but not the logic of the conclusion. It does not perceive that the

premise, while it may be mistaken, is not by itself racist: it is the conclusion

drawn (wrongly) from it that is racist. Even if the premise were correct, the

conclusion would not be justified … Yet the proposed statement directs its

main fire at the premise, and by so doing seems to accept the racist logic.

It places itself in a morally vulnerable position, for if, at some future time,

it turns out that the premise is correct, then the whole GSA case collapses, together

with its case for equal opportunity. (Quoted in Provine 1986: 880)

 

The same argument was made by others:

 

To rest the case for equal treatment of national or racial minorities on

the assertion that they do not differ from other men is implicitly to admit

that factual inequality would justify unequal treatment. (Hayek 1960:

86)

But to fear research on genetic racial differences, or the possible existence

of a biological basis for differences in abilities, is, in a sense, to grant the

racist’s assumption: that if it should be established beyond reasonable doubt

that there are biological or genetically conditioned differences in mental

abilities among individuals or groups, then we are justified in oppressing

or exploiting those who are most limited in genetic endowment. This is, of

course, a complete non sequitur. (Jensen 1972a: 329)

If someone defends racial discrimination on the grounds of genetic differ-

ences between races, it is more prudent to attack the logic of his argument

than to accept the argument and deny any differences. The latter stance can

leave one in an extremely awkward position if such a difference is subse-

quently shown to exist. (Loehlin et al. 1975: 240)

But it is a dangerous mistake to premise the moral equality of human beings

on biological similarity because dissimilarity, once revealed, then becomes

an argument for moral inequality. (Edwards 2003: 801)

 

Good point indeed.