Clear Language, Clear Mind

March 19, 2017

European-produced culture

Filed under: Copyright and filesharing — Tags: , , , , — Emil O. W. Kirkegaard @ 07:07

This is a break from the usual technical sciency stuff!

I talked with John Fuerst about how strange it is that people all over the Western world watched the same American-produced cartoons as children. This is a kind of pan-Western environmental effect, if there are any long-term effects of watching these aside from seemingly trivial things like recognizing and singing the theme songs. Here are some Danish versions of US-produced cartoons:

Gummi Bears

Chip og Chap

Someone also made a compilation of the Danish intros.

Cultural imperialism

From a non-US perspective, it is also a little sad that everybody else is watching culture produced mainly by one country. Western Europe (depending on the exact definition), after all, has more people than the USA (~413 million vs. ~320 million), especially if we only count European Americans (~198 million). It's a function of global capitalism and the dynamics of a dominant language. Following WW2, English became the dominant language of culture. As such, if one wants to produce culture and sell it for profit/maximize the audience, it makes sense to do so in English, since this automatically opens up the market. If one produced it in one's native language, one would have to commission a translation, which is costly and hard to do well. The result is that cultural production tends to be in English, even in non-English-speaking countries.

The most obvious case of this is where musicians sing in English, not their own language. Some Danish examples:

Dizzy Mizz Lizzy – Love is a loser’s game

(I’m a cynical romantic, so this resonates with me)

Dizzy Mizz Lizzy – 67 seas in your eyes

DAD – Sleeping my day away lyrics

Veto – Built to Fail

Mew – Snow Brigade

Aqua – Barbie Girl

I cheated a little: it's Dano-Norwegian, but Norway produces its best stuff together with Denmark! ;)

Trentemøller – Moan

I picked the Danish-in-English music I like the most. There is a lot more, but I could not immediately find a list. I actually listen almost exclusively to electronic, instrumental music; music with intelligible lyrics interferes with my thinking. As such, relative to my actual listening habits, the above is skewed towards music with lyrics.

European-produced cartoons

After going over some examples of US-produced cartoons I watched as a child, we wondered which non-US cartoons I had seen. It's not something one can immediately know: the versions I saw were obviously in Danish, it's hard to tell a good translation from a native production, and if it is a translation, it may be translated from a language other than English. However, I can think of some examples:

Asterix and Cleopatra (1968)


The Twelve Tasks of Asterix (1976)

Tintin


Lucky Luke

Also Belgian.


Swedish-Finnish. Danish intro.

Danish-produced cartoons

Fuglekrigen i Kanøfleskoven (1990)

A dark children’s movie, but also quite lovely. Unfortunately, I cannot find any English subtitles.

Bennys badekar (1971)

A surrealistic children's movie. No English version seems to exist either. It features a lot of curious things, like a scene with topless mermaids singing about how pretty they are. It seems no one has put a complete version on Youtube (add to todo list…), but here's the mermaid scene:

Inter-language culture

[Linguists call this code-mixing. I can almost no longer speak pure Danish. I have to concentrate when speaking with my grandmother (age ~85), the only person I know who doesn't speak English at all.]

The Nordic countries plus the Netherlands are becoming effectively bilingual. In Denmark, English class currently begins in first grade, but will surely begin in 0th grade soon; the starting age has been lowered repeatedly in recent decades. When I started primary school in 1995, our school took part in an experiment to begin English in second grade, whereas before it began in fourth grade. In general, I think foreign languages should be taught in the early years of primary school, because children are naturally amazing at picking up languages, whereas they are not very good at math. One could simply delay the teaching of math to the later grades (on an opt-in basis, to allow gifted students to begin early). But I digress.

An interesting effect of this near bilingualism is that it allows for inter-language culture. Two examples:


Wikipedia notes:

The band uses an unusual mixture of Danish and English in their lyrics; they started singing mostly in English with just a few Danish lyrics, but gradually, they have been using Danish more frequently in their songs. In 2009, when interviewed and asked about the language mix, frontman Kvamm said: “It’s important for me to use the Danish kind of English that I speak…my mother tongue Danish, and my second language English, are very present to me in thinking and talking and speaking with others, and writing. Also in songwriting. And things just take form in one of those languages, or a mixture in between them. I can’t really find a system to what goes the English way and what goes the Danish.”

Still, most of the songs are almost entirely in one language. The album version of this song is completely in English, aside from one word in the title, but the live version is a bit more Danified:

And here’s one in Danish:

(Title means: Again and again and)

De Nattergale – The Julekalender

This is an adult (not like that!) Christmas calendar with dark humor, created by a comedy group. The language is standard Danish (the narrator), rural dialect (the farm people), mixed Danish-English (nisserne, the elves), and Copenhagen dialect (nåseren). To get to the mixed part, go to 5:00. It possibly sounds hilarious even to people who don't speak Danish, as it's a thorough jumble of Danish and English.

Fun facts about Disney

So who did Disney copy and copyright? Well, there's a list on Wikipedia, but here are some totally-not-cherry-picked examples:

  • Snow White and the Seven Dwarfs, German
  • Pinocchio, Italian
  • Cinderella, French
  • Alice in Wonderland, British
  • Peter Pan, Scottish
  • Sleeping Beauty, French
  • The Little Mermaid, Danish
  • Beauty and the Beast, French
  • Aladdin, Arab folklore
  • Mulan, Chinese folklore
  • The Princess and the Frog, German
  • Tangled, German
  • Frozen, Danish

So, if we want to be a little trolly, we might say that this is American business as usual: take other people’s stuff and profit from it. Then tell everybody how great you are.

PS. I was a little inconsistent with my use of “European” to mean either from Europe or by Europeans, no matter where they live. Hopefully it is clear enough.

PPS. You figured it out: I like dark and surrealistic.

October 27, 2015

In favor of method diversity by the non-use of giants

Filed under: Copyright and filesharing, Science — Tags: , , — Emil O. W. Kirkegaard @ 23:17

I had the impression that, since recognition of [problem] dates back at least to [person from a long time ago], there was a voluminous literature and [statistics to deal with the problem] was a solved problem, so I’m a little troubled that you seem to be trying to invent your own methods and aren’t citing much in the way of prior work.

This anonymous critique says that I'm not building on top of what already exists but instead re-inventing the wheel, perhaps even a square one. Underlying the criticism is a view of scientific progress as the accumulation of knowledge over time. We know more now than we used to (though some things we think we know, we still get wrong!), and this is because new scientists don't start finding out how everything works (the goal of science) from scratch, but instead read the research that has already been done and try to build on top of it. At least, that is the general idea. However, we know from actual scientific practice that scientists often don't build on top of prior work, perhaps because the body of prior work is already so large that keeping an overview of it is beyond human cognitive capacity, or because the prior work is often inaccessible, badly structured, not searchable, etc. Other times, scientists are just lazy.

The first problem is in principle unsolvable, because improving human cognitive ability/capacity will also accelerate the accumulation of knowledge, so the literature will keep outpacing our ability to survey it. However, we will (very soon) improve upon the present situation (Shulman & Bostrom, 2014).

The second problem is faster to fix, requiring either ubiquitous open access or guerrilla open access. The first option is coming along fast for new material, but won’t solve it for old material already locked down by copyright. Probably Big Copyright is going to lobby for extending copyright protection further, which means that even just waiting for copyright to expire is not a legal option.

A delicious example of scientists not building on top of relevant prior work is the concept of construct proliferation (Reeve & Basalik, 2014), which is when we invent a new word/concept to cover the same region in conceptual space as previous concepts already covered. This is itself a redundant copy of the earlier term construct redundancy. The meta-problem is fairly obvious, so my guess is that there is a long list of terms for it, thus illustrating the phenomenon itself.

Yet I argue the opposite…

Given the above, why would one willingly want to not read the earlier literature/build on top of prior work on a topic before trying to find solutions? There are some possible reasons:

One reason is personal. Perhaps one just really likes the experience of finding an interesting problem and coming up with solutions. This is closely related to several concepts: openness to experience, typical intellectual engagement, need for cognition, epistemic curiosity (and more); see (Mussel, 2010) and (Stumm, Hell, & Chamorro-Premuzic, 2011). Incidentally, these also show strong concept overlap (yet another term for the situation where multiple concepts cover some of the same area in conceptual space, though this one differs in being explicitly continuous rather than categorical).

A career reason to invent new constructs is the desire to make a name for yourself and get a good job. A well-tested way to do that is to introduce a new concept with an accompanying questionnaire that others then hopefully use. This can result in hundreds or thousands of citations. For instance, the original paper on need for cognition has 5063 citations on Scholar since 1982 (~153 per year), the original paper on typical intellectual engagement has 410 citations since 1992 (~18 per year), and that on epistemic curiosity has 156 since 2003 (~13 per year). The later papers do have lower citation counts per year, perhaps indicating some conceptual satiation, but they are still way above the norm. To put it another way: since it is clearly unnecessary to read much of the relevant prior work to get published, one may as well skip it.
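These per-year figures are just total citations divided by years elapsed since publication, taking 2015 (when this was written) as the endpoint; a trivial sketch:

```python
# Citations per year = total citations / years since publication.
# Totals are the Google Scholar counts quoted above, as of 2015.
papers = {
    "need for cognition": (5063, 1982),
    "typical intellectual engagement": (410, 1992),
    "epistemic curiosity": (156, 2003),
}
for name, (citations, year) in papers.items():
    print(f"{name}: ~{citations / (2015 - year):.0f} citations/year")
```

This reproduces the ~153, ~18, and ~13 figures above.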

Scientifically speaking, neither of the above two reasons is relevant. The first has more to do with a personality disposition towards solving new problems, whereas the second is due, to some degree, to perverse incentives.

Exploratory bridge building

Are there any good scientific reasons to sometimes start from scratch? I think so. Think of it this way: Many scientific questions can be approached in multiple ways. We can build a large analogy out of that idea.

Imagine a many-dimensional space where some regions are impassable or slow to pass, and where there are one or more regions or points from which useful resources can be extracted. We, the bridge engineers, all start at the same place in this space and have to find resources, but we don't know exactly where they will be found, so we don't know which directions to move in. Furthermore, imagine that we can build bridges (vectors) in this space by adding them together, and that we can only move on (or in) the bridges. One can now travel in a particular direction, at least slowly. If the resources are far from the starting position, it is easy to see that one could never reach them without adding vectors together. This is the basis of the general preference for building on prior work.

How do we know which direction to build bridges in if we don't know where the resources are? We can extend the analogy by saying that no one can see further than a short distance. Instead, what engineers have is a noisy measure of how close their current position is to the nearest resource; noisy meaning that it is only roughly correct, to varying degrees and with different biases. Sometimes what appears to many engineers to be a good general direction towards a resource ends up in a resource-poor dead end, i.e., every direction that moves closer to nearby resources requires going through impassable or difficult-to-pass regions (say, regions where the price of building bridges is very high).

Those familiar with evolutionary biology should now see where I'm going with this. We can say that approaches to answers in science can end up at local maxima in the scientific fitness landscape. When this happens, one has to go back and move in a new direction.

Still, this leaves us with the question of how far back we should move. Often it may be necessary to go back only part of the way and start a new branch of the same root bridge from that point. Sometimes, however, a very early part of the bridge moved into a region that can only yield slow progress or even a dead end. When this happens, one has to start over entirely.
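The local-maximum idea can be illustrated with a toy hill-climbing sketch (the landscape and all numbers are invented for illustration): a climber that only ever moves uphill from its current position (always building on prior work) gets stuck on a minor peak, while a team that also tries fresh random starting points (occasionally starting over) finds the higher one.

```python
import random

def fitness(x):
    # Toy landscape: a local peak of height 1 at x = -2
    # and a global peak of height 2 at x = 3.
    return max(0.0, 1 - (x + 2) ** 2) + max(0.0, 2 - 2 * (x - 3) ** 2)

def hill_climb(start, step=0.1, iters=1000):
    """Greedy local search: only ever move uphill (build on prior work)."""
    x = start
    for _ in range(iters):
        best = max((x - step, x + step), key=fitness)
        if fitness(best) <= fitness(x):
            break  # no uphill neighbor: stuck on a peak (maybe only local)
        x = best
    return x

random.seed(0)
# A lone climber starting near the minor peak gets stuck there (fitness ~1).
stuck = hill_climb(-2.5)
# A team that also restarts from random points finds the global peak (~2).
restarts = [hill_climb(random.uniform(-5, 5)) for _ in range(20)]
best = max(restarts + [stuck], key=fitness)
print(round(fitness(stuck), 2), round(fitness(best), 2))
```

Going back only part of the way would correspond to restarting the climb from an intermediate point on an existing bridge rather than from a random location.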

Decision making

Because all engineers are short-sighted, it is impossible for them to know when it is time to start over. Worse, engineers have a kind of tunnel vision: once they have traveled out on a given bridge from the homeland, they become less capable of spotting good directions for other root bridges. In other words, once one has learned a particular approach to a problem, it can be difficult to go back to basics and start over with new ideas. One needs a pair of fresh eyes. The only way to get this is to find an engineer who has never been to this space before, avoid informing him of the already built bridges, let him choose where to build his first bridge, and let him work on it for some time to see whether he ends up in a dead end or a previously unknown resource-rich area. Even if the engineers have already found one good resource region, they might wonder whether there are more. Finding more resources probably requires moving in a new direction from the start, or at least from an early part of the bridge.


It is clear that, as a large team project, neither extreme solution is optimal: 1) always building on prior work, or 2) never building on prior work. Instead, some balance must be found, where some engineers, probably most, are dedicated to building on fairly recent prior work, while some try to backtrack and see if they can find a better route to a currently known resource area or identify new regions.

Who should start new bridges? We may posit that engineers vary in psychological attributes that affect their efficiency at building on prior bridges versus starting their own root bridges/branches. In that case, engineers who are particularly good at spotting new directions and at working on a bridge alone would be well suited for the role of pioneer/Rambo engineer. Even if engineers do not differ in efficiency at building new roots/branches versus building on prior work, if only a few engineers are inclined to work alone in the hope of finding new resources (reason #1 above), the optimal team strategy is still one where most engineers build on fairly recent prior work but some don't.


Given the abstractness of the space bridge engineer analogy, one should probably make a visualization, or maybe even a small computer game. The latter is beyond my coding ability at present, and the former requires more time than I have.


Mussel, P. (2010). Epistemic curiosity and related constructs: Lacking evidence of discriminant validity. Personality and Individual Differences, 49(5), 506–510.

Reeve, C. L., & Basalik, D. (2014). Is health literacy an example of construct proliferation? A conceptual and empirical evaluation of its redundancy with general cognitive ability. Intelligence, 44, 93–102.

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92.

Stumm, S. von, Hell, B., & Chamorro-Premuzic, T. (2011). The Hungry Mind: Intellectual Curiosity Is the Third Pillar of Academic Performance. Perspectives on Psychological Science, 6(6), 574–588.

September 14, 2014

Costs and benefits of publishing in legacy journals vs. new journals

Filed under: Copyright and filesharing,Psychology,Science — Tags: , , , — Emil O. W. Kirkegaard @ 23:34

I recently published a paper in Open Differential Psychology. After it was published, I decided to tell some colleagues about it so that they would not miss it, because it is not published in either of the two primary journals in the field: Intelligence or Personality and Individual Differences (PAID). My email was this:

Dear colleagues,

I wish to inform you about my paper which has just been published in Open Differential Psychology.

Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

You can read the full paper at the journal website:


One researcher responded with:

Dear Emil,
Thanks for your paper.
Why not publishing in standard well established well recognized journals listed in Scopus and Web of Science benefiting from review and increasing your reputation after publishing there?
Go this way!

This concerns the decision of where to publish. I discussed this in a blog post back in March, before setting up OpenPsych. To be brief, the benefits of publishing in legacy journals are: 1) recognition, 2) indexing in proprietary indexes (SCOPUS, WoS, etc.), 3) perhaps better peer review, 4) perhaps a fancier appearance of the final paper. The first is very important for an up-and-coming researcher (like me), because one needs recognition from university people to get hired.

I nevertheless decided NOT to publish (much) in legacy journals. In fact, the reason I got into publishing studies so late is that I dislike the legacy journals in this field (and most other fields). Why? I made an overview here, but to sum it up: 1) they are either not open access or extremely pricey, 2) no data sharing, 3) an opaque peer review system, 4) very slow peer review (~200 days on average in the case of Intelligence and PAID), 5) you're supporting companies that add little value to science and charge insane amounts of money for it (for Elsevier, see e.g. Wikipedia; TechDirt has a large number of posts concerning that company alone).

As a person who strongly believes in open science (data, code, review, access), there is no way I can defend a decision to publish in Elsevier journals. Their practices are clearly antithetical to science. I also signed The Cost of Knowledge petition not to publish or review for them. Elsevier has a strong economic interest in keeping up their practices and I’m sure they will. The only way to change science for the better is to publish in other journals.

Non-Elsevier journals

Aside from Elsevier journals, one could publish in PLoS or Frontiers journals. They are open access, right? Yes, and that's a good improvement. However, they are also predatory in that they charge exorbitant publication fees: €1600 (Frontiers) and US$1350 (PLoS). One might as well publish open access with Elsevier, which charges US$1800.

So are there any open access journals without publication fees in this field? There is only one as far as I know: the newly established Journal of Intelligence. However, the journal site states that the lack of a publication fee is temporary, so there seems to be no reason to help them get established by publishing there. After realizing this, I began work on starting a new journal. I knew there was a lot of talent in the blogosphere with a mindset similar to mine who could probably be convinced to review for and publish in the new journal.


But what about indexing? Web of Science and SCOPUS are both proprietary, not freely available to anyone with an internet connection. But there is a fast-growing alternative: Google Scholar. Scholar is improving rapidly compared to the legacy indexers and is arguably already better, since it indexes a host of grey-literature sources that the legacy indexers don't cover. A recent article compared Scholar to WoS. I quote:

Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: “The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed”. GS’s coverage of WoS could in principle become complete, in which case WoS could become a subset of GS that could be selected via a GS option “Select WoS-indexed journals and conferences only”. Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews. [source]

In other words, Scholar already covers almost all the articles that WoS covers and is quickly catching up on older studies too. In a few years, Scholar will cover close to 100% of the articles in the legacy indexers, and they will be nearly obsolete.

Getting noticed

One thing related to the above is getting noticed by other researchers. Since many researchers read the legacy journals, simply being published in them is likely sufficient to get some attention (and citations!). It is, however, not the only way. The internet has changed the situation completely: there are now lots of different ways to get noticed: 1) Twitter, 2) ResearchGate, 3) Facebook/Google+, 4) Reddit, 5) Google Scholar alerts, which inform readers of any new research by authors they have cited previously, 6) blogs (one's own or others') and 7) emails to colleagues (as above).

Peer review

Peer review in OpenPsych is innovative in two ways: 1) it is forum-style instead of email-based which is better suited for communication between more than 2 persons, 2) it is openly visible which works against biased reviewing. Aside from this, it is also much faster, currently averaging 20 days in review.

Reputation and career

There is clearly a drawback to publishing in OpenPsych journals compared with legacy journals. Any new journal is likely to be viewed as not serious by many researchers. Most people dislike change, academics included (perhaps especially?). Publishing there will not improve one's chances of getting hired as much as publishing in the primary journals will. So one must weigh what is most important: science or career?

June 6, 2013

Oh noes DMCA!

Filed under: Copyright and filesharing — Emil O. W. Kirkegaard @ 16:29

05 June 2013

Dear Site Administrator:

The undersigned declares under penalty of perjury that I am authorized to act on behalf of the above referenced author, the owner of copyright in the Intellectual Property, and Hachette Book Group, Inc., the exclusive US publisher of the Intellectual Property, including without limitation, the cover and other art incorporated therein (collectively, the “IP  Owner”).  I have a good faith belief that the materials identified below are not authorized by the IP Owner, her agent, or the law and therefore infringe the IP Owner’s rights according to federal and state law.  Accordingly, we hereby demand that you immediately remove and/or disable access of the infringing material identified below.

My contact information is listed below. We reserve all legal rights and remedies in the event of failure to comply with this notice.

The infringing material (infringement of copyright, including publication, duplication and distribution rights) is located on your website at:


MarkMonitor Anti-Piracy Team

Please reply to, replies sent to will be ignored.


I have deleted the file in question. Turns out there were 3 copies of the book on the server. I’ve deleted two of them.

January 29, 2013

Polymaths, freedom of information, and copyright – why we need copyright reform to more effectively increase the number of polymaths

Filed under: Copyright and filesharing,Education — Emil O. W. Kirkegaard @ 23:38

I forgot to mention that I have written a post about polymathy and copyright reform over at Project Polymath. Reposted below. Direct link to post.


Polymaths are people with deep knowledge of multiple academic fields, and often various other interests as well, especially artistic ones, but sometimes even things like tropical exploration. Here I will focus on acquiring deep knowledge of academic fields, and on why copyright reform is necessary to increase the number of polymaths in the world.

Learning method
What is the fastest way to learn about some field of study? There are a few methods of learning: 1) listening to speeches/lectures/podcasts and the like, 2) reading, 3) figuring things out oneself. The last method will not work well for any established academic field. It takes too long to work out all the things other people have already worked out, if it can be done at all, and many experiments are not possible to do oneself. But it can work well for a very recent field, a field that isn't in development at all, or a field where it is very easy to work things out oneself (gather and analyze data). Data mining from the internet is a very easy way to find out many things without having to spend money; however, it is usually faster to find someone else who has already done it. In any case, programming ability is surely a very valuable skill for polymaths.

For most fields, however, this leaves either listening in some form, or reading. I have recently discussed these at greater length, so I will just summarize my findings here. Reading is by far the best choice. Not only can one read faster than one can listen, the written language is also of greater complexity, which allows more information to be acquired per word, and hence per unit of time. Listening to live lectures is probably the most common way of learning by listening; it is the standard at universities. Usually these lectures last too long for one to concentrate throughout, and if one misses something, it is not possible to go back and have it repeated. Nor is it possible to skip ahead if one has already learned whatever the speaker is talking about. Listening to recorded (= non-live) speech is better in both of these ways, but it is still much slower than reading. Khan Academy is probably the best way to learn things like math and physics by listening to recorded, short lectures. It also has built-in tests with instant feedback, and a helpful community. See also the book Salman Khan recently wrote about it.

If one seriously wants to be a polymath, one will need to learn at speeds much, much faster than people usually learn at, even very clever people (≥2 sd above the mean). This means lots and lots of self-study and self-directed learning, mostly in the form of reading, but not limited to it. There are probably some things that are faster and easier to learn by having them explained in speech. Having a knowledgeable tutor surely helps one make good choices about what to read. When I started studying philosophy, I spent hundreds of hours on internet discussion forums, and through them I acquired quite a few friends who were knowledgeable about philosophy. They helped me choose good books/texts to read to increase the speed of my learning.

Finally, there is one more way of listening that I didn't mention: one-to-one, tutor-based learning. It is very fast compared to regular classroom learning, usually resulting in a 2 standard deviation improvement. But this method is unavailable to almost everybody, and so not worth discussing. Individual tutoring can be written or verbal or some mix, so it doesn't fall precisely under one of the categories mentioned before.

How to start learning about a new field
So, suppose one wants to learn something about a given field of study. Where to begin? Obviously, the best place to begin almost any study is the internet, especially Wikipedia. When one has read the article about the field on Wikipedia, one can proceed to read the various articles referred to in it, or jump right into some of the sources listed. However, it is better to get hold of a good textbook and learn from that. After all, textbooks are exactly the kind of book written to introduce one to a field of study. It would be very odd indeed if some other kind of book were better at introducing people to a field; that would mean that textbook authors had utterly and completely failed in their mission. I hammer this point home because, for some people, perhaps including some polymath aspirants, this fact is not obvious. Especially with philosophy, people have the strange idea that the best way to begin is by reading huge, incomprehensible works (say, Being and Time), or by ‘starting from the beginning’ with the pre-Socratics. See my post here. But it applies equally well to other fields. The best way to start learning physics is not to read Newton’s Principia.

Now, since polymaths need to learn a lot, and the preferred method of learning is reading, it follows that they need to read a lot. However, this can be an economic problem: information is still costly to acquire. Polymaths are often dedicated to learning and spend their entire day learning (I spend >10 hours most days). This means that having a job is not a viable solution; there isn’t enough time available. Thanks to the internet, there is now a wealth of information freely available. However, not all information is freely available, and this presents a problem for would-be polymaths and already established polymaths who want to expand into another field of study. One could buy the material oneself, but this quickly gets expensive. One could borrow the material from a library, but this requires that one read paper books, which is not optimal, and one also cannot keep them around for future reference.

Primarily, there are two kinds of written sources that are not yet completely freely available: 1) journal articles, and 2) books. Another, less important source is newspaper articles.

Many polymaths or stud.polymaths are university students or teachers and thus usually have access to academic journals through their university. However, the university often does not have access to all of the journals, so if one stumbles upon an interesting paper which happens to be published in some obscure or perhaps defunct journal, it can be hard to get hold of. One can always try to ask the authors for the paper by email, and this often works, but not always. The authors may not want to help, they may be dead, or the listed email address may no longer work. This is clearly unsatisfactory for the polymath, whose curiosity is often insatiable. I know it annoys me very much whenever this happens.

Fortunately, journals are moving in the direction of open access, and the scientific community is increasingly unhappy with the way journals operate or used to operate. Usually researchers want their papers to be read, not hidden away behind a paywall. Even mainstream newspapers are writing about the issue. Countries and universities (Danish) are forcing their researchers to publish in open-access journals, or to upload their papers to sites like arXiv or SSRN, where they can be freely downloaded. Internet activist Aaron Swartz also tried to liberate millions of papers recently, but was unfortunately caught in the act. The absurd legal consequences of this act probably contributed to his decision to commit suicide. Still, the situation is improving quickly with respect to free access to the information in journals.

If we legalized non-commercial copying of copyrighted works, the situation would change almost instantly. Very quickly, companies like Google would make all academic papers ever published available, at no cost at all to the user. This enormous improvement would of course not only help (stud.)polymaths; it would help anyone wanting to learn more. Most people are not university students or teachers, and so don’t have access to the academic journals. People who are unaffiliated with a university, polymaths or not, stand to gain the most from such a change. A huge benefit to society at large.

A lot of good information still exists only in paper book form, and books are prohibitively expensive for a non-wealthy polymath. I don’t consider myself extreme among polymaths, but I read something like >30 nonfiction books a year (reading list). Buying all of these is out of the question – much too expensive. Rare academic books can cost hundreds of dollars in a paper copy. An absurd situation, and extremely unsatisfying for a polymath. It is possible to fight back, however. One can buy books and set them free: either buy ebooks, crack the protection, and spread them; or buy paper books, scan them or have them scanned, and then release them.

Of course, a lot of books can be found in ebook versions for free, legally or not. However, the situation has recently deteriorated because the copyright industry (in this case, the book publishers) has successfully shut down several of the best illegal ebook downloading sites (specially was very good). Due to the way torrents work, they are ill-suited to handle the sharing of thousands of different books, although several sites have tried (and shut down again, perhaps due to legal pressure). Still, one can find millions of ebooks via torrents, either in huge compilations of books about a given subject (e.g. this one is of interest to polymaths, or this, or this), or as single-book torrents. Single-book torrents usually exist only for famous books. Useful at times, but not at all satisfactory.

To be sure, books that are out of copyright can often be found and downloaded legally at great sites such as Project Gutenberg. Surely, if the copyright term were reduced, Gutenberg and other similar projects would immediately start working on making millions more old books freely available. Getting books from Gutenberg and sites like it is mostly useful for historical studies, and for fields where the dating of the books matters less. E.g. in philosophy, there is still much to learn from reading Hume or John Stuart Mill. But there isn’t much to learn in empirical science from reading papers from the 17th century, except out of historical interest.

Google has already scanned millions of books. They are made somewhat available for free via the Google Books service, but copyright law (and settlements with the publishing industry) demands that parts of the books be left out. However, if copyright were changed tomorrow, Google would quickly unblock these parts of the books, making the information therein completely freely available. Google has already collaborated with various large libraries in scanning their books. When it comes to freedom of information, the internet pirates and the libraries are on the same team. The internet is the world’s greatest library of culture and information. It will get much better when copyright law changes.

When copyright law changes, both books and academic papers will be free, and we will enter the true information age. It is only a matter of time. This will benefit almost everybody, including polymaths; the losers will be the now obsolete middlemen. It will become much easier for poor people and people not affiliated with a university to become polymaths, and of course for others to learn as well. At that point, only time, interest, and ability will set the limit – not money.

December 3, 2012

Thoughts about Cypherpunks: Freedom and the Future of the Internet (Assange et al)

Filed under: Copyright and filesharing,Government form — Tags: , — Emil O. W. Kirkegaard @ 02:01

Cypherpunks: Freedom and the Future of the Internet ebook pdf download free

You really should buy it if you want to read it, just to support Wikileaks. It’s priced at $10 for a DRM-free PDF.

Here’s another review.

The summary is that it’s a rather short book, ~170 pages, based on a four-way conversation between Julian Assange and three other interesting and influential computer people. It contains a lot of rather dystopian information about the present and future of surveillance. Apparently, there is a lot more of it than I thought. This certainly gave me some more ideas that I will discuss with the pirate parties.

Some quotes and comments:

A 120-strong US Pentagon team called the WikiLeaks Task Force, or WTF, was set up ahead of the release of the Iraq War Logs and Cablegate, dedicated to “taking action” against WikiLeaks. Similar publicly declared task forces in the FBI, the CIA and the US State Department are also still in operation.19

Hilarious accidental use of internet slang? :D

The Obama administration warned federal employees that materials released by WikiLeaks remained classified—even though they were being published by some of the world’s leading news organizations including the New York Times and the Guardian. Employees were told that accessing the material, whether on or in the New York Times, would amount to a security violation.21

Government agencies such as the Library of Congress, the Commerce Department and the US military blocked access to WikiLeaks materials over their networks. The ban was not limited to the public sector. Employees from the US government warned academic institutions that students hoping to pursue a career in public service should stay clear of material released by WikiLeaks in their research and in their online activity.


JULIAN: Andy, for years you’ve designed cryptographic telephones. What sort of mass surveillance is occurring in relation to telecommunications? Tell me what is the state of the art as far as the government intelligence/bulk-surveillance industry is concerned?

ANDY: Mass storage—meaning storing all telecommunication, all voice calls, all traffic data, any way groups consume the Short Message Service (SMS), but also internet connections, in some situations at least limited to email. If you compare the military budget to the cost of surveillance and the cost of cyber warriors, normal weapon systems cost a lot of money. Cyber warriors or mass surveillance are super-cheap compared to just one aircraft. One military aircraft costs you between…

JULIAN: Around a hundred million.

ANDY: And storage gets cheaper every year. Actually, we made some calculations in the Chaos Computer Club: you get decent voice-quality storage of all German telephone calls in a year for about 30 million euros including administrative overheads, so the pure storage is about 8 million euros.42

Scary. It gets scarier when you think about the fact that most of the systems I use to communicate are American-owned: Skype, Facebook, Google. Perhaps I should get serious about this encryption thing, sooner rather than later.
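Andy’s figure is easy to sanity-check with a back-of-envelope calculation. The sketch below is mine, not from the book: the annual call volume, codec bitrate, and storage price are all rough assumptions, chosen only to show that the order of magnitude is plausible.

```python
# Back-of-envelope check of the CCC storage claim. All figures here are
# my own rough assumptions, not numbers from the book.

CALL_MINUTES_PER_YEAR = 250e9   # assumed total German call volume (fixed + mobile)
BITRATE_KBPS = 8                # "decent voice quality" speech codec, e.g. AMR-NB
COST_PER_TB_EUR = 500           # assumed enterprise storage cost around 2012

bytes_per_minute = BITRATE_KBPS * 1000 / 8 * 60          # 60,000 bytes per call-minute
total_bytes = CALL_MINUTES_PER_YEAR * bytes_per_minute   # raw audio per year
total_tb = total_bytes / 1e12                            # terabytes per year

cost_eur = total_tb * COST_PER_TB_EUR

print(f"{total_tb / 1000:.0f} PB per year, ~{cost_eur / 1e6:.1f} million EUR")
```

With these assumptions the script arrives at roughly 15 PB and about 7.5 million EUR per year for raw storage, the same order of magnitude as the 8 million euro figure in the quote.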

JACOB: We can also tie this back to John Gilmore. One of John Gilmore’s lawsuits about his ability to travel anonymously in the United States resulted in the court literally saying, “Look, we’re going to consult with the law, which is secret. We will read it and we will find out when we read this secret law whether or not you are allowed to do the thing that you are allowed to do.” And they found when they read the secret law that, in fact, he was allowed to do it, because what the secret law said did not restrict him. He never learned what the secret law was at all and later they changed the US Transportation Security Administration and Department of Homeland Security policies in response to him winning his lawsuit, because it turns out the secret law was not restrictive enough in this way.115

Dafuq. The reference is:

Jacob is referring to Gilmore v. Gonzales, 435 F.3d 1125 (9th Cir. 2006). John Gilmore, an original cypherpunk, took a case as far as the US Supreme Court to disclose the contents of a secret law—a Security Directive—restricting citizens’ rights to travel on an airplane without identification. Besides challenging the constitutionality of such a provision, Gilmore was challenging the fact that the provision itself was secret and could not be disclosed, even though it has binding effects on US citizens. The court consulted the Security Directive in camera, and ruled against Gilmore on the Directive’s constitutionality. The contents of the law were, however, never disclosed during the course of the proceedings. See Gilmore v Gonzales at gilmore/facts.html (accessed October 22, 2012).

ANDY: I totally agree that we need to ensure that the internet is understood as a universal network with free flow of information; that we need to not only define that very well, but also to name those companies and those service providers who provide something they call internet which is actually something totally different. But I think we have not answered the key question beyond this filtering thing. I want to give you an example of what I think we need to answer. Some years ago, about ten years ago, we protested against Siemens providing so-called smart filter software. Siemens is one of the biggest telcos in Germany and a provider of intelligence software. And they actually sold this filtering system to companies so that, for example, employees couldn’t look at the site of the trade unions to inform themselves of their labor rights and so on. But they also blocked the Chaos Computer Club site which made us upset. They designated it as “criminal content” or something, for which we brought legal action. But at an exhibition we decided to have a huge protest meeting and to surround Siemens’ booths and filter the people coming in and out. The funny thing was that we announced it on our site to attract as many people as possible through the internet, and the people in the Siemens booth had no fucking clue because they also used the filter software so they couldn’t read the warning that was obviously out there.


JULIAN: The Pentagon set up a filtering system so that any email sent to the Pentagon with the word WikiLeaks in it would be filtered. And so in the case of Bradley Manning, the prosecution, in attempting to prosecute the case, of course, was mailing people outside the military about “WikiLeaks,” but they never saw the replies because they had the word “WikiLeaks” in them.118 The national security state may eat itself yet.

oh god retards

JÉRÉMIE: This debate about full disclosure makes me think of the group known as LulzSec, who released 70 million records from Sony—all the users’ data from Sony—and you could see all the addresses, email addresses and passwords. I think there were even credit card details from 70 million users. As a fundamental rights activist I thought, “Wow, there is something wrong here if to prove your point or to have fun you disclose people’s personal data.” I was very uncomfortable with seeing people’s email addresses on the record. In a way, I thought those people were having fun with computer security, and what they were demonstrating is that a company as notorious and powerful as Sony wasn’t able to keep its users’ secrets secret, and having those 70 million users search in a search engine for their email address or for their name and find this record would make them instantly realize, “Oh wow, what did I do when I disclosed this data to Sony? What does it mean to give personal data to a company?”

JACOB: Then they shoot the messenger.

An interesting angle on the LulzSec disclosure.

November 28, 2012

Paper: Do Bad Things Happen When Works Enter the Public Domain?: Empirical Tests of Copyright Term Extension (Buccafusco & Heald)

Filed under: Copyright and filesharing,Economics — Emil O. W. Kirkegaard @ 16:53

Do Bad Things Happen When Works Enter the Public Domain Empirical Tests of Copyright Term Extension


The most interesting thing about this paper was the arguments put forward by the supporters of copyright extension. They are so distressingly bad that it seems pointless to empirically test them. Theoretical arguments are sufficient to show them to be faulty. Nevertheless, the authors carried out some experiments that show the obvious to be true.


According to the current copyright statute, in 2018, copyrighted works of music, film, and literature will begin to transition into the public domain. While this will prove a boon for users and creators, it could be disastrous for the owners of these valuable copyrights. Accordingly, the next few years will witness another round of aggressive lobbying by the film, music, and publishing industries to extend the terms of already-existing works. These industries, and a number of prominent scholars, claim that when works enter the public domain bad things will happen to them. They worry that works in the public domain will be underused, overused, or tarnished in ways that will undermine the works’ cultural and economic value. Although the validity of their assertions turns on empirically testable hypotheses, very little effort has been made to study them.

This Article attempts to fill that gap by studying the market for audiobook recordings of bestselling novels. Data from our research, including a novel human subjects experiment, suggest that the claims about the public domain are suspect. Our data indicate that audio books made from public domain bestsellers (1913-22) are significantly more available than those made from copyrighted bestsellers (1923-32). In addition, our experimental protocol suggests that professionally made recordings of public domain and copyrighted books are of similar quality. Finally, while a low quality recording seems to lower a listener’s valuation of the underlying work, our data do not suggest any correlation between that valuation and the legal status of the underlying work. Accordingly, our research indicates that the significant costs of additional copyright protection for already-existing works are not justified by the benefits claimed for it. These findings will be crucially important to the inevitable congressional and judicial debate over copyright term extension in the next few years.

August 14, 2012

Thoughts and quotes: Against Intellectual Monopoly (Boldrin & Levine)

Filed under: Copyright and filesharing,Economics — Tags: — Emil O. W. Kirkegaard @ 14:34

Against intellectual Monopoly

In general, this is an interesting book about patents. It is at times combative in its language, at other times more neutral. I think it would have been wiser to use less loaded terms, but it didn’t bother me too much. The criticism of IPR is generally sensible, and their case persuasive and plausible, but not as plausible as the case in Patent Failure. References are sometimes missing for questionable claims, but in general there are lots of references. The reference system is annoying, as the notes are at the end of chapters rather than as links (it was intended to be published as an ebook, after all) or footnotes or something of that sort.


Below are some more comments and a lot of quotes.


As usual: colored text is a quote; colored+italic text is a quote which is itself a quote in the source; black text is my comments; blue text is also mine, i.e. links.

Why, however, should creators have the right to control how purchasers make use of an idea or creation? This gives creators a monopoly over the idea. We refer to this right as “intellectual monopoly,” to emphasize that it is this monopoly over all copies of an idea that is controversial, not the right to buy and sell copies. The government does not ordinarily enforce monopolies for producers of other goods. This is because it is widely recognized that monopoly creates many social costs. Intellectual monopoly is no different in this respect. The question we address is whether it also creates social benefits commensurate with these social costs.

Even on the desktop – open source is spreading and not shrinking. Ten years ago there were two major word processing packages, Word and Wordperfect. Today the only significant competitor to Microsoft for a package of office software including word-processing is the open source program Openoffice.


Or rather LibreOffice now. But there is also Google Docs, which isn’t open source. It is, however, free.

Start with English authors selling books in the United States in the nineteenth century. “During the nineteenth century anyone was free in the United States to reprint a foreign publication”10 without making any payment to the author, besides purchasing a legally sold copy of the book. This was a fact that greatly upset Charles Dickens whose works, along with those of many other English authors, were widely distributed in the U.S.,

yet American publishers found it profitable to make arrangements with English authors. Evidence before the 1876-8 Commission shows that English authors sometimes received more from the sale of their books by American publishers, where they had no copyright, than from their royalties in [England]11

where they did have copyright. In short without copyright, authors still got paid, sometime more without copyright than with it.12

How did it work? Then, as now, there is a great deal of impatience in the demand for books, especially good books. English authors would sell American publishers the manuscripts of their new books before their publication in Britain. The American publisher who bought the manuscript had every incentive to saturate the market for that particular novel as soon as possible, to avoid cheap imitators to come in soon after. This led to mass publication at fairly low prices. The amount of revenues British authors received up front from American publishers often exceeded the amount they were able to collect over a number of years from royalties in the UK. Notice that, at the time, the US market was comparable in size to the UK market.13


More broadly, the lack of copyright protection, which permitted the United States publishers’ “pirating” of English writers, was a good economic policy of great social value for the people of United States, and of no significant detriment, as the Commission report and other evidence confirm, for English authors. Not only did it enable the establishment and rapid growth of a large and successful publishing business in the United States; also, and more importantly, it increased literacy and benefited the cultural development of the American people by flooding the market with cheap copies of great books. As an example: Dickens’ A Christmas Carol sold for six cents in the US, while it was priced at roughly two dollars and fifty cents in England. This dramatic increase in literacy was probably instrumental for the emergence of a great number of United States writers and scientists toward the end of the nineteenth century.


But how relevant for the modern era are copyright arrangements from the nineteenth century? Books, which had to be moved from England to the United States by clipper ship, can now be transmitted over the internet at nearly the speed of light. Furthermore, while the data show that some English authors were paid more by their U.S. publishers than they earned in England – we may wonder how many, and if they were paid enough to compensate them for the cost of their creative efforts. What would happen to an author today without copyright?

This question is not easy to answer – since today virtually everything written is copyrighted, whether or not intended by the author. There is, however, one important exception – documents produced by the U.S. government. Not, you might think, the stuff of best sellers – and hopefully not fiction. But it does turn out that some government documents have been best sellers. This makes it possible to ask in a straightforward way – how much can be earned in the absence of copyright? The answer may surprise you as much as it surprised us.


The most significant government best seller of recent years has the rather off-putting title of The Final Report of the National Commission on Terrorist Attacks Upon the United States, but it is better known simply as the 9/11 Commission Report.14 The report was released to the public at noon on Thursday July 22, 2004. At that time, it was freely available for downloading from a government website. A printed version of the report published by W.W. Norton simultaneously went on sale in bookstores. Norton had signed an interesting agreement with the government.

The 81-year-old publisher struck an unusual publishing deal with the 9/11 commission back in May: Norton agreed to issue the paperback version of the report on the day of its public release.…Norton did not pay for the publishing rights, but had to foot the bill for a rush printing and shipping job; the commission did not hand over the manuscript until the last possible moment, in order to prevent leaks. The company will not reveal how much this cost, or when precisely it obtained the report. But expedited printings always cost extra, making it that much more difficult for Norton to realize a profit.

In addition, the commission and Norton agreed in May on the 568-page tome’s rather low cover price of $10, making it that much harder for the publisher to recoup its costs. ( is currently selling copies for $8 plus shipping, while visitors to the Government Printing Office bookstore in Washington, D.C. can purchase its version of the report for $8.50.) There is also competition from the commission’s Web site, which is offering a downloadable copy of the report for free. And Norton also agreed to provide one free copy to the family of every 9/11 victim.15

This might sound like Norton struck a rather bad deal – one imagines that other publishers were congratulating themselves on not having been taken advantage of by sharp government negotiators. It turns out, however, that Norton’s rivals were in fact envious of this deal. One competitor in particular – the New York Times – described the deal as a “royalty-free windfall,”16 which does not sound like a bad thing to have.

That’s pretty cool!

Literature and a market for literary works emerged and thrived for centuries in the complete absence of copyright. Most of what is considered “great literature” and is taught and studied in universities around the world comes from authors who never received a penny of copyright royalties. Apparently the commercial quality of the many works produced without copyright has been sufficiently great that Disney, the greatest champion of intellectual monopoly for itself, has made enormous use of the public domain. Such great Disney productions as Snow White, Sleeping Beauty, Pinocchio and Hiawatha are, of course, all taken from the public domain. Quite sensibly, from its monopolistic viewpoint, Disney is reluctant to put anything back. However, the economic argument that these great works would not have been produced without an intellectual monopoly is greatly weakened by the fact that they were.


Hah! :D

At least in the case of sheet music, the police campaign did not work. After a few months, police stations were filled with tons of paper on which various musical pieces were printed. Being unable to bring to court what was a de-facto army of “illegal” music reproducers, the police itself stopped enforcing the copyright law.


Pretty much what I suggested earlier today that we should do with DMCA notices: just send them en masse and overwhelm the system from within. After all, companies already send out a massive number of DMCA notices, and lots of them are bogus auto-generated ones, and this is true even though they face perjury charges if they are caught lying!


Surely, there is no intent to deceive if we do the same, since there is no intent at all in auto-generating them.

The authors mention some obscure Catholic principle in passing. Their reference for it is to AiG. But that makes no sense: AiG is a YEC (young-earth creationist) organisation, not Catholic. Catholics are theistic evolutionists, not creationists.

Effective price discrimination is costly to implement and this cost represents pure waste. For example, music producers love Digital Rights Management (DRM) because it enables them to price discriminate. The reason that DVDs have country codes, for example, is to prevent cheap DVDs sold in one country from being resold in another country where they have a higher price. Yet the effect of DRM is to reduce the usefulness of the product. One of the reasons the black market in MP3s is not threatened by legal electronic sales is that the unprotected MP3 is a superior product to the DRM protected legal product. Similarly, producers of computer software sell constrained products to consumers in an effort to price discriminate and preserve their more lucrative corporate market. One consequence of price discrimination by monopolists, especially intellectual monopolists, is that they artificially degrade their products in certain markets so as not to compete with other more lucrative markets.

In recent years there have been innovative efforts to extend the use of patents to block competitors. For example we find

A federal trade agency might impose $13 million in sanctions against a New Jersey company that rebuilds used disposable cameras made by the Fuji Photo Film Company and sells them without brand names at a discount. Fuji said yesterday that the International Trade Commission found that the Jazz Photo Corporation infringed Fuji’s patent rights by taking used Fuji cameras and refurbishing them for resale. The agency said Jazz sold more than 25 million cameras since August 2001 in violation of a 1999 order to stop and will consider sanctions. Fuji, based in Tokyo, has been fighting makers of rebuilt cameras for seven years. Jazz takes used shells of disposable cameras, puts in new film and batteries and then sells them. Jazz’s founder, Jack Benun, said the company would appeal. “It’s unbelievable that the recycling of two plastic pieces developed into such a long case,” Mr. Benun said. “There’s a benefit to the customer. The prices have come down over the years. And recycling is a good program. Our friends at Fuji do not like



One annoying thing about this book is that it uses the misleading loaded terms that IP maximalists use, e.g. “steal an idea” instead of “copy an idea,” etc.

Another astounding example of American intellectual imperialism is in – not so surprising – Iraq:

The American Administrator of [Iraq] Paul Bremer, updated Iraq’s intellectual property law to ‘meet current internationally-recognized standards of protection.’ The updated law makes saving seeds for next year’s harvest, practiced by 97% of Iraqi farmers in 2002, the standard farming practice for thousands of years across human civilizations, newly illegal. Instead, farmers will have to obtain a yearly license for genetically modified seeds from American corporations. These GM seeds have typically been modified from IP developed over thousands of generations by indigenous farmers like the Iraqis, shared freely like agricultural ‘open source.’ Other IP provisions for technology in the law further integrate Iraq into the American IP economy.24


Fucking derp.

The private sector has no monopoly on inadequacy. Government bureaucrats are notorious for their inefficiency. The U.S. Patent office is no exception. Their questionable competence increases the cost of getting patents, but this is a small effect, and, perhaps a good thing, rather than bad. They also issue many patents of dubious merit. Since the legal presumption is that a patent is legitimate unless proven otherwise, there is a substantial legal advantage to the patent holder, who may use it for blackmail, or other purposes. Moreover, while some bad patents may be turned down, an obvious strategy is simply to file a great many bad patents in hopes that a few will get through. Here is a sampling of some of the ideas the US Patent office thought worthy of patenting in recent years.41


# U.S. Patent 6,080,436: toasting bread in a toaster operating

beween 2500 and 4500 degrees.

# U.S. Patent 6,004,596: the sealed crustless peanut butter and

jelly sandwich.

# U.S. Patent 5,616,089: a “putting method in which the golfer

controls the speed of the putt and the direction of the putt

primarily with the golfer’s dominant throwing hand, yet uses

the golfer’s nondominant hand to maintain the blade of the

putter stable.”

# U.S. Patent 6,368,227: “A method of swing on a swing is

disclosed, in which a user positioned on a standard swing

suspended by two chains from a substantially horizontal tree

branch induces side to side motion by pulling alternately on

one chain and then the other.”

# U.S. Patent 6,219,045, from the press release by

“[The patent was awarded] for its scalable 3D server

technology … [by] the United States Patent Office. The

Company believes the patent may apply to currently, in use,

multi-user games, e-Commerce, web design, advertising and

entertainment areas of the Internet.” This is a refreshing

admission that, instead of inventing something new, the company simply patented something already widely used.

# U.S. Patent 6,025,810: “The present invention takes a

transmission of energy, and instead of sending it through

normal time and space, it pokes a small hole into another

dimension, thus, sending the energy through a place which

allows transmission of energy to exceed the speed of light.”

The mirror image of patenting stuff already in use: patent stuff

that can’t possibly work.


I had thought of the same shotgun-style idea.

That monopoly is generally bad for society is well

accepted. It is not surprising that the same should be true of

intellectual monopoly: the evidence presented here is no more than

the tip of the iceberg. Many other inefficiencies, bad business

practices, technological regressions, etc. are documented daily by

the press. These are a consequence of the especially strong form of

monopoly power that current IP legislation bestows upon patent

and copyright holders. We insist on documenting and discussing a

subset of these facts for the simple reason that we have become so

accustomed to them that we are inclined to take them for granted. Yet

these inefficiencies are not natural – they are manmade, and we

need not choose to tolerate them. We argue in later chapters that

neither patents nor copyright succeed in fostering innovation and

creativity. So we must ask: what is the point of keeping institutions

that provide so little good while inflicting so much harm?

Examples of individual creativity abound. An astounding

example of the impact of copyright law on individual creativity is

the story of Tarnation.120


Tarnation, a powerful autobiographical documentary by

director Jonathan Caouette, has been one of the surprise

hits of the Cannes Film Festival – despite costing just $218

(£124) to make. After Tarnation screened for the second

time in Cannes, Caouette – its director, editor and main

character – stood up. […] A Texan child whose mother was

in shock therapy, Caouette, 31, was abused in foster care

and saw his mother’s condition worsen as a result of her

treatment. He began filming himself and his family aged

11, and created movie fantasies as an escape. For

Tarnation, he has spliced his home movie footage together

to create a moving and uncomfortable self-portrait. And

using a home computer with basic editing software,

Caouette did it all for a fraction of the price of a

Hollywood blockbuster like Troy. […] As for the budget,

which has attracted as much attention as the subject

matter, Caouette said he had added up how much he spent

on video tapes – plus a set of angel wings – over the years.

But the total spent will rise to about $400,000 (£230,000),

he said, once rights for music and video clips he used to

illustrate a mood or era have been paid for.9


Yes, you read this right. If he did not have to pay the copyright

royalties for the short clips he used, Caouette’s movie would have

cost a thousand times less.

The most disturbing feature of the DMCA is section 1201,

the anti-circumvention provision. This makes it a criminal offense

to reverse engineer or decrypt copyrighted material, or to distribute

tools that make it possible to do so. On July 27, 2001, Russian

cryptographer Dmitri Sklyarov had the dubious honor of being the

first person imprisoned under the DMCA. Arrested while giving a

seminar publicizing cryptographical weaknesses in Adobe’s

Acrobat Ebook format, Sklyarov was eventually acquitted on

December 17, 2002.

The DMCA has had a chilling effect on both freedom of

speech, and on cryptographical research. The Electronic Frontier

Foundation (EFF) reports on the case of Edward Felten and his

Princeton team of researchers


In September 2000, a multi-industry group known as the

Secure Digital Music Initiative (SDMI) issued a public

challenge encouraging skilled technologists to try to defeat

certain watermarking technologies intended to protect

digital music. Princeton Professor Edward Felten and a

team of researchers at Princeton, Rice, and Xerox took up

the challenge and succeeded in removing the watermarks.


When the team tried to present their results at an academic

conference, however, SDMI representatives threatened the

researchers with liability under the DMCA. The threat

letter was also delivered to the researchers’ employers and

the conference organizers. After extensive discussions with

counsel, the researchers grudgingly withdrew their paper

from the conference. The threat was ultimately withdrawn

and a portion of the research was published at a

subsequent conference, but only after the researchers filed

a lawsuit.


After enduring this experience, at least one of the

researchers involved has decided to forgo further research

efforts in this field.13



The DMCA is not just a threat to economic prosperity and

creativity, it is also a threat to our freedom. The best illustration is

the recent case of Diebold, which makes computerized voting

machines now used in various local, state and national elections.

Unfortunately, it appears from internal corporate documents that

these machines are highly insecure and may easily be hacked.

Those documents were leaked, and posted at various sites on the

Internet. Rather than acknowledge or fix the security problem,

Diebold elected to send “takedown” notices in an effort to have the

embarrassing “copyrighted” material removed from the Internet.

Something more central to political discourse than the

susceptibility of voting machines to fraud is hard to imagine. To

allow this speech to be repressed in the name of “copyright” is



Perhaps this sounds clichéd and exaggerated – a kind of

“leftist college kids” over-reactive propaganda. In keeping with

this tone here is a college story about the leaked documents, and

how Diebold and the DMCA helped teach our future

generations about the first amendment.


Last fall, a group of civic-minded students at Swarthmore

[… came] into possession of some 15,000 e-mail messages

and memos – presumably leaked or stolen – from Diebold

Election Systems, the largest maker of electronic voting

machines in the country. The memos featured Diebold

employees’ candid discussion of flaws in the company’s

software and warnings that the computer network was

poorly protected from hackers. In light of the chaotic 2000

presidential election, the Swarthmore students decided that

this information shouldn’t be kept from the public. Like

aspiring Daniel Ellsbergs with their would-be Pentagon

Papers, they posted the files on the Internet, declaring the

act a form of electronic whistle-blowing. Unfortunately for

the students, their actions ran afoul of the 1998 Digital

Millennium Copyright Act (D.M.C.A.), […] Under the law,

if an aggrieved party (Diebold, say) threatens to sue an

Internet service provider over the content of a subscriber’s

Web site, the provider can avoid liability simply by

removing the offending material. Since the mere threat of a

lawsuit is usually enough to scare most providers into

submission, the law effectively gives private parties veto

power over much of the information published online — as

the Swarthmore students would soon learn.


Not long after the students posted the memos, Diebold sent

letters to Swarthmore charging the students with copyright

infringement and demanding that the material be removed

from the students’ Web page, which was hosted on the

college’s server. Swarthmore complied. […]19


The story did not end there, nor did it end too badly. The

controversy went on for a while. The Swarthmore students held

their ground and bravely fought against both Diebold and

Swarthmore. They managed to create enough negative publicity

for Diebold and for their liberal arts college, that Diebold

eventually had to back down and promise not to sue for copyright

infringement. Eventually the memos went back on the net.

All’s well that ends well? When the wise man points at the

moon, the dumb man looks at the finger.

Economists refer to the net benefit to society from an

exchange as “social surplus.” With intellectual property the

innovator collects a share of the social surplus she generates,

without intellectual property the innovator collects a smaller share:

this is the competitive value of an innovation. When such

competitive value is enough to compensate the innovator for the

cost of creation the allocation of resources is efficient, neither too

few nor too many innovations are brought about, and social surplus

is maximized. One can show mathematically that, under a variety

of competitive mechanisms, the private value accruing to an

innovator increases with the social surplus: inventors of better

gadgets make more money. This is true even when the private

value becomes a smaller share of the social surplus as the latter



Notice that we insist on “a share of the social surplus”, not

the entire surplus. Contrary to what many pundits repeat over and

over, there is nothing terrifying about this: even under intellectual

monopoly innovators receive a less than 100% share of the social

surplus from innovation, the rest going to consumers. Under

competition, for those innovations that are produced, both

consumers and imitators receive a portion of the social surplus an

innovation generates, and such portion is strictly larger than in the

previous case. These pundits use the jargon “uncompensated

spillovers” to refer to the social surplus accruing to those besides

the original innovator. There is nothing wrong with such

spillovers, however. That competitive markets do allow for social

surplus to accrue to people other than producers is, indeed, one of

their most valuable features, at least from a social perspective; it is

what makes capitalism a good system also for the not-so-

successful among us. The goal of economic efficiency is not that of

making monopolists as rich as possible; in fact, it is almost the

opposite. The goal of economic efficiency is that of making us all

as well off as possible. To accomplish this producers must be

compensated for their costs, thereby providing them with the

economic incentive of doing what they are best at doing. But they

do not need to be compensated more than this. If, by selling her

original copy of the idea in a competitive market and thereby

establishing the root of the tree from which copies will come, the

innovator earns her opportunity cost, that is: she earns as much or

more than she could have earned while doing the second best thing

she knows how to do, then efficient innovation is achieved, and we

should all be happy.


This no-copyright-at-all position is interesting. Notice how it instantly solves all the problems with sampling. Under a for-profit-only copyright, sampling is difficult to deal with.
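The surplus accounting in the passage above can be made concrete with a toy calculation. All of the numbers here are hypothetical illustrations of the argument, not estimates from the book:

```python
# Toy model: an innovation generates a fixed social surplus.
# Under monopoly the innovator captures a large share; under
# competition a smaller share, with the rest ("spillovers")
# going to consumers and imitators. The innovation is produced
# if the innovator's share covers the cost of creating it.

social_surplus = 100.0   # hypothetical total gain to society
creation_cost = 20.0     # hypothetical cost of the first copy

monopoly_share = 0.70      # innovator's cut under IP (still < 100%)
competitive_share = 0.35   # innovator's cut under competition

for label, share in [("monopoly", monopoly_share),
                     ("competition", competitive_share)]:
    innovator_gain = share * social_surplus
    others_gain = (1 - share) * social_surplus  # consumers + imitators
    viable = innovator_gain >= creation_cost
    print(f"{label}: innovator {innovator_gain:.0f}, "
          f"others {others_gain:.0f}, innovation produced: {viable}")
```

With these numbers the innovation is produced in both regimes, but competition leaves 65 rather than 30 of the surplus to consumers and imitators, which is exactly the authors' point about why "uncompensated spillovers" are a feature, not a bug.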

Consider the problem of automobiles and air pollution.

When I drive my car, I do not have to pay you for the harm the

poison in my exhaust does to your health. So naturally, people

drive more than is socially desirable and there is too much air

pollution. Economists refer to this as a negative externality, and we

all agree it is a problem. Even conservative economists usually

agree that government intervention of some sort is required.


We propose the following solution to the problem of

automobile pollution: the government should grant us the exclusive

right to sell automobiles. Naturally, as a monopolist, we will insist

on charging a high price for automobiles, fewer automobiles will

be sold, there will be less driving, and so less pollution. The fact

that this will make us unspeakably rich is of course beside the

point; the sole purpose of this policy is to reduce air pollution. This

is of course all logically correct – but so far we don’t think anyone

has had the chutzpah to suggest that this is a good solution to the

problem of air pollution.


If someone were to make a serious suggestion along these

lines, we would simply point out that this “solution” has actually

been tried. In Eastern Europe, under the old communist

governments, each country did in fact have a government

monopoly over the production of automobiles. As the theory

predicts, this did indeed result in expensive automobiles, fewer

automobiles sold, and less driving. It is not so clear, however, that

it actually resulted in less pollution. Sadly, the automobiles

produced by the Eastern European monopolists were of such

miserably bad quality that for each mile they were driven they

created vastly more pollution than the automobiles driven in the

competitive West. And, despite their absolute power, the

monopolies of Eastern Europe managed to produce a lot more

pollution per capita than the West.


Arguments in favor of intellectual monopoly often have a

similar flavor. They may be logically correct, but they tend to defy

common sense. Ed Felten suggests applying what he calls the

“pizzaright” test. The pizzaright is the exclusive right to sell pizza

and makes it illegal to make or serve pizza without a license from

the pizzaright owner.1 We all recognize, of course, that this would

be a foolhardy policy and that we should allow the market to

decide who can make and sell pizza. The pizzaright test says that

when evaluating an argument in favor of intellectual monopoly, if

your argument serves equally well as an argument for a pizzaright,

then your argument is defective – it proves too much. Whatever

your argument is, it had better not apply to pizza.
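The logic of the deliberately absurd automobile-monopoly proposal is easy to check with a toy linear-demand model (the numbers and functional form are mine, purely for illustration):

```python
# Linear demand P = a - b*Q for automobiles, constant unit cost c.
# Assume pollution is proportional to the quantity sold and driven.
a, b, c = 100.0, 1.0, 20.0

q_competitive = (a - c) / b        # competitive output: price = cost
q_monopoly = (a - c) / (2 * b)     # textbook monopolist: half of that

pollution_comp = q_competitive     # pollution ~ quantity
pollution_mono = q_monopoly
print(f"pollution falls by {1 - pollution_mono / pollution_comp:.0%}")

# Consumer surplus is the triangle 0.5 * b * Q^2 under linear demand.
cs_comp = 0.5 * b * q_competitive ** 2
cs_mono = 0.5 * b * q_monopoly ** 2
print(f"consumer surplus falls from {cs_comp:.0f} to {cs_mono:.0f}")
```

Pollution does fall by half, but consumer surplus falls by three quarters, most of it converted into monopoly profit rather than cleaner air, which is why the "solution" is logically correct yet fails the common-sense test.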



While replacing secrecy with legal monopoly may have

some impact on the direction of innovation, there is little reason to

believe that it actually succeeds in making important secrets public

and easily accessible to other innovators. For most innovations, it

is the details that matter, not the rather vague descriptions required

in patent applications. Take for example, the controversial Amazon

one-click patent, U.S. Patent 5,960,411. The actual idea is rather

trivial, and there are a variety of ways in which one-click purchase

can be implemented by computer, any one of which can be coded

by a competent programmer given a modest investment of time

and effort. For the record, here is the detailed description of the

invention from the patent application:


The present invention provides a method and system for

single-action ordering of items in a client/server

environment. The single-action ordering system of the

present invention reduces the number of purchaser

interactions needed to place an order and reduces the

amount of sensitive information that is transmitted between

a client system and a server system. In one embodiment, the

server system assigns a unique client identifier to each

client system. The server system also stores purchaser-

specific order information for various potential purchasers.

The purchaser-specific order information may have been

collected from a previous order placed by the purchaser.

The server system maps each client identifier to a

purchaser that may use that client system to place an order.

The server system may map the client identifiers to the

purchaser who last placed an order using that client

system. When a purchaser wants to place an order, the

purchaser uses a client system to send the request for

information describing the item to be ordered along with its

client identifier. The server system determines whether the

client identifier for that client system is mapped to a

purchaser. If so mapped, the server system determines

whether single-action ordering is enabled for that

purchaser at that client system. If enabled, the server

system sends the requested information (e.g., via a Web

page) to the client computer system along with an

indication of the single action to perform to place the order

for the item. When single-action ordering is enabled, the

purchaser need only perform a single action (e.g., click a

mouse button) to order the item. When the purchaser

performs that single action, the client system notifies the

server system. The server system then completes the order

by adding the purchaser-specific order information for the

purchaser that is mapped to that client identifier to the item

order information (e.g., product identifier and quantity).

Thus, once the description of an item is displayed, the

purchaser need only take a single action to place the order

to purchase that item. Also, since the client identifier

identifies purchaser-specific order information already

stored at the server system, there is no need for such

sensitive information to be transmitted via the Internet or

other communications medium.28


As can be seen, the “secret” that is revealed is, if anything, less

informative than the simple observation that the purchaser buys

something by means of a single click. Information that might

actually be of use to a computer programmer – for example the

source code to the specific implementation used by Amazon – is

not provided as part of the patent, nor is it required to be. In fact,

the actual implementation of the one-click procedure consists of a

complicated system of subcomponents and modules requiring a

substantial amount of human capital and of specialized working

time to be assembled. The generic idea revealed in the patent is

easy to understand and “copy,” but of no practical value

whatsoever. The useful ideas are neither revealed in the patent nor

easy to imitate without reinventing them from scratch, which is what

lots of other people beside Amazon’s direct competitors (books are

not the only thing sold on the web, after all) would have done to

everybody else’s benefit, had U.S. Patent 5,960,411 not

prevented them from actually doing so. Certainly it is hard to argue

that the social cost of giving Amazon a monopoly over purchasing

by clicking a single button is somehow offset by the social benefit

of the information revealed in the patent application.
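To see how little the patent's disclosure adds, here is a minimal sketch of the mechanism it describes: a server maps a client identifier to stored purchaser information so that a single action completes an order. All names are hypothetical, and this is roughly what any competent programmer would write, not Amazon's actual implementation:

```python
# Minimal sketch of "single-action ordering": the server keeps
# purchaser details keyed by a client identifier, so placing an
# order needs no re-entry of sensitive information.

class OneClickServer:
    def __init__(self):
        self.purchasers = {}   # client identifier -> stored order info
        self.orders = []

    def register(self, client_id, payment, address):
        # Information collected from a previous, conventional order.
        self.purchasers[client_id] = {"payment": payment, "address": address}

    def single_action_order(self, client_id, item):
        info = self.purchasers.get(client_id)
        if info is None:
            raise KeyError("single-action ordering not enabled")
        # The single action: merge the stored info with the item order.
        self.orders.append({"item": item, **info})
        return len(self.orders) - 1   # order id

server = OneClickServer()
server.register("client-42", payment="card-on-file", address="home")
order_id = server.single_action_order("client-42", "book")
print(server.orders[order_id])
```

The entire "invention" amounts to a dictionary lookup plus a stored profile; the details that would actually take effort, the real system's subcomponents and modules, are precisely what the patent does not reveal.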

What we have argued so far may not sound altogether

incredible to the alert observer of the economics of innovation.

Theory aside, what have we shown, after all? That thriving

innovation has been and still is commonplace in the absence of

intellectual monopoly and that intellectual monopoly leads to

substantial and well-documented reductions in economic freedom

and general prosperity. However, while expounding the theory of

competitive innovation, we also recognized that under perfect

competition some socially desirable innovations will not be

produced because the indivisibility involved with introducing the

first copy or implementation of the new idea is too large, relative

to the size of the underlying market. When this is the case,

monopoly power may generate the necessary incentive for the

putative innovator to introduce socially valuable goods. And the

value for society of these goods could dwarf the social losses we

have documented. In fact, were standard theory correct so that

most innovators gave up innovating in a world without intellectual

property, the gains from patents and copyright would certainly

dwarf those losses. Alas, as we noted, standard theory is not even

internally coherent, and its predictions are flatly violated by the

facts reported in chapters 2 and 3.


Nevertheless, when in the previous chapter we argued

against all kinds of theoretical reasons brought forward to justify

intellectual monopoly on “scientific grounds”, we carefully

avoided stating that it is never the case that the fixed cost of innovation

is too large to be paid for by competitive rents. We did not argue it

as a matter of theory because, as a matter of theory, fixed costs can

be so large as to prevent almost anything from being invented. So, by

our own admission, it is a theoretical possibility that intellectual

monopoly could, at the end of the day, be better than competition.

But does intellectual monopoly actually lead to greater innovation

than competition?


From a theoretical point of view the answer is murky. In

the long-run, intellectual monopoly provides increased revenues to

those that innovate, but also makes innovation more costly.

Innovations generally build on existing innovations. While each

individual innovator may earn more revenue from innovating if he

has an intellectual monopoly, he also faces a higher cost of

innovating: he must pay off all those other monopolists owning

rights to existing innovations. Indeed, in the extreme case when

each new innovation requires the use of lots of previous ideas, the

presence of intellectual monopoly may bring innovation to a

screeching halt.1


Difficult indeed to say on theoretical grounds alone. Only empirical data can settle the question.

On the problem of measuring innovation.


One important difficulty is in determining the level of

innovative activity. One measure is the number of patents, of

course, but this is meaningless in a country that has no patents, or

when patent laws change. Petra Moser gets around this problem by

examining the catalogs of innovations from 19th century World

Fairs. Of the catalogued innovations, some are patented, some are

not, some are from countries with patent systems, and some are

from countries without. Moser catalogues over 30,000 innovations

from a variety of industries.


Mid-nineteenth century Switzerland [a country without

patents], for example, had the second highest number of

exhibits per capita among all countries that visited the Crystal

Palace Exhibition. Moreover, exhibits from countries without

patent laws received disproportionate shares of medals for

outstanding innovations.7


Moser does, however, find a significant impact of patent law on

the direction of innovation


The analysis of exhibition data suggests that patent laws may

be an important factor in determining the direction of

innovative activity. Exhibition data show that countries without

patents share an exceptionally strong focus on innovations in

two industries: scientific instruments and food processing. At

the Crystal Palace, every fourth exhibit from a country without

patent laws is a scientific instrument, while no more than one

seventh of other countries’ innovations belong to this category.

At the same time, the patentless countries have significantly

smaller shares of innovation in machinery, especially in

machinery for manufacturing and agricultural machinery.

After the Netherlands abolished her patent system in 1869 for

political reasons, the share of Dutch innovations that were

devoted to food processing increased from 11 to 37 percent.8


Moser then goes on to say that


Nineteenth-century sources report that secrecy was

particularly effective at protecting innovations in scientific

instruments and in food processing. On the other hand,

patenting was essential to protect and motivate innovations in

machinery, especially for large-scale manufacturing.9


Evidence that secrecy was important for scientific instruments

and food processing is provided, but no evidence is given that

patenting was actually essential to protect and motivate

innovations in machinery. Notice that in an environment in which

some countries provide patent protection, and others do not, bias

caused by the existence of patent laws will be exaggerated.

Countries with patent laws will tend to specialize in innovations

for which secrecy is difficult, while those without will tend to

specialize in innovations for which secrecy is easy. This means

that variations of patent protection would have different effects in

different countries.


It is interesting also that patent laws may reflect the state of

industry and innovation in a country


Anecdotal evidence for the late nineteenth and for the twentieth

century suggests that a country’s choice of patent laws was

often influenced by the nature of her technologies. In the

1880s, for example, two of Switzerland’s most important

industries, chemicals and textiles, were strongly opposed to the

introduction of a patent system, as it would restrict their use of

processes developed abroad.10


The 19th century type of innovation – small process innovations

– is the type for which patents may be most socially beneficial.

Despite this and the careful study of economic historians, it is

difficult to conclude that patents played an important role in

increasing the rate of 19th and early 20th century innovation.


More recent work by Moser,11 exploiting the same data set

from two different angles, strengthens this finding – that is, that

patents did not increase the level of innovation. In her words:

“Comparisons between Britain and the United States suggest that

even the most fundamental differences in patent laws failed to raise

the proportion of patented innovations.”12 Her work appears to

confirm two of the stylized facts we have often repeated in this

book. First that, as we just mentioned in discussing the work of

Sokoloff, Lamoreaux and Khan, innovations that are patented tend

to be traded more than those that are not, and therefore to disperse

geographically farther away from the original area of invention.

Based on data for the period 1841-1901, innovation for industries

in which patents are widely used is not higher but more dispersed

geographically than innovation in industries in which patents are

not or scarcely used. Second, when the “defensive patenting”

motive is absent, as it was in 1851, an extremely small percentage

of inventors (less than one in five) chooses patents as a method for

maximizing revenues and protecting intellectual property.


Summing up: careful statistical analyses of the 19th century’s

available data, carried out by distinguished economic historians,

uniformly show two things. Patents neither increase the rate of

innovation, nor are they the best instrument to maximize inventors’

revenue. Patents create a market in patents and in the legal and

technical services required to trade and enforce them.


Very interesting data.

Quoting this for linguistic reasons…

Nevertheless, the core idea of a unified European patent

system was not abandoned and continued to be pursued in various

forms, first under the leadership of the European Commission, and

then under the European Union. In 2000 a Community Patent

Regulation proposal was approved, which was considered a major

step toward the final establishment of a European Patent. Things,

nevertheless, did not proceed as expeditiously as the supporters of

a E.U. Patent had expected. As of 2007 the project is still, in the

words of E.U. Commissioner Charlie McCreevy, “stuck in the

mud”13 and far from being finalized. Interestingly the obstacles are

neither technical nor due to a particularly strong political

opposition to the establishment of a continent-wide form of

intellectual monopoly. The obstacles are purely due to rent-seeking

by interest groups in the various countries involved, the number of

which notoriously keeps growing. Current intellectual monopolists

(and their national lawyers) would rather remain monopolists

(legal specialists) for a bit longer in their own smaller markets than

risk the chance of loosing everything to a more powerful

monopolist (or to a foreign firm with more skilled lawyers) in the

bigger continental market.


That feel when reading academic books in revised editions… and they still fail to make the lose vs. loose distinction. Useless distinction. At least they chose the more sensible spelling. The spelling loose still has a pointless and silent e at the end.

It could be, and sometimes is, argued that the modern

pharmaceutical industry is substantially different from the

chemical industry of the last century. In particular, it is argued that

the most significant cost of developing new drugs lies in testing

numerous compounds to see which ones work. Insofar as this is

true, it would seem that the development of new drugs is not so

dependent on the usage and knowledge of old drugs. However, this

is not the case according to the chief scientific officer at Bristol-

Myers Squibb, Peter Ringrose, who


told The New York Times that there were ‘more than 50

proteins possibly involved in cancer that the company was

not working on because the patent holders either would not

allow it or were demanding unreasonable royalties’.18


Truth-telling remarks by pharmaceutical executives aside,

there is a deeper reason why the pharmaceutical industry of the

future will be more and more characterized by complex innovation

chains: biotechnology. As of 2004, already more than half of the

research projects carried out in the pharmaceutical industry had

some biomedical foundation. In biomedical research gene

fragments are, in more than a metaphorical sense, the initial link of

any valuable innovation chain. Successful innovation chains depart

from, and then combine, very many gene fragments, and cannot do

without at least some of them. As gene fragments are in finite

number, patenting them is equivalent to artificially fabricating

what scientists in this area have labeled an “anticommons”

problem. So it seems that the impact of patent law in either

promoting or inhibiting research remains, even in the modern

pharmaceutical industry.19

A few additional facts may help the reader get a better

understanding of why, at the end, we reach the conclusion we do.

Sales are growing fast, at about 12% a year for most of the 1990s,

and still now at around 8% a year; R&D expenditure during the

same period has been rising by only 6%. A company such as

Novartis (a big R&D player, relative to industry’s averages) spends

about 33% of sales on promotion, and 19% on R&D. The industry

average for R&D/sales seems to be around 16-17%, while

according to the CBO [1998] report the same percentage was

approximately 18% for American pharmaceuticals in 1994;

according to PhRMA [2007] it was 19% in 2006. The point here is

not that the pharmaceutical companies are spending “too little” in

R&D – no one has managed (and we doubt anyone could manage)

to calculate what the socially optimal amount of pharmaceutical

R&D is. The point here is that the top 30 firms spend about twice

as much on promotion and advertising as they do on R&D; and the

top 30 are where private R&D expenditure is carried out, in the



Next we note that no more than 1/3 – more likely 1/4 – of

new drug approvals are considered by the FDA to have therapeutic

benefit over existing treatments, implying that, under the most

generous hypotheses, only 25-30% of the total R&D expenditure

goes toward new drugs. The rest, as we will see better in a

moment, goes toward the so-called “me-too” drugs. Related to this,

is the more and more obvious fact that the amount of price

discrimination carried out by the top 30 firms between North

America, Europe and Japan is dramatically increasing, with price

ratios for identical drugs reaching values as high as two or three.

The designated victims, in this particular scheme, are apparently

the U.S. consumers and, to a lesser extent, the Northern European

and the Swiss. At the same time, operating margins in the

pharmaceutical industry run at about 25% against 15% or less for

other consumer goods, with peaks, for US market-based firms, as

high as 35%. The U.S. pharmaceutical industry has been topping

the list of the most profitable sectors in the U.S. economy for

almost two decades, never dropping below third place; an

accomplishment unmatched by any other manufacturing sector.

Price discrimination, made possible by monopoly power, does

have its rewards.


Summing up and moving forward, here are the symptoms

of the malaise we should investigate further.

• There is innovation, but not as much as one might think

there is, given what we spend.

• Pharmaceutical innovation seems to cost a lot and

marketing new drugs even more, which makes the final

price for consumers very high and rising.

• Some consumers are hurt more than others, even after the

worldwide extension of patent protection.


Very interesting data. Perhaps some kind of government sponsorship could do better?

Where do Useful Drugs Come From?

Useful new drugs seem to come in a growing percentage

from small firms, startups and university laboratories. But this is

not an indictment of the patent system as, probably, such small

firms and university labs would have not put in all the effort they

did without the prospect of a patent to be sold to a big

pharmaceutical company.


Next there is the not so small detail that most of those

university laboratories are actually financed by public money,

mostly federal money flowing through the NIH. The

pharmaceutical industry is much less essential to medical research

than their lobbyists might have you believe. In 1995, according to

a study by two well-reputed University of Chicago economists, the

U.S. spent about $25 billion on biomedical research. About $11.5

billion came from the Federal government, with another $3.6

billion of academic research not funded by the feds. Industry spent

about $10 billion.26 However, industry R&D is eligible for a tax

credit of about 20%, so the government also picked up about $2

billion of the cost of “industry” research. That was then, but are

things different now? They do not appear to be. According to

industry’s own sources,27 total research expenditure by the industry

was, in 2006, about $57 billion while the NIH budget in the same

year (the largest but by no means the only source of public funding

for biomedical research) reached $28.5 bn. So, it seems, things are

not changing: private industry pays for only about 1/3rd of

biomedical R&D. By way of contrast, outside of the biomedical

area, private industry pays for more than 2/3rds of R&D.
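The one-third claim follows directly from the 1995 figures above; here is the arithmetic spelled out (a minimal sketch using only the numbers quoted in the text):

```python
# U.S. biomedical research funding, 1995, in billions of USD
# (figures as quoted in the text).
federal = 11.5          # federal government, mostly NIH
academic_other = 3.6    # academic research not federally funded
industry_gross = 10.0   # industry spending before the R&D tax credit

tax_credit = 0.20 * industry_gross      # ~$2bn picked up by the government
total = federal + academic_other + industry_gross
industry_net = industry_gross - tax_credit

print(f"total:              ${total:.1f}bn")
print(f"industry net share: {industry_net / total:.0%}")  # about 1/3
```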

Many infected with HIV can still recall the 1980s when no

effective treatment for AIDS was available, and being HIV

positive was a slow death sentence. Not unnaturally many of these

individuals are grateful to the pharmaceutical industry for bringing

to market drugs that – if they do not eliminate HIV – make life

with the virus livable. As one grateful patient put it:

the “evil” pharmaceutical companies are, in fact, among

the most beneficent organizations in the history of mankind

and their research in the last couple of decades will one

day be recognized as the revolution it truly is. Yes, they’re

motivated by profits. Duh. That’s the genius of capitalism –

to harness human improvement to the always-reliable yoke

of human greed. Long may those companies prosper. I owe

them literally my life.28


But it is wise to remember that the modern “cocktail” that is used

to treat HIV was not invented by a large pharmaceutical company.

It was invented by an academic researcher: Dr. David Ho.

The bottom line is rather simple: even today, more than

thirty years after Germany, Italy and Switzerland adopted patents

on drugs and a good half a century after pharmaceutical companies

adopted the policy of patenting anything they could develop, more

than half of the top selling medicines around the world do not owe

their existence to pharmaceutical patents. Are we still so certain

that valuable medicines would cease to be invented if drug patents

were either abolished or drastically curtailed?


This is not particularly original news, though. Older

American readers may remember the Kefauver Committee of

1961, which investigated monopolistic practices in the

pharmaceutical industry.33 Among the many interesting findings

reported, the study showed that 10 times as many basic drug

inventions were made in countries without product patents as were

made in nations with them. It also found that countries that did

grant product patents had higher prices than those that did not,

again something we seem to be well aware of.


The next question then is, if not in fundamental new

medical discoveries, where does all that pharmaceutical R&D

money go?

Rent-Seeking and Redundancy

There is much evidence of redundant research on

pharmaceuticals. The National Institute for Health Care

Management reports that over the period 1989-2000, 54% of FDA-

approved drug applications involved drugs that contained active

ingredients already in the market. Hence, the novelty was in

dosage form, route of administration, or combination with other

ingredients. Of the new drug approvals, 35% were products with

new active ingredients, but only a portion of these drugs were

judged to have sufficient clinical improvements over existing

treatments to be granted priority status. In fact, only 238 out of

1035 drugs approved by the FDA contained new active ingredients

and were given priority ratings on the basis of their clinical

performances. In other words, about 77% of what the FDA

approves is “redundant” from the strictly medical point of view.34
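The 77% figure is just the complement of the priority-rated share; spelled out with the approval counts given above:

```python
# FDA new drug approvals, 1989-2000 (counts as quoted in the text).
approved = 1035        # total approvals
priority_new = 238     # new active ingredient AND priority rating

redundant_share = 1 - priority_new / approved
print(f"medically redundant: {redundant_share:.0%}")  # about 77%
```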

The New Republic, commenting on these facts, pointedly observed:



If the report doesn’t convince you, just turn on your

television and note which drugs are being marketed most

aggressively. Ads for Celebrex may imply that it will enable

arthritics to jump rope, but the drug actually relieves pain

no better than basic ibuprofen; its principal supposed

benefit is causing fewer ulcers, but the FDA recently

rejected even that claim. Clarinex is a differently packaged

version of Claritin, which is of questionable efficacy in the

first place and is sold over the counter abroad for vastly

less. Promoted as though it must be some sort of elixir, the

ubiquitous “purple pill,” Nexium, is essentially

AstraZeneca’s old heartburn drug Prilosec with a minor

chemical twist that allowed the company to extend its

patent. (Perhaps not coincidentally researchers have found

that purple is a particularly good pill color for inducing

placebo effects.)35


Sad but ironically true, me-too or copycat drugs are largely

the only available tool capable of inducing some kind of

competition in an otherwise monopolized market. Because of

patent protection lasting long enough to make future entry by

generics nearly irrelevant, the limited degree of substitutability and

price competition that copycat drugs bring about is actually

valuable. We are not kidding here, and this is a point that many

commentators often miss in their “anti Big Pharma” crusade.

Given the institutional environment pharmaceutical companies are

currently operating in, me-too drugs are the obvious profit

maximizing tools, and there is nothing wrong with firms

maximizing profits. They also increase the welfare of consumers,

if ever so slightly, by offering more variety of choice and a bit

lower prices. Again, they are an anemic and pathetic version of the

market competition that would take place without patents, but

competition they are. The ironic aspect of me-too drugs, obviously,

is that they are very expensive because of patent protection, and

this cost we have brought upon ourselves for no good reason.


Very interesting. One thing I want to point out, though, is that it may be worth it to develop drugs that work via a different route or in a slightly different form. Even though to many people these differences make no difference medically, they can increase comfort by being administered via a different route. Compare orally taking a pill vs. getting a shot vs. suppositories. It might also be the case that some patients cannot use, for medical reasons, a given route of delivery. In such cases it is medically useful to use another route, of course. Finally, some patients may be allergic to a drug, and in that case having a slightly different form may help.


But in general, I agree with the authors.

The Bad

Despite the fact that our system of intellectual property is

badly broken, there are those who seek to break it even further.

The first priority must be to stem the tide of rent-seekers

demanding ever greater privilege. Within the United States and

Europe, there is a continued effort to expand the scope of

innovations subject to patent, to extend the length of copyright, and

to impose ever more draconian penalties for intellectual property

violation. Internationally, the United States – as a net exporter of

ideas – has been negotiating dramatic increases in protection of

U.S. intellectual monopolists as part of free trade agreements; the

recent Central American Free Trade Agreement (CAFTA) is an

outstanding example of this bad practice.


There seems to be no end to the list of bad proposals for

strengthening intellectual monopoly. To give a partial list starting

with the least significant


# Extend the scope of patent to include sports moves and plays.2

# Extend the scope of copyright to include news clips, press

releases and so forth.3

# Allow for patenting of story lines – something the U.S. Patent

Office just did by awarding a patent to Andrew Knight for his

“The Zombie Stare” invention.4

# Extend the level of protection copyright offers to databases,

along the lines of the 1996 E.U. Database Directive, and of the

subsequent WIPO’s Treaty proposal.5

# Extend the scope of copyright and patents to the results of

scientific research, including that financed by public funds;

something already partially achieved with the Bayh-Dole Act.6

# Extend the length of copyright in Europe to match that in the

U.S. – which is most ironic, as the sponsors of the CTEA and

the DMCA in the USA claimed they were necessary to match

… new and longer European copyright terms.7

# Extend the set of circumstances in which “refusal to license” is

allowed and enforced by anti-trust authorities. More generally,

turn around the 1970s Antitrust Division wisdom that led to

the so-called “Nine No-No’s” of licensing practices. Previous

wisdom correctly saw such practices as anticompetitive

restraints of trade in the licensing business. Persistent and

successful lobbying from the beneficiaries of intellectual

monopoly has managed to turn the tables, portraying

such monopolistic practices as “necessary” or even “vital”

ingredients for a well-functioning patent licensing market.8

# Establish, as a relatively recent U.S. Supreme Court ruling in

the case of Verizon vs Trinko did, that legally acquired

monopoly power and its use to charge higher prices is not only

admissible, it “is an important element of the free-market

system” because “it induces innovation and economic growth.”9

# Impose legal restrictions on the design of computers forcing

them to “protect” intellectual property.10

# Make producers of software used in P2P exchanges directly

liable for any copyright violation carried out with the use of

their software, something that may well be in the making after

the Supreme Court ruling in the Grokster case.11

# Allow the patenting of computer software in Europe – this we

escaped, momentarily, due to a sudden spark of rationality by

the European Parliament.12

# Allow the patenting of any kind of plant variety outside of the

United States, where it is already allowed.13

# Allow for generalized patenting of genomic products outside of

the United States, where it is already allowed.14

# Force other countries, especially developing countries, to

impose the same draconian intellectual property laws as the

U.S., the E.U. and Japan.15



Properly handling the pharmaceutical industry constitutes

the litmus test for the reform process we are advocating. Simple

abolition, or even a progressive scaling down of patent term, would

not work in this sector for the reasons outlined earlier. Reforming

the system of intellectual property in the pharmaceutical industry is

a daunting task that involves multiple dimensions of government

intervention and regulation of the medical sector. While we are

perfectly aware that knowledgeable readers and practitioners of the

pharmaceutical and medical industry will probably find the

statements that follow utterly simplistic, if not arrogantly

preposterous, we will try nevertheless. In sequential order, here is

our list of desiderata.


• Free the pharmaceutical industry of the stage II and III

clinical trials’ costs, which are the cost-intensive ones.

Have them financed by the NIH, on a competitive basis:

pharmaceutical companies that have completed stage I

trials submit applications to the NIH to have stages II

and III financed. In parallel, medical clinics and university

hospitals submit competitive bids to the NIH to have the

approved trials assigned to them. Match the winning drugs

to the best bids, and use public choice common sense to

minimize the most obvious risks of capture. Clinical trial

results become public goods and are available, possibly for

a fee covering administrative and maintenance costs, to all

that request them. This would not prevent drug companies

from deciding that, for whatever reason, they carry out their

clinical trials privately and pay for them; that is their

choice. Nevertheless, allowing the public financing of

stages II and III of clinical trials – by far the largest

component of the private fixed cost associated with the

development of new drugs – would remove the biggest

(nay, the only) rationale for allowing drugs’ patents longer

than a handful of years.


• Begin reducing the term of pharmaceutical patents

proportionally. Should we take pharmaceuticals’ claims at

face value, our reform eliminates between 70% and

80% of the private fixed cost. Hence, patent length should

be lowered to 4 years, instead of the current 20, without

extension. Recall that, again according to the industry,

effective patent terms are currently around 12 years from

the first day the drug is commercialized, hence we are

proposing to cut them down by 2/3, which is less than the

proportional cost reduction. To compensate for the fact that

NIH-related inefficiencies may slow down the clinical trial

process, start patent terms from the first day on which

commercialization of the drug is authorized. A ten-year

transition period would allow enough time to prepare for

the new regulatory environment.


• Sizably reduce the number of drugs that cannot be sold

without a medical prescription. For many drugs this is less a

protection of otherwise well informed consumers than a

way of enforcing monopolistic control over doctors’

prescription patterns, and to artificially increase distribution

costs, with rents accruing partly to pharmaceutical

companies and partly to the inefficient local monopolies

called pharmacies.


• Allow for simultaneous or independent discovery, along the

lines of Gallini and Scotchmer.29 Further, because patent

terms should be running from the start of

commercialization, applications should be filed (but not

disclosed) earlier, and mandatory licensing of “idle” or

unused active chemical components and drugs should be

introduced. In other words, make certain the following

monopolistic tactic becomes unfeasible: file a patent

application for entire families of compounds, and then

develop them sequentially over a long period of time,

postponing clinical trials and production of some

compounds until patents on earlier members of the same

family have been fully exploited.
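The term-reduction arithmetic in the patent-length point above can be checked directly (a sketch using the industry figures quoted in the text):

```python
# Effective patent terms per the industry's own figures (years).
effective_term = 12    # current effective term from first commercialization
proposed_term = 4      # proposed term under the reform

term_cut = 1 - proposed_term / effective_term      # fraction of term removed
cost_cut_low, cost_cut_high = 0.70, 0.80           # private fixed cost removed

# The proposed 2/3 term cut is smaller than the 70-80% cost reduction.
print(f"term cut: {term_cut:.0%}")
```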


August 11, 2012

DMCAs – it's time to game the system

Filed under: Copyright and filesharing — Emil O. W. Kirkegaard @ 19:27

Following recent news that Google will start downgrading sites that have received many DMCA complaints, it is time to show Google why this system is hopeless. To do so, we must game the system to demonstrate that it doesn't work. To avoid collateral damage, we do so by using it against Google's own services: YouTube, Google Scholar, Google Books, and so on.


Here’s how I imagine it could be done:

  1. Make a system for finding content on the target site.
  2. Find a useful list of company names. These can be taken from actual DMCA complaints, or they can simply be auto-generated.
  3. Find a lot of human-written DMCA notices.
  4. Locate the names of the content and the complaining company in the notices from (3).
  5. Swap in the content names found via (1) and the company names from (2).
  6. Send the resulting DMCA notices to Google.


That’s the basic idea. Many variations are possible. For instance, one could probably auto-generate DMCAs as well, to avoid detection. Obviously, Google is a master of data, and if they receive 100k nearly identical DMCAs, they can weed them out automatically. They cannot do this easily, or at all, with generated DMCAs.


This gaming of the system will show that the current DMCA situation is ridiculous and damaging to the internet.

July 24, 2012

Report: Copyright and Innovation: The Untold Story (Michael A. Carrier)

Filed under: Copyright and filesharing — Tags: — Emil O. W. Kirkegaard @ 14:28

Official location.

Download mirror: Copyright and Innovation The Untold Story


It reads like a series of case studies of how horribly the current legislation is being abused.




Copyright has an innovation problem. Judicial decisions, private enforcement, and

public dialogue ignore innovation and overemphasize the harms of copyright infringement.

Just to pick one example, “piracy,” “theft,” and “rogue websites” were the focus of debate

in connection with the PROTECT IP Act (PIPA) and Stop Online Piracy Act (SOPA). But

such a debate ignores the effect of copyright law and enforcement on innovation. Even

though innovation is the most important factor in economic growth, it is difficult to observe,

especially in comparison to copyright infringement.

This Article addresses this problem. It presents the results of a groundbreaking study of

31 CEOs, company founders, and vice-presidents from technology companies, the recording

industry, and venture capital firms. Based on in-depth interviews, the Article offers original

insights on the relationship between copyright law and innovation. It also analyzes the

behavior of the record labels when confronted with the digital music revolution. And it

traces innovators’ and investors’ reactions to the district court’s injunction in the case

involving peer-to-peer (p2p) service Napster.

The Napster ruling presents an ideal setting for a natural experiment. As the first

decision to enjoin a p2p service, it presents a crucial data point from which we can trace

effects on innovation and investment. This Article concludes that the Napster decision

reduced innovation and that it led to a venture capital “wasteland.” The Article also

explains why the record labels reacted so sluggishly to the distribution of digital music. It

points to retailers, lawyers, bonuses, and (consistent with the “Innovator’s Dilemma”) an

emphasis on the short term and preservation of existing business models.

The Article also steps back to look at copyright litigation more generally. It

demonstrates the debilitating effects of lawsuits and statutory damages. It gives numerous

examples, in the innovators’ own words, of the effects of personal liability. It traces the

possibilities of what we have lost from the Napster decision and from copyright litigation

generally. And it points to losses to innovation, venture capital, markets, licensing, and the

“magic” of music.

The story of innovation in digital music is a fascinating one that has been ignored for

too long. This Article aims to fill this gap, ensuring that innovation plays a role in today’s

copyright debates.


Disgusting part:

D. Personal Liability: Experience

The concerns about the effects of personal liability are not theoretical.

Several of the innovators I interviewed relayed the harrowing experience of

being personally sued. The first described a “process server that broke into the
office” and “knocked on the door like it was the police.”414

He continued:

“Everything about it was meant to psychologically intimidate,” “it made a huge

impact on me,” and “I am going to do what I can the rest of my career to avoid

being in that situation again.”415

Another innovator explained that the labels said “we’re not going to sue the

company, we are going to sue you personally” since “we can make all kinds of

allegations and it’s your job to prove you’re not infringing” and “the lawsuit is

going to cost you between 15 and 20 million bucks.”416

The innovator decided

that he could “find better uses” for his money “than to give it to lawyers.”417

A third respondent noted how “stressful” it was when he was sued

personally. It was “definitely very scary” when they came with the “multiple

inch lawsuit for a couple billion bucks.”418

The innovator was afraid of the

“unknown” and worried that he could have a judgment “the rest of [his] life.”419

A fourth participant relayed a comment from a high-ranking official in the

recording industry who said “it’s too bad you have” children “who are going to

want to go to college and you’re not going to be able to pay for it.”420


The innovator recognized a “real undisguised intimidation factor” and commented

on the “thug-like nature” of the “behavior of the record companies.”421

A fifth innovator knew that the personal lawsuit was “part of the game,”

but still thought it was a “slimy, scummy thing to do.”422

He was disappointed

since he was not a “‘free anarchist’ kind of guy” but was “quite the opposite,”

trying to “do things that [we]re positive for the industry.”423

The labels,

however, “just make up stuff to slander you and disparage people.”424


The lawsuits made partners “very hesitant,” since few would work with a company that was

sued and could go out of business.

The personal attacks were potent, and “most people do not have the

intestinal fortitude to weather [them].”425

One respondent “could list a dozen

people who have been sued and say ‘I want to fight,’” but then “just go away”

and “close up shop, even if they’re doing something that is reasonable.”426

A sixth respondent explained that “by far the most significant factor

worrying the [company’s] founders” and “frankly the thing that pushed them
over the edge to stop the business rather than fight on appeal” was “the

prospect that they could be personally liable.”427

There was “no reason” to sue

the company founder individually, and the plaintiffs made “fairly ludicrous” allegations.


But the “mere fact” that the allegations were “out there” meant

“the CEO had to watch his step” and could “risk losing his house and his

family’s life savings.”429

There was “no question” that the personal lawsuit

“had the deterrent effect it was intended to have on innovation.”430



Today’s front-page stories and front-line battles on copyright have focused

on issues of piracy and theft. Given the figures of lost profits and jobs bandied

about by the entertainment industry, that is not surprising. But any discussion

of these harms must consider the countervailing argument.

Overaggressive copyright law and enforcement has substantially and

adversely affected innovation. This story has not been told. For it is a difficult

story to tell. It relies on a prediction of what would have happened if history

had taken a different course. We cannot pinpoint these losses with certainty.

And this gap is no match for piracy harms, which have been proclaimed with

the loudest of megaphones.

This Article addresses this age-old problem. It treats the Napster decision

as a case study to ascertain the effects of the decision on innovation and

investment. By interviewing 31 CEOs, company founders, and VPs who

operated in the digital music scene at the time of Napster and afterwards, it

paints the fullest picture to date of the effect of copyright law on innovation.

The Article concludes that the Napster decision stifled innovation,

discouraged negotiation, pushed p2p underground, and led to a venture capital

“wasteland.” It also recounts the industry’s mistakes and adherence to the

Innovator’s Dilemma in preserving an existing business model and ignoring or

quashing disruptive threats to the model. And it shows how the labels used

litigation as a business model, buttressed by vague copyright laws, statutory

damages, and personal liability.

Innovation is crucial to economic growth. But the difficulty of accounting

for it leads courts and policymakers to ignore it in today’s debates. Any

discussion of the appropriate role of copyright law must consider the effects on

innovation. This Article begins this process.
