A small thing about enumerative induction #2

A follow-up to yesterday's(?) post.

I had an idea about how one might recognize the normal inference while avoiding the other one, if one so wants. I don't particularly feel that this is the right way to go, but suppose someone wants to do it.

The idea is based on similarity. The inference is about two sets of stuff, F and G. The idea is that some sets are somehow more 'natural' or important than others, and that this is based on the similarity between their members. Clearly, the members of the set of "swans" have more in common than do the members of the set of "white things". Thus, one can infer that all swans are white, but not that all white things are swans.

Problems with this approach? Most likely. For instance, try to pick some sets of stuff that are both 'natural', or neither. Say, in the first case, the set of women and the set of people with red hair. Do the members of the set "women" have more in common with each other than do the members of the set "people with red hair"? Seems difficult to answer.

So, perhaps it isn't a good idea to require that the set be the most 'natural'. But there are some other possibilities. For instance, one could go with the interesting idea that the more the members of a set have in common, the stronger the inference with that set as the F. The idea of varying strengths of inference fits very well with it being an inductive inference to begin with. In the swans and white things scenario, the inference to "all swans are white" is much stronger than the inference to "all white things are swans". But perhaps in the women and red-haired people scenario, the inferences are of about equal strength.

There is also a certain theoretical niceness to this solution. The fact that the members of a particular set have more in common does imply that if one picks an attribute of those things at random and generalizes from the members, then it is more likely that the generalization holds true. It even seems trivial.
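
To make that "random attribute" point concrete, here is a minimal Monte Carlo sketch in Python (the set sizes, attribute counts, and cohesion values are all made-up illustrations, not anything from the argument itself). It models a set's 'naturalness' as the probability that members agree with a prototype on each attribute, then checks how often a generalization from a small sample holds for the whole set:

import random

def make_set(n_members, n_attrs, cohesion):
    """Generate a set whose members agree with a prototype on each
    attribute with probability `cohesion` (a stand-in for 'naturalness')."""
    prototype = [random.randint(0, 1) for _ in range(n_attrs)]
    return [[bit if random.random() < cohesion else 1 - bit
             for bit in prototype]
            for _ in range(n_members)]

def generalization_holds(members, sample_size):
    """Sample some members, pick at random an attribute they all share
    (if any), and test whether the whole set shares it too."""
    sample = random.sample(members, sample_size)
    shared = [i for i in range(len(sample[0]))
              if len({m[i] for m in sample}) == 1]
    if not shared:
        return False  # nothing to generalize from
    attr = random.choice(shared)
    return all(m[attr] == sample[0][attr] for m in members)

def success_rate(cohesion, trials=2000):
    hits = sum(generalization_holds(make_set(20, 20, cohesion), 5)
               for _ in range(trials))
    return hits / trials

# More cohesive ('natural') sets should license stronger inductions.
for c in (0.6, 0.8, 0.95):
    print(c, round(success_rate(c), 3))

As expected under this toy model, the success rate climbs steeply with cohesion, which is just the claimed connection between similarity and inferential strength.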

Also, a friend of mine mentioned that this problem is quite similar to the Raven Paradox, and I agree.

The paradox

Hempel describes the paradox in terms of the hypothesis:[2][3]

(1) All ravens are black.

In strict logical terms, via contraposition, this statement is equivalent to:

(2) Everything that is not black is not a raven.

It should be clear that in all circumstances where (2) is true, (1) is also true; and likewise, in all circumstances where (2) is false (i.e. if a world is imagined in which something that was not black, yet was a raven, existed), (1) is also false. This establishes logical equivalence.

Given a general statement such as all ravens are black, a form of the same statement that refers to a specific observable instance of the general class would typically be considered to constitute evidence for that general statement. For example,

(3) Nevermore, my pet raven, is black.

is evidence supporting the hypothesis that all ravens are black.

The paradox arises when this same process is applied to statement (2). On sighting a green apple, one can observe:

(4) This green (and thus not black) thing is an apple (and thus not a raven).

By the same reasoning, this statement is evidence that (2) everything that is not black is not a raven. But since (as above) this statement is logically equivalent to (1) all ravens are black, it follows that the sight of a green apple is evidence supporting the notion that all ravens are black. This conclusion seems paradoxical, because it implies that information has been gained about ravens by looking at an apple.

Both of them are odd things about inductive inferences. I rather dislike using the word "paradox" in this loose sense, meaning something like a puzzle.
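
Still, the logical core of the Raven Paradox is easy to verify mechanically. Here is a tiny brute-force sketch in Python (the three-object domain is an arbitrary choice): it enumerates every possible assignment of "raven" and "black" to the objects and confirms that (1) and (2) never come apart:

from itertools import product

def all_ravens_black(world):
    return all(black for raven, black in world if raven)

def no_nonblack_ravens(world):
    return all(not raven for raven, black in world if not black)

# Each object is a pair (is_raven, is_black); a world is three objects.
objects = list(product([False, True], repeat=2))
worlds = product(objects, repeat=3)
assert all(all_ravens_black(w) == no_nonblack_ravens(w) for w in worlds)
print("(1) and (2) agree in every possible 3-object world")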

Pun #398284

[13:17:11] Emil – Deleet: emilkirkegaard.dk/clusterwiki
[13:32:05] Emil – Deleet: wikipedia
[13:32:07] Emil – Deleet: is a gold mine
[13:32:09] Emil – Deleet: for data mining
[13:32:26] Emil – Deleet: one might say that it is the gold standard for data mining ;)))))
[13:32:36] Emil – Deleet: lame puns ftw

A small thing about enumerative induction

A thing occurred to me while I was reviewing the idea of enumerative induction, because I had mentioned it to a friend of mine. The thing is that such inductions are often presented in a way like this:

A particular thing of the type F is also of the type G
Here is another one
And another

Thus, all things of type F are also of type G

Suppose we formalize it in a simple lazy way:

(∃!x1)(Fx1 ∧ Gx1)
(∃!x2)(Fx2 ∧ Gx2)
…
(∃!xn)(Fxn ∧ Gxn)
⊢ (∀x)(Fx → Gx)

But wait, one might as well draw the conclusion:

⊢ (∀x)(Gx → Fx)

Surely that follows just as well. However, if we keep this in mind when reviewing the typical example, then we get a result that differs from normal:

This swan is white.
So is this one.
And this one.

So, all swans are white.

But following the above, we might as well just draw this conclusion:

So, all white things are swans.

I see no way to block that inference while letting the other through, for the simple reason that F and G above are arbitrary. In second-order predicate logic, it looks something like this (with Greek capital letters for predicate variables):

(∃Ψ)(∃Ω)(∃!x1)(Ψx1 ∧ Ωx1)
(∃Ψ)(∃Ω)(∃!x2)(Ψx2 ∧ Ωx2)
…
(∃Ψ)(∃Ω)(∃!xn)(Ψxn ∧ Ωxn)
⊢ (∃Ψ)(∃Ω)(∀x)(Ψx → Ωx)
⊢ (∃Ψ)(∃Ω)(∀x)(Ωx → Ψx)

However, the formalizations above are broken in a slight way. They don't capture the fact that the predicates in each premise have to be the same. So, one would have to do it something like this:

(∃Ψ)(∃Ω)(∃!x1)(∃!x2)…(∃!xn)[(Ψx1 ∧ Ωx1) ∧ (Ψx2 ∧ Ωx2) ∧ … ∧ (Ψxn ∧ Ωxn)]
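
The symmetry is also easy to see with a toy check. In this little Python sketch (the observation list is a made-up stand-in for the swan example), each observed instance only records that some thing is both F and G, so the sample "supports" both universal conclusions equally:

observations = [
    {"F": True, "G": True},  # this swan is white
    {"F": True, "G": True},  # so is this one
    {"F": True, "G": True},  # and this one
]

f_implies_g = all(o["G"] for o in observations if o["F"])
g_implies_f = all(o["F"] for o in observations if o["G"])
print(f_implies_g, g_implies_f)  # True True: nothing favours one conclusion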

Asimov, anti-intellectualism (more Wiki quotes)

 

So, the picture made me go on a Wikipedia reading frenzy (as it so happens).

en.wikipedia.org/wiki/Anti-intellectualism

Anti-intellectualism is hostility towards and mistrust of intellect, intellectuals, and intellectual pursuits, usually expressed as the derision of education, philosophy, literature, art, and science, as impractical and contemptible. Alternatively, self-described intellectuals who are alleged to fail to adhere to rigorous standards of scholarship may be described as anti-intellectuals, although pseudo-intellectualism is a more commonly, and perhaps more accurately, used description for this phenomenon.

In public discourse, anti-intellectuals usually perceive and publicly present themselves as champions of the common folk — populists against political elitism and academic elitism — proposing that the educated are a social class detached from the everyday concerns of the majority, and that they dominate political discourse and higher education.

Because “anti-intellectual” can be pejorative, defining specific cases of anti-intellectualism can be troublesome; one can object to specific facets of intellectualism or the application thereof without being dismissive of intellectual pursuits in general. Moreover, allegations of anti-intellectualism can constitute an appeal to authority or an appeal to ridicule that attempts to discredit an opponent rather than specifically addressing his or her arguments.[1]

Anti-intellectualism is a common facet of totalitarian dictatorships to oppress political dissent. The Nazi party’s populist rhetoric featured anti-intellectual rants as a common motif, including Adolf Hitler‘s political polemic, Mein Kampf. Perhaps its most extreme political form was during the 1970s in Cambodia under the rule of Pol Pot and the Khmer Rouge, when people were killed for being academics or even for merely wearing eyeglasses (as it suggested literacy) in the Killing Fields.[2]

Dictators, and their dictatorship supporters, use anti-intellectualism to gain popular support, by accusing intellectuals of being a socially detached, politically dangerous class who question the extant social norms, who dissent from established opinion, and who reject nationalism, hence they are unpatriotic, and thus subversive of the nation. Violent anti-intellectualism is common to the rise and rule of authoritarian political movements, such as Italian Fascism, Stalinism in Russia, Nazism in Germany, the Khmer Rouge in Cambodia, and Iranian theocracy.[citation needed]

University

In the English-speaking world, especially in the US, critics like David Horowitz (viz. the David Horowitz Freedom Center), William Bennett, an ex-US secretary of education, and paleoconservative activist Patrick Buchanan criticize schools and universities as ‘intellectualist’.[citation needed]

In his book The Campus Wars[15] about the widespread student protests of the late 1960s, philosopher John Searle wrote:

the two most salient traits of the radical movement are its anti-intellectualism and its hostility to the university as an institution. […] Intellectuals by definition are people who take ideas seriously for their own sake. Whether or not a theory is true or false is important to them independently of any practical applications it may have. [Intellectuals] have, as Richard Hofstadter has pointed out, an attitude to ideas that is at once playful and pious. But in the radical movement, the intellectual ideal of knowledge for its own sake is rejected. Knowledge is seen as valuable only as a basis for action, and it is not even very valuable there. Far more important than what one knows is how one feels.

In 1972, sociologist Stanislav Andreski[16] warned readers of academic works to be wary of appeals to authority when academics make questionable claims, writing, “do not be impressed by the imprint of a famous publishing house or the volume of an author’s publications. […] Remember that the publishers want to keep the printing presses busy and do not object to nonsense if it can be sold.”

Critics have alleged that much of the prevailing philosophy in American academia (i.e., postmodernism, poststructuralism, relativism) is anti-intellectual: “The displacement of the idea that facts and evidence matter by the idea that everything boils down to subjective interests and perspectives is — second only to American political campaigns — the most prominent and pernicious manifestation of anti-intellectualism in our time.”[17]

In the notorious Sokal Hoax of the 1990s, physicist Alan Sokal submitted a deliberately preposterous paper to Duke University’s Social Text journal to test if, as he later wrote, a leading “culture studies” periodical would “publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors’ ideological preconceptions.”[18] Social Text published the paper, seemingly without noting any of the paper’s abundant mathematical and scientific errors, leading Sokal to declare that “my little experiment demonstrate[s], at the very least, that some fashionable sectors of the American academic Left have been getting intellectually lazy.”

In a 1995 interview, social critic Camille Paglia[19] described academics (including herself) as “a parasitic class,” arguing that during widespread social disruption “the only thing holding this culture together will be masculine men of the working class. The cultural elite–women and men–will be pleading for the plumbers and the construction workers.”

Surely Paglia is right about that.

Soviet Union

In the first decade after the Russian Revolution of 1917, the Bolsheviks suspected the Tsarist intelligentsia as potentially traitorous of the proletariat, thus, the initial Soviet government comprised men and women without much formal education. Lenin derided the old intelligentsia with the expression (roughly translated): ‘We ain’t completed no academies’ (мы академиев не кончали).[48] Moreover, the deposed propertied classes were termed Lishentsy (‘the disenfranchised’), whose children were excluded from education; eventually, some 200 Tsarist intellectuals were deported to Germany on Philosophers’ ships in 1922; others were deported to Latvia and to Turkey in 1923.

During the revolutionary period, the pragmatic Bolsheviks employed ‘bourgeois experts’ to manage the economy, industry, and agriculture, and so learn from them. After the Russian Civil War (1917–23), to achieve socialism, the USSR (1922–91) emphasised literacy and education in service to modernising the country via an educated working class intelligentsia, rather than an Ivory Tower intelligentsia. During the 1930s and the 1950s, Joseph Stalin replaced Lenin’s intelligentsia with a “communist” intelligentsia, loyal to him and with a specifically Soviet world view, thereby producing the most egregious examples of Soviet anti-intellectualism — the pseudoscientific theories of Lysenkoism and Japhetic theory, most damaging to biology and linguistics in that country, by subordinating science to a dogmatic interpretation of Marxism.

en.wikipedia.org/wiki/Philosophers%27_ships

Deliberate brain drain? That must be a new low.

en.wikipedia.org/wiki/Equality_of_outcome

Equality of outcome, equality of condition, or equality of results is a controversial political concept.[1] Although it is not always clearly defined, it usually describes a state in which people have approximately the same material wealth or, more generally, in which the general economic conditions of their lives are similar. Achieving this requires reducing or eliminating material inequalities between individuals or households in a society. This could involve a transfer of income and/or wealth from wealthier to poorer individuals, or adopting other institutions designed to promote equality of condition from the start. The concept is central to some political ideologies and is used regularly in political discourse, often in contrast to the term equality of opportunity. A related way of defining equality of outcome is to think of it as “equality in the central and valuable things in life.”[2]

Comparisons with related concepts

Equality of outcome is often compared to related concepts of equality. Generally, the concept is most often contrasted with the concept of equality of opportunity, but there are other concepts as well. The term has been seen differently from differing political perspectives, but of all of the terms relating to equality, equality of outcome is the most “controversial” or “contentious”.[1]

  • Equality of opportunity. This conception generally describes fair competition for important jobs and positions such that contenders have equal chances to win such positions,[3] and applicants are not judged or hampered by unfair or arbitrary discrimination.[4][5][6][7] It entails the “elimination of arbitrary discrimination in the process of selection.”[8] The term is usually applied in workplace situations but has been applied in other areas as well such as housing, lending, and voting rights.[9] The essence is that job seekers have “an equal chance to compete within the framework of goals and the structure of rules established,” according to one view.[10] It is generally seen as a procedural value of fair treatment by the rules.[11]

Political philosophy

In political philosophy, there are differing views whether equal outcomes are beneficial or not. One view is that there is a moral basis for equality of outcome, but that means to achieve such an outcome can be malevolent. Equality of outcome can be a good thing after it has been achieved since it reflects the natural “interdependence of citizens in a highly organized economy” and provides a “basis for social policies” which foster harmony and good will, including social cohesion and reduced jealousy. One writer suggested greater socioeconomic equality was “indispensable if we want to realise our shared commonsense values of societal fairness.”[17] Analyst Kenneth Cauthen in his 1987 book The Passion for Equality suggested that there were moral underpinnings for having equal outcomes because there is a common good––which people both contribute to and receive benefits from––and therefore should be enjoyed in common; Cauthen argued that this was a fundamental basis for both equality of opportunity as well as equality of outcome.[18] Analyst George Packer, writing in the journal Foreign Affairs, argued that “inequality undermines democracy” in the United States partially because it “hardens society into a class system, imprisoning people in the circumstances of their birth.”[19] Packer elaborated that inequality “corrodes trust among fellow citizens” and compared it to an “odorless gas which pervades every corner” of the nation.[19]

An opposing view is that equality of outcomes is not beneficial overall for society since it dampens motivation necessary for humans to achieve great things, such as new inventions, intellectual discoveries, and artistic breakthroughs. According to this view, wealth and income is a reward needed to spur such activity, and with this reward removed, then achievements which would benefit everybody may not happen.

If equality of outcomes is seen as beneficial for society, and if people have differing levels of material wealth in the present, then methods to transform a society towards one with greater equality of outcomes are problematic. A mainstream view is that mechanisms to achieve equal outcomes––to take a society with unequal wealth and force it to equal outcomes––are fraught with moral as well as practical problems since they often involve force to compel the transfer.[18]

And there is general agreement that outcomes matter. In one report in Britain, unequal outcomes in terms of personal wealth had a strong impact on average life expectancy, such that wealthier people tended to live seven years longer than poorer people, and that egalitarian nations tended to have fewer problems with societal issues such as mental illness, violence, teenage pregnancy, and other social problems.[20] Authors of the book The Spirit Level contended that “more equal societies almost always do better” on other measures, and as a result, striving for equal outcomes can have overall beneficial effects for everybody.[20]

Philosopher John Rawls, in his A Theory of Justice (1971), developed a “second principle of justice” that economic and social inequalities can only be justified if they benefit the most disadvantaged members of society. Further, Rawls claims that all economically and socially privileged positions must be open to all people equally. Rawls argues that the inequality between a doctor’s salary and a grocery clerk’s is only acceptable if this is the only way to encourage the training of sufficient numbers of doctors, preventing an unacceptable decline in the availability of medical care (which would therefore disadvantage everyone). Analyst Paul Krugman writing in The New York Times agreed with Rawls’ position in which both equality of opportunity and equality of outcome were linked, and suggested that “we should try to create the society each of us would want if we didn’t know in advance who we’d be.”[21] Krugman favored a society in which hard-working and talented people can get rewarded for their efforts but in which there was a “social safety net” created by taxes to help the less fortunate.[21]

Krugman’s view is pretty similar to mine. Some equality of outcome is good (cf. Spirit Level above), but too much is bad.

Comparing equalities: outcome vs opportunity

Both equality of outcome and equality of opportunity have been contrasted to a great extent. When evaluated in a simple context, the more preferred term in contemporary political discourse is equality of opportunity which the public, as well as individual commentators, see as the nicer or more “well-mannered”[14] of the two terms.[22] And the term equality of outcome is seen as more controversial which connotes socialism or possibly communism and is viewed skeptically. A mainstream political view is that the comparison of the two terms is valid, but that they are somewhat mutually exclusive in the sense that striving for either type of equality would require sacrificing the other to an extent, and that achieving equality of opportunity necessarily brings about “certain inequalities of outcome.”[8][23] For example, striving for equal outcomes might require discriminating between groups to achieve these outcomes; or striving for equal opportunities in some types of treatment might lead to unequal results.[23] Policies that seek an equality of outcome often require a deviation from the strict application of concepts such as meritocracy, and legal notions of equality before the law for all citizens.[citation needed] ‘Equality seeking’ policies may also have a redistributive focus.

One newspaper account criticized discussion by politicians on the subject of equality as “weasely”, and thought that terms using the word were politically correct and bland. Nevertheless, when comparing equality of opportunity with equality of outcome, the sense was that the latter type was “worse” for society.[25] Equality of outcome may be incorporated into a philosophy that ultimately seeks equality of opportunity. Moving towards a higher equality of outcome (albeit not perfectly equal) can lead to an environment more adept at providing equality of opportunity by eliminating conditions that restrict the possibility for members of society to fulfill their potential. For example, a child born in a poor, dangerous neighborhood with poor schools and little access to healthcare may be significantly disadvantaged in his attempts to maximize use of talents, no matter his work ethic. Thus, even proponents of meritocracy may promote some level of equality of outcome in order to create a society capable of truly providing equality of opportunity.

While outcomes can usually be measured with a great degree of precision, it is much more difficult to measure the intangible nature of opportunities. That is one reason why many proponents of equal opportunity use measures of equality of outcome to judge success. Analyst Anne Phillips argued that the proper way to assess the effectiveness of the hard-to-measure concept of equality of opportunity is by the extent of the actual and easier-to-measure equality of outcome.[14] Nevertheless, she described single criteria to measure equality of outcome as problematic: the metric of “preference satisfaction” was “ideologically loaded” while other measures such as income or wealth were insufficient, according to her view, and she advocated an approach which combined data about resources, occupations, and roles.[14]

When I think of equality of opportunities, I think of free access to education.

 

Greater equality of outcome is likely to reduce relative poverty, purportedly leading to a more cohesive society. However, if taken to an extreme it may lead to greater absolute poverty if it negatively affects a country’s GDP by damaging workers’ sense of work ethic by destroying incentives to work harder. Critics of equality of outcome believe that it is more important to raise the standard of living of the poorest in absolute terms.[citation needed] Some critics additionally disagree with the concept of equality of outcome on philosophical grounds.[citation needed]

Indeed.

en.wikipedia.org/wiki/Dumbing_down

The term dumbing down describes the deliberate diminishment of the intellectual level of the content of literature, film, schooling and education, news, and other aspects of culture. Conceptually, the term “dumb down” originated (c. 1933) as movie-business slang, used by screenplay writers, to mean “revise so as to appeal to those of little education or intelligence”.[1] The occurrences of dumbing down vary in nature, but usually involve the oversimplification of critical thought to the degree of undermining the concept of intellectual standards — of language and learning — whereby are justified the trivialization of cultural, artistic, and academic standards of cultural works, as in popular culture. Nonetheless, the term “dumbing down” is subjective, because what someone considers as “dumbed down” usually depends upon the taste (value judgement) of the reader, the listener, and the viewer. Sociologically, Pierre Bourdieu proposes that, in a society, the cultural practices of dominant social classes are made legitimate culture to the social disadvantage of subordinate social classes and cultural groups.

en.wikipedia.org/wiki/Mickey_Mouse_degrees

Mickey Mouse degrees is a dysphemism built from the common usage of the term “Mickey Mouse” as a pejorative. It came to prominence in the UK after use by the national tabloids of the United Kingdom to label certain university degree courses worthless or irrelevant.

Origins

The term was used by education minister Margaret Hodge, during a discussion on higher education expansion.[1] Hodge defined a Mickey Mouse course as “one where the content is perhaps not as rigorous as one would expect and where the degree itself may not have huge relevance in the labour market”; and that, furthermore, “simply stacking up numbers on Mickey Mouse courses is not acceptable”. This opinion is often raised in the summer when exam results are released and new university courses revealed. The phrase took off in the late 1990s, as the Labour government created the target of having 50% of students in higher education by 2010.[2]

Examples

In 2000, Staffordshire University was mocked as providing ‘David Beckham Studies’ because it provided a module on the sociological importance of football to students taking sociology, sports science or media studies.[3] A professor for the department stressed that the course would not focus on Beckham, and that the module examines “the rise of football from its folk origins in the 17th century, to the power it’s become and the central place it occupies in British culture, and indeed world culture, today.”[3] Similarly, Durham University designed a module centred around Harry Potter to examine “prejudice, citizenship and bullying in modern society” as a part of a BA degree in Education Studies.[4]

Other degrees deemed ‘Mickey Mouse’ include golf management and surf science.[5] One thing these courses share is that they are vocational, and vocational degrees are perceived to be less intellectually rigorous than traditional academic degrees.[5] Perception has not been helped in the United Kingdom by the conversion of polytechnics to New Universities.[5] These universities then have trouble competing with the more established institutions, instead of being judged as polytechnic universities, even though some polytechnics have been around since 1838 (London Polytechnic) and have been offering bachelor’s, master’s and doctorate degrees in academically challenging subjects such as engineering, physics, mathematics, and the natural sciences since the early 1900s.

Defenders of these courses object that the derogatory comments made in the media rely on the low symbolic capital of new subjects and rarely discuss course contents beyond the titles.[1] Another factor is the correct or incorrect perception that the take up of these subjects, and the decline of more traditional academic subjects like science, engineering, mathematics,[6] is causing the predictable annual grade rise in the United Kingdom.

Although it is perceived as a recent phenomenon, accusations of “dumbing down” have historical roots. In 1828, University College London was criticised for teaching English literature, a subject which has now become relatively prestigious.[7]

A-level subjects and “soft options”

The A-level in General Studies is seen as a Mickey Mouse subject,[5] as is A-level Critical Thinking, with many universities not accepting them as part of the requirements for an offer.

Additionally, although not considered Mickey Mouse subjects as such, some qualifications are not preferred by top universities and are regarded as “soft options“.[8] A 2007 report stated that the sciences were more challenging than subjects such as English, which might be taken by students to get higher grades for university applications.[9] An American example is a degree in physical education. These have been issued to members of the college’s athletics teams, to make them eligible to play; otherwise they would fail to pass traditional subjects.[10]

-

en.wikipedia.org/wiki/Academic_inflation

Academic inflation is the process of inflation of the minimum job requirement, resulting in an excess of college-educated individuals with lower degrees (associate and bachelor’s degrees) competing for too few jobs that require these degrees and even higher, preferred qualifications (master’s or doctorate degrees). This condition causes an intensified race for higher qualification and education in a society where a bachelor’s degree today is no longer sufficient to gain employment in the same jobs that may have only required a two- or four-year degree in former years. [1] Inflation has occurred in the minimum degree requirements for jobs, to the level of master’s degrees, Ph.D.s, and post-doctoral, even where advanced degree knowledge is not absolutely necessary to perform the required job.

en.wikipedia.org/wiki/Elitism

Elitism is the belief or attitude that some individuals, who form an elite — a select group of people with a certain ancestry, intrinsic quality or worth, higher intellect, wealth, specialized training or experience, or other distinctive attributes — are those whose views on a matter are to be taken the most seriously or carry the most weight; whose views and/or actions are most likely to be constructive to society as a whole; or whose extraordinary skills, abilities or wisdom render them especially fit to govern.[1]

Alternatively, the term elitism may be used to describe a situation in which power is concentrated in the hands of a limited number of people. Oppositions of elitism include anti-elitism, egalitarianism, populism and political theory of pluralism. Elite theory is the sociological or political science analysis of elite influence in society – elite theorists regard pluralism as a utopian ideal. Elitism also refers to situations in which an individual assumes special privileges and responsibilities in the hope that this arrangement will benefit humanity or themselves. At times, elitism is closely related to social class and what sociologists call social stratification. Members of the upper classes are sometimes known as the social elite. The term elitism is also sometimes used to denote situations in which a group of people claiming to possess high abilities or simply an in-group or cadre grant themselves extra privileges at the expense of others. This form of elitism may be described as discrimination.

Characteristics

Attributes that identify an elite vary; personal achievement may not be essential. As a term “Elite” usually describes a person or group of people who are members of the uppermost class of society and wealth can contribute to that class determination. Personal attributes commonly purported by elitist theorists to be characteristic of the elite include: rigorous study of, or great accomplishment within, a particular field; a long track record of competence in a demanding field; an extensive history of dedication and effort in service to a specific discipline (e.g., medicine or law) or a high degree of accomplishment, training or wisdom within a given field. Elitists tend to favor systems such as meritocracy, technocracy and plutocracy as opposed to radical democracy, political egalitarianism and populism.

Some synonyms for “elite” might be “upper-class,” “aristocratic,” or “big-headed” indicating that the individual in question has a relatively large degree of control over a society’s means of production. This includes those who gain this position due to socioeconomic means and not personal achievement. However, these terms are misleading when discussing elitism as a political theory, because they are often associated with negative “class” connotations and fail to appreciate a more unbiased exploration of the philosophy.

-

en.wikipedia.org/wiki/Academic_elitism

Academic elitism is the criticism that academia or academicians are prone to elitism, or that certain experts or intellectuals propose ideas based more on support from academic colleagues than on real world experience. The term “ivory tower” often carries with it an implicit critique of academic elitism.

Description

Some of economist Thomas Sowell‘s writings (Intellectuals and Society) suggest that academicians and intellectuals have an undeserved “halo effect” and face fewer disincentives than other professions against speaking outside their expertise. Sowell cites Bertrand Russell, Noam Chomsky and Edmund Wilson as paradigmatic examples of this phenomenon. Though respected for their contributions to various academic disciplines (respectively mathematics, linguistics, and literature), the three men became known to the general public only by making often-controversial and disputed pronouncements on politics and public policy that would not be regarded as noteworthy if offered by a medical doctor or skilled tradesman.[1]

Critics of academic elitism argue that highly-educated people tend to form an isolated social group whose views tend to be overrepresented amongst journalists, professors, and other members of the intelligentsia who often draw their salary and funding from taxpayers. Economist Dan Klein shows that the worldwide top-35 economics departments pull 76 percent of their faculty from their own graduates. He argues that the academic culture is pyramidal, not polycentric, and resembles a closed and genteel social circle. Meanwhile, academia draws on resources from taxpayers, foundations, endowments, and tuition payers, and it judges the social service delivered. The result is a self-organizing and self-validating circle.[2]

Another criticism is that universities tend more to pseudo-intellectualism than intellectualism per se; for example, to protect their positions and prestige, academicians may over-complicate problems and express them in obscure language (e.g., the Sokal affair, a hoax by physicist Alan Sokal attempting to show that American humanities professors invoke complicated, pseudoscientific jargon to support their political positions.) Some observers [Camille Paglia] argue that, while academicians often perceive themselves as members of an elite, their influence is mostly imaginary: “Professors of humanities, with all their leftist fantasies, have little direct knowledge of American life and no impact whatever on public policy.”[3]

Academic elitism suggests that in highly competitive academic environments only those individuals who have engaged in scholarship are deemed to have anything worthwhile to say, or do. It suggests that individuals who have not engaged in such scholarship are cranks. Steven Zhang of the Cornell Daily Sun has described the graduates of elite schools, especially those in the Ivy League, as having a “smug sense of success” because they believe “gaining entrance into the Ivy League is an accomplishment unto itself.”[citation needed]

I wonder what pronouncements of Russell and Chomsky Sowell was referring to. I don’t recall reading anything bad by Russell, and Chomsky’s ideas about politics are not that bad. I’m not familiar with the last example.

Paglia again made some nice remarks.

In one of the articles quoted above, there is a ref to an interview with Camille Paglia.

It is rather funny. :D Here is a pdf, and some quotes from it.

Stripping is “a sacred dance of pagan origins” and the money men stuff into G-strings is a “ritual offering.” “The more a woman takes off her clothes, the more power she has” and feminists hate strippers because “modern professional women cannot stand the thought that their hard-won achievements can be outweighed in an instant by a young hussy flashing a little tits and ass.”

She was asked to resign from Bennington after she kicked one student and got into a fistfight with another. A lawyer helped her stay on for two more years. She left to begin a successful teaching career at the Philadelphia College of Performing Arts, which is now the University of the Arts, where she remains.

PLAYBOY: Are you a feminist?

PAGLIA: I’m absolutely a feminist. The reason other feminists don’t like me is that I criticize the movement, explaining that it needs a correction. Feminism has betrayed women, alienated men and women, replaced dialogue with political correctness. PC feminism has boxed women in. The idea that feminism–that liberation from domestic prison–is going to bring happiness is just wrong. Women have advanced a great deal, but they are no happier. The happiest women I know are not those who are balancing their careers and families, like a lot of my friends are. The happiest people I know are the women–like my cousins–who have a high school education, got married immediately after graduating and never went to college. They are very religious and they never question their Catholicism. They do not regard the house as a prison.

I seem to recall that women’s happiness is declining as they gain more freedom. Perhaps that’s the data she is referring to. I did a quick Google and found this.

PLAYBOY: Do you support the men’s movement?

PAGLIA: I think it’s absolutely necessary. It’s no coincidence that Tim Allen’s book is vying with the Pope’s for the top of the best-seller lists. He is one of the voices of men who are looking to define masculinity in this age. Robert Bly does this, too. We have allowed the sexual debate to be defined by women, and that’s not right. Men must speak, and speak in their own voices, not voices coerced by feminist moralists. Warren Farrell, in The Myth of Male Power, points out how much propaganda has infiltrated the culture. For example, he says that the assertion that women earn so much less than men is bullshit. The reason women earn less than men is that women don’t want the dirty jobs. They aren’t picking up the garbage, taking the janitorial jobs and so on. They aren’t taking the sales commission jobs that require you to work all night and on weekends. Most women like clean, safe offices, which is why they are still secretaries. They don’t want to get too dirty. Also, women want offices to be nice, happy places. What bullshit. The women’s movement is rooted in the belief that we don’t even need men. All it will take is one natural disaster to prove how wrong that is. Then, the only thing holding this culture together will be masculine men of the working class. The cultural elite–women and men–will be pleading for the plumbers and the construction workers. We are such a parasitic class.

I began to realize this in the Seventies when I thought women could do it on their own. But then something would go wrong with my car and I’d have to go to the men. Men would stop, men would lift up the hood, more men would come with a truck and take the car to a place where there were other men who would call other men who would arrive with parts. I saw how feminism was completely removed from this reality.

I also learned something from the men at the garage. At Bennington, I would go to a faculty meeting and be aware that everyone hated me. The men were appalled by a strong, loud woman. But I went to this auto shop and the men there thought I was cute. “Oh, there’s that Professor Paglia from the college.” The real men, men who work on cars, find me cute. They are not frightened by me, no matter how loud I am. But the men at the college were terrified because they are eunuchs, and I threatened every goddamned one of them.

:D

PLAYBOY: Do you think that feminism is antisexual?

PAGLIA: The problem with America is that there’s too little sex, not too much. The more our instincts are repressed, the more we need sex, pornography and all that. The problem is that feminists have taken over with their attempts to inhibit sex. We have a serious testosterone problem in this country.

PLAYBOY: Caused by what?

PAGLIA: It’s a mess out there. Men are suspicious of women’s intentions. Feminism has crippled them. They don’t know when to make a pass. If they do make a pass, they don’t know if they’re going to end up in court.

PLAYBOY: Is that why you’ve been so critical about the growing number of sexual harassment cases?

PAGLIA: Yes, though I believe in moderate sexual harassment guidelines. But you can’t have the Stalinist situation we have in America right now, where any neurotic woman can make any stupid charge and destroy a man’s reputation. If there is evidence of false accusation, the accuser should be expelled. Similarly, a woman who falsely accuses a man of rape should be sent to jail. My definition of sexual harassment is specific. It is only sexual harassment–by a man or a woman–if it is quid pro quo. That is, if someone says, “You must do this or I’m going to do that”–for instance, fire you. And whereas touching is sexual harassment, speech is not. I am militant on this. Words must remain free. The solution to speech is that women must signal the level of their tolerance–women are all different. Some are very bawdy.

PLAYBOY: What about women who are easily offended and too scared or intimidated to speak up?

PAGLIA: Too bad. You must develop the verbal tools to counter offensive language. That’s life. Feminism has created a privileged, white middle class of girls who claim they’re victims because they want to preserve their bourgeois decorum and passivity.

Amen. Recall Sweden’s rape laws?

Sweden has one of the toughest laws on sexual crime in the world – lawyers sometimes joke that men need written permission first.

… (these quotes are from BBC)

Under Swedish law, there are legal gradations of the definition of rape.

There is the most serious kind, involving major violence.

But below that there is the concept of ‘regular rape’, still involving violence but not violence of the utmost horror.

And below that there is the idea of ‘unlawful coercion’. Talking generally, and not about the Assange case, this might involve putting emotional pressure on someone.

The three categories involve prison sentences of 10, six and four years respectively.

Putting emotional pressure on someone? wtf

The case may turn on if or when consensual sex turned into non-consensual sex – is a male decision not to use a condom a case of that, for example?

Under Swedish law, Mr Assange has not been formally charged. He has merely been accused and told he has questions to answer.

The process is for the prosecutor to question him to see if a formal criminal accusation should then be laid before a court.

There would then be a hearing in front of some lay people to see if that formal charge should go to a formal trial.

The attitude towards rape in Sweden – informed by a strong sense of women’s rights – means that it is more likely to be reported to police.

Some 53 rape offences are reported per 100,000 people, the highest rate in Europe.

The figures may reflect a higher number of actual rapes committed but it seems more likely that tough attitudes and a broader definition of the crime are more significant factors.

… (back to Paglia interview)

PLAYBOY: You once said that you look through the eyes of a rapist. What did you mean?

PAGLIA: I have lesbian impulses, so I understand how a man looks at a woman.

PLAYBOY: Why did you say a rapist rather than a man?

PAGLIA: Men do look at women as rapists. When I was growing up, it wasn’t possible for me to do anything about my attraction to women. Lesbianism didn’t exist in that time, as far as I knew. If I were young today, when everyone is experimenting–bisexuality is in with a lot of young women–it would have been different. But I always felt frustrated and excluded, looking in from a distance. As a woman, I couldn’t rape–it’s not possible–but if I had been a man with similar feelings, who knows? I developed a stalking thing.

PLAYBOY: When does that kind of lust become rape?

PAGLIA: There may have been cases when I would have gone over the line. I understand when men complain about women giving mixed messages, because women have given me a lot of mixed messages. I understand the rage that this can cause.

PLAYBOY: Give us an example.

PAGLIA: A woman I’m talking with at some event says, “Let’s leave here and go to this bar,” which is a lesbian bar. We go to the bar and we’re talking and then she says, “Let’s go have coffee,” and we go to this coffee shop and end up, at three in the morning, half a block from her apartment. Finally, she says, “All right, well, goodnight.” She’s ready to go home alone and I look at her, like, “What do you mean? Aren’t we going to go back to your apartment?” “No.” “What?” And she says, “Do you think I was leading you on?” Un-fucking-believable. I can’t tell you the rage. I am, at that point, looking at her and…. All I can say is, if I had been an 18-year-old street kid instead of a 45-year-old woman, I would have stabbed her. I was completely humiliated and furious. If I had been a guy with a hard-on, I would have hit her.

PLAYBOY: Would you have been justified in hitting her?

PAGLIA: That’s not the point. The point is that I would have. Women must be aware of the signals they send out, aware that, at three in the morning, with that flirting, they have created expectations. If they fail to fulfill those expectations, they can be in trouble. They could be out with a Ted Bundy or a Jeffrey Dahmer. A woman cannot go on a date, have a bunch of drinks and go back to some guy’s dorm room or apartment and then, when he jumps on her, cry date rape. Most people aren’t sure what’s going to happen on a first date. Given that ambiguity, every woman must be totally aware at every moment that she is responsible for every choice she makes.

PLAYBOY: Is there a certain personality type that becomes obsessed?

PAGLIA: I collected 599 pictures of Elizabeth Taylor–some people find that obsessive. I collected 599. Not 600, but 599. I feel that genius and obsession may be the same thing. It is rare when a woman is driven by obsession. Similarly, it is rare when a woman is a genius. That’s why I said one of my most notorious sentences, that there is no woman Mozart because there is no woman Jack the Ripper. Men are more prone to obsession because they are fleeing domination by women. They flee to a chess game or to a computer or to fixing a car, or whatever, to attempt to complete their identities, because they always feel incomplete.

PLAYBOY: Why do cars or computers complete our identities?

PAGLIA: Because they are separate from the emotion that is fixated on women. Very masculine men are not at home in the world of emotion, which requires judgments that are not cause and effect. Heterosexuals have a kind of tunnel vision, which is a virtue, in my opinion. It allows them to make the great breakthroughs in music or science. The feminist line is that there are no women Mozarts because we have been trained to believe that we can’t succeed in that field or we were never given the opportunity to excel because we were being groomed to be wives. I don’t think that anymore. It’s hormones.

PLAYBOY: You have said that you disagree with Germaine Greer’s contrary opinion–that the greatest artists are not women because “you cannot get great art from mutilated egos.”

PAGLIA: The fact is, you get great art only from mutilated egos. Only mutilated egos are obsessive enough. When I entered graduate school in 1968, I thought women were going to have all these enormous achievements, that they would redo everything. Then I saw every one of my female friends–these great minds who were going to transform the world–get married, move because their husbands moved and have babies. I screamed at them: What are you doing? Finish your great book! But they all read me the riot act. They said, “Camille, we are not you.” They said, “We want life. We want love. We want happiness. We are not happy–like you are–just living off ideas.” I am weird. I am more like Dahmer was or Hinckley. I’m like one of those obsessives. Or Dante.

Some quotes from Every Thing Must Go (Ladyman, Ross, and others)

every thing must go

 

Preface

This is a polemical book. One of its main contentions is that contemporary analytic metaphysics, a professional activity engaged in by some extremely intelligent and morally serious people, fails to qualify as part of the enlightened pursuit of objective truth, and should be discontinued. We think it is impossible to argue for a point like this without provoking some anger. Suggesting that a group of highly trained professionals have been wasting their talents—and, worse, sowing systematic confusion about the nature of the world, and how to find out about it—isn’t something one can do in an entirely generous way. Let us therefore stress that we wrote this book not in a spirit of hostility towards philosophy or our fellow philosophers, but rather the opposite. We care a great deal about philosophy, and are therefore distressed when we see its reputation harmed by its engagement with projects and styles of reasoning we believe bring it into disrepute, especially among scientists. We recognize that we may be regarded as a bit rough on some other philosophers, but our targets are people with considerable influence rather than novitiates. We think the current degree of dominance of analytic metaphysics within philosophy is detrimental to the health of the subject, and make no apologies for trying to counter it.

-

1 In Defence of Scientism

The revival of metaphysics after the implosion of logical positivism was accompanied by the ascendancy of naturalism in philosophy, and so it seemed obvious to many that metaphysics ought not to be ‘revisionary’ but ‘descriptive’ (in Peter Strawson’s terminology, 1959). That is, rather than metaphysicians using rational intuition to work out exactly how the absolute comes to self-consciousness, they ought instead to turn to science and concentrate on explicating the deep structural claims about the nature of reality implicit in our best theories. So, for example, Special Relativity ought to dictate the metaphysics of time, quantum physics the metaphysics of substance, and chemistry and evolutionary biology the metaphysics of natural kinds. However, careful work by various philosophers of science has shown us that this task is not straightforward because science, usually and perhaps always, underdetermines the metaphysical answers we are seeking. (See French 1998, 93.) Many people have taken this in their stride and set about exploring the various options that are available. Much excellent work has resulted.⁹ However, there has also been another result of the recognition that science doesn’t wear metaphysics on its sleeve, namely the resurgence of the kind of metaphysics that floats entirely free of science. Initially granting themselves permission to do a bit of metaphysics that seemed closely tied to, perhaps even important to, the success of the scientific project, increasing numbers of philosophers lost their positivistic spirit. The result has been the rise to dominance of projects in analytic metaphysics that have almost nothing to do with (actual) science. Hence there are now, once again, esoteric debates about substance, universals, identity, time, properties, and so on, which make little or no reference to science, and worse, which seem to presuppose that science must be irrelevant to their resolution. They are based on prioritizing armchair intuitions about the nature of the universe over scientific discoveries. Attaching epistemic significance to metaphysical intuitions is anti-naturalist for two reasons. First, it requires ignoring the fact that science, especially physics, has shown us that the universe is very strange to our inherited conception of what it is like. Second, it requires ignoring central implications of evolutionary theory, and of the cognitive and behavioural sciences, concerning the nature of our minds.

-

1.2.1 Intuitions and common sense in metaphysics

The idea that intuitions are guides to truth, and that they constitute the basic data for philosophy, is of course part of the Platonic and Cartesian rationalist tradition.¹⁰ However, we have grounds that Plato and Descartes lacked for thinking that much of what people find intuitive is not innate, but is rather a developmental and educational achievement. What counts as intuitive depends partly on our ontogenetic cognitive makeup and partly on culturally specific learning. Intuitions are the basis for, and are reinforced and modified by, everyday practical heuristics for getting around in the world under various resource (including time) pressures, and navigating social games; they are not cognitive gadgets designed to produce systematically worthwhile guidance in either science or metaphysics. In light of the dependence of intuitions on species, cultural, and individual learning histories, we should expect developmental and cultural variation in what is taken to be intuitive, and this is just what we find. In the case of judgements about causes, for example, Morris et al. (1995) report that Chinese and American subjects differed with respect to how they spontaneously allocated causal responsibility to agents versus environmental factors. Given that the ‘common sense’ of many contemporary philosophers is shaped and supplemented by ideas from classical physics, the locus of most metaphysical discussions is an image of the world that sits unhappily between the manifest image and an out of date scientific image.¹¹

 

While contemporary physics has become even more removed from common sense than classical physics, we also have other reasons to doubt that our common sense image of the world is an appropriate basis for metaphysical theorizing. Evolution has endowed us with a generic theory or model of the physical world. This is evident from experiments with very young children, who display surprise and increased attention when physical objects fail to behave in standard ways. In particular, they expect ordinary macroscopic objects to persist over time, and not to be subject to fusion or fission (Spelke et al. 1995). For example, if a ball moves behind a screen and then two balls emerge from the other side, or vice versa, infants are astonished. We have been equipped with a conception of the nature of physical objects which has been transformed into a foundational metaphysics of individuals, and a combinatorial and compositional conception of reality that is so deeply embedded in philosophy that it is shared as a system of ‘obvious’ presuppositions by metaphysicians who otherwise disagree profoundly.

 

This metaphysics was well suited to the corpuscularian natural philosophy of Descartes, Boyle, Gassendi, and Locke. Indeed, the primary qualities of matter which became the ontological basis of the mechanical philosophy are largely properties which form part of the manifest image of the world bequeathed to us by our natural history. That natural history has been a parochial one, in the sense that we occupy a very restricted domain of space and time. We experience events that last from around a tenth of a second to years. Collective historical memory may expand that to centuries, but no longer. Similarly, spatial scales of a millimetre to a few thousand miles are all that have concerned us until recently. Yet science has made us aware of how limited our natural perspective is. Protons, for example, have an effective diameter of around 10⁻¹⁵ m, while the diameter of the visible universe is more than 10¹⁹ times the radius of the Earth. The age of the universe is supposed to be of the order of 10 billion years. Even more homely sciences such as geology require us to adopt time scales that make all of human history seem like a vanishingly brief event.

 

As Lewis Wolpert (1992) chronicles, modern science has consistently shown us that extrapolating our pinched perspective across unfamiliar scales, magnitudes, and spatial and temporal distances misleads us profoundly. Casual inspection and measurement along scales we are used to suggest that we live in a Euclidean space; General Relativity says that we do not. Most people, Wolpert reports, are astounded to be told that there are more molecules in a glass of water than there are glasses of water in the oceans, and more cells in one human finger than there are people in the world (ibid. 5). Inability to grasp intuitively the vast time scales on which natural selection works is almost certainly crucial to the success of creationists in perpetuating foolish controversies about evolution (Kitcher 1982). The problems stemming from unfamiliar measurement scales are just the tip of an iceberg of divergences between everyday expectations and scientific findings. No one’s intuitions, in advance of the relevant science, told them that white light would turn out to have compound structure, that combustion primarily involves something being taken up rather than given off (Wolpert 1992, 4), that birds are the only living descendants of dinosaurs, or that Australia is presently on its way to a collision with Alaska. As Wolpert notes, science typically explains the familiar in terms of the unfamiliar. Thus he rightly says that ‘both the ideas that science generates and the way in which science is carried out are entirely counter-intuitive and against common sense—by which I mean that scientific ideas cannot be acquired by simple inspection of phenomena and that they are very often outside everyday experience’ (ibid. 1). He later strengthens the point: ‘I would almost contend that if something fits with common sense it almost certainly isn’t science’ (ibid. 11). B. F. Skinner characteristically avoids all waffling on the issue: ‘What, after all, have we to show for non-scientific or pre-scientific good judgment, or common sense, or the insights gained through personal experience? It is science or nothing’ (Skinner 1971, 152–3).

 

Lewis famously advocated a metaphysical methodology based on subjecting rival hypotheses to a cost–benefit analysis. Usually there are two kinds of cost associated with accepting a metaphysical thesis. The first is accepting some kind of entity into one’s ontology, for example, abstracta, possibilia, or a relation of primitive resemblance. The second is relinquishing some intuitions, for example, the intuition that causes antedate their effects, that dispositions reduce to categorical bases, or that facts about identity over time supervene on facts about instants of time. It is taken for granted that abandoning intuitions should be regarded as a cost rather than a benefit. By contrast, as naturalists we are not concerned with preserving intuitions at all, and argue for the wholesale abandonment of those associated with the image of the world as composed of little things, and indeed of the more basic intuition that there must be something of which the world is made.

 

There are many examples of metaphysicians arguing against theories by pointing to unintuitive consequences, or comparing theories on the basis of the quantity and quality of the intuitions with which they conflict. Indeed, proceeding this way is more or less standard. Often, what is described as intuitive or counterintuitive is recondite. For example, L. A. Paul (2004, 171) discusses the substance theory that makes the de re modal properties of objects primitive consequences of their falling under the sortals that they do: ‘A statue is essentially statue shaped because it falls under the statue-sort, so cannot persist through remoulding into a pot’ (171). This view apparently has ‘intuitive appeal’, but sadly, ‘any counterintuitive consequences of the view are difficult to explain or make palatable’. The substance theory implies that two numerically distinct objects such as a lump of bronze and a statue can share their matter and their region, but this ‘is radically counterintuitive, for it seems to contradict our usual way of thinking about material objects as individuated by their matter and region’ (172). Such ways of thinking are not ‘usual’ except among metaphysicians and we do not share them.

 

Paul says ‘[I]t seems, at least prima facie, that modal properties should supervene on the nonmodal properties shared by the statue and the lump’ (172). This is the kind of claim that is regularly made in the metaphysics literature. We have no idea whether it is true, and we reject the idea that such claims can be used as data for metaphysical theorizing. Paul summarizes the problem for the advocate of substance theory as follows: ‘This leaves him in the unfortunate position of being able to marshal strong and plausible commonsense intuitions to support his view but of being unable to accommodate these intuitions in a philosophically respectable way’ (172). So according to Paul, metaphysics proceeds by attempts to construct theories that are intuitive, commonsensical, palatable, and philosophically respectable. The criteria of adequacy for metaphysical systems have clearly come apart from anything to do with the truth. Rather they are internal and peculiar to philosophy, they are semi-aesthetic, and they have more in common with the virtues of story-writing than with science.

-

In 1.1 we announced our resistance to the ‘domestication’ of science. It would be easy to get almost any contemporary philosopher to agree that domestication is discreditable if the home for which someone tries to make science tame is a populist environment. Consider, for example, the minor industry that seeks to make sense of quantum mechanics by analogies with Eastern mysticism. This is obviously, in an intellectual context much less rigorous than that of professional philosophy, an attempt to domesticate physics by explaining it in terms of things that common sense thinks it comprehends. Few philosophers will regard the gauzy analogies found in this genre as being of the slightest metaphysical interest. Yet are quantum processes any more like those described by Newtonian physics than they are like the temporal and spatial dislocations imagined by mystics, which ground the popular comparisons? People who know almost no formal physics are encouraged by populists to find quantum mechanics less wild by comparing it to varieties of disembodiment. Logically, this is little different from philosophers encouraging people who know a bit of physics to make quantum accounts seem less bizarre by comparing them to what they learned in A-level chemistry.²⁸ We might thus say that whereas naturalistic metaphysics ought to be a branch of the philosophy of science, much metaphysics that pays lip-service to naturalism is really philosophy of A-level chemistry.

-

and then i got bored with the next parts.

 

Wikipedia on contraception, abortion, and everything in between!

I was really just curious to know how whores in older times avoided getting pregnant… but it turned into a longer read. Here are some excerpts. Enjoy :)

en.wikipedia.org/wiki/History_of_condoms

The history of condoms goes back at least several centuries, and perhaps beyond. For most of their history, condoms have been used both as a method of birth control, and as a protective measure against sexually transmitted diseases. Condoms have been made from a variety of materials; prior to the 19th century, chemically treated linen and animal tissue (intestine or bladder) are the best documented varieties. Rubber condoms gained popularity in the mid-19th century, and in the early 20th century major advances were made in manufacturing techniques. Prior to the introduction of the combined oral contraceptive pill, condoms were the most popular birth control method in the Western world. In the second half of the 20th century, the low cost of condoms contributed to their importance in family planning programs throughout the developing world. Condoms have also become increasingly important in efforts to fight the AIDS pandemic.

Distribution of condoms in the United States was limited by passage of the Comstock laws, which included a federal act banning the mailing of contraceptive information (passed in 1873) as well as State laws that banned the manufacture and sale of condoms in thirty states.[1]:144,193 In Ireland the 1889 Indecent Advertisements Act made it illegal to advertise condoms, although their manufacture and sale remained legal.[1]:163-4,168 Contraceptives were illegal in 19th century Italy and Germany, but condoms were allowed for disease prevention.[1]:169-70 Despite legal obstacles, condoms continued to be readily available in both Europe and America, widely advertised under euphemisms such as male shield and rubber good.[1]:146-7 In late 19th century England, condoms were known as “a little something for the weekend”.[1]:165 Only in the Republic of Ireland were condoms effectively outlawed. There, their sale and manufacture remained illegal until the 1970s.[1]:171

In the 1960s and 1970s quality regulations tightened,[1]:267,285 and legal barriers to condom use were removed. In 1965, the U.S. Supreme Court case Griswold v. Connecticut struck down one of the remaining Comstock laws, the bans of contraception in Connecticut and Massachusetts. France repealed its anti-birth control laws in 1967. Similar laws in Italy were declared unconstitutional in 1971. Captain Beate Uhse in Germany founded a birth control business, and fought a series of legal battles to continue her sales.[1]:276-9 In Ireland, legal condom sales (only to people over 18, and only in clinics and pharmacies) were allowed for the first time in 1978. (All restrictions on Irish condom sales were lifted in 1993.)[1]:329-30

The first New York Times story on acquired immunodeficiency syndrome (AIDS) was published on July 3, 1981.[1]:294 In 1982 it was first suggested that the disease was sexually transmitted.[10] In response to these findings, and to fight the spread of AIDS, the U.S. Surgeon General Dr. C. Everett Koop supported condom promotion programs. However, President Ronald Reagan preferred an approach of concentrating only on abstinence programs. Some opponents of condom programs stated that AIDS was a disease of homosexuals and illicit drug users, who were just getting what they deserved. In 1990 North Carolina senator Jesse Helms argued that the best way to fight AIDS would be to enforce state sodomy laws.[1]:296-7

Their claims about AIDS and homosexuals remind me of

en.wikipedia.org/wiki/Gay-related_immune_deficiency

Gay-related immune deficiency (GRID) (sometimes informally called the gay plague) was the 1982 name first proposed to describe an “unexpected cluster of cases”[1] of what is now known as AIDS,[2] after public health scientists noticed clusters of Kaposi’s sarcoma and pneumocystis pneumonia among gay males in Southern California and New York City.[1]

en.wikipedia.org/wiki/Contraception#History

Birth control, contraception, family planning or fertility control[1] refers to the usage of methods or devices intended to control the incidence of a pregnancy.[2][3] Some include the termination of pregnancy in the definition.[4]

There are a number of ways that a female can engage in sexual activity while reducing or otherwise controlling the risk of becoming pregnant. Available contraception methods include barrier methods, such as condoms and diaphragms; hormonal contraception including oral pills, patches and vaginal rings, injectable contraceptives, and intrauterine devices.[5] Birth control options shortly after sex include emergency contraceptives.[6] Permanent methods include sterilization. Some people regard abstinence as a contraception method, as well as engaging in sexual activity which does not involve penile-vaginal penetration.

While methods of birth control have been used since ancient times, effective and safe methods only became available in the 20th century.[5] For some people, birth control involves moral issues, and many countries limit access to contraception due to the moral and political issues involved.[5] Some argue, for example, that the availability of contraception increases the level of sexual activity within society.

In modern Europe, knowledge of herbal abortifacients and contraceptives to regulate fertility has largely been lost.[41] Historian John M. Riddle found that this remarkable loss of basic knowledge can be attributed to attempts of the early modern European states to “repopulate” Europe after dramatic losses following the plague epidemics that started in 1348.[41] According to Riddle, one of the policies implemented by the church and supported by feudal lords to destroy the knowledge of birth control included the initiation of witch hunts against midwives, who had knowledge of herbal abortifacients and contraceptives.[41][42][43]

On December 5, 1484, Pope Innocent VIII issued the Summis desiderantes affectibus, a papal bull in which he recognized the existence of witches and gave full papal approval for the Inquisition to proceed “correcting, imprisoning, punishing and chastising” witches “according to their deserts.” In the bull, which is sometimes referred to as the “Witch-Bull of 1484”, the witches were explicitly accused of having “slain infants yet in the mother’s womb” (abortion) and of “hindering men from performing the sexual act and women from conceiving” (contraception).[44] Famous texts that served to guide the witch hunt and instruct magistrates on how to find and convict so-called “witches” include the Malleus Maleficarum, and Jean Bodin’s De la demonomanie des sorciers.[45] The Malleus Maleficarum was written by the priest J. Sprenger (born in Rheinfelden, today Switzerland), who was appointed by Pope Innocent VIII as the General Inquisitor for Germany around 1475, and H. Institoris, who at the time was inquisitor for Tyrol, Salzburg, Bohemia and Moravia. The authors accused witches, among other things, of infanticide and having the power to steal men’s penises.[46]

Barrier methods such as the condom have been around much longer, but were seen primarily as a means of preventing sexually transmitted diseases, not pregnancy. Casanova in the 18th century was one of the first reported to have used “assurance caps” to prevent impregnating his mistresses.[47]

en.wikipedia.org/wiki/Comstock_laws

The Comstock Act, 17 Stat. 598, enacted March 3, 1873, was a United States federal law which amended the Post Office Act[1] and made it illegal to send any “obscene, lewd, and/or lascivious” materials through the mail, including contraceptive devices and information. In addition to banning contraceptives, this act also banned the distribution of information on abortion for educational purposes. Twenty-four states passed similar prohibitions on materials distributed within the states.[2] These state and federal restrictions are collectively known as the Comstock laws.

The Comstock Laws were variously case tested, but courts struggled to establish definitive thinking about the laws. One of the most notable applications of Comstock was Roth v. United States, in which the Supreme Court affirmed Comstock, but set limits on what could be considered obscene. This landmark case represented one of the first notable revisions since the Hicklin test, and the evolving nature of the laws on which Comstock was conceived.

The sale and distribution of obscene materials had been prohibited prior to Comstock in most American states since the early 19th century, and by federal law since 1873. Federal anti-obscenity laws are currently still in effect and enforced,[3][4] though the definition of obscenity has changed much (now expressed in the Miller Test) and extensive debates on what is obscene continue.

The Comstock laws banned distribution of sex education information, based on the premise that it was obscene and led to promiscuous behavior.[6] Mary Ware Dennett was fined $300 in 1928 for distributing a pamphlet containing sex education material. The American Civil Liberties Union (ACLU), led by Morris Ernst, appealed her conviction and won a reversal, in which judge Learned Hand ruled that the pamphlet’s main purpose was to “promote understanding”.[6]

Publications addressing homosexuality were automatically deemed obscene under the Comstock Act until 1958.[7] In One, Inc. v. Olesen, as a follow-on to Roth v. United States, the Supreme Court granted free press rights around homosexuality.

In 1915, architect William Sanger was charged under the New York law against disseminating contraceptive information.[10] In 1918, his wife Margaret Sanger was similarly charged. On appeal, her conviction was reversed on the grounds that contraceptive devices could legally be promoted for the cure and prevention of disease.[11]

The prohibition of devices advertised for the explicit purpose of birth control was not overturned for another eighteen years. During World War I, U.S. servicemen were the only members of the Allied forces sent overseas without condoms, which led to more widespread STDs among U.S. troops. In 1932, Sanger arranged for a shipment of diaphragms to be mailed from Japan to a sympathetic doctor in New York City. When U.S. customs confiscated the package as illegal contraceptive devices, Sanger helped file a lawsuit. In 1936, a federal appeals court ruled in United States v. One Package of Japanese Pessaries that the federal government could not interfere with doctors providing contraception to their patients.[11]

In 1965, the U.S. Supreme Court case Griswold v. Connecticut struck down one of the remaining contraception Comstock laws in Connecticut and Massachusetts. However, Griswold only applied to marital relationships. Eisenstadt v. Baird (1972) extended its holding to unmarried persons as well.

en.wikipedia.org/wiki/Miller_Test

The Miller test (also called the Three Prong Obscenity Test[1]) is the United States Supreme Court’s test for determining whether speech or expression can be labeled obscene, in which case it is not protected by the First Amendment to the United States Constitution and can be prohibited.

The Miller test was developed in the 1973 case Miller v. California.[2] It has three parts:

• whether the average person, applying contemporary community standards, would find that the work, taken as a whole, appeals to the prurient interest;
• whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable state law; and
• whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.

The work is considered obscene only if all three conditions are satisfied.

The first two prongs of the Miller test are held to the standards of the community, and the last prong is held to what is reasonable to a person of the United States as a whole. The national reasonable person standard of the third prong acts as a check on the community standard of the first two prongs, allowing protection for works that in a certain community might be considered obscene but on a national level might have redeeming value.
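As a toy way to see the structure: the test is a strict conjunction, with the first two inputs judged by local community standards and the third by a national standard. A minimal sketch (function and parameter names are mine, not legal terminology):

```python
# Toy encoding of the Miller test's structure (names are mine, not legal
# terminology): a work is obscene only if ALL three prongs are satisfied.
def miller_obscene(appeals_to_prurient_interest_locally: bool,
                   patently_offensive_locally: bool,
                   lacks_serious_value_nationally: bool) -> bool:
    return (appeals_to_prurient_interest_locally
            and patently_offensive_locally
            and lacks_serious_value_nationally)

# A work offensive by some community's standards but with serious artistic
# value judged nationally is not obscene, hence protected speech.
print(miller_obscene(True, True, False))  # -> False
```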

For legal scholars, several issues are important. One is that the test allows for community standards rather than a national standard. What offends the average person in Nacogdoches, Texas, may differ from what offends the average person in Chicago. The relevant community, however, is not defined.

Another important issue is that Miller asks for an interpretation of what the “average” person finds offensive, rather than what the more sensitive persons in the community are offended by, as obscenity was defined by the previous test, the Hicklin test, stemming from the English precedent.

In practice, pornography showing genitalia and sexual acts is not ipso facto obscene according to the Miller test. For instance, in 2000 a jury in Provo, Utah, took only a few minutes to clear Larry Peterman, owner of a Movie Buffs video store, in Utah County, Utah, a region which had often boasted of being one of the most conservative areas in the US. Researchers had shown that guests at the local Marriott Hotel were disproportionately large consumers of pay-per-view pornographic material, accessing far more material than the store was distributing.[4][5]

en.wikipedia.org/wiki/Combined_oral_contraceptive_pill

The combined oral contraceptive pill (COCP), often referred to as the birth-control pill or colloquially as “the Pill“, is a birth control method that includes a combination of an estrogen (oestrogen) and a progestin (progestogen). When taken by mouth every day, these pills inhibit female fertility. They were first approved for contraceptive use in the United States in 1960, and are a very popular form of birth control. They are currently used by more than 100 million women worldwide and by almost 12 million women in the United States.[6][7] Usage varies widely by country,[8] age, education, and marital status: one third of women[9] aged 16–49 in the United Kingdom currently use either the combined pill or a progestogen-only “minipill“,[10] compared to only 1% of women in Japan.[11]

The placebo pills allow the user to take a pill every day, remaining in the daily habit even during the week without hormones. Placebo pills may contain an iron supplement,[14][15] as iron requirements increase during menstruation.

Rather clever.

Less frequent placebos

Main article: Extended cycle combined oral contraceptive pill

If the pill formulation is monophasic, it is possible to skip withdrawal bleeding and still remain protected against conception by skipping the placebo pills and starting directly with the next packet. Attempting this with bi- or tri-phasic pill formulations carries an increased risk of breakthrough bleeding and may be undesirable. It will not, however, increase the risk of getting pregnant.

Starting in 2003, women have also been able to use a three-month version of the Pill.[17] Similar to the effect of using a constant-dosage formulation and skipping the placebo weeks for three months, Seasonale gives the benefit of less frequent periods, at the potential drawback of breakthrough bleeding. Seasonique is another version in which the placebo week every three months is replaced with a week of low-dose estrogen.

A version of the combined pill, marketed as Anya or Lybrel, has also been packaged to completely eliminate placebo pills and withdrawal bleeds. Studies have shown that after seven months, 71% of users no longer had any breakthrough bleeding, the most common side effect of going longer periods of time without breaks from active pills.[18]

Weight

A 1992 French review article noted that in the subgroup of adolescents 15–19 years of age in the 1982 National Survey of Family Growth (NSFG) who had stopped taking the Pill, 20–25% reported they stopped taking the Pill because of either acne or weight gain, and another 25% stopped because of fear of cancer.[26] A 1986 Hungarian study comparing two high-dose estrogen (both 50 µg ethinyl estradiol) pills found that women using a lower-dose biphasic levonorgestrel formulation (50 µg levonorgestrel x 10 days + 125 µg levonorgestrel x 11 days) reported a significantly lower incidence of weight gain compared to women using a higher-dose monophasic levonorgestrel formulation (250 µg levonorgestrel x 21 days).[42]

Many clinicians consider the public perception of weight gain on the Pill to be inaccurate and dangerous. A 2000 British review article concluded there is no evidence that modern low-dose pills cause weight gain, but that fear of weight gain contributed to poor compliance in taking the Pill and subsequent unintended pregnancy, especially among adolescents.[43]

More recently a Swedish study concluded that combined oral contraceptive use is not a predictor of weight increase in the long term. Postal questionnaires regarding weight/height and contraception were sent to random samples of 19-year-old women born in 1962 (n = 656) and 1972 (n = 780) in 1981 and 1991. The responders were followed longitudinally, and the same women were contacted again every fifth year from 1986–2006 and from 1996–2006, respectively. There was no significant difference in weight increase in the women grouped according to use or non-use of combined oral contraceptives or duration of combined oral contraceptive use. The two cohorts of women were grouped together in a longitudinal analysis, and the following factors were included in the model: age, combined oral contraceptive use, children, smoking, and exercise. The only predictor for weight increase was age (P < 0.001), resulting in a gain of 0.45 kg/year. Smokers decreased (P < 0.001) their weight by 1.64 kg per 15 years.[44]

Mortality

Overall, use of oral contraceptives appears to slightly reduce all-cause mortality, with a rate ratio for overall mortality of 0.87 (confidence interval: 0.79–0.96) when comparing ever-users of OCs with never-users.[58]

Environmental impact

A woman using COCPs excretes in her urine and feces natural estrogens, estrone (E1) and estradiol (E2), and the synthetic estrogen ethinylestradiol (EE2).[129] These hormones can pass through water treatment plants and into rivers.[130] Other forms of contraception, such as the contraceptive patch, use the same synthetic estrogen (EE2) that is found in COCPs, and can add to the hormonal concentration in the water when flushed down the toilet.[131] This excretion is shown to play a role in causing endocrine disruption, which affects sexual development and reproduction in wild fish populations in segments of streams contaminated by treated sewage effluents.[129][132] A study done in British rivers supported the hypothesis that the incidence and the severity of intersex wild fish populations were significantly correlated with the concentrations of the E1, E2, and EE2 in the rivers.[129]

A review of activated sludge plant performance found estrogen removal rates varied considerably but averaged 78% for estrone, 91% for estradiol, and 76% for ethinylestradiol (estriol effluent concentrations are between those of estrone and estradiol, but estriol is a much less potent endocrine disruptor to fish).[133] Effluent concentrations of ethinylestradiol are lower than estradiol which are lower than estrone, but ethinylestradiol is more potent than estradiol which is more potent than estrone in the induction of intersex fish and synthesis of vitellogenin in male fish.[134]

Cool.

en.wikipedia.org/wiki/Extended_cycle_combined_oral_contraceptive_pill

Extended cycle combined oral contraceptive pills are COCPs packaged to reduce or eliminate the withdrawal bleeding that occurs once every 28 days in traditionally packaged COCPs. Extended cycle use of COCPs may also be called menstrual suppression.[1]

Other combined hormonal contraceptives (those containing both an estrogen and a progestogen) may also be used in an extended or continuous cycle. For example, the NuvaRing vaginal ring[2] and the contraceptive patch[3] have been studied for extended cycle use, and the monthly combined injectable contraceptive may similarly eliminate bleeding.[4]

Before the advent of modern contraceptives, reproductive age women spent most of their time either pregnant or nursing. In modern western society women typically have about 450 periods during their lives, as compared to about 160 formerly.[5]

en.wikipedia.org/wiki/Condom#Other_uses

Other uses

Condoms excel as multipurpose containers because they are waterproof, elastic, durable, and will not arouse suspicion if found. Ongoing military utilization begun during World War II includes:

  • Tying a non-lubricated condom over the muzzle of the rifle barrel in order to prevent barrel fouling by keeping out detritus.[88]
  • The OSS used condoms for a plethora of applications, from storing corrosive fuel additives and wire garrotes (with the T-handles removed) to holding the acid component of a self-destructing film canister, to finding use in improvised explosives.[89]
  • Navy SEALs have used doubled condoms, sealed with neoprene cement, to protect non-electric firing assemblies for underwater demolitions—leading to the term “Dual Waterproof Firing Assemblies.”[90]

Other uses of condoms include:

  • Covers for endovaginal ultrasound probes.[91] Covering the probe with a condom reduces the amount of blood and vaginal fluids that the technician must clean off between patients.
  • Condoms can be used to hold water in emergency survival situations.[92]
  • Condoms have also been used to smuggle cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose.[93]
  • In Soviet gulags, condoms were used to smuggle alcohol into the camps by prisoners who worked outside during daylight. While outside, the prisoner would ingest an empty condom attached to a thin piece of rubber tubing, the end of which was wedged between his teeth. The smuggler would then use a syringe to fill the tubing and condom with up to three liters of raw alcohol, which the prisoner would then smuggle back into the camp. When back in the barracks, the other prisoners would suspend him upside down until all the spirit had been drained out. Aleksandr Solzhenitsyn records that the three liters of raw fluid would be diluted to make seven liters of crude vodka, and that although such prisoners risked an extremely painful and unpleasant death if the condom burst inside them, the rewards granted them by other prisoners encouraged them to run the risk.[94]
  • In his book entitled Last Chance to See, Douglas Adams reported having used a condom to protect a microphone he used to make an underwater recording. According to one of his traveling companions, this is standard BBC practice when a waterproof microphone is needed but cannot be procured.[95]
  • Condoms are used by engineers to keep soil samples dry during soil tests.[96]
  • Condoms are used in the field by engineers to initially protect sensors embedded in the steel or aluminum nose-cones of Cone Penetration Test (CPT) probes when entering the surface to conduct soil resistance tests to determine the bearing strength of soil.[97]
  • Condoms are used as a one-way valve by paramedics when performing a chest decompression in the field. The decompression needle is inserted through the condom, and inserted into the chest. The condom folds over the hub allowing air to exit the chest, but preventing it from entering.[98]

lol’d

en.wikipedia.org/wiki/Masters_and_Johnson#Four_stage_model_of_the_sexual_response

Four stage model of the sexual response

One of the most enduring and important aspects of their work has been the four stage model of sexual response, which they described as the human sexual response cycle. They defined the four stages of this cycle as:

• the excitement phase,
• the plateau phase,
• the orgasmic phase, and
• the resolution phase.

This model shows no difference between Freud‘s purported “vaginal orgasm” and “clitoral orgasm“: the physiologic response was identical, even if the stimulation was in a different place.

Masters and Johnson’s findings also revealed that men undergo a refractory period following orgasm during which they are not able to ejaculate again, whereas there is no refractory period in women: this makes women capable of multiple orgasm. They also were the first to describe the phenomenon of the rhythmic contractions of orgasm in both sexes occurring initially in 0.8 second intervals and then gradually slowing in both speed and intensity.

Laboratory comparison of homosexual male versus female sex

Masters and Johnson randomly assigned gay men into couples and lesbians into couples and then observed them having sex in the laboratory, at the Masters and Johnson Institute. They provided their observations in Homosexuality in Perspective:

Assigned male homosexual study subjects A, B, and C…, interacting in the laboratory with previously unknown male partners, did discuss procedural matters with these partners, but quite briefly. Usually, the discussion consisted of just a question or a suggestion, but often it was limited to nonverbal communicative expressions such as eye contact or hand movement, any of which usually proved sufficient to establish the protocol of partner interaction. No coaching or suggestions were made by the research team.

—p. 55

According to Masters and Johnson, this pattern differed in the lesbian couples:

While initial stimulative activity tended to be on a mutual basis, in short order control of the specific sexual experience usually was assumed by one partner. The assumption of control was established without verbal communication and frequently with no obvious nonverbal direction, although on one occasion discussion as to procedural strategy continued even as the couple was interacting physically.

—p. 55

en.wikipedia.org/wiki/History_of_abortion

The practice of abortion, the termination of a pregnancy so that it does not result in birth, dates back to ancient times. Pregnancies were terminated through a number of methods, including the administration of abortifacient herbs, the use of sharpened implements, the application of abdominal pressure, and other techniques.

Abortion laws and their enforcement have fluctuated through various eras. In many western nations during the 20th century various women’s rights groups, doctors, and social reformers successfully worked to have abortion bans repealed. While abortion remains legal in most of the West, this legality is regularly challenged by pro-life groups.[2]

Natural abortifacients

Art from a 13th-century illuminated manuscript features a herbalist preparing a concoction containing pennyroyal for a woman.

Botanical preparations reputed to be abortifacient were common in classical literature and folk medicine. Such folk remedies, however, varied in effectiveness and were not without the risk of adverse effects. Some of the herbs used at times to terminate pregnancy are poisonous.

A list of plants which cause abortion was provided in De viribus herbarum, an 11th-century herbal written in the form of a poem, the authorship of which is incorrectly attributed to Aemilius Macer. Among them were rue, Italian catnip, savory, sage, soapwort, cyperus, white and black hellebore, and pennyroyal.[16]

King’s American Dispensatory of 1898 recommended a mixture of brewer’s yeast and pennyroyal tea as “a safe and certain abortive”.[37] Pennyroyal has been known to cause complications when used as an abortifacient. In 1978 a pregnant woman from Colorado died after consuming 2 tablespoonfuls of pennyroyal essential oil,[38][39] which is known to be toxic.[40] In 1994 a pregnant woman, unaware of an ectopic pregnancy that needed immediate medical care, drank a tea containing pennyroyal extract to induce abortion without medical help. She later died as a result of the untreated ectopic pregnancy, having mistaken its symptoms for the abortifacient working.[41]

Tansy has been used to terminate pregnancies since the Middle Ages.[42] It was first documented as an emmenagogue in St. Hildegard of Bingen’s De simplicis medicinae.[16]

A variety of juniper, known as savin, was mentioned frequently in European writings.[3] In one case in England, a rector from Essex was said to have procured it for a woman he had impregnated in 1574; in another, a man wishing to relieve his girlfriend of a like condition recommended to her that black hellebore and savin be boiled together and drunk in milk, or else that chopped madder be boiled in beer. Other substances reputed to have been used by the English include Spanish fly, opium, watercress seed, iron sulphate, and iron chloride. Another mixture, not abortifacient, but rather intended to relieve missed abortion, contained dittany, hyssop, and hot water.[34]

The root of worm fern, called “prostitute root” in French, was used in France and Germany; it was also recommended by a Greek physician in the 1st century. In German folk medicine, there was also an abortifacient tea, which included marjoram, thyme, parsley, and lavender. Other preparations of unspecified origin included crushed ants, the saliva of camels, and the tail hairs of black-tailed deer dissolved in the fat of bears.[31]

19th century to present

“Admonition against abortion.” Late 19th-century Japanese Ukiyo-e woodblock print.

19th century medicine saw advances in the fields of surgery, anaesthesia, and sanitation, in the same era that doctors with the American Medical Association lobbied for bans on abortion in the United States[44] and the Parliament of the United Kingdom passed the Offences against the Person Act 1861.

Various methods of abortion were documented regionally in the 19th century and early 20th century. A paper published in 1870 on the abortion services to be found in Syracuse, New York, concluded that the method most often practiced there during this time was to flush the inside of the uterus with injected water. The article’s author, Ely Van de Warkle, claimed this procedure was affordable even to a maid, as a man in town offered it for $10 on an installment plan.[45] Other prices which 19th-century abortion providers are reported to have charged were much steeper. In Great Britain, it could cost from 10 to 50 guineas, or 5% of the yearly income of a lower middle class household.[3]

In France during the latter half of the 19th century, social perceptions of abortion started to change. In the first half of the 19th century, abortion was viewed as the last resort for pregnant but unwed women. But as writers began to write about abortion in terms of family planning for married women, the practice of abortion was reconceptualized as a logical solution to unwanted pregnancies resulting from ineffectual contraceptives.[46] The formulation of abortion as a form of family planning for married women was made “thinkable” because both medical and non-medical practitioners agreed on the relative safety of the procedure.[46]

In the United States and England, the latter half of the 19th century saw abortion become increasingly punished. One writer justified this by claiming that the number of abortions among married women had increased markedly since 1840.[47] In the United States, these laws had a limited effect on middle and upper class women who could, though often with great expense and difficulty, still obtain access to abortion, while poor and young women had access only to the most dangerous and illegal methods.[48]

After a rash of unexplained miscarriages in Sheffield, England, were attributed to lead poisoning caused by the metal pipes which fed the city’s water supply, a woman confessed to having used diachylon — a lead-containing plaster — as an abortifacient in 1898.[3] Criminal investigation of an abortionist in Calgary, Alberta in 1894 revealed through chemical analysis that the concoction he had supplied to a man seeking an abortifacient contained Spanish fly.[49]

Women of Jewish descent in Lower East Side, Manhattan are said to have carried the ancient Indian practice of sitting over a pot of steam into the early 20th century.[31] Dr. Evelyn Fisher wrote of how women living in a mining town in Wales during the 1920s used candles intended for Roman Catholic ceremonies to dilate the cervix in an effort to self-induce abortion.[3] Similarly, the use of candles and other objects, such as glass rods, penholders, curling irons, spoons, sticks, knives, and catheters was reported during the 19th century in the United States.[50]

Abortion remained a dangerous procedure into the early 20th century; more dangerous than childbirth until about 1930.[51] Of the estimated 150,000 abortions that occurred annually in the US during the early 20th century, one in six resulted in the woman’s death.[52]

Another case where prohibition simply makes things worse?

Effects of legislation on population

Abortion has been banned or restricted throughout history in countries around the world. Multiple scholars have noticed that in many cases, this has caused women to seek dangerous, illegal abortions underground or inspired trips abroad for “reproductive tourism”.[87][88][89] Half of the world’s current deaths due to unsafe abortions occur in Asia.[87]

Predictable. The same result as almost always happens (speeding tickets being the only exception i know of) when one makes something illegal, the law is unenforceable, and there is popular demand for the thing.

India

See also: Abortion in India

India enforced the Indian Penal Code from 1860 to 1971, criminalizing abortion and punishing both the practitioners and the women who sought out the procedure.[89] As a result, countless women died in an attempt to obtain illegal abortions from unqualified midwives and “doctors”.[89] Abortion was made legal under specific circumstances in 1971, but as scholar S. Chandrasekhar notes, lower class women still find themselves at a greater risk of injury or death as a result of a botched abortion.[89]

 

Thoughts and comments: Is psychology a science? (Paul Lutus)

www.arachnoid.com/psychology/index.html

 

In order to consider whether psychology is a science, we must first define our terms. It is not overarching to say that science is what separates human beings from animals, and, as time goes by and we learn more about our animal neighbors here on Earth, it becomes increasingly clear that science is all that separates humans from animals. We are learning that animals have feelings, passions, and certain rights. What animals do not have is the ability to reason, to rise above feeling.

 

Wat

The point here is that legal evidence is not remotely scientific evidence. Contrary to popular belief, science doesn’t use sloppy evidentiary standards like “beyond a reasonable doubt,” and scientific theories never become facts. This is why the oft-heard expression “proven scientific fact” is never appropriate – it only reflects the scientific ignorance of the speaker. Scientific theories are always theories, they never become the final and only explanation for a given phenomenon.

 

Meh. Sure is phil of sci 101 here.

Besides the confusing word usage “become facts” (wat), a scientific fact is just something that is beyond reasonable doubt and enjoys virtually unanimous agreement among the relevant scientists.

Apart from being filtered through all possible explanations, scientific theories have another important property – they must make predictions that can be tested and possibly falsified. In fact, and this may surprise you, scientific theories can only be falsified, they can never be proven true once and for all. That is why they are called “theories,” as certain as some of them are – it is always possible they may be replaced by better theories, ones that explain more, or are simpler, or that make more accurate predictions than their forebears.

 

No, that is not why they are called “theories”; they are called “theories” because that’s the word for “explanation” in science.

 

Nothing can be “proven true once and for all” with absolute certainty. This is not specific to science.

It’s very simple, really. If a theory doesn’t make testable predictions, or if the tests are not practical, or if the tests cannot lead to a clear outcome that supports or falsifies the theory, the theory is not scientific. This may come as another surprise, but very little of the theoretical content of human psychology meets this scientific criterion. As to the clinical practice of psychology, even less meets any reasonable definition of “scientific.”

 

Nonsense. There have been many scientific theories that we could not figure out how to test to begin with, but we later did, and the tests either confirmed or disconfirmed the theories.

Human psychology and the related fields of psychoanalysis and psychotherapy achieved their greatest acceptance and popularity in the 1950s, at which time they were publicly perceived as sciences. But this was never true, and it is not true today – human psychology has never risen to the status of a science, for several reasons

 

Derp. Conflation of psychoanalysis crap with good psychology.

 

Although, in his defense, he did somewhat announce this in the beginning:

Since its first appearance in 2003, this article has become required reading in a number of college-level psychology courses. Because this article is directed toward educated nonspecialist readers considering psychological treatment, students of psychology are cautioned that terms such as “psychology,” “clinical psychology” and “psychiatry” are used interchangeably, on the ground that they rely on the field of human psychology for validation, in the same way that astronomy and particle physics, though very different, rely on physics for validation.

But as to the study of human beings, there are severe limitations on what kinds of studies are permitted. As an example, if you want to know whether removing specific brain tissue results in specific behavioral changes, you cannot perform the study on humans. You have to perform it on animals and try to extrapolate the result to humans.

 

Eh. One can just look at case studies of people with brain injuries.

 

Besides, there are lots of studies that are allowed, and in the past we did some studies that probably would not be allowed today, say the Milgram Experiment or perhaps the Stanford Prison Experiment.

One of the common work-arounds to this ethical problem is to perform what are called “retrospective studies,” studies that try to draw conclusions from past events rather than setting up a formal laboratory experiment with strict experimental protocols and a control group. If you simply gather information about people who have had a certain kind of past experience, you are freed from the ethical constraint that prevents you from exposing experimental subjects to that experience in the present.

 

But, because of intrinsic problems, retrospective studies produce very poor evidence and science. For example, a hypothetical retrospective study meant to discover whether vitamin X makes people more intelligent may only “discover” that the people who took the vitamin were those intelligent enough to take it in the first place. In general, retrospective studies cannot reliably distinguish between causes and effects, and any conclusions drawn from them are suspect.

 

Think about this for a moment. In order for human psychology to be placed on a scientific footing, it would have to conduct strictly controlled experiments on humans, in some cases denying treatments or nutritional elements deemed essential to health (in order to have a control group), and the researchers would not be able to tell the subjects whether or not they were receiving proper care (in order not to bias the result). This is obviously unethical behavior, and it is a key reason why human psychology is not a science.

 

He is just wrong. It is possible to distinguish between causes and effects. One has to do more studies of different kinds. Etc. It is difficult but not impossible.
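To make the confounding point concrete, here is a minimal simulation (all numbers made up) of the vitamin X scenario: intelligence drives both vitamin-taking and the measured outcome, so a naive retrospective comparison finds an “effect” even though the vitamin does nothing, while stratifying on the confounder makes it vanish. Measuring and adjusting for confounders, or better, randomizing, is exactly the kind of extra study work that separates causes from effects.

```python
import random

random.seed(1)

# Hypothetical population: intelligence (the confounder) influences both
# whether someone takes "vitamin X" and the measured outcome; the vitamin
# itself has NO effect on the outcome.
people = []
for _ in range(100_000):
    iq = random.gauss(100, 15)
    takes_vitamin = random.random() < (0.8 if iq > 100 else 0.2)
    outcome = iq + random.gauss(0, 5)
    people.append((iq, takes_vitamin, outcome))

def mean(xs):
    return sum(xs) / len(xs)

# Naive retrospective comparison: vitamin-takers score much higher.
takers = [o for _, v, o in people if v]
non_takers = [o for _, v, o in people if not v]
print("naive difference:", mean(takers) - mean(non_takers))

# Stratify on the confounder: within IQ bands, the "effect" disappears.
for lo, hi in [(70, 100), (100, 130)]:
    t = [o for iq, v, o in people if v and lo <= iq < hi]
    n = [o for iq, v, o in people if not v and lo <= iq < hi]
    print(f"IQ {lo}-{hi} difference:", mean(t) - mean(n))
```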

The items listed above inevitably create an atmosphere in which absolutely anything goes (at least temporarily), judgments about efficacy are utterly subjective, and as a result, the field of psychology perpetually splinters into cults and fads (examples below). “Studies” are regularly published that would never pass muster with a self-respecting peer review committee from some less soft branch of science.

 

Another dumb conflation of psychology as a whole with some specific subfield, and the most dodgy of them all.

In an effort to answer the question of whether intelligence is primarily governed by environment or genes, psychologist Cyril Burt (1883-1971) performed a long-term study of twins that was later shown to be most likely a case of conscious or unconscious scientific fraud. His work, which purported to show that IQ is largely inherited, was used as a “scientific” basis by various racists and others, and, despite having been discredited, still is.

 

1) The case against him seems rather weak.

2) His conclusions are very consistent with modern studies of the same thing.

 

See John Philippe Rushton – New evidence on Sir Cyril Burt: His 1964 Speech to the Association of Educational Psychologists

In the 1950s, at the height of psychology’s public acceptance, neurologist Walter Freeman created a surgical procedure known as “prefrontal lobotomy.” As though on a quest and based solely on his reputation and skills of persuasion, Freeman singlehandedly popularized lobotomy among U.S. psychologists, eventually performing about 3500 lobotomies, before the dreadful consequences of this practice became apparent.

 

At the height of Freeman’s personal campaign, he drove around the country in a van he called the “lobotomobile,” performing lobotomies as he traveled. There was plenty of evidence that prefrontal lobotomy was a catastrophic clinical practice, but no one noticed the evidence or acted on it. There was — and is — no reliable mechanism within clinical psychology to prevent this sort of abuse.

 

Ah yes, lobotomies. He seems to have missed ECT on his example list.

 

The last claim is clearly wrong.

These examples are part of a long list of people who have tried to use psychology to give a scientific patina to their personal beliefs, perhaps beginning with Francis Galton (1822-1911), the founder and namer of eugenics. Galton tried (and failed) to design psychological tests meant to prove his eugenic beliefs. This practice of using psychology as a personal soapbox continues to the present, in fact, it seems to have become more popular.

 

What these accounts have in common is that no one was able (or willing) to use scientific standards of evidence to refute the claims at the time of their appearance, because psychology is only apparently a science. Only through enormous efforts and patience, including sometimes repeating an entire study using the original materials, can a rare, specific psychological claim be refuted. Such exceptions aside, there is ordinarily no recourse to the “testable, falsifiable claims” criterion that sets science apart from ordinary human behavior.

 

Galton was a very cool guy, and eugenics is alive and well today; we just call eugenic practices, like prenatal screening, something else (well, most people do).

 

Intelligence does actually seem to have fallen between the reaction times Galton and others measured and modern reaction time measurements, cf. this post.

Some may object that the revolution produced by psychoactive drugs has finally placed psychology on a firm scientific footing, but the application of these drugs is not psychology, it is pharmacology. The efficacy of drugs in treating conditions once thought to be psychological in origin simply presents another example where psychology got it wrong, and the errors could only be uncovered using disciplines outside psychology.

 

It’s neither. It’s psychopharmacology.

To summarize this section, psychology is the sort of field that can describe things, but as shown above, it cannot reliably explain what it has described. In science, descriptions are only a first step — explanations are essential:

• An explanation, a theory, allows one to make a prediction about observations not yet made.
• A prediction would permit a laboratory test that might support or falsify the underlying theory.
• The possibility of falsification is what distinguishes science from cocktail chatter.

 

A laboratory test? Perhaps geology isn’t science either? Surely, it has a history of crazy theories as well, try the Expanding Earth theory.

As with most professions, scientists have a private language, using terms that seem completely ordinary but that convey special meaning to other scientists. For example, when a scientist identifies a field as a “descriptive science,” he is politely saying it is not a science.

 

No… It means that it isn’t a causal science. Say, grammar is a descriptive science/subfield within linguistics.

 

Depending on whether we include non-empirical fields in science, there is also logic and math, which are formal, descriptive and noncausal fields.

 

But in another use of the word, it means something else, namely, descriptive as opposed to applied.

This seems an appropriate time (and context) to comment on psychology’s “bible”: the Diagnostic and Statistical Manual of Mental Disorders and its companion, the International Classifications of Diseases, Mental Disorders Section (hereafter jointly referred to as DSM). Now in its fourth edition, this volume is very revealing because of its significance to the practice of psychology and psychiatry and because of what it claims are valid mental illnesses.

 

These comparisons with religion (“bible”) are not very impartial. He would have helped his case if he was more neutral in his word choice.

 

That’s not to say that the DSMs, psychiatry, and the various diagnoses aren’t dodgy.

Putting aside for the moment the nebulous “phase of life problem,” excuse me? – “Sibling rivalry” is now a mental illness? Yes, according to the current DSM/ICD. And few are as strict about spelling as I am, but even I am not ready to brand as mentally ill those who (frequently) cannot accurately choose from among “site,” “cite” and “sight” when they write to comment on my Web pages. As to “mathematics disorder” being a mental illness, sorry, that just doesn’t add up.

 

Eh, they are refering to dyslexia probably, not the inability to distinguish various English homophones.

[Table: number of conditions listed as mental illnesses in successive editions of the DSM.]

Based on this table and extrapolating into the future using appropriate regression methods, in 100 years there will be more than 3600 conditions meriting treatment as mental illnesses. To put it another way, there will be more mental states identified as abnormal than there are known, distinct mental states. In short, no behavior will be normal.

 

This doesn’t follow. It might be that the diagnoses are simply getting more and more specific. For instance, there are now quite a few different eating disorders diagnosed, and quite a few different schizophrenic disorders. That just splits the diagnoses into more categories without covering more, or much more, behavior.

 

There is also the possibility that future diagnoses will be more and more niche, each covering less and less behavior. In that case, there won’t be any sharp increase.
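For what it’s worth, the “more than 3600” figure is easy to reproduce with a naive exponential fit. A sketch, using commonly cited approximate diagnosis counts per DSM edition as stand-ins for the article’s lost table:

```python
import math

# Commonly cited approximate diagnosis counts per DSM edition
# (stand-in numbers; the article's own table did not survive).
editions = [(1952, 106), (1968, 182), (1980, 265), (1994, 297)]

# Least-squares fit of ln(count) = a + b * year, i.e. exponential growth.
n = len(editions)
xs = [year for year, _ in editions]
ys = [math.log(count) for _, count in editions]
xbar = sum(xs) / n
ybar = sum(ys) / n
b_num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
b_den = sum((x - xbar) ** 2 for x in xs)
b = b_num / b_den
a = ybar - b * xbar

target_year = 1994 + 100
projected = math.exp(a + b * target_year)
print(f"fitted growth rate: {b:.1%} per year")                     # about 2.5%/yr
print(f"projected count in {target_year}: about {projected:.0f}")  # roughly 4000
```

That lands in the ballpark of his figure, but, as said, a rising count of labels need not mean that more behavior gets labeled abnormal.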

Many conditions have made their way into the DSM and nearly none are later removed. Homosexuality was until recently listed as a mental illness, one believed to be amenable to treatment, in spite of the total absence of clinical evidence. Then a combination of research findings from fields other than psychology, and simple political pressure, resulted in the belated removal of homosexuality from psychology’s official list of mental illnesses. Imagine a group of activists demanding that the concept of gravity be removed from physics. Then imagine physicists yielding to political pressure on a scientific issue. But in psychology, this is the norm, not the exception, and it is nearly always the case that the impetus for change comes from a field other than psychology.

 

Meh. Extrapolating much.

Does research honor the null hypothesis? The “null hypothesis” is a scientific precept that says assertions are assumed to be false unless and until there is evidence to support them. In scientific fields the null hypothesis serves as a threshold-setting device to prevent the waste of limited resources on speculations and hypotheses that are not supported by direct evidence or reasonable extrapolations from established theory.

 

Does psychology meet this criterion? Well, to put it diplomatically, if psychiatrist John Mack of the Harvard Medical School can conduct a research program that takes alien abduction stories at face value, if clinical psychologists can appear as expert witnesses in criminal court to testify about nonexistent “recovered memories,” only to see their clients vigorously deny and retract those “memories” later, if any imaginable therapeutic method can be put into practice without any preliminary evaluation or research, then no, the null hypothesis is not honored, and psychology fails Point B.

 

That’s not how the null hypothesis works. From Wiki:

The practice of science involves formulating and testing hypotheses, assertions that are capable of being proven false using a test of observed data. The null hypothesis typically corresponds to a general or default position. For example, the null hypothesis might be that there is no relationship between two measured phenomena[1] or that a potential treatment has no effect.[2]
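To see the statistical usage in action, here is a minimal sketch (made-up numbers) of testing against a null hypothesis of “the treatment has no effect”: we ask how often a difference as large as the observed one would arise by chance if the null were true, via a simple permutation test:

```python
import random

random.seed(0)

# Made-up measurements for a treated and a control group.
treated = [5.1, 6.0, 5.8, 6.4, 5.9, 6.2]
control = [5.0, 5.3, 4.9, 5.6, 5.2, 5.4]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

# Under the null hypothesis the group labels are arbitrary, so shuffle them
# and count how often a difference at least as large shows up by chance.
pooled = treated + control
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:len(treated)]) - mean(pooled[len(treated):]) >= observed:
        extreme += 1

print("observed difference:", round(observed, 2))
print("one-sided p-value:", extreme / trials)  # small -> evidence against the null
```

Note how this is a default of “no effect” that data may overturn, not a blanket rule that “assertions are assumed to be false”.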

In response to my claim that evidence-based practice is to date an unrealized idea, a psychologist recently replied that there is “practice-based evidence.” Obviously this argument was offered in the heat of the moment and my correspondent could not have considered the implications of his remark.

 

Practice-based evidence, to the degree that it exists, suffers from serious ethical and practical issues. It fails an obvious ethical standard — if the “evidence” is coincidental to therapy, a client will be unable to provide informed consent to be a research subject on the ground that neither he nor the therapist knows in advance that he will be a research subject. Let me add that a scenario like this would never be acceptable in mainstream medicine (not to claim that it never happens), but it is all too common in clinical psychology for research papers to exploit evidence drawn from therapeutic settings.

 

What? Practice-based evidence is common in medicine. The reason is that we simply don’t know how well many commonly used treatments work. Cf. Bad Science.

 

Case studies are also very common, and useful.

Comparison

Let’s compare the foregoing to physics, a field that perfectly exemplifies the interplay of scientific research and practice. When I use a GPS receiver to find my way across the landscape, every aspect of the experience is governed by rigorously tested physical theory. The semiconductor technology responsible for the receiver’s integrated circuits obeys quantum theory and materials science. The mathematics used to reduce satellite radio signals to a terrestrial position honors Einstein’s relativity theories (both of them, and for different reasons) as well as orbital mechanics. If any of these theories is not perfectly understood and taken into account, I won’t be where the GPS receiver says I am, and that could easily have serious consequences.

 

Yes, let’s compare it to a very dissimilar field. Psychology is a social science. The fields are very different.
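As an aside, the GPS claim is the one quantitative claim here, and it checks out. A back-of-the-envelope sketch with standard textbook figures (the numbers are mine, not the essay’s): special relativity slows the satellite clocks by roughly 7 μs/day, general relativity speeds them up by roughly 46 μs/day, and the net drift of about 38 μs/day would build up to kilometres of ranging error per day if left uncorrected.

    # Rough arithmetic for the GPS relativity correction. Ballpark
    # textbook figures, used here only for illustration.
    C = 299_792_458        # speed of light, m/s

    sr_us_per_day = -7.1   # special relativity: orbital speed slows the clocks
    gr_us_per_day = 45.9   # general relativity: weaker gravity speeds them up
    net_us_per_day = sr_us_per_day + gr_us_per_day  # ~ +38.8 us/day

    # A timing error of t seconds corresponds to a ranging error of c*t.
    error_m_per_day = net_us_per_day * 1e-6 * C
    print(f"net clock drift: {net_us_per_day:.1f} us/day")
    print(f"uncorrected ranging error: {error_m_per_day / 1000:.1f} km/day")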

I offer this mini-essay and this comparison because most of my psychological correspondents have no idea what makes a field scientific. Many people believe that any field where science takes place is ipso facto scientific. But this is not true — there is more to science than outward appearances.

 

But physics is not a good field to compare with. The epistemology of physics is EASY compared with social science, including psychology.

But this is all hypothetical, because psychology and psychiatry have never been based in science, and therefore are free of the constraints placed on scientific theories. This means these fields will prevail far beyond their last shred of credibility, just as religions do, and they will be propelled by the same energy source — belief. That pure, old-fashioned fervent variety of belief, unsullied by reason or evidence.

 

Meh.

This essay feels like it was written by a physicist or something like that who is disappointed that the same evidence standard is not used in other fields. He chose some kind of mix of psychology and psychiatry to blame. Unfairly blaming the entire field of psychology, when the problems are mostly within certain subfields.

 

He also displays a lack of knowledge about many of the things he mentions.

 

Mix it with a poor understanding of phil of sci, yeah.

 

So what is he? Well, read for yourself.

Quotes, links, comments 15-08-12

torrentfreak.com/google-starts-punishing-pirate-sites-in-search-results-120810/

The beginning of the end of Google?

www.techdirt.com/articles/20120809/09213019977/amazon-stops-processing-payments-crowdfunding-platform-creative-commons-books.shtml

This one is a great idea in the pre-copyright reform world.

www.techdirt.com/blog/casestudies/articles/20120728/19122219866/traditional-publisher-ebook-pricing-harming-authors-careers.shtml

Yes, for ebooks as well as for games. Lower prices → more sales, and more money too, since the number of sales more than makes up for the lower prices.

news.yahoo.com/detroit-schools-fight-sexting-plan-search-students-cell-225700550.html

Craaaazy. Via THC.

“While sexting is not illegal, doing so under the age of 18 can count as child pornography. In Michigan, kids who text can be prosecuted under child pornography laws and can be sentenced with 20 years in prison if convicted. Even having sexually explicit photos on your phone is a four-year felony.”

Insane.

en.wikipedia.org/wiki/Unsimulated_sex_in_film

Director/actor Melvin Van Peebles appears in several real sex scenes. His son, Mario, performed in a simulated sex scene. Van Peebles contracted a sexually transmitted disease while filming and successfully filed for worker’s compensation. While the sex scenes may have been explicit and the actor maintains that they were real, nothing is shown onscreen that could not have been faked. Nonetheless, after Peebles came up with the winning ad slogan “rated X by an all-white jury” the film’s rating was reduced to R in 1974.[11]

 

lol’d

en.wikipedia.org/wiki/Faster_than_light

Interesting stuff

 

If a laser is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than c.[7] Similarly, a shadow projected onto a distant object can be made to move across the object faster than c.[8] In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light.[7][8][9][10]

 

Good for trolling fysicists!
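A quick sketch of why this is allowed (numbers mine, just for illustration): the spot’s speed across a surface at distance d scales linearly with d for a fixed angular sweep rate, so it exceeds c for a large enough d, while every photon still travels at c.

    import math

    # Sweep a laser through 1 degree in 1 millisecond across the Moon.
    C = 299_792_458                   # speed of light, m/s
    d = 384_400_000                   # Earth-Moon distance, m
    omega = math.radians(1.0) / 1e-3  # angular sweep rate, rad/s

    spot_speed = omega * d            # linear speed of the spot on the surface
    print(f"spot speed: {spot_speed:.2e} m/s = {spot_speed / C:.1f} c")
    # The spot is a pattern, not an object; no information travels with it.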

en.wikipedia.org/wiki/Novikov_self-consistency_principle

Good stuff. Heinlein ftw.

 

 

Thoughts and quotes: Against Intellectual Monopoly (Boldrin & Levine)

Against Intellectual Monopoly

In general, this is an interesting book about patents. It is at times combative in its language use, other times more neutral. I think it wud have been wiser to use less loaded terms, but it didnt bother me too much. The criticism of IPR is generally sensible, and their case persuasive and plausible, but not as plausible as the case in Patent Failure. References are sometimes missing for questionable claims, but in general there are lots of references. The reference system is annoying, as the notes are at the end of chapters and not in links (it was intended to be published as an ebook, after all) or footnotes or something of that sort.

 

Below are some more comments and a lot of quotes.

 

As usual: colored text is a quote. Colored+italic text is a quote which is also a quote in the source. Black text is my comments. Blue text is also mine, i.e. links.

Why, however, should creators have the right to control how purchasers make use of an idea or creation? This gives creators a monopoly over the idea. We refer to this right as “intellectual monopoly,” to emphasize that it is this monopoly over all copies of an idea that is controversial, not the right to buy and sell copies. The government does not ordinarily enforce monopolies for producers of other goods. This is because it is widely recognized that monopoly creates many social costs. Intellectual monopoly is no different in this respect. The question we address is whether it also creates social benefits commensurate with these social costs.
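The social costs the authors allude to are the textbook deadweight loss of monopoly. A minimal sketch with a linear demand curve (all numbers mine, purely illustrative): the monopolist restricts quantity to raise the price, and the surplus on the forgone units simply vanishes.

    # Toy deadweight-loss computation: demand p = 100 - q, constant
    # marginal cost c = 20. Illustrative numbers only.
    a, c = 100.0, 20.0

    # Competition: price = marginal cost.
    q_comp = a - c                    # 80 units sold
    surplus_comp = 0.5 * q_comp ** 2  # consumer surplus triangle = 3200

    # Monopoly: set marginal revenue a - 2q equal to c.
    q_mono = (a - c) / 2              # 40 units sold
    p_mono = a - q_mono               # price 60
    profit = (p_mono - c) * q_mono    # 1600
    consumer = 0.5 * q_mono ** 2      # 800

    deadweight = surplus_comp - (profit + consumer)
    print(f"total surplus under competition: {surplus_comp:.0f}")
    print(f"total surplus under monopoly: {profit + consumer:.0f}")
    print(f"deadweight loss: {deadweight:.0f}")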

Even on the desktop – open source is spreading and not shrinking. Ten years ago there were two major word processing packages, Word and Wordperfect. Today the only significant competitor to Microsoft for a package of office software including word-processing is the open source program Openoffice.

 

Or rather LibreOffice now. But there is also Google Docs, which isnt open source. It is, however, free.

Start with English authors selling books in the United States in the nineteenth century. “During the nineteenth century anyone was free in the United States to reprint a foreign publication”10 without making any payment to the author, besides purchasing a legally sold copy of the book. This was a fact that greatly upset Charles Dickens whose works, along with those of many other English authors, were widely distributed in the U.S., and

yet American publishers found it profitable to make arrangements with English authors. Evidence before the 1876-8 Commission shows that English authors sometimes received more from the sale of their books by American publishers, where they had no copyright, than from their royalties in [England]11

where they did have copyright. In short without copyright, authors still got paid, sometimes more without copyright than with it.12

How did it work? Then, as now, there is a great deal of impatience in the demand for books, especially good books. English authors would sell American publishers the manuscripts of their new books before their publication in Britain. The American publisher who bought the manuscript had every incentive to saturate the market for that particular novel as soon as possible, to avoid cheap imitators to come in soon after. This led to mass publication at fairly low prices. The amount of revenues British authors received up front from American publishers often exceeded the amount they were able to collect over a number of years from royalties in the UK. Notice that, at the time, the US market was comparable in size to the UK market.13

 

More broadly, the lack of copyright protection, which permitted the United States publishers’ “pirating” of English writers, was a good economic policy of great social value for the people of United States, and of no significant detriment, as the Commission report and other evidence confirm, for English authors. Not only did it enable the establishment and rapid growth of a large and successful publishing business in the United States; also, and more importantly, it increased literacy and benefited the cultural development of the American people by flooding the market with cheap copies of great books. As an example: Dickens’ A Christmas Carol sold for six cents in the US, while it was priced at roughly two dollars and fifty cents in England. This dramatic increase in literacy was probably instrumental for the emergence of a great number of United States writers and scientists toward the end of the nineteenth century.

 

But how relevant for the modern era are copyright arrangements from the nineteenth century? Books, which had to be moved from England to the United States by clipper ship, can now be transmitted over the internet at nearly the speed of light. Furthermore, while the data show that some English authors were paid more by their U.S. publishers than they earned in England – we may wonder how many, and if they were paid enough to compensate them for the cost of their creative efforts. What would happen to an author today without copyright?

This question is not easy to answer – since today virtually everything written is copyrighted, whether or not intended by the author. There is, however, one important exception – documents produced by the U.S. government. Not, you might think, the stuff of best sellers – and hopefully not fiction. But it does turn out that some government documents have been best sellers. This makes it possible to ask in a straightforward way – how much can be earned in the absence of copyright? The answer may surprise you as much as it surprised us.

 

The most significant government best seller of recent years has the rather off-putting title of The Final Report of the National Commission on Terrorist Attacks Upon the United States, but it is better known simply as the 9/11 Commission Report.14 The report was released to the public at noon on Thursday July 22, 2004. At that time, it was freely available for downloading from a government website. A printed version of the report published by W.W. Norton simultaneously went on sale in bookstores. Norton had signed an interesting agreement with the government.

 

The 81-year-old publisher struck an unusual publishing deal with the 9/11 commission back in May: Norton agreed to issue the paperback version of the report on the day of its public release.…Norton did not pay for the publishing rights, but had to foot the bill for a rush printing and shipping job; the commission did not hand over the manuscript until the last possible moment, in order to prevent leaks. The company will not reveal how much this cost, or when precisely it obtained the report. But expedited printings always cost extra, making it that much more difficult for Norton to realize a profit.

In addition, the commission and Norton agreed in May on the 568-page tome’s rather low cover price of $10, making it that much harder for the publisher to recoup its costs. (Amazon.com is currently selling copies for $8 plus shipping, while visitors to the Government Printing Office bookstore in Washington, D.C. can purchase its version of the report for $8.50.) There is also competition from the commission’s Web site, which is offering a downloadable copy of the report for free. And Norton also agreed to provide one free copy to the family of every 9/11 victim.15

 

This might sound like Norton struck a rather bad deal – one imagines that other publishers were congratulating themselves on not having been taken advantage of by sharp government negotiators. It turns out, however, that Norton’s rivals were in fact envious of this deal. One competitor in particular – the New York Times – described the deal as a “royalty-free windfall,”16 which does not sound like a bad thing to have.

 

That’s pretty cool!

Literature and a market for literary works emerged and thrived for centuries in the complete absence of copyright. Most of what is considered “great literature” and is taught and studied in universities around the world comes from authors who never received a penny of copyright royalties. Apparently the commercial quality of the many works produced without copyright has been sufficiently great that Disney, the greatest champion of intellectual monopoly for itself, has made enormous use of the public domain. Such great Disney productions as Snow White, Sleeping Beauty, Pinocchio and Hiawatha are, of course, all taken from the public domain. Quite sensibly, from its monopolistic viewpoint, Disney is reluctant to put anything back. However, the economic argument that these great works would not have been produced without an intellectual monopoly is greatly weakened by the fact that they were.

 

Hah! :D

At least in the case of sheet music, the police campaign did not work. After a few months, police stations were filled with tons of paper on which various musical pieces were printed. Being unable to bring to court what was a de-facto army of “illegal” music reproducers, the police itself stopped enforcing the copyright law.

 

Pretty much what i suggested earlier today that we shud do with DMCA notices: just send them en masse and overwhelm the system from within. After all, companies already send out a massive amount of DMCA notices, and lots of them are bogus auto-generated ones, and this is true even tho they must stand for perjury if they are caught lying!

 

Surely, there is no intent to deceive if we do the same, since there is no intent at all involved in generating them.

The authors mention some obscure Catholic principle in passing. Their reference for it is to AiG. But that makes no sense. AiG is a YEC organisation, not Catholic. Catholics are theistic evolutionists, not creationists.

Effective price discrimination is costly to implement and this cost represents pure waste. For example, music producers love Digital Rights Management (DRM) because it enables them to price discriminate. The reason that DVDs have country codes, for example, is to prevent cheap DVDs sold in one country from being resold in another country where they have a higher price. Yet the effect of DRM is to reduce the usefulness of the product. One of the reasons the black market in MP3s is not threatened by legal electronic sales is that the unprotected MP3 is a superior product to the DRM protected legal product. Similarly, producers of computer software sell constrained products to consumers in an effort to price discriminate and preserve their more lucrative corporate market. One consequence of price discrimination by monopolists, especially intellectual monopolists, is that they artificially degrade their products in certain markets so as not to compete with other more lucrative markets.

In recent years there have been innovative efforts to extend the use of patents to block competitors. For example we find

A federal trade agency might impose $13 million in sanctions against a New Jersey company that rebuilds used disposable cameras made by the Fuji Photo Film Company and sells them without brand names at a discount. Fuji said yesterday that the International Trade Commission found that the Jazz photo Corporation infringed Fuji’s patent rights by taking used Fuji cameras and refurbishing them for resale. The agency said Jazz sold more than 25 million cameras since August 2001 in violation of a 1999 order to stop and will consider sanctions. Fuji, based in Tokyo, has been fighting makers of rebuilt cameras for seven years. Jazz takes used shells of disposable cameras, puts in new film and batteries and then sells them. Jazz’s founder, Jack Benun, said the company would appeal. “It’s unbelievable that the recycling of two plastic pieces developed into such a long case.” Mr. Benun said. ‘There’s a benefit to the customer. The prices have come down over the years. And recycling is a good program. Our friends at Fuji do not like it.20

 

Sigh.

One annoying thing about this book is that it uses the misleading loaded terms that IP maximalists use, i.e. “steal an idea” instead of “copy an idea”, etc.

Another astounding example of American intellectual imperialism is in – not so surprising – Iraq

 

The American Administrator of [Iraq] Paul Bremer, updated Iraq’s intellectual property law to ‘meet current internationally-recognized standards of protection.’ The updated law makes saving seeds for next year’s harvest, practiced by 97% of Iraqi farmers in 2002, the standard farming practice for thousands of years across human civilizations, newly illegal. Instead, farmers will have to obtain a yearly license for genetically modified seeds from American corporations. These GM seeds have typically been modified from IP developed over thousands of generations by indigenous farmers like the Iraqis, shared freely like agricultural ‘open source.’ Other IP provisions for technology in the law further integrate Iraq into the American IP economy.24

 

Fucking derp.

The private sector has no monopoly on inadequacy. Government bureaucrats are notorious for their inefficiency. The U.S. Patent office is no exception. Their questionable competence increases the cost of getting patents, but this is a small effect, and, perhaps a good thing, rather than bad. They also issue many patents of dubious merit. Since the legal presumption is that a patent is legitimate unless proven otherwise, there is a substantial legal advantage to the patent holder, who may use it for blackmail, or other purposes. Moreover, while some bad patents may be turned down, an obvious strategy is simply to file a great many bad patents in hopes that a few will get through. Here is a sampling of some of the ideas the US Patent office thought worthy of patenting in recent years.41

 

# U.S. Patent 6,080,436: toasting bread in a toaster operating between 2500 and 4500 degrees.

# U.S. Patent 6,004,596: the sealed crustless peanut butter and jelly sandwich.

# U.S. Patent 5,616,089: a “putting method in which the golfer controls the speed of the putt and the direction of the putt primarily with the golfer’s dominant throwing hand, yet uses the golfer’s nondominant hand to maintain the blade of the putter stable.”

# U.S. Patent 6,368,227: “A method of swing on a swing is disclosed, in which a user positioned on a standard swing suspended by two chains from a substantially horizontal tree branch induces side to side motion by pulling alternately on one chain and then the other.”

# U.S. Patent 6,219,045, from the press release by Worlds.com: “[The patent was awarded] for its scalable 3D server technology … [by] the United States Patent Office. The Company believes the patent may apply to currently, in use, multi-user games, e-Commerce, web design, advertising and entertainment areas of the Internet.” This is a refreshing admission that instead of inventing something new, Worlds.com simply patented something already widely used.

# U.S. Patent 6,025,810: “The present invention takes a transmission of energy, and instead of sending it through normal time and space, it pokes a small hole into another dimension, thus, sending the energy through a place which allows transmission of energy to exceed the speed of light.” The mirror image of patenting stuff already in use: patent stuff that can’t possibly work.

 

I had thought of the same shotgun style idea.

That monopoly is generally bad for society is well accepted. It is not surprising that the same should be true of intellectual monopoly: the evidence presented here is no more than the tip of the iceberg. Many other inefficiencies, bad business practices, technological regressions, etc. are documented daily by the press. These are a consequence of the especially strong form of monopoly power that current IP legislation bestows upon patent and copyright holders. We insist on documenting and discussing a subset of these facts for the simple reason that we have become so accustomed to them that we are inclined to take them for granted. Yet these inefficiencies are not natural – they are manmade, and we need not choose to tolerate them. We argue in later chapters that neither patents nor copyright succeed in fostering innovation and creativity. So we must ask: what is the point of keeping institutions that provide so little good while inflicting so much harm?

Examples of individual creativity abound. An astounding example of the impact of copyright law on individual creativity is the story of Tarnation.120

 

Tarnation, a powerful autobiographical documentary by director Jonathan Caouette, has been one of the surprise hits of the Cannes Film Festival – despite costing just $218 (£124) to make. After Tarnation screened for the second time in Cannes, Caouette – its director, editor and main character – stood up. […] A Texan child whose mother was in shock therapy, Caouette, 31, was abused in foster care and saw his mother’s condition worsen as a result of her treatment.” He began filming himself and his family aged 11, and created movie fantasies as an escape. For Tarnation, he has spliced his home movie footage together to create a moving and uncomfortable self-portrait. And using a home computer with basic editing software, Caouette did it all for a fraction of the price of a Hollywood blockbuster like Troy. […] As for the budget, which has attracted as much attention as the subject matter, Caouette said he had added up how much he spent on video tapes – plus a set of angel wings – over the years. But the total spent will rise to about $400,000 (£230,000), he said, once rights for music and video clips he used to illustrate a mood or era have been paid for.9

 

Yes, you read this right. If he did not have to pay the copyright royalties for the short clips he used, Caouette’s movie would have cost a thousand times less.

The most disturbing feature of the DMCA is section 1201, the anti-circumvention provision. This makes it a criminal offense to reverse engineer or decrypt copyrighted material, or to distribute tools that make it possible to do so. On July 27, 2001, Russian cryptographer Dmitri Sklyarov had the dubious honor of being the first person imprisoned under the DMCA. Arrested while giving a seminar publicizing cryptographical weaknesses in Adobe’s Acrobat Ebook format, Sklyarov was eventually acquitted on December 17, 2002.

The DMCA has had a chilling effect on both freedom of speech, and on cryptographical research. The Electronic Frontier Foundation (EFF) reports on the case of Edward Felten and his Princeton team of researchers

 

In September 2000, a multi-industry group known as the Secure Digital Music Initiative (SDMI) issued a public challenge encouraging skilled technologists to try to defeat certain watermarking technologies intended to protect digital music. Princeton Professor Edward Felten and a team of researchers at Princeton, Rice, and Xerox took up the challenge and succeeded in removing the watermarks.

When the team tried to present their results at an academic conference, however, SDMI representatives threatened the researchers with liability under the DMCA. The threat letter was also delivered to the researchers’ employers and the conference organizers. After extensive discussions with counsel, the researchers grudgingly withdrew their paper from the conference. The threat was ultimately withdrawn and a portion of the research was published at a subsequent conference, but only after the researchers filed a lawsuit.

After enduring this experience, at least one of the researchers involved has decided to forgo further research efforts in this field.13

 

Disgusting!

The DMCA is not just a threat to economic prosperity and creativity, it is also a threat to our freedom. The best illustration is the recent case of Diebold, which makes computerized voting machines now used in various local, state and national elections. Unfortunately, it appears from internal corporate documents that these machines are highly insecure and may easily be hacked. Those documents were leaked, and posted at various sites on the Internet. Rather than acknowledge or fix the security problem, Diebold elected to send “takedown” notices in an effort to have the embarrassing “copyrighted” material removed from the Internet. Something more central to political discourse than the susceptibility of voting machines to fraud is hard to imagine. To allow this speech to be repressed in the name of “copyright” is frightening.

 

Perhaps this sounds cliched and exaggerated – a kind of “leftist college kids” over-reactive propaganda. In keeping with this tone here is a college story about the leaked documents, and how Diebold and the DMCA helped to teach our future generations about the first amendment.

 

Last fall, a group of civic-minded students at Swarthmore [... came] into possession of some 15,000 e-mail messages and memos – presumably leaked or stolen – from Diebold Election Systems, the largest maker of electronic voting machines in the country. The memos featured Diebold employees’ candid discussion of flaws in the company’s software and warnings that the computer network was poorly protected from hackers. In light of the chaotic 2000 presidential election, the Swarthmore students decided that this information shouldn’t be kept from the public. Like aspiring Daniel Ellsbergs with their would-be Pentagon Papers, they posted the files on the Internet, declaring the act a form of electronic whistle-blowing. Unfortunately for the students, their actions ran afoul of the 1998 Digital Millennium Copyright Act (D.M.C.A.), [...] Under the law, if an aggrieved party (Diebold, say) threatens to sue an Internet service provider over the content of a subscriber’s Web site, the provider can avoid liability simply by removing the offending material. Since the mere threat of a lawsuit is usually enough to scare most providers into submission, the law effectively gives private parties veto power over much of the information published online — as the Swarthmore students would soon learn.

 

Not long after the students posted the memos, Diebold sent letters to Swarthmore charging the students with copyright infringement and demanding that the material be removed from the students’ Web page, which was hosted on the college’s server. Swarthmore complied. [...]19

 

The story did not end there, nor did it end too badly. The controversy went on for a while. The Swarthmore students held their ground and bravely fought against both Diebold and Swarthmore. They managed to create enough negative publicity for Diebold and for their liberal arts college, that Diebold eventually had to back down and promise not to sue for copyright infringement. Eventually the memos went back on the net. All’s well what ends well? When the wise man points at the moon, the dumb man looks at the finger.

Economists refer to the net benefit to society from an exchange as “social surplus.” With intellectual property the innovator collects a share of the social surplus she generates, without intellectual property the innovator collects a smaller share: this is the competitive value of an innovation. When such competitive value is enough to compensate the innovator for the cost of creation the allocation of resources is efficient, neither too few nor too many innovations are brought about, and social surplus is maximized. One can show mathematically that, under a variety of competitive mechanisms, the private value accruing to an innovator increases with the social surplus: inventors of better gadgets make more money. This is true even when the private value becomes a smaller share of the social surplus as the latter increases.

 

Notice that we insist on “a share of the social surplus”, not the entire surplus. Contrary to what many pundits repeat over and over, there is nothing terrifying about this: even under intellectual monopoly innovators receive a less than 100% share of the social surplus from innovation, the rest going to consumers. Under competition for those innovations that are produced both consumers and imitators receive a portion of the social surplus an innovation generates, and such portion is strictly larger than in the previous case. These pundits use the jargon “uncompensated spillovers” to refer to the social surplus accruing to those besides the original innovator. There is nothing wrong with such spillovers, however. That competitive markets do allow for social surplus to accrue to people other than producers is, indeed, one of their most valuable features, at least from a social perspective; it is what makes capitalism a good system also for the not-so-successful among us. The goal of economic efficiency is not that of making monopolists as rich as possible, in fact: it is almost the opposite. The goal of economic efficiency is that of making us all as well off as possible. To accomplish this producers must be compensated for their costs, thereby providing them with the economic incentive of doing what they are best at doing. But they do not need to be compensated more than this. If, by selling her original copy of the idea in a competitive market and thereby establishing the root of the tree from which copies will come, the innovator earns her opportunity cost, that is: she earns as much or more than she could have earned while doing the second best thing she knows how to do, then efficient innovation is achieved, and we should all be happy.

 

This idea of no copyright at all is interesting. Notice how it instantly solves all the problems with sampling. Under a for-profit-only copyright, sampling is difficult to deal with.
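A toy version of the authors’ efficiency condition (numbers mine, not the book’s): an innovation gets produced whenever the innovator’s captured share of the social surplus covers her opportunity cost, so the question is only whether the competitive share clears that bar, not whether she captures everything.

    # Toy model of the "share of social surplus" argument.
    # All figures are illustrative assumptions.
    social_surplus = 100.0    # total net benefit the innovation creates
    opportunity_cost = 15.0   # innovator's next-best earnings

    for regime, share in [("monopoly", 0.6), ("competition", 0.2)]:
        private_value = share * social_surplus
        produced = private_value >= opportunity_cost
        spillover = social_surplus - private_value  # goes to consumers/imitators
        print(f"{regime}: innovator gets {private_value:.0f}, "
              f"others get {spillover:.0f}, produced: {produced}")

With these made-up numbers the innovation is produced under both regimes, but competition leaves far more of the surplus to consumers and imitators, which is the authors’ point.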

Consider the problem of automobiles and air pollution. When I drive my car, I do not have to pay you for the harm the poison in my exhaust does to your health. So naturally, people drive more than is socially desirable and there is too much air pollution. Economists refer to this as a negative externality, and we all agree it is a problem. Even conservative economists usually agree that government intervention of some sort is required.

We propose the following solution to the problem of automobile pollution: the government should grant us the exclusive right to sell automobiles. Naturally, as a monopolist, we will insist on charging a high price for automobiles, fewer automobiles will be sold, there will be less driving, and so less pollution. The fact that this will make us unspeakably rich is of course beside the point; the sole purpose of this policy is to reduce air pollution. This is of course all logically correct – but so far we don’t think anyone has had the chutzpah to suggest that this is a good solution to the problem of air pollution.

 

If someone were to make a serious suggestion along these lines, we would simply point out that this “solution” has actually been tried. In Eastern Europe, under the old communist governments, each country did in fact have a government monopoly over the production of automobiles. As the theory predicts, this did indeed result in expensive automobiles, fewer automobiles sold, and less driving. It is not so clear, however, that it actually resulted in less pollution. Sadly, the automobiles produced by the Eastern European monopolists were of such miserably bad quality that for each mile they were driven they created vastly more pollution than the automobiles driven in the competitive West. And, despite their absolute power, the monopolies of Eastern Europe managed to produce a lot more pollution per capita than the West.

 

Arguments in favor of intellectual monopoly often have a similar flavor. They may be logically correct, but they tend to defy common sense. Ed Felten suggests applying what he calls the “pizzaright” test. The pizzaright is the exclusive right to sell pizza and makes it illegal to make or serve pizza without a license from the pizzaright owner.1 We all recognize, of course, that this would be a foolhardy policy and that we should allow the market to decide who can make and sell pizza. The pizzaright test says that when evaluating an argument in favor of intellectual monopoly, if your argument serves equally well as an argument for a pizzaright, then your argument is defective – it proves too much. Whatever your argument is, it had better not apply to pizza.

 

Heh

While replacing secrecy with legal monopoly may have some impact on the direction of innovation, there is little reason to believe that it actually succeeds in making important secrets public and easily accessible to other innovators. For most innovations, it is the details that matter, not the rather vague descriptions required in patent applications. Take for example, the controversial Amazon one-click patent, U.S. Patent 5,960,411. The actual idea is rather trivial, and there are a variety of ways in which one-click purchase can be implemented by computer, any one of which can be coded by a competent programmer given a modest investment of time and effort. For the record, here is the detailed description of the invention from the patent application:

 

The present invention provides a method and system for single-action ordering of items in a client/server environment. The single-action ordering system of the present invention reduces the number of purchaser interactions needed to place an order and reduces the amount of sensitive information that is transmitted between a client system and a server system. In one embodiment, the server system assigns a unique client identifier to each client system. The server system also stores purchaser-specific order information for various potential purchasers. The purchaser-specific order information may have been collected from a previous order placed by the purchaser. The server system maps each client identifier to a purchaser that may use that client system to place an order. The server system may map the client identifiers to the purchaser who last placed an order using that client system. When a purchaser wants to place an order, the purchaser uses a client system to send the request for information describing the item to be ordered along with its client identifier. The server system determines whether the client identifier for that client system is mapped to a purchaser. If so mapped, the server system determines whether single-action ordering is enabled for that purchaser at that client system. If enabled, the server system sends the requested information (e.g., via a Web page) to the client computer system along with an indication of the single action to perform to place the order for the item. When single-action ordering is enabled, the purchaser need only perform a single action (e.g., click a mouse button) to order the item. When the purchaser performs that single action, the client system notifies the server system. The server system then completes the order by adding the purchaser-specific order information for the purchaser that is mapped to that client identifier to the item order information (e.g., product identifier and quantity). Thus, once the description of an item is displayed, the purchaser need only take a single action to place the order to purchase that item. Also, since the client identifier identifies purchaser-specific order information already stored at the server system, there is no need for such sensitive information to be transmitted via the Internet or other communications medium.28

 

As can be seen, the “secret” that is revealed is, if anything, less informative than the simple observation that the purchaser buys something by means of a single click. Information that might actually be of use to a computer programmer – for example the source code to the specific implementation used by Amazon – is not provided as part of the patent, nor is it required to be. In fact, the actual implementation of the one-click procedure consists of a complicated system of subcomponents and modules requiring a substantial amount of human capital and of specialized working time to be assembled. The generic idea revealed in the patent is easy to understand and “copy,” but of no practical value whatsoever. The useful ideas are neither revealed in the patent nor easy to imitate without reinventing them from scratch, which is what lots of other people beside Amazon’s direct competitors (books are not the only thing sold on the web, after all) would have done to everybody else’s benefit, had the U.S. Patent 5,960,411 not prevented them from actually doing so. Certainly it is hard to argue that the social cost of giving Amazon a monopoly over purchasing by clicking a single button is somehow offset by the social benefit of the information revealed in the patent application.
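Their point that the patent text reveals nothing a programmer actually needs is easy to illustrate: the “one embodiment” described above boils down to a lookup from client identifier to stored purchaser details plus an order append. A minimal sketch (names and structure mine, obviously not Amazon’s actual implementation):

    # Minimal sketch of the patent's "one embodiment": a client identifier
    # is mapped to stored purchaser info, and a single action places an
    # order using it. Purely illustrative.
    orders = []
    purchaser_by_client = {
        "client-42": {"name": "A. Reader", "payment": "on file", "address": "on file"},
    }

    def one_click_order(client_id, item_id, quantity=1):
        """Place an order with a single action, if the client is known."""
        purchaser = purchaser_by_client.get(client_id)
        if purchaser is None:
            return False  # single-action ordering not enabled for this client
        # Stored purchaser-specific info is combined with the item order
        # info server-side; no sensitive data travels over the wire.
        orders.append({"purchaser": purchaser, "item": item_id, "qty": quantity})
        return True

    print(one_click_order("client-42", "book-123"))  # True: order placed
    print(one_click_order("client-99", "book-123"))  # False: unknown client

The hard parts of a real system, i.e. payments, fraud, session handling and scale, are exactly what the patent does not disclose.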

What we have argued so far may not sound altogether incredible to the alert observer of the economics of innovation. Theory aside, what have we shown, after all? That thriving innovation has been and still is commonplace in the absence of intellectual monopoly and that intellectual monopoly leads to substantial and well-documented reductions in economic freedom and general prosperity. However, while expounding the theory of competitive innovation, we also recognized that under perfect competition some socially desirable innovations will not be produced because the indivisibility involved with introducing the first copy or implementation of the new idea is too large, relative to the size of the underlying market. When this is the case, monopoly power may generate the necessary incentive for the putative innovator to introduce socially valuable goods. And the value for society of these goods could dwarf the social losses we have documented. In fact, were standard theory correct so that most innovators gave up innovating in a world without intellectual property, the gains from patents and copyright would certainly dwarf those losses. Alas, as we noted, standard theory is not even internally coherent, and its predictions are flatly violated by the facts reported in chapters 2 and 3.

 

Nevertheless, when in the previous chapter we argued against all kinds of theoretical reasons brought forward to justify intellectual monopoly on “scientific grounds”, we carefully avoided stating that it is never the case that the fixed cost of innovation is too large to be paid for by competitive rents. We did not argue it as a matter of theory because, as a matter of theory, fixed costs can be so large as to prevent almost anything from being invented. So, by our own admission, it is a theoretical possibility that intellectual monopoly could, at the end of the day, be better than competition. But does intellectual monopoly actually lead to greater innovation than competition?

 

From a theoretical point of view the answer is murky. In the long-run, intellectual monopoly provides increased revenues to those that innovate, but also makes innovation more costly. Innovations generally build on existing innovations. While each individual innovator may earn more revenue from innovating if he has an intellectual monopoly, he also faces a higher cost of innovating: he must pay off all those other monopolists owning rights to existing innovations. Indeed, in the extreme case when each new innovation requires the use of lots of previous ideas, the presence of intellectual monopoly may bring innovation to a screeching halt.1

 

Difficult indeed to say on theoretical grounds alone. Only empirical data can tell.

On the problem of measuring innovation.

 

One important difficulty is in determining the level of innovative activity. One measure is the number of patents, of course, but this is meaningless in a country that has no patents, or when patent laws change. Petra Moser gets around this problem by examining the catalogs of innovations from 19th century World Fairs. Of the catalogued innovations, some are patented, some are not, some are from countries with patent systems, and some are from countries without. Moser catalogues over 30,000 innovations from a variety of industries.

 

Mid-nineteenth century Switzerland [a country without patents], for example, had the second highest number of exhibits per capita among all countries that visited the Crystal Palace Exhibition. Moreover, exhibits from countries without patent laws received disproportionate shares of medals for outstanding innovations.7

 

Moser does, however, find a significant impact of patent law on the direction of innovation

 

The analysis of exhibition data suggests that patent laws may be an important factor in determining the direction of innovative activity. Exhibition data show that countries without patents share an exceptionally strong focus on innovations in two industries: scientific instruments and food processing. At the Crystal Palace, every fourth exhibit from a country without patent laws is a scientific instrument, while no more than one seventh of other countries innovations belong to this category. At the same time, the patentless countries have significantly smaller shares of innovation in machinery, especially in machinery for manufacturing and agricultural machinery. After the Netherlands abolished her patent system in 1869 for political reasons, the share of Dutch innovations that were devoted to food processing increased from 11 to 37 percent.8

 

Moser then goes on to say that

 

Nineteenth-century sources report that secrecy was particularly effective at protecting innovations in scientific instruments and in food processing. On the other hand, patenting was essential to protect and motivate innovations in machinery, especially for large-scale manufacturing.9

 

Evidence that secrecy was important for scientific instruments and food processing is provided, but no evidence is given that patenting was actually essential to protect and motivate innovations in machinery. Notice that in an environment in which some countries provide patent protection, and others do not, bias caused by the existence of patent laws will be exaggerated. Countries with patent laws will tend to specialize in innovations for which secrecy is difficult, while those without will tend to specialize in innovations for which secrecy is easy. This means that variations of patent protection would have different effects in different countries.

 

It is interesting also that patent laws may reflect the state of industry and innovation in a country

 

Anecdotal evidence for the late nineteenth and for the twentieth century suggests that a country’s choice of patent laws was often influenced by the nature of her technologies. In the 1880s, for example, two of Switzerland’s most important industries, chemicals and textiles, were strongly opposed to the introduction of a patent system, as it would restrict their use of processes developed abroad.10

 

The 19th century type of innovation – small process innovations – are of the type for which patents may be most socially beneficial. Despite this and the careful study of economic historians, it is difficult to conclude that patents played an important role in increasing the rate of 19th and early 20th century innovation.

 

More recent work by Moser,11 exploiting the same data set from two different angles, strengthens this finding – that is, that patents did not increase the level of innovation. In her words: “Comparisons between Britain and the United States suggest that even the most fundamental differences in patent laws failed to raise the proportion of patented innovations.”12 Her work appears to confirm two of the stylized facts we have often repeated in this book. First that, as we just mentioned in discussing the work of Sokoloff, Lamoreaux and Khan, innovations that are patented tend to be traded more than those that are not, and therefore to disperse geographically farther away from the original area of invention. Based on data for the period 1841-1901, innovation for industries in which patents are widely used is not higher but more dispersed geographically than innovation in industries in which patents are not or scarcely used. Second, when the “defensive patenting” motive is absent, as it was in 1851, an extremely small percentage of inventors (less than one in five) chooses patents as a method for maximizing revenues and protecting intellectual property.

 

Summing up: careful statistical analyses of the 19th century’s available data, carried out by distinguished economic historians, uniformly show two things. Patents neither increase the rate of innovation, nor are the best instrument to maximize inventors’ revenue. Patents create a market in patents and in the legal and technical services required to trade and enforce them.

 

Very interesting data.

Quoting this for linguistic reasons…

Nevertheless, the core idea of a unified European patent system was not abandoned and continued to be pursued in various forms, first under the leadership of the European Commission, and then under the European Union. In 2000 a Community Patent Regulation proposal was approved, which was considered a major step toward the final establishment of a European Patent. Things, nevertheless, did not proceed as expeditiously as the supporters of a E.U. Patent had expected. As of 2007 the project is still, in the words of E.U. Commissioner Charlie McCreevy, “stuck in the mud”13 and far from being finalized. Interestingly the obstacles are neither technical nor due to a particularly strong political opposition to the establishment of a continent-wide form of intellectual monopoly. The obstacles are purely due to rent-seeking by interest groups in the various countries involved, the number of which notoriously keeps growing. Current intellectual monopolists (and their national lawyers) would rather remain monopolists (legal specialists) for a bit longer in their own smaller markets than risk the chance of loosing everything to a more powerful monopolist (or to a foreign firm with more skilled lawyers) in the bigger continental market.

 

That feel when reading academic books in revised editions… and they still fail to make the lose vs. loose distinction. Useless distinction. At least they chose the most sensible spelling. The spelling loose still has a pointless and silent e at the end.

It could be, and sometimes is, argued that the modern pharmaceutical industry is substantially different from the chemical industry of the last century. In particular, it is argued that the most significant cost of developing new drugs lies in testing numerous compounds to see which ones work. Insofar as this is true, it would seem that the development of new drugs is not so dependent on the usage and knowledge of old drugs. However, this is not the case according to the chief scientific officer at Bristol-Myers Squibb, Peter Ringrose, who

told The New York Times that there were ‘more than 50 proteins possibly involved in cancer that the company was not working on because the patent holders either would not allow it or were demanding unreasonable royalties.18

 

Truth-telling remarks by pharmaceutical executives aside, there is a deeper reason why the pharmaceutical industry of the future will be more and more characterized by complex innovation chains: biotechnology. As of 2004, already more than half of the research projects carried out in the pharmaceutical industry had some biomedical foundation. In biomedical research gene fragments are, in more than a metaphorical sense, the initial link of any valuable innovation chain. Successful innovation chains depart from, and then combine, very many gene fragments, and cannot do without at least some of them. As gene fragments are in finite number, patenting them is equivalent to artificially fabricating what scientists in this area have labeled an “anticommons” problem. So it seems that the impact of patent law in either promoting or inhibiting research remains, even in the modern pharmaceutical industry.19

A few additional facts may help the reader get a better understanding of why, at the end, we reach the conclusion we do. Sales are growing, fast; at about 12% a year for most of the 1990s, and still now at around 8% a year; R&D expenditure during the same period has been rising at only 6%. A company such as Novartis (a big R&D player, relative to industry’s averages) spends about 33% of sales on promotion, and 19% on R&D. The industry average for R&D/sales seems to be around 16-17%, while according to the CBO [1998] report the same percentage was approximately 18% for American pharmaceuticals in 1994; according to PhRMA [2007] it was 19% in 2006. The point here is not that the pharmaceutical companies are spending “too little” in R&D – no one has managed (and we doubt anyone could manage) to calculate what the socially optimal amount of pharmaceutical R&D is. The point here is that the top 30 firms spend about twice as much in promotion and advertising as they do in R&D; and the top 30 are where private R&D expenditure is carried out, in the industry.

 

Next we note that no more than 1/3 – more likely 1/4 – of new drug approvals are considered by the FDA to have therapeutic benefit over existing treatments, implying that, under the most generous hypotheses, only 25-30% of the total R&D expenditure goes toward new drugs. The rest, as we will see better in a moment, goes toward the so called “me-too” drugs. Related to this, is the more and more obvious fact that the amount of price discrimination carried out by the top 30 firms between North America, Europe and Japan is dramatically increasing, with price ratios for identical drugs reaching values as high as two or three. The designated victims, in this particular scheme, are apparently the U.S. consumers and, to a lesser extent, the Northern European and the Swiss. At the same time, operating margins in the pharmaceutical industry run at about 25% against 15% or less for other consumer goods, with peaks, for US market-based firms, as high as 35%. The U.S. pharmaceutical industry has been topping the list of the most profitable sectors in the U.S. economy for almost two decades, never dropping below third place; an accomplishment unmatched by any other manufacturing sector. Price discrimination, made possible by monopoly power, does have its rewards.

 

Summing up and moving forward, here are the symptoms of the malaise we should investigate further.

• There is innovation, but not as much as one might think there is, given what we spend.

• Pharmaceutical innovation seems to cost a lot and marketing new drugs even more, which makes the final price for consumers very high and increasing.

• Some consumers are hurt more than others, even after the worldwide extension of patent protection.

 

Very interesting data. Perhaps some kind of government sponsorship cud do better?

Where do Useful Drugs Come From?

Useful new drugs seem to come in a growing percentage from small firms, startups and university laboratories. But this is not an indictment of the patent system as, probably, such small firms and university labs would have not put in all the effort they did without the prospect of a patent to be sold to a big pharmaceutical company.

 

Next there is the not so small detail that most of those university laboratories are actually financed by public money, mostly federal money flowing through the NIH. The pharmaceutical industry is much less essential to medical research than their lobbyists might have you believe. In 1995, according to a study by two well reputed University of Chicago economists, the U.S. spent about $25 billion on biomedical research. About $11.5 billion came from the Federal government, with another $3.6 billion of academic research not funded by the feds. Industry spent about $10 billion.26 However, industry R&D is eligible for a tax credit of about 20%, so the government also picked up about $2 billion of the cost of “industry” research. That was then, but are things different now? They do not appear to be. According to industry’s own sources27, total research expenditure by the industry was, in 2006, about $57 billion while the NIH budget in the same year (the largest but by no means the only source of public funding for biomedical research) reached $28.5 bn. So, it seems, things are not changing: private industry pays for only about 1/3rd of biomedical R&D. By way of contrast, outside of the biomedical area, private industry pays for more than 2/3rds of R&D.
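The quoted 1995 figures are easy to check (treating the tax credit as government-paid is my reading of the passage):

    # Check of the quoted 1995 biomedical R&D split ($ billions, from
    # the quote above).
    federal = 11.5     # federal government
    academic = 3.6     # academic research not federally funded
    industry = 10.0    # industry spending
    tax_credit = 2.0   # share of "industry" research picked up by government

    total = federal + academic + industry       # ~25.1, matches "$25 billion"
    industry_net = industry - tax_credit        # 8.0 borne by industry itself
    print(f"industry share: {industry_net / total:.0%}")  # ~32%, about 1/3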

Many infected with HIV can still recall the 1980s when no effective treatment for AIDS was available, and being HIV positive was a slow death sentence. Not unnaturally many of these individuals are grateful to the pharmaceutical industry for bringing to market drugs that – if they do not eliminate HIV – make life livable.

 

the “evil” pharmaceutical companies are, in fact, among the most beneficent organizations in the history of mankind and their research in the last couple of decades will one day be recognized as the revolution it truly is. Yes, they’re motivated by profits. Duh. That’s the genius of capitalism - to harness human improvement to the always-reliable yoke of human greed. Long may those companies prosper. I owe them literally my life.28

 

But it is wise to remember that the modern “cocktail” that is used to treat HIV was not invented by a large pharmaceutical company. It was invented by an academic researcher: Dr. David Ho.

The bottom line is rather simple: even today, more than thirty years after Germany, Italy and Switzerland adopted patents on drugs and a good half a century after pharmaceutical companies adopted the policy of patenting anything they could develop, more than half of the top selling medicines around the world do not owe their existence to pharmaceutical patents. Are we still so certain that valuable medicines would stop to be invented if drug patents were either abolished or drastically curtailed?

 

This is not particularly original news, though. Older American readers may remember of the Kefauver Committee of 1961, which investigated monopolistic practices in the pharmaceutical industry.33 Among the many interesting findings reported, the study showed that 10 times as many basic drug inventions were made in countries without product patents as were made in nations with them. It also found that countries that did grant product patents had higher prices than those who did not, again something we seem to be well aware of.

 

The next question then is, if not in fundamental new medical discoveries, where does all that pharmaceutical R&D money go?

Rent-Seeking and Redundancy

There is much evidence of redundant research on

pharmaceuticals. The National Institutes of Health Care

Management reveals that over the period 1989-2000, 54% of FDA-

approved drug applications involved drugs that contained active

ingredients already in the market. Hence, the novelty was in

dosage form, route of administration, or combination with other

ingredients. Of the new drug approvals, 35% were products with

new active ingredients, but only a portion of these drugs were

judged to have sufficient clinical improvements over existing

treatments to be granted priority status. In fact, only 238 out of

1035 drugs approved by the FDA contained new active ingredients

and were given priority ratings on the basis of their clinical

performance. In other words, about 77 percent of what the FDA

approves is “redundant” from the strictly medical point of view.34
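
To spell out the arithmetic behind that 77 percent figure, using only the approval counts just given:

$$1 - \frac{238}{1035} \approx 1 - 0.23 = 0.77$$

that is, only the 238 priority-rated approvals containing new active ingredients count as medically novel, and the remaining 797 approvals are the “redundant” ones.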

The New Republic, commenting on these facts, pointedly

continues:

 

If the report doesn’t convince you, just turn on your

television and note which drugs are being marketed most

aggressively. Ads for Celebrex may imply that it will enable

arthritics to jump rope, but the drug actually relieves pain

no better than basic ibuprofen; its principal supposed

benefit is causing fewer ulcers, but the FDA recently

rejected even that claim. Clarinex is a differently packaged

version of Claritin, which is of questionable efficacy in the

first place and is sold over the counter abroad for vastly

less. Promoted as though it must be some sort of elixir, the

ubiquitous “purple pill,” Nexium, is essentially

AstraZeneca’s old heartburn drug Prilosec with a minor

chemical twist that allowed the company to extend its

patent. (Perhaps not coincidentally researchers have found

that purple is a particularly good pill color for inducing

placebo effects.)35

 

Sad but ironically true, me-too or copycat drugs are largely

the only available tool capable of inducing some kind of

competition in an otherwise monopolized market. Because of

patent protection lasting long enough to make future entry by

generics nearly irrelevant, the limited degree of substitutability and

price competition that copycat drugs bring about is actually

valuable. We are not kidding here, and this is a point that many

commentators often miss in their “anti-Big Pharma” crusade.

Given the institutional environment pharmaceutical companies are

currently operating in, me-too drugs are the obvious profit

maximizing tools, and there is nothing wrong with firms

maximizing profits. They also increase the welfare of consumers,

if ever so slightly, by offering more variety of choice and

somewhat lower prices. Again, they are an anemic and pathetic version of the

market competition that would take place without patents, but

competition they are. The ironic aspect of me-too drugs, obviously,

is that they are very expensive because of patent protection, and

this cost we have brought upon ourselves for no good reason.

 

Very interesting. One thing i want to point out, tho, is that it may be worthwhile to develop drugs that work via a different route or with a slightly different form. Even tho to many patients these differences make no difference medically, they can increase comfort: compare taking a pill orally vs. getting a shot vs. using a suppository. It might also be the case that some patients cannot, for medical reasons, use a given route of delivery; in such cases it is medically useful to have another route available, ofc. Finally, some patients may be allergic to a given drug, and in that case a slightly different formulation may help.

 

But in general, i agree with the authors.

The Bad

Despite the fact that our system of intellectual property is

badly broken, there are those who seek to break it even further.

The first priority must be to stem the tide of rent-seekers

demanding ever greater privilege. Within the United States and

Europe, there is a continued effort to expand the scope of

innovations subject to patent, to extend the length of copyright, and

to impose ever more draconian penalties for intellectual property

violation. Internationally, the United States – as a net exporter of

ideas – has been negotiating dramatic increases in protection of

U.S. intellectual monopolists as part of free trade agreements; the

recent Central American Free Trade Agreement (CAFTA) is an

outstanding example of this bad practice.

 

There seems to be no end to the list of bad proposals for

strengthening intellectual monopoly. To give a partial list, starting

with the least significant:

 

# Extend the scope of patent to include sports moves and plays.2

# Extend the scope of copyright to include news clips, press

releases and so forth.3

# Allow for patenting of story lines – something the U.S. Patent

Office just did by awarding a patent to Andrew Knight for his

“The Zombie Stare” invention.4

# Extend the level of protection copyright offers to databases,

along the lines of the 1996 E.U. Database Directive, and of the

subsequent WIPO treaty proposal.5

# Extend the scope of copyright and patents to the results of

scientific research, including that financed by public funds;

something already partially achieved with the Bayh-Dole Act.6

# Extend the length of copyright in Europe to match that in the

U.S. – which is most ironic, as the sponsors of the CTEA and

the DMCA in the USA claimed they were necessary to match

… new and longer European copyright terms.7

# Extend the set of circumstances in which “refusal to license” is

allowed and enforced by anti-trust authorities. More generally,

turn around the 1970s Antitrust Division wisdom that led to

the so-called “Nine No-No’s” of licensing practices. Previous

wisdom correctly saw such practices as anticompetitive

restraints of trade in the licensing business. Persistent and

successful lobbying from the beneficiaries of intellectual

monopoly has managed to turn the tables, portraying

such monopolistic practices as “necessary” or even “vital”

ingredients of a well-functioning patent licensing market.8

# Establish, as a relatively recent U.S. Supreme Court ruling in

the case of Verizon vs Trinko did, that legally acquired

monopoly power and its use to charge higher prices is not only

admissible, it “is an important element of the free-market

system” because “it induces innovation and economic growth.”9

# Impose legal restrictions on the design of computers forcing

them to “protect” intellectual property.10

# Make producers of software used in P2P exchanges directly

liable for any copyright violation carried out with the use of

their software, something that may well be in the making after

the Supreme Court ruling in the Grokster case.11

# Allow the patenting of computer software in Europe – this we

escaped, momentarily, due to a sudden spark of rationality by

the European Parliament.12

# Allow the patenting of any kind of plant variety outside of the

United States, where it is already allowed.13

# Allow for generalized patenting of genomic products outside of

the United States, where it is already allowed.14

# Force other countries, especially developing countries, to

impose the same draconian intellectual property laws as the

U.S., the E.U. and Japan.15

 

Pharmaceuticals

Handling the pharmaceutical industry properly constitutes

the litmus test for the reform process we are advocating. Simple

abolition, or even a progressive scaling down of patent term, would

not work in this sector for the reasons outlined earlier. Reforming

the system of intellectual property in the pharmaceutical industry is

a daunting task that involves multiple dimensions of government

intervention and regulation of the medical sector. While we are

perfectly aware that knowledgeable readers and practitioners of the

pharmaceutical and medical industry will probably find the

statements that follow utterly simplistic, if not arrogantly

preposterous, we will try nevertheless. In sequential order, here is

our list of desiderata.

 

• Free the pharmaceutical industry from the costs of stage II

and III clinical trials, which are the most cost-intensive ones.

Have them financed by the NIH on a competitive basis:

pharmaceutical companies that have completed stage I

trials submit applications to the NIH for having stages II

and III financed. In parallel, medical clinics and university

hospitals submit competitive bids to the NIH to have the

approved trials assigned to them. Match the winning drugs

to the best bids, and use public choice common sense to

minimize the most obvious risks of capture. Clinical trial

results become public goods and are available, possibly for

a fee covering administrative and maintenance costs, to all

that request them. This would not prevent drug companies

from deciding, for whatever reason, to carry out their

clinical trials privately and pay for them; that is their

choice. Nevertheless, allowing the public financing of

stages II and III of clinical trials – by far the largest

component of the private fixed cost associated with the

development of new drugs – would remove the biggest

(nay, the only) rationale for allowing drug patents longer

than a handful of years.

 

• Begin reducing the term of pharmaceutical patents

proportionally. Should we take the pharmaceutical industry’s

claims at face value, our reform eliminates between 70% and

80% of the private fixed cost. Hence, patent length should

be lowered to 4 years, instead of the current 20, without

extension. Recall that, again according to the industry,

effective patent terms are currently around 12 years from

the first day the drug is commercialized, hence we are

proposing to cut them down by 2/3, which is less than the

proportional cost reduction (see the arithmetic sketch after

this list). To compensate for the fact that

NIH-related inefficiencies may slow down the clinical trial

process, start patent terms from the first day on which

commercialization of the drug is authorized. A ten-year

transition period would allow enough time to prepare for

the new regulatory environment.

 

• Sizably reduce the number of drugs that cannot be sold

without medical prescription. For many drugs this is less a

protection of otherwise well-informed consumers than a

way of enforcing monopolistic control over doctors’

prescription patterns and of artificially increasing distribution

costs, with rents accruing partly to pharmaceutical

companies and partly to the inefficient local monopolies

called pharmacies.

 

• Allow for simultaneous or independent discovery, along the

lines of Gallini and Scotchmer.29 Further, because patent

terms should be running from the start of

commercialization, applications should be filed (but not

disclosed) earlier, and mandatory licensing of “idle” or

unused active chemical components and drugs should be

introduced. In other words, make certain the following

monopolistic tactic becomes unfeasible: file a patent

application for entire families of compounds, and then

develop them sequentially over a long period of time,

postponing clinical trials and production of some

compounds until patents on earlier members of the same

family have been fully exploited.
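
As flagged in the second bullet above, here is the arithmetic behind the proposed term cut, taking the industry’s own numbers quoted there at face value:

$$\frac{12 - 4}{12} = \frac{2}{3} \approx 67\% \;<\; 70\text{–}80\%$$

Shortening the effective term from about 12 years to 4 removes roughly two thirds of a patent’s effective life, while publicly financed stage II and III trials would (on the industry’s own cost claims) remove 70–80% of the private fixed cost; the proposed cut is thus, if anything, conservative relative to the cost reduction.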