Archive for December, 2012

So, after posting the previous post I started reading about seasteading. It's really interesting. Here are some more links:

en.wikipedia.org/wiki/Seasteading

Useful overview, lots of links to sources.

 

www.wired.com/techbiz/startups/magazine/17-02/mf_seasteading?currentPage=1

Mainstream introduction to the subject.

 

gramlich.net/projects/oceania/seastead1.html

A pragmatic approach to seasteading. Very much worth reading. About 25 pages.

 

en.wikipedia.org/wiki/Blueseed

A commercial approach.

 

www.seasteading.org/book/seasteading-book-beta/

The book about seasteading, currently in beta. Apparently only available in web format, which makes it annoying to read.

 

www.seasteading.org/2012/10/miguel-lamas-pardo-presents-seasteading-dissertation-at-university-francisco-marroquin/

seasteading.wpengine.netdna-cdn.com/wp-content/uploads/2012/04/Miguel-Lamas-Establishment-of-Autonomous-Ocean-Communities-English.pdf

Much more like it! A PhD thesis that analyzes seasteading. Looks very promising. About 300 pages.

 

from the thesis:

PREFACE

The idea surrounding floating cities is a topic that has been part of the collective imagination since the nineteenth century. It has been addressed by diverse fields both in science and in the arts (engineering, architecture and literature), particularly during the twentieth century, when it was realized that the technology had been developed to take on such a challenge.

Nonetheless, in many instances, the proposals lacked realistic foundations, and appeared to be motivated simply to seek media attention for their proponents.

This dissertation seeks to address this by providing a framework on the topic regarding the concept of “floating cities” by questioning why it is that humanity has sought to establish such cities.

We avoid the media-coined term “Floating Cities” and instead use a different term with a wider context, “Oceanic Colonization”, which we have defined as “the establishment of offshore autonomous communities aboard artificial platforms.”

Additionally, we have distinguished four types of oceanic colonization for four different and distinct objectives: 1) expansion of landholdings; 2) mobile settlements; 3) semipermanent mobile settlements to access marine resources; and 4) the creation of micronations.

It is this fourth category that will guide the review of the whole issue of ocean colonization.

The dissertation’s objective is to “analyze possible (current and future) options available to the discipline of Naval and Oceanic Engineering for the establishment of offshore autonomous communities that would allow for the creation of oceanic micronations.”

At the same time, we shall attempt to explore the future evolution of the three other objectives of oceanic colonization.

In Part I, State of the Art, we seek to review the most ambitious oceanic colonization projects espoused toward the creation of oceanic micronations (such as the Principality of Sealand), as well as those proposed by professionals outside of the Naval and Oceanic disciplines with apparently media-seeking proposals (such as the “Green Float” espoused by Shimizu Corporation).

We shall point out that these vain attempts have failed as they have not taken into account a series of requirements which shall be examined in Part II of this dissertation.

In Part II, Set-up and Challenges, we develop four essential requirements that need to be fulfilled by any oceanic settlement: 1) economic and commercial, 2) technical specifications surrounding the platforms, 3) legal and external relations, and 4) self-government and internal relations.

These requirements are common to all four forms of oceanic colonization, though the steps to achieving them are distinct and different for each one.

The research behind this dissertation is focused on the technical and legal requirements (requirements 2 and 3) to create a micronation in the oceans. To this effect, we researched existing platforms.

Thus, in Part III, Results, we present the review performed on the various platforms used in the three first forms of oceanic colonization identified and that best conform to the creation of oceanic micronations, including the legal nuances related to them.

The platform types reviewed included cruise ships and residential offshore and inshore flotels; also those termed Very Large Floating Structures or VLFS, and the offshore concrete-based structures.

At the conclusion of this section, we shall analyze the legal and regulatory requirements of oceanic colonization from the perspective of maritime law.

In Part IV, Results Analysis, we shall examine future trends of the four forms of oceanic colonization postulated.

We allocate greater detail to the review of oceanic colonization to form micronations based on the various platforms reviewed, and we provide a proposal of timelines and hypotheses as to how we see this form of colonization evolving.

Lastly, in Part V, Conclusions, we shall conclude that the oceanic colonization and the creation of micronations in the future is a result of the evolution of the other three forms of oceanic colonization: 1) expansion of land holdings, where the solution via VLFS appears to be a viable alternative, 2) mobile settlements (where the primary venue shall be cruise ships that will be converted into mobile floating ship-cities), and 3) the establishment of permanent oceanic settlements to access marine resources that will require permanent floating cities in order to best extract them.

How to Start Your Own Country – Erwin S. Strauss free ebook download pdf

 

It's a short, easily read book about how to start your own country. IMO the theoretical chapters were the most interesting, although some of the case studies were interesting as well. I got interested in the topic after having heard about Sealand, and after seeing this TED talk:

en.wikipedia.org/wiki/Seasteading

3D printing plays along extremely well with floating countries: manufacturing items usually requires large factories, but that is no longer needed when one can just print whatever is needed. Bitcoin makes it possible to trade over the internet, and the internet makes it possible to work remotely. This makes it possible to have a floating city that is economically self-sustaining without having to rely on gambling, the drug trade, tax-evasion companies, pirate radio, etc. to make money, although these are promising areas as well. Especially the area of inventions and online entrepreneurship is an easy one: all one needs to do is get fast internet connections (via satellite to begin with, perhaps, or via airborne drones?), and then not have any patent or copyright laws.

——-

 

The requirements for a new country to be considered to have achieved the traditional status of a sovereign nation are conventionally thought of in terms of such things as membership in the United Nations, exchange of ambassadors with other sovereign nations, acceptance of its passports at international boundaries, and so on. Actually, few nations completely achieve these goals. Many nations (Switzerland, for example) are not members of the United Nations. And for any given country, there are a number of others that, for one reason or another, do not choose to recognize it. But a nation that achieves a certain level of these tokens of recognition is generally regarded as having achieved traditional sovereignty. At any time, there are usually some entities that are borderline cases. For example, as of this writing the Republic of South Africa has declared that certain areas that were hitherto parts of the republic are now independent sovereign nations (Transkei, Bophuthatswana and Venda). However, no country besides South Africa has yet recognized them as such, and the status of persons holding passports from these nations is unclear. Their principal source of income appears to be the operation of gambling resorts in the parts of their territories closest to major South African cities (gambling is prohibited in the Republic of South Africa). By the way, this sort of activity shouldn’t be overlooked as a source of income for any new country.

 

I looked up the info about Switzerland. It was true when the author wrote this.

en.wikipedia.org/wiki/Foreign_relations_of_Switzerland#United_Nations

On September 10, 2002, Switzerland became a full member of the United Nations, after a referendum supporting full membership won in a close vote six months earlier; Swiss voters had rejected membership by a 3-to-1 margin in 1986. The 2002 vote made Switzerland the first country to join based on a popular vote.

 

-

 

The key requirement for sovereignty is that the country must have some territory that it calls its own, and hold on to it against all comers. Traveling potentates may well have what is called “extraterritorial status,” meaning that whatever premises they occupy are, for the duration of their occupation, the sovereign territory of their country. This is certainly convenient. However, the country through which the potentates are traveling must agree to this status, and such agreement is rarely forthcoming unless a potentate’s government holds some territory of its own somewhere. One class of exceptions are the embassies of the Baltic countries (Latvia, Lithuania and Estonia) in the United States. The United States has never recognized the annexation of these countries by the Soviet Union during World War II. The ambassadors from those lands who were accredited to the government in Washington at the time of the annexation continue to be recognized as such, since no competent authority (competent in the eyes of the United States, that is) has relieved them of their position. Whatever premises they occupy are the (only) sovereign territory of these nations. But this status is based on the home governments having held their own territory prior to World War II. Thus the precedent they set is of little use to the new-country organizer, whose country has never held any territory of its own.

 

Interesting, although not the case anymore, since the USSR has collapsed.

 

-

 

The subclass of international territory covers much of the seabed (although individual countries are always expanding their claims to territorial waters, shrinking the international area of the deep oceans), outer space, and a part of Antarctica. Speaking of Antarctica, it is a popular misconception that the Antarctic treaty signed in the 1950’s made all of Antarctica an international zone. All of the countries who had previously made claims on the continent merely agreed to hold them in abeyance until the end of the century, making no further claims and not attempting to implement existing ones. But for the next century, they have reserved the right to resume the prosecution of their claims. They agreed to the treaty essentially because they realized that their claims would have little practical value until then, and that there was no sense wasting a lot of time and energy pressing claims until then, as long as it could be assured that nobody else would use the hiatus to steal a march on them.

 

I wonder what happened.

 

en.wikipedia.org/wiki/Antarctica#Politics

 

-

 

en.wikipedia.org/wiki/Clipperton_Island

 

-

 

Another temptation is to declare that all settlers will participate in making decisions about how the new country is to be run. This approach may recruit a large number of people, but tends to attract lots of chiefs and few Indians. The people spend all their time and energy in debating every little point of policy, rather than in establishing the businesses and other institutions that are to be the backbone of the new country. Such groups sometimes compare themselves to the citizens of ancient Athens. But it should be kept in mind that only a minority of the people of Athens were actually citizens. While they debated the great issues, their slaves and other non-citizens took care of the day-to-day business of making the community work. In turn, the citizens’ common interest in maintaining their privileged position vis-a-vis the others acted as an incentive to reduce the factionalism into which such participatory decision-making institutions are prone to degenerate.

Great expression! Many chiefs and few Indians :D

 

-

 

So far we have looked mainly at the problems involved in getting a new country started and running smoothly. But what then? What can you look forward to for your children, and your children’s children? Can you expect them to carry on the work you have started? Or will the world change so much that your efforts become meaningless?

Human history changed dramatically when agriculture was invented. The minority of the people that could be freed from immediate food production found that the most profitable investment for this new-found leisure was the conquest of other people, and control of their agricultural surplus. This has been the pattern for the past 10,000 years: conquer and tax, tax and conquer some more. However, in the industrial age war has become so costly, even for the victors, that the opportunities are limited for conquest that can produce enough pelf to pay off the costs involved and finance the next wave of conquest. As weapons of mass destruction get cheaper, the costs of war to the “victor” will spiral even higher.

But one shouldn’t be too complacent that this will mean a world in which nation lives with nation in peace and harmony. The resulting peace may well be the peace of the grave. In the coming centuries, it will likely be possible to build doomsday machines that can destroy all life on Earth. For example, a small rocket motor on an asteroid a few miles in diameter could change the planetoid’s orbit just enough to hit the Earth, and effectively homogenize the outer few miles of the Earth’s crust. From an astronomical point of view, this might be a minor event. But for the sentient life on Earth, it could be essentially equivalent to atomizing the entire planet.

Once such means of destruction become generally available, it can only be a matter of time until some individual or group is faced with the collapse of their position — an Adolf Hitler, an Idi Amin, a terrorist group like the IRA or PLO, or even a business firm. People in such positions commonly contemplate suicide. Compared to this, threatening to play the role of Samson in the Temple if the world does not accede to their demands seems eminently reasonable, if the means are available. The first few people trying this can be appeased. But eventually the demands from such blackmailers will become too numerous, too large, and too contradictory to be completely satisfied. Many desperate people committing suicide have tried to take as many people with them as possible. As the weapons available to them increase in power, it can only be a matter of time before they are able to fulfill their ambitions of bringing the whole world down with them.

If humankind is to survive, I see no alternative to expanding outward into space. And this doesn’t mean just settling on other planets and moons. They will be just as vulnerable to doomsday weapons as the Earth, and there aren’t enough of them to insure that some will survive an Armageddon. Only a large number of communities well dispersed in the volume of space seems likely to have a chance to escape the fury of a frustrated blackmailer or a suicidal grudge holder. Such people will be able to destroy a few communities, just as today terrorists can fairly easily destroy an airplane with hundreds of people aboard. Such an act is a disaster for those on the plane, and is hardly cause for celebration by their friends and relatives and other supporters of the things they stood for. But the human race survives. The continuity of the cultures of the world is not broken.

 

He is right about this. We see the beginnings with 3D-printed weapons; those will become possible soon, making gun control laws rather moot. After that, more powerful weapons will be able to be made; explosives can't be too far off in the future.

 

It is easier to destroy than to create, and as the power of technology inevitably rises, this will become a larger and larger threat as long as humanity remains densely concentrated.

 

-

 

CALLAWAY, KINGDOM OF

During the American Civil War, the county of Callaway in the state of Missouri sympathized with the Confederacy, but was facing occupation by an overwhelming Union force. Col. Jefferson Jones mounted an impressive display of force, complete with a dummy cannon of wood painted black. Unaware that Jones had only 300 old men and boys, Union Gen. John B. Henderson signed a mutual non-aggression treaty with Callaway, which then became known as the Kingdom of Callaway. Of course, as soon as the Union decided it was time to move into the area, the treaty meant nothing. This reinforces Machiavelli’s dictum, “Put not your faith in Princes” — nor in their scraps of paper.

 

True story:

www.kchsoc.org/legend.html

 

-

 

CONCH REPUBLIC

This is a mouse-that-roared operation on Key West in Florida. Because of the high incidence of illegal immigration and drug smuggling into the United States in that area, a roadblock was set up in April of 1982. This caused a 19-mile-long traffic jam, and incensed the local tourist industry. On April 23, 1982, they declared themselves to be the Conch (pronounced “konk”) Republic. A silver commemorative medal was produced, and the first anniversary of independence was celebrated by a Festival Weekend. Conch shells were sent out to the media to promote the event. The spokesman seems to be George Tregaskis, of Key West FL 33040.

 

Very funny! It continues to this day!

en.wikipedia.org/wiki/Conch_Republic

 

-

 

 

The One World Schoolhouse – Salman Khan ebook free download pdf

This is a short, easy-to-read, nonacademic (few references) book. It has some shortcomings on matters dealing with test taking and intelligence tests, but those aren't that important for the main topics of the book. This book should be read by anyone interested in public policy regarding education.

 

 

As always, quotes and comments below. Quotes are in red.

 

—-

 

I was born in Metairie, Louisiana, a residential area within metro New Orleans. My father, a pediatrician, had moved there from Bangladesh for his medical residency at LSU and, later, his practice at Charity Hospital. In 1972, he briefly returned to Bangladesh and came back with my mother—who was born in India. It was an arranged marriage, very traditional (my mother tried to peek during the ceremony to make sure she was marrying the brother she thought she was). Over the next several years, five of my mother’s brothers and one cousin came to visit, and they all fell in love with the New Orleans area. I believe that they did this because Louisiana was as close to South Asia as the United States could get; it had spicy food, humidity, giant cockroaches, and a corrupt government. We were a close family—even though, at any given moment, half of my relatives weren’t speaking to the other half.

 

 

Chuckle

 

-

 

Let me be clear—I think it’s essential for everything that follows—that at the start this was all an experiment, an improvisation. I’d had no teacher training, no Big Idea for the most effective way to teach. I did feel that I understood math intuitively and holistically, but this was no guarantee that I’d be effective as a teacher. I’d had plenty of professors who knew their subject cold but simply weren’t very good at sharing what they knew. I believed, and still believe, that teaching is a separate skill—in fact, an art that is creative, intuitive, and highly personal.

 

I think he is right about that. So it makes no sense to me when Danish politicians focus on having research-based education, which means that the teacher must be a researcher himself. But given the imperfect and perhaps low (?) correlation between teaching ability and research ability, that seems at best a bad idea, and at worst a dangerously bad idea.

 

-

 

It ignores several basic facts about how people actually learn. People learn at different rates. Some people seem to catch on to things in quick bursts of intuition; others grunt and grind their way toward comprehension. Quicker isn’t necessarily smarter and slower definitely isn’t dumber. Further, catching on quickly isn’t the same as understanding thoroughly. So the pace of learning is a question of style, not relative intelligence. The tortoise may very well end up with more knowledge—more useful, lasting knowledge—than the hare.

 

It pains me to read stuff like this. You gotta look into g, Mr. Khan.

 

-

 

Let me emphasize this difference, because it is central to everything I argue for in this book. In a traditional academic model, the time allotted to learn something is fixed while the comprehension of the concept is variable. Washburne was advocating the opposite. What should be fixed is a high level of comprehension and what should be variable is the amount of time students have to understand a concept.

 

Obvious, but apparently ignored by those who support the current one-size-fits-all system (based on age). Well, almost one size: there is special education for those simply too stupid, too unruly, or too handicapped to learn something in a normal class.

 

-

 

The findings of Kandel and other neuroscientists have much to say about how we actually learn; unfortunately, the standard classroom model tends to ignore or even to fly in the face of these fundamental biological truths. Stressing passivity over activity is one such misstep. Another, equally important, is the failure of standard education to maximize the brain’s capacity for associative learning—the achieving of deeper comprehension and more durable memory by relating something newly learned to something already known. Let’s take a moment to consider this.

 

Yes, this is very important. Hence why mem-based learning works really well (an online learning site, www.memrise.com, is based on this idea, and it works very well!). Also think of how memory techniques work: they are based on associations as well. Cf. en.wikipedia.org/wiki/Memorization#Techniques

 

Recently, quite a few books have been written on this subject, probably because of the recent interest in memory as a sport discipline. Cf. en.wikipedia.org/wiki/World_Memory_Championships

 

-

 

Active learning, owned learning, also begins with giving each student the freedom to determine where and when the learning will occur. This is the beauty of the Internet and the personal computer. If someone wants to study the quadratic equation on his back porch at 3 a.m., he can. If someone thinks best in a coffee shop or on the sideline of a soccer field, no problem. Haven’t we all come across kids who seem bright and alert except when they’re in class? Isn’t it clear that there are morning people and night people? The radical portability of Internet-based education allows students to learn in accordance with their own personal rhythms, and therefore most efficiently.

 

A good application to fix the morningness vs. eveningness problem (in Danish: "a-menneske" vs. "b-menneske", roughly "morning person" vs. "evening person"). Cf. en.wikipedia.org/wiki/Morningness-eveningness_questionnaire and en.wikipedia.org/wiki/Chronotype

 

-

 

Tests say little or nothing about a student’s potential to learn a subject. At best, they offer a snapshot of where the student stands at a given moment in time. Since we have seen that students learn at widely varying rates, and that catching on faster does not necessarily imply understanding more deeply, how meaningful are these isolated snapshots?

 

Yes they do; achievement tests correlate well with the g factor.

 

-

 

And all of this might have happened because of one snapshot test, administered on one morning in the life of a twelve-year-old girl—a test that didn’t even test what it purported to be testing! The exam, remember, claimed to be measuring math potential—that is, future performance. Nadia did poorly on it because of one past concept that she’d misunderstood. She has cruised through every math class she’s ever taken since (she took calculus as a sophomore in high school). What does this say about the meaningfulness and reliability of the test? Yet we look to exams like this to make crucial, often irreversible, and deceptively “objective” decisions regarding the futures of our kids.

 

It implies that it isn't a perfectly valid test; no one claims that such tests have perfect validity.

It doesn't say anything about reliability, AFAICT.

 

-

 

What will make this goal attainable is the enlightened use of technology. Let me stress ENLIGHTENED use. Clearly, I believe that technology-enhanced teaching and learning is our best chance for an affordable and equitable educational future. But the key question is how the technology is used. It’s not enough to put a bunch of computers and smartboards into classrooms. The idea is to integrate the technology into how we teach and learn; without meaningful and imaginative integration, technology in the classroom could turn out to be just one more very expensive gimmick.

 

[I had to type this one out by hand; apparently the OCR couldn't handle bold text???]

 

Surely Mr. Khan is right about this.

 

-

 

I happen to believe that every student, given the tools and the help that he or she needs, can reach this level of proficiency in basic math and science. I also believe it is a disservice to allow students to advance without this level of proficiency, because they’ll fall on their faces sometime later.

 

Living in a dream world. Good luck teaching math to the mentally retarded.

Lesson: this is why NOT to use words like <every> and <all>. It is not possible to raise everybody to full mastery of basic math and science, but it is surely possible to lift most people to new heights with better teaching etc.

 

-

 

It turned out that Peninsula Bridge used the video lessons and software at three of its campuses that summer. Some of the ground rules were clear. The Academy would be used in addition to, not in place of, a traditional math curriculum. The videos would only be used during “computer time,” a slot that was shared with learning other tools such as Adobe Photoshop and Illustrator. Even within this structure, however, there were some important decisions to be made; the decisions, in turn, transformed the Peninsula Bridge experience into a fascinating and in some ways surprising test case.

The first decision was the question of where in math the kids should start. The Academy math curriculum began, literally, with 1 + 1 = 2. But the campers were mainly sixth to eighth graders. True, most of them had serious gaps in their understanding of math and many were working below their grade level. Still, wouldn’t it be a bit insulting and a waste of time to start them with basic addition? I thought so, and so I proposed beginning at what would normally be considered fifth-grade material, in order to allow for some review. To my surprise, however, two of the three teachers who were actually implementing the plan said they preferred to start at the very beginning. Since the classes had been randomly chosen, we thereby ended up with a small but classic controlled experiment.

The first assumption to be challenged was that middle-school students would find basic arithmetic far too easy. Among the groups that had started with 1 + 1, most of the kids, as expected, rocketed through the early concepts. But some didn’t. A few got stuck on things as fundamental as two-digit subtraction problems. Some had clearly never learned their multiplication tables. Others were lacking basic skills regarding fractions or division. I stress that these were motivated and intelligent kids. But for whatever reason, the Swiss cheese gaps in their learning had started creeping in at a distressingly early stage, and until those gaps were repaired they had little chance of mastering algebra and beyond.

The good news, however, is that once identified, those gaps could be repaired, and that when the shaky foundation had been rebuilt, the kids were able to advance quite smoothly.

This was in vivid and unexpected contrast to the group that had started at the fifth-grade level. Since they’d begun with such a big head start, I assumed that by the end of the six-week program they would be working on far more advanced concepts than the other group. In fact just the opposite happened. As in the classic story of the tortoise and the hare, the 1 + 1 group plodded and plodded and eventually passed them right by. Some of the students in the “head start” group, on the other hand, hit a wall and just couldn’t seem to progress. There were sixth- and seventh-grade concepts that they simply couldn’t seem to master, presumably because of gaps in earlier concepts. In comparing the performance of the two groups, the conclusion seemed abundantly clear: Nearly all the students needed some degree of remediation, and the time spent on finding and fixing the gaps turned out both to save time and deepen learning in the longer term.

 

If that is really true, that's a HUGELY important finding. Any replications of this?

 

-

 

As we settled into the MIT routine, Shantanu and I began independently to arrive at the same subversive but increasingly obvious conclusion: The giant lecture classes were a monumental waste of time. Three hundred students crammed into a stifling lecture hall; one professor droning through a talk he knew by heart and had delivered a hundred times before. The sixty-minute talks were bad enough; the ninety-minute talks were torture. What was the point? Was this education or an endurance contest? Was anybody actually learning anything? Why did students show up at all? Shantanu and I came up with two basic theories about this. Kids went to the lectures either because their parents were paying x number of dollars per, or because many of the lecturers were academic celebrities, so there was an element of show business involved.

 

I feel exactly the same about my university classes. I want to learn, goddammit, not sit in class waiting for it to end.

 

-

 

Then there are the standardized tests to which students are subjected from third grade straight on through to grad school. As I’ve said, I am not antitesting; I believe that well-conceived, well-designed, and fairly administered tests constitute one of our few real sources of reliable and relatively objective data regarding students’ preparedness. But note that I say preparedness, not potential. Well-designed tests can give a pretty solid idea of what a student has learned, but only a very approximate picture of what she can learn. To put it in a slightly different way, tests tend to measure quantities of information (and sometimes knowledge) rather than quality of minds—not to mention character. Besides, for all their attempts to appear precise and comprehensive, test scores seldom identify truly notable ability. If you’re the admissions director at Caltech or in charge of hiring engineers at Apple, you’re going to see a heck of a lot of candidates who had perfect scores on their math SATs. They are all going to be fairly smart people, but the scores tell you little about who is truly unique.

 

Mr. Khan obviously knows little about intelligence tests. Sure, the SAT, ACT and GRE are achievement tests, but those correlate moderately to strongly with the g factor, so they are okay-to-decent intelligence tests. And of course, IQ tests like the RPM are really good at measuring the g factor. They really can measure a student's potential, in that they measure the student's ability very well, and that is closely related to the student's potential.

 

-

 

For me personally, the biggest discovery has been how hungry students are for real understanding. I sometimes get pushback from people saying, “Well, this is all well and good, but it will only work for motivated students.” And they say it assuming that maybe 20 percent of students fall into that category. I probably would have agreed with them seven years ago, based on what I’d seen in my own experience with the traditional academic model. When I first started making videos, I thought I was making them only for some subset of students who cared—like my cousins or younger versions of myself. What was truly startling was the reception the lessons received from students whom people had given up on, and who were about to give up on themselves. It made me realize that if you give students the opportunity to learn deeply and to see the magic of the universe around them, almost everyone will be motivated.

 

It will be interesting to see just how many students care.

 

-

 

Is Khan Academy, along with the intuitions and ideas that underpin it, our best chance to move toward a better educational future? That’s not for me to say. Other people of vision and goodwill have differing approaches, and I fervently hope that all are given a fair trial in the wider world. But new and bold approaches do need to be tried. The one thing we cannot afford to do is to leave things as they are. The cost of inaction is unconscionably high, and it is counted out not in dollars or euros or rupees but in human destinies. Still, as both an engineer and a stubborn optimist, I believe that where there are problems, there are also solutions. If Khan Academy proves to be even part of the solution to our educational malaise, I will feel proud and privileged to have made a contribution.

 

Indeed, never trying anything new implies no progress.

 

Reminds me of another book I want to read soon:

www.goodreads.com/book/show/13237711-uncontrolled?auto_login_attempted=true

 

-

 

 

I was researching a different topic and came across this paper. I was rewatching the Everything Is a Remix series, then I looked up some more relevant links and came across these videos. One of them mentioned this article.

Complex to the ear but simple to the mind (Nicholas J Hudson)

Abstract:

Background: The biological origin of music, its universal appeal across human cultures and the cause of its beauty
remain mysteries. For example, why is Ludwig Van Beethoven considered a musical genius but Kylie Minogue is
not? Possible answers to these questions will be framed in the context of Information Theory.
Presentation of the Hypothesis: The entire life-long sensory data stream of a human is enormous. The adaptive
solution to this problem of scale is information compression, thought to have evolved to better handle, interpret
and store sensory data. In modern humans highly sophisticated information compression is clearly manifest in
philosophical, mathematical and scientific insights. For example, the Laws of Physics explain apparently complex
observations with simple rules. Deep cognitive insights are reported as intrinsically satisfying, implying that at some
point in evolution, the practice of successful information compression became linked to the physiological reward
system. I hypothesise that the establishment of this “compression and pleasure” connection paved the way for
musical appreciation, which subsequently became free (perhaps even inevitable) to emerge once audio
compression had become intrinsically pleasurable in its own right.
Testing the Hypothesis: For a range of compositions, empirically determine the relationship between the
listener’s pleasure and “lossless” audio compression. I hypothesise that enduring musical masterpieces will possess
an interesting objective property: despite apparent complexity, they will also exhibit high compressibility.
Implications of the Hypothesis: Artistic masterpieces and deep Scientific insights share the common process of
data compression. Musical appreciation is a parasite on a much deeper information processing capacity. The
coalescence of mathematical and musical talent in exceptional individuals has a parsimonious explanation. Musical
geniuses are skilled in composing music that appears highly complex to the ear yet transpires to be highly simple
to the mind. The listener’s pleasure is influenced by the extent to which the auditory data can be resolved in the
simplest terms possible.

Interesting, but it is way too short on data. It's not that difficult to acquire some data to test this hypothesis. Various open source lossless compressors are freely available; I'm thinking particularly of FLAC encoders. Then one needs a huge library of music, and some sort of ranking of the music related to its quality. If the hypothesis is correct, then the best music should come out on top, at least relatively within genres, or within bands, etc. I think I will test this myself.
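To make the test concrete, here is a rough Python sketch of how one could run it. It is only an illustration of the idea, not a finished study: it assumes the `flac` command-line encoder is installed, that the tracks are available as WAV files, and that a hypothetical ratings.csv file pairs each track with some quality ranking.

```python
# Minimal sketch of the proposed test: FLAC-compress each track (from WAV),
# use compressed size / original size as a complexity measure, and correlate
# it with a quality rating. Paths and the ratings.csv layout are hypothetical.
import csv, os, subprocess, tempfile
from scipy.stats import spearmanr

def flac_ratio(wav_path):
    """Return compressed_size / original_size for one WAV file."""
    with tempfile.NamedTemporaryFile(suffix=".flac", delete=False) as tmp:
        out = tmp.name
    subprocess.run(["flac", "--best", "-f", "-o", out, wav_path],
                   check=True, capture_output=True)
    ratio = os.path.getsize(out) / os.path.getsize(wav_path)
    os.remove(out)
    return ratio

# ratings.csv is assumed to have columns: path,rating (higher = better music)
rows = list(csv.DictReader(open("ratings.csv")))
ratios = [flac_ratio(r["path"]) for r in rows]
ratings = [float(r["rating"]) for r in rows]

# The hypothesis predicts that better music is *more* compressible, i.e. a
# negative correlation between compression ratio and rating.
rho, p = spearmanr(ratios, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

One would of course want to compare within genre or within artist, as noted above, since raw compressibility also varies with instrumentation and recording style.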

Background

Lately I’ve been interested in cluster analysis and factor analysis. These two families of analyses have a great many practical, data-related uses. So far I’ve begun cluster analyzing Wikipedia to get an overall idea about the structure of human knowledge (how cool is that?). I’ve also read Arthur Jensen’s The g Factor to get an idea about how factor analysis works with regard to intelligence testing, and other psychometrics and biometrics (like the proposed f factor).

Today I was reading a book about the future of schooling, Salman Khan’s The One World Schoolhouse (I will post my review soon). In the book he mentions some stuff about homework. I was curious and looked up his sources. That got me reading a meta-analysis (another kind of analysis! I love analysis) about the effects of homework. While reading that I got a new idea for an analysis.

The idea

Citation indexes already exist. With such an index, one can look up a particular paper and find other papers that cite that paper. Or one can look up an author and see which papers he has published and who cites those papers and so on. However, these tools have no or poor graphical representations of the data. It is a shame, since graphical representations of data are so much more useful and cool. One need only watch a couple of TED talks about the subject to be convinced:



There are various things that one can show graphically in a very illustrative way. My idea is to have each paper as a node and have lines between them that indicate who cites whom. These lines would normally be one-directional, since it is difficult to cite a paper that will be published in the future (though it does happen that papers cite other papers that are “in press”, so in a sense it’s not unheard of). My idea is that on the y-axis (or the x-axis, if one prefers) time is shown. In this way one can follow the citations of a paper over time. More interestingly, one can follow the citations between the other papers that cite the first paper over time: a web that becomes more complex over time, or perhaps dies off if the academic community loses interest in that particular subject (academic interest is a bit like fashion).

Here’s a fictitious example that I have made to show off the general idea:

(Proposal: A graphical tool to explore relationships between academic papers)

In the example above, there are 20 papers marked for interest. All the citations between them are then found, and shown with lines. Optimally, the direction of the relationships should also be shown, perhaps by small arrows on the lines. Also optimally, the authors or titles (or both) of the papers should be shown in a very small font on top of the nodes, or something like that. These should be enlarged when the mouse is on top of a node, with links to the actual paper and the abstract ready to be read.

It is also possible to color the nodes by author or research group. In the example above, there are two lines of authors, or research groups, or research programs. The left one publishes more papers than the right one. One can employ various coloring schemes to make such features salient in the graphical representation. One can also see how the two lines interrelate; they do cite each other's papers, just not as frequently as they cite their own.

One can also vary the nodes with respect to information other than authorship. One can scale their size by each paper's individual citation count, for instance. This makes it easier for an outsider to locate the papers that gathered the most cites (either in general, or within the pool of papers of interest), and hence, most likely, the most interest from fellow researchers. If one wants, one can also do the opposite, and look for hidden gems of insight in the literature that have been missed by other authors.

Even better, given the problems with replication (especially direct replication) in some fields of science, psychology in particular, one can color nodes according to whether they are replications of previous papers or not. One could also have special arrows for replications. Similarly, literature reviews, meta-analyses and systematic reviews could have their own node shape or color so that one can locate them more easily. Surely, something like this is the proper way of evaluating the influence of scientific papers. A rough sketch of the layout follows below.
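Here is a minimal Python sketch of the core layout idea (time on one axis, node size following citation count, color following research group). The papers, years, groups and citations are all invented for illustration; networkx and matplotlib are just one convenient way to draw it, not a commitment to any particular toolkit.

```python
# Toy version of the proposed view: papers as nodes, citations as directed
# edges, publication year on the y-axis, node size ~ citation count,
# color ~ research group. All data below is made up.
import networkx as nx
import matplotlib.pyplot as plt

papers = [("A1", 2001, "left"), ("A2", 2003, "left"), ("A3", 2006, "left"),
          ("B1", 2002, "right"), ("B2", 2007, "right")]
citations = [("A2", "A1"), ("A3", "A1"), ("A3", "A2"),
             ("B2", "B1"), ("B2", "A1")]  # newer paper -> older paper

G = nx.DiGraph()
for pid, year, group in papers:
    G.add_node(pid, year=year, group=group)
G.add_edges_from(citations)

# x = a column per research group, y = publication year.
xpos = {"left": 0.0, "right": 1.0}
pos = {p: (xpos[d["group"]], d["year"]) for p, d in G.nodes(data=True)}
sizes = [300 + 300 * G.in_degree(p) for p in G]  # in-degree = times cited here
colors = ["tab:blue" if G.nodes[p]["group"] == "left" else "tab:orange" for p in G]

nx.draw_networkx(G, pos, node_size=sizes, node_color=colors, arrows=True)
plt.ylabel("publication year")
plt.show()
```

A real tool would pull the nodes and edges from a citation index instead of a hand-written list, and would add the hover-to-enlarge behaviour described above, but the static picture already conveys the idea.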

What next?

Two things: improve the ideas, and add to them. Then 1) find programmers and convince them that the project is cool and that they should invest their time in it! 2) Find other people who have more prestige and, hopefully, access to funding that can be used to hire programmers to convert the ideas into reality.

The g factor, the science of mental ability – Arthur R. Jensen, ebook download pdf free

 

This is a very interesting book; without a doubt the best about intelligence that I have read so far. I definitely recommend reading it if one is interested in psychometrics. It can serve as a long, good, but somewhat dated introduction to the subject. For a shorter introduction, Gottfredson's "Why g Matters" is probably better.

 

 

Quotes and comments below. Red text = quotes.

——-

 

Galton had no tests for obtaining direct measurements of cognitive ability. Yet he tried to estimate the mean levels of mental capacity possessed by different racial and national groups on his interval scale of the normal curve. His estimates—many would say guesses—were based on his observations of people of different races encountered on his extensive travels in Europe and Africa, on anecdotal reports of other travelers, on the number and quality of the inventions and intellectual accomplishments of different racial groups, and on the percentage of eminent men in each group, culled from biographical sources. He ventured that the level of ability among the ancient Athenian Greeks averaged “two grades” higher than that of the average Englishmen of his own day. (Two grades on Galton’s scale is equivalent to 20.9 IQ points.) Obviously, there is no possibility of ever determining if Galton’s estimate was anywhere near correct. He also estimated that African Negroes averaged “at least two grades” (i.e., 1.39σ, or 20.9 IQ points) below the English average. This estimate appears remarkably close to the results for phenotypic ability assessed by culture-reduced IQ tests. Studies in sub-Saharan Africa indicate an average difference (on culture-reduced nonverbal tests of reasoning) equivalent to 1.43σ, or 21.5 IQ points between blacks and whites.[8] U.S. data from the Armed Forces Qualification Test (AFQT), obtained in 1980 on large representative samples of black and white youths, show an average difference of 1.36σ (equivalent to 20.4 IQ points)—not far from Galton’s estimate (1.39σ, or 20.9 IQ points).[9] But intuition and informed guesses, though valuable in generating hypotheses, are never acceptable as evidence in scientific research. Present-day scientists, therefore, properly dismiss Galton’s opinions on race. Except as hypotheses, their interest is now purely biographical and historical.

 

Yes, there is. First, one can check the historical record for dysgenic effects; if the British are less smart than the ancient Greeks, there would probably have been some dysgenic effects somewhere in history. Still, this is not a good method, since the population groups are somewhat different.

Second, soon we will know the genes that cause different levels of intelligence. We can then analyze the remains of ancient Greeks to see which genes they had. This should give a pretty good estimate, although not a perfect one, because of 1) new mutations that have arisen since then, 2) gene variants that have perhaps disappeared, 3) the difficulty of getting a representative sample of ancient Greeks to test, and 4) the problems with getting good enough quality DNA to run tests on. Still, I don't think these are impossible to overcome, and I predict that some decent estimate can be made.
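As a quick sanity check on the numbers in the quote, the conversion between Jensen's σ (standard deviation) units and IQ points is just multiplication by the conventional IQ standard deviation of 15; this is my own arithmetic, not from the book:

```python
# Convert the quoted gaps from sigma units to IQ points, assuming SD = 15.
SD_IQ = 15
for gap_sigma in (1.39, 1.43, 1.36):
    print(f"{gap_sigma} sigma = {gap_sigma * SD_IQ:.1f} IQ points")
# 1.39 sigma ~ 20.9, 1.43 sigma ~ 21.5, 1.36 sigma ~ 20.4 -> matches the quote
```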

 

-

 

A General Factor Is Not Inevitable. Factor analysis is not by its nature bound to produce a general factor regardless of the nature of the correlation matrix that is analyzed. A general factor emerges from a hierarchical factor analysis if, and only if, a general factor is truly latent in the particular correlation matrix. A general factor derived from a hierarchical analysis should be based on a matrix of positive correlations that has at least three latent roots (eigenvalues) greater than 1.

For proof that a general factor is not inevitable, one need only turn to studies of personality. The myriad of inventories that measure various personality traits have been subjected to every type of factor analysis, yet no general factor has ever emerged in the personality domain. There are, however, a great many first-order group factors and several clearly identified second-order group factors, or “superfactors” (e.g., introversion-extraversion, neuroticism, and psychoticism), but no general factor. In the abilities domain, on the other hand, a general factor, g, always emerges, provided the number and variety of mental tests are sufficient to allow a proper factor analysis. The domain of body measurements (including every externally measurable feature of anatomy) when factor analyzed also shows a large general factor (besides several small group factors). Similarly, the correlations among various measures of athletic ability show a substantial general factor.

 

 

Jensen was wrong about this, although the significance of that is disputed, AFAICT. See:

How Important Is the General Factor of Personality? A General Critique (William Revelle and Joshua Wilt), PDF
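To make the quote's machinery a bit more concrete, here is a minimal numpy sketch of the kind of inspection it talks about: looking at the eigenvalues of a correlation matrix and pulling out a first factor. The 6x6 matrix is invented for illustration, and the first principal component is used only as a crude stand-in for the general factor, not as the hierarchical analysis Jensen describes.

```python
# Toy illustration: eigenvalues of a positive correlation matrix and the
# loadings of its first principal component (a rough proxy for a general factor).
# The matrix below is made up; all entries positive ("positive manifold").
import numpy as np

R = np.array([
    [1.00, 0.55, 0.50, 0.45, 0.40, 0.35],
    [0.55, 1.00, 0.52, 0.43, 0.41, 0.36],
    [0.50, 0.52, 1.00, 0.48, 0.39, 0.38],
    [0.45, 0.43, 0.48, 1.00, 0.44, 0.37],
    [0.40, 0.41, 0.39, 0.44, 1.00, 0.42],
    [0.35, 0.36, 0.38, 0.37, 0.42, 1.00],
])

eigvals, eigvecs = np.linalg.eigh(R)               # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending
print("eigenvalues:", np.round(eigvals, 2))

# Loadings on the first component: eigenvector scaled by sqrt(eigenvalue).
loadings = np.abs(eigvecs[:, 0] * np.sqrt(eigvals[0]))
print("first-factor loadings:", np.round(loadings, 2))
```

With a personality-style matrix (clusters of correlations but near-zero correlations between clusters), the first eigenvalue no longer dominates and the loadings split up, which is the contrast the quote is pointing at.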

 

-

 

In jobs where assurance of competence is absolutely critical, however, such as airline pilots and nuclear reactor operators, government agencies seem to have recognized that specific skills, no matter how well trained, though essential for job performance, are risky if they are not accompanied by a fairly high level of g. For example, the TVA, a leader in the selection and training of reactor operators, concluded that results of tests of mechanical aptitude and specific job knowledge were inadequate for predicting an operator’s actual performance on the job. A TVA task force on the selection and training of reactor operators stated: “intelligence will be stressed as one of the most important characteristics of superior reactor operators. . . . intelligence distinguishes those who have merely memorized a series of discrete manual operations from those who can think through a problem and conceptualize solutions based on a fundamental understanding of possible contingencies.”[161] This reminds one of Carl Bereiter’s clever definition of “intelligence” as “what you use when you don’t know what to do.”

 

Funny and true.

 

-

 

The causal underpinnings of mental development take place at the neurological level even in the absence of any specific environmental inputs such as those that could possibly explain mental growth in something like figure copying in terms of transfer from prior learning. The well-known “Case of Isabel” is a classic example.[181] From birth to age six, Isabel was totally confined to a dimly lighted attic room, where she lived alone with her deaf-mute mother, who was her only social contact. Except for food, shelter, and the presence of her mother, Isabel was reared in what amounted to a totally deprived environment. There were no toys, picture books, or gadgets of any kind for her to play with. When found by the authorities, at age six, Isabel was tested and found to have a mental age of one year and seven months and an IQ of about 30, which is barely at the imbecile level. In many ways she behaved like a very young child; she had no speech and made only croaking sounds. When handed toys or other unfamiliar objects, she would immediately put them in her mouth, as infants normally do. Yet as soon as she was exposed to educational experiences she acquired speech, vocabulary, and syntax at an astonishing rate and gained six years of tested mental age within just two years. By the age of eight, she had come up to a mental age of eight, and her level of achievement in school was on a par with her age-mates. This means that her rate of mental development—gaining six years of mental age in only two years—was three times faster than that of the average child. As she approached the age of eight, however, her mental development and scholastic performance drastically slowed down and proceeded thereafter at the rate of an average child. She graduated from high school as an average student.

What all this means to the g controversy is that the neurological basis of information processing continued developing autonomously throughout the six years of Isabel’s environmental deprivation, so that as soon as she was exposed to a normal environment she was able to learn those things for which she was developmentally “ready” at an extraordinarily fast rate, far beyond the rate for typically reared children over the period of six years during which their mental age normally increases from two to eight years. But the fast rate of manifest mental development slowed down to an average rate at the point where the level of mental development caught up with the level of neurological development. Clearly, the rate of mental development during childhood is not just the result of accumulating various learned skills that transfer to the acquisition of new skills, but is largely based on the maturation of neural structures.

 

This reminds me of the person who suggested that we delay teaching math in schools for the same reason. It is simply more time-effective, and time is costly, both for the child, who has limited freedom during the time spent in school, and for society, because that time could have been spent teaching something else, or not spent at all, thus saving money on teachers.

The idea is that some math subjects take very long to teach to, say, 8-year-olds, but can rapidly be taught to 12-year-olds. So, using some invented numbers: instead of spending 10 hours teaching long division to 8-year-olds, we could spend 2 hours teaching long division to 12-year-olds, thus saving 8 hours that can either be used on something else that can be taught easily to 8-year-olds, or simply freed up for non-teaching activities.

See www.inference.phy.cam.ac.uk/sanjoy/benezet/ for the original papers.

 

-

 

Perhaps the most problematic test of overlapping neural elements posited by the sampling theory would be to find two (or more) abilities, say, A and B, that are highly correlated in the general population, and then find some individuals in whom ability A is severely impaired without there being any impairment of ability B. For example, looking back at Figure 5.2, which illustrates sampling theory, we see a large area of overlap between the elements in Test A and the elements in Test B. But if many of the elements in A are eliminated, some of its elements that are shared with the correlated Test B will also be eliminated, and so performance on Test B (and also on Test C in this diagram) will be diminished accordingly. Yet it has been noted that there are cases of extreme impairment in a particular ability due to brain damage, or sensory deprivation due to blindness or deafness, or a failure in development of a certain ability due to certain chromosomal anomalies, without any sign of a corresponding deficit in other highly correlated abilities.[22] On this point, behavioral geneticists Willerman and Bailey comment: “Correlations between phenotypically different mental tests may arise, not because of any causal connection among the mental elements required for correct solutions or because of the physical sharing of neural tissue, but because each test in part requires the same ‘qualities’ of brain for successful performance. For example, the efficiency of neural conduction or the extent of neuronal arborization may be correlated in different parts of the brain because of a similar epigenetic matrix, not because of concurrent functional overlap.”[22] A simple analogy to this would be two independent electric motors (analogous to specific brain functions) that perform different functions both running off the same battery (analogous to g). As the battery runs down, both motors slow down at the same rate in performing their functions, which are thus perfectly correlated although the motors themselves have no parts in common. But a malfunction of one machine would have no effect on the other machine, although a sampling theory would have predicted impaired performance for both machines.

 

I know it's only an analogy, but whether there are one or two motors drawing from one battery might have an effect on their speed; that depends on the setup, I think.

 

-

 

Gc is most highly loaded in tests based on scholastic knowledge and cultural content where the relation-eduction demands of the items are fairly simple. Here are two examples of verbal analogy problems, both of about equal difficulty in terms of percentage of correct responses in the English-speaking general population, but the first is more highly loaded on Gf and the second is more highly loaded on Gc.

1. Temperature is to cold as Height is to
(a) hot (b) inches (c) size (d) tall (e) weight

2. Bizet is to Carmen as Verdi is to
(a) Aida (b) Elektra (c) Lakme (d) Manon (e) Tosca

 

For the first one, I wanted to answer <small>, since <cold> is at the bottom of the scale of temperature, so I wanted something at the bottom of the scale of height. But there is no such option; tall is at least on the scale of height, just as cold is on the scale of temperature. With no better option, I went with (d), which was correct.

 

The second one, however, made no sense to me. I did look for patterns in spelling, vowels, length, etc., but found nothing. I then googled it: it's composers and their operas.

en.wikipedia.org/wiki/Georges_Bizet

en.wikipedia.org/wiki/Carmen

en.wikipedia.org/wiki/Giuseppe_Verdi

en.wikipedia.org/wiki/Aida

 

-

 

Another blood variable of interest is the amount of uric acid in the blood

(serum urate level). Many studies have shown it to have only a slight positive

correlation with IQ. But it is considerably more correlated with measures of

ambition and achievement. Uric acid, which has a chemical structure similar to

caffeine, seems to act as a brain stimulant, and its stimulating effect over the

course of the individual’s life span results in more notable achievements than

are seen in persons of comparable IQ, social and cultural background, and gen­

eral life-style, but who have a lower serum urate level. High school students

with elevated serum urate levels, for example, obtain higher grades than their

IQ-matched peers with an average or below-average serum urate level, and,

amusingly, one study found a positive correlation between university professors’

serum urate levels and their publication rates. The undesirable aspect of high

serum urate level is that it predisposes to gout. In fact, that is how the association

was originally discovered. The English scientist Havelock Ellis, in studying the

lives and accomplishments of the most famous Britishers, discovered that they

had a much higher incidence of gout than occurs in the general population.

Asthma and other allergies have a much-higher-than-average frequency in

children with higher IQs (over 130), particularly those who are mathematically

gifted, and this is an intrinsic relationship. The intellectually gifted show some

15 to 20 percent more allergies than their siblings and parents. The gifted are

also more apt to be left-handed, as are the mentally retarded; the reason seems

to be that the IQ variance of left-handed persons is slightly greater than that of

the right-handed, hence more of the left-handed are found in the lower and upper

extremes of the normal distribution of IQ.

 

Then there are also a number of odd and less-well-established physical correlates of IQ that have each shown up in only one or two studies, such as vital

capacity (i.e., the amount of air that can be expelled from the lungs), handgrip

strength, symmetrical facial features, light hair color, light eye color, above-average basal metabolic rate (all these are positively correlated with IQ), and

being unable to taste the synthetic chemical phenylthiocarbamide (nontasters are

higher both in g and in spatial ability than tasters; the two types do not differ

in tests of clerical speed and accuracy). The correlations are small and it is not

yet known whether any of them are within-family correlations. Therefore, no

causal connection with g has been established.

 

Finally, there is substantial evidence of a positive relation between g and

general health or physical well-being.[36] In a very large national sample of high

school students (about 10,000 of each sex) there was a correlation of +.381

between a forty-three-item health questionnaire and the composite score on a

large number of diverse mental tests, which is virtually a measure of g. By

comparison, the correlation between the health index and the students’ socioeconomic status (SES) was only +.222. Partialing out g leaves a very small correlation (+.076) between SES and health status. In contrast, the correlation

between health and g when SES is partialed out is +.326.

 

how very curious!
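for reference, the quoted figures can be roughly reproduced with the standard first-order partial correlation formula. the g–SES correlation is not given in the passage, so the .42 below is a back-solved assumption, not a reported value.

from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    # first-order partial correlation of x and y with z held constant
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_health_g = 0.381    # reported in the passage
r_health_ses = 0.222  # reported in the passage
r_g_ses = 0.42        # NOT reported; assumed value that fits the partial correlations

print(round(partial_r(r_health_ses, r_health_g, r_g_ses), 3))  # ~ +.07 (SES-health, g removed)
print(round(partial_r(r_health_g, r_health_ses, r_g_ses), 3))  # ~ +.33 (g-health, SES removed)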

 

-

 

Certainly psychometric tests were never constructed with the intention of

measuring inbreeding depression. Yet they most certainly do. At least fourteen

studies of the effects of inbreeding on mental ability test scores—mostly IQ—

have been reported in the literature.[32] Without exception, all of the studies show inbreeding depression both of IQ and of IQ-correlated variables such as scholastic achievement. As predicted by genetic theory, the IQ variance of the inbred

is greater than that of the noninbred samples. Moreover, the degree to which

IQ is depressed is an increasing monotonic function of the coefficient of in-

breeding. The severest effects are seen in the offspring of first-degree incestuous

matings (e.g., father-daughter, brother-sister); the effect is much less for first-cousin matings and still less for second-cousin matings. The degree of IQ depression for first cousins is about half a standard deviation (seven or eight IQ

points).

 

In most of these studies, social class and other environmental factors are well

controlled. Studies in Muslim populations in the Middle East and India are

especially pertinent. Cousin marriages there are more prevalent in the higher

social classes, as a means of keeping wealth in family lines, so inbreeding and

high SES would tend to have opposite and canceling effects. The observed effect

of inbreeding depression on IQ in the studies conducted in these groups,

therefore, cannot be attributed to the environmental effects of SES that are often

claimed to explain IQ differences between socioeconomically advantaged and

disadvantaged groups.

 

These studies unquestionably show inbreeding depression for IQ and other

single measures of mental ability. The next question, then, concerns the extent

to which g itself is affected by inbreeding. Inbreeding depression could be

mainly manifested in factors other than g, possibly even in each test’s specificity.

To answer this question, we can apply the method of correlated vectors to in-

breeding data based on a suitable battery of diverse tests from which g can be

extracted in a hierarchical factor analysis. I performed these analyses[33] for the several large samples of children born to first- and second-cousin matings in

Japan, for whom the effects of inbreeding were intensively studied by geneticists

William Schull and James Neel (1965). All of the inbred children and comparable control groups of noninbred children were tested on the Japanese version

of the Wechsler Intelligence Scale for Children (WISC). The correlations among

the eleven subtests of the WISC were subjected to a hierarchical factor analysis,

separately for boys and girls, and for different age groups, and the overall average g loadings were obtained as the most reliable estimates of g for each

subtest. The analysis revealed the typical factor structure of the WISC—a large

g factor and two significant group factors: Verbal and Spatial (Performance).

(The Memory factor could not emerge because the Digit Span subtest was not

used.) Schull and Neel had determined an index of inbreeding depression on

each of the subtests. In each subject sample, the column vector of the eleven

subtests’ g loadings was correlated with the column vector of the subtests’ index

of inbreeding depression (ID). (Subtest reliabilities were partialed out of these

correlations.) The resulting rank-order correlation between subtests’ g loadings

and their degree of inbreeding depression was + .79 (p < .025). The correlation

of ID with the Verbal factor loadings (independent of g) was +.50 and with the

Spatial (or Performance) factor the correlation was -.46. (The latter two correlations are nonsignificant, each with p > .05.) Although this negative correlation of ID with the spatial factor (independent of g) falls short of significance,

the negative correlation was found in all four independent samples. Moreover,

it is consistent with the hypothesis that spatial visualization ability is affected

by an X-linked recessive allele.[34] Therefore, it is probably not a fluke.

 

A more recent study[35] of inbreeding depression, performed in India, was

based entirely on the male offspring of first-cousin parents and a control group

of the male offspring of genetically unrelated parents. Because no children of

second-cousin marriages were included, the degree of inbreeding depression was

considerably greater than in the previous study, which included offspring of

second-cousin marriages. The average inbreeding effect on the WISC-R Full

Scale IQ was about ten points, or about two-thirds of a standard deviation.[36]

The inbreeding index was reported for the ten subtests of the WISC-R used in

this study. To apply the method of correlated vectors, however, the correlations

among the subtests for this sample are needed to calculate their g loadings.

Because these correlations were not reported, I have used the g loadings obtained

from a hierarchical factor analysis of the 1,868 white subjects in the WISC-R

standardization sample.[37] The column vector of these g loadings and the column

vector of the ID index have a rank-order correlation (with the tests’ reliability

coefficients partialed out) of +.83 (p < .01), which is only slightly larger than

the corresponding correlation between the g and ID vectors in the Japanese

study.

 

In sum, then, the g factor significantly predicts the degree to which performance on various mental tests is affected by inbreeding depression, a theoretically

predictable effect for traits that manifest genetic dominance. The larger a test’s

g loading, the greater is the depression of the test scores of the inbred offspring

of consanguineous parents, as compared with the scores of noninbred persons.

The evidence in these studies of inbreeding rules out environmental variables

as contributing to the observed depression of test scores. Environmental differences were controlled statistically, or by matching the inbred and noninbred

groups on relevant indices of environmental advantage.

 

pretty large effects. the footnote with the 14 studies mentioned is:

 

Adams & Neel, 1967; Afzal, 1988; Afzal & Sinha, 1984; Agrawal et al., 1984;

Badaruddoza & Afzil, 1993; Bashi, 1977; Book, 1957; Carter, 1967; Cohen et al., 1963;

Inbaraj & Rao, 1978; Neel, et al., 1970; Schull & Neel, 1965; Seemanova, 1971; Slatis

& Hoene, 1961.
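the method of correlated vectors used above is simple to sketch: take the column vector of subtest g loadings and the column vector of inbreeding-depression indices and compute their rank-order (Spearman) correlation. the numbers below are made-up placeholders, not Schull and Neel's values; the real analysis also partialed subtest reliabilities out of the correlation.

from scipy.stats import spearmanr

# hypothetical numbers, one entry per WISC subtest, for illustration only
g_loadings = [0.75, 0.68, 0.62, 0.70, 0.55, 0.48, 0.66, 0.58, 0.52, 0.44, 0.61]
id_index   = [4.1, 3.5, 2.9, 3.8, 2.2, 1.9, 3.2, 2.6, 2.1, 1.5, 2.8]

rho, p = spearmanr(g_loadings, id_index)
print(rho, p)  # rank-order correlation of the two column vectors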

 

-

 

Semantic Verification Test. The SVT uses the binary response console (Figure 8.3) and a computer display screen. Following the preparatory “beep,” a

simple statement appears on the screen. The statement involves the relative

positions of the three letters A, B, C as they may appear (equally spaced) in a

horizontal array. Each trial uses one of the six possible permutations of these

three letters chosen at random. The statement appears on the screen for three

seconds, allowing more than enough time for the subject to read it. There are

fourteen possible statements of the following types: “A after B,” “C before A,” “A between B and C,” “B first,” “B last,” “C before A and B,” “C after B and A”; and the negative form of each of these statements, for instance, “A not after B.” Following the three-second appearance of one of these statements, the screen goes blank for one second and then one of the permutations of the letters A B C appears. The subject responds by pressing either the TRUE or FALSE button, depending on whether the positions of the letters does or does not agree with the immediately previous statement.

 

Although the SVT is the most complex of the many ECTs that have been

tried in my lab, the average RT for university students is still less than 1 second.

The various “problems” differ widely in difficulty, with average RTs ranging

from 650 msec to 1,400 msec. Negative statements take about 200 msec longer

than the corresponding positive statements. MT, on the other hand, is virtually

constant across conditions, indicating that it represents something other than

speed of information processing.

 

The overall median RT and RTSD as measured in the SVT each correlates

about -.50 with scores on the Raven’s Advanced Progressive Matrices given

without time limit. The average RT on the SVT also shows large differences

between Navy recruits and university students,[20] and between academically gifted children and their less gifted siblings.[21] The fact that there is a within-families correlation between RT and IQ indicates that these variables are intrinsically and functionally related.

 

One study[20] reveals that the average processing time for each of the fourteen

types of SVT statements in university students predicts the difficulty level of

the statements (in terms of error responses) in children (third-graders) who were

given the SVT as a nonspeeded paper-and-pencil test. While the SVT is of such

trivial difficulty for college students that individual differences are much more

reliably reflected by RT rather than by errors, the SVT items are relatively

difficult for young children. Even when they take the SVT as a nonspeeded

paper-and-pencil test, young children make errors on about 20 percent of the

trials. (The few university students who made even a single error under these

conditions, given as a pretest, were screened out.) The fact that the rank order

of the children’s error rates on the various types of SVT statements closely

corresponds to the rank order of the college students’ average RTs on the same

statements indicates that item difficulty is related to speed of processing, even

when the test is nonspeeded.

 

It appears that if information exceeds a critical level of complexity for the individual, the individual’s speed of processing is too slow to handle the information all at once; the system becomes overloaded and processing breaks

down, with resulting errors, even for nonspeeded tests on which subjects are

told to take all the time they need. There are some items in Raven’s Advanced

Matrices, for example, that the majority of college students cannot solve with

greater than chance success, even when given any amount of time, although the

problems do not call for the retrieval of any particular knowledge. As already

noted, the scores on such nonspeeded tests are correlated with the speed of information processing in simple ECTs that are easily performed by all subjects

in the study.

 

interesting test. the threshold hypothesis is also interesting for makers of IQ tests.
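for my own reference, a rough sketch of how one SVT trial could be scored, assuming the statement types described above; only a few of the fourteen statement forms are encoded, and the timing (3 s display, 1 s blank) is ignored.

import random

def statement_true(statement, arrangement):
    # evaluate a few of the SVT statement types against a letter arrangement
    pos = {letter: i for i, letter in enumerate(arrangement)}
    kind, args = statement
    if kind == "after":        # "X after Y"
        return pos[args[0]] > pos[args[1]]
    if kind == "before":       # "X before Y"
        return pos[args[0]] < pos[args[1]]
    if kind == "first":        # "X first"
        return pos[args[0]] == 0
    if kind == "between":      # "X between Y and Z"
        x, y, z = args
        return min(pos[y], pos[z]) < pos[x] < max(pos[y], pos[z])
    raise ValueError(kind)

# one simulated trial: a statement, then a random permutation of A B C
statement = ("after", ("A", "B"))                # "A after B"
arrangement = random.sample(["A", "B", "C"], 3)  # e.g. ['C', 'B', 'A']
correct_key = "TRUE" if statement_true(statement, arrangement) else "FALSE"
print(arrangement, correct_key)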

 

-

 

There are many other kinds of simple tasks that do not resemble the contents of conventional psychometric tests but that have significant correlations with IQ. Many studies have confirmed Spearman’s finding that pitch discrimination is g-loaded, and other musical discriminations, in duration, timbre, rhythmic pattern, pitch interval, and harmony, are correlated with IQ, independently of musical training.[28] The strength of certain optical illusions is also significantly related to IQ.[29] Surprisingly, higher-IQ subjects experience certain illusions more strongly than subjects with lower IQ, probably because seeing the illusion implies a greater amount of mental transformation of the stimulus, and tasks that involve transformation of information (e.g., backward digit span) are typically more g loaded than tasks involving less transformation of the input (e.g., forward digit span). The positive correlation between IQ and susceptibility to illusions is consistent with the fact that susceptibility to optical illusions also increases with age, from childhood to maturity, and then decreases in old age—the same trajectory we see for raw-score performance on IQ tests and for speed and intraindividual consistency of RT in ECTs. The speed and consistency of information processing generally show an inverted U curve across the life span.

 

interesting.

 

-

 

Jensen mentions the en.wikipedia.org/wiki/Yerkes-Dodson_law

interesting. i link to Wikipedia since i think its explanation of the law is better than Jensen's, who only mentions it briefly.

 

-

 

[...Localized damage to the brain

areas that normally subserve one of these group factors can leave the person

severely impaired in the expression of the abilities loaded on the group factor,

but with little or no impairment of abilities that are loaded on other group factors

or on g.]

 

A classic example of this is females who are born with a chromosomal anomaly known as Turner’s syndrome.[70] Instead of having the two normal female

sex chromosomes (designated XX), they lack one X chromosome (hence are

designated XO). Provided no spatial visualization tests are included in the IQ

battery, the IQs of these women (and presumably their levels of g) are normally

distributed and virtually indistinguishable from that of the general population.

Yet their performance on all tests that are highly loaded on the spatial-visualization factor is extremely low, typically borderline retarded, even in

Turner’s syndrome women with verbal IQs above 130. It is as if their level of

g is almost totally unreflected in their level of performance on spatial tasks.

 

It is much harder to imagine the behavior of persons who are especially

deficient in all abilities involving g and all of the major group factors, but have

only one group factor that remains intact. In our everyday experience, persons

who are highly verbal, fluent, articulate, and use a highly varied vocabulary,

speaking with perfect syntax and appropriate expression, are judged to be of at

least average or probably superior IQ. But there is a rare and, until recently,

little-known genetic anomaly, Williams syndrome,[71] in which the above-listed

characteristics of high verbal ability are present in persons who are otherwise

severely mentally deficient, with IQs averaging about 50. In most ways, Williams syndrome persons appear to behave with no more general capability of

getting along in the world than most other persons with similarly low IQs. As

adults, they display only the most rudimentary scholastic skills and must live

under supervision. Only their spoken verbal ability has been spared by this

genetic defect. But their verbal ability appears to be “ hollow” with respect to

g. They speak in complete, often complex, sentences, with good syntax, and

even use unusual words appropriately. (They do surprisingly well on the Peabody Picture Vocabulary Test.) In response to a series of pictures, they can tell

a connected and fully elaborated story, accompanied by appropriate, if somewhat

exaggerated, emotional expression. Yet they have exceedingly little ability to

reason, or to explain or summarize the meaning of what they say. On most

spatial ability tests they generally perform on a par with Down syndrome persons

of comparable IQ, but they also differ markedly from Down persons in peculiar

ways. Williams syndrome subjects are more handicapped than IQ-matched

Down subjects in figure copying and block designs.

 

Comparing Turner’s syndrome with Williams syndrome obviously suggests

the generalization that a severe deficiency of one group factor in the presence

of an average level of g is far less a handicap than an intact group factor in the

presence of a very low level of g.

 

never heard of Williams syndrome before.

 

en.wikipedia.org/wiki/Williams_syndrome

 

-

 

The correlation of IQ with grades and achievement test scores is highest (.60

to .70) in elementary school, which includes virtually the entire child population

and hence the full range of mental ability. At each more advanced educational

level, more and more pupils from the lower end of the IQ distribution drop out,

thereby restricting the range of IQs. The average validity coefficients decrease

accordingly: high school (.50 to .60), college (.40 to .50), graduate school (.30

to .40). All of these are quite high, as validity coefficients go, but they permit

far less than accurate prediction of a specific individual. (The standard error of

estimate is quite large for validity coefficients in this range.)

 

interesting. one thing i have been thinking about is that my GPA throughout my life has always been a bit above average, but not close to the top. given that the intelligence requirement increases at each new step through the school system, one would have expected a drop in GPA, but no such thing happened; in fact, it's the other way around. my GPA in the danish elementary school (9th grade) was 9.3, against an average of ~8.1. this includes grades from non-intellectual subjects such as the ‘subject’ of having nice handwriting (yes, seriously). in 10th grade my average was 8.7, against an average of ~6.6. the maximum is 13 in all cases, although grades above 11 were normally not given.

 

in gymnasiet (roughly the equivalent of high school), my GPA was 7.8 against an average of 7.0. the slightly lower grades are because the system was changed from a 13-step to a 7-step scale. for comparison purposes, one can note that i went to HTX, which has lower grades. the percentile level is the 65th.

 

my university grades before dropping out of philosophy were rather good, lots of 10's, but i don't know the average, so i can't compare. i suspect they were above average again.
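the decreasing validity coefficients are what one would expect from restriction of range alone. a small simulation (the population correlation of .65 and the IQ cutoffs below are my assumptions, not Jensen's figures) shows the same pattern:

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
iq = rng.normal(100, 15, n)
# grades as a noisy linear function of IQ; the .65 population correlation is an assumption
grades = 0.65 * (iq - 100) / 15 + rng.normal(0, np.sqrt(1 - 0.65**2), n)

for label, cutoff in [("all pupils", -np.inf), ("high school", 85),
                      ("college", 100), ("graduate school", 115)]:
    keep = iq > cutoff
    r = np.corrcoef(iq[keep], grades[keep])[0, 1]
    print(label, round(r, 2))  # correlation shrinks as the IQ range is restricted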

 

-

 

Unless an individual has made the transition from word reading to reading

comprehension of sentences and paragraphs, reading is neither pleasurable nor

practically useful. Few adults with an IQ of eighty (the tenth percentile of the

overall population norm) ever make the transition from word reading skill to

reading comprehension. The problem of adult illiteracy (defined as less than a

fourth-grade level of reading comprehension) in a society that provides an elementary school education to virtually its entire population is therefore largely a

problem of the lower segment of the population distribution of g. In the vast

majority of people with low reading comprehension, the problem is not word

reading per se, but lack of comprehension. These individuals score about the

same on tests of reading comprehension even if the test paragraphs are read

aloud to them by the examiner. In other words, individual differences in oral

comprehension and in reading comprehension are highly correlated.[21]

 

IQ 80… but the american black average is only about 85. is it really true that ~37% of them are too dull to learn to read properly, compared with ~10% of whites?
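the percentages follow from normal-distribution tails, assuming means of 85 and 100 and an SD of 15, with the IQ-80 threshold from the passage:

from scipy.stats import norm

threshold = 80
for group, mean in [("black mean 85", 85), ("white mean 100", 100)]:
    share_below = norm.cdf(threshold, loc=mean, scale=15)
    print(group, round(share_below, 2))  # ~0.37 and ~0.09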

 

-

 

Virtually every type of work calls for behavior that is guided by cognitive

processes. As all such processes reflect g to some extent, work proficiency is g

loaded. The degree depends on the level of novelty and cognitive complexity

the job demands. No job is so simple as to be totally without a cognitive component. Several decades of empirical studies have shown thousands of correlations of various mental tests with work proficiency. One of the most important conclusions that can be drawn from all this research is that mental ability tests in general have a higher success rate in predicting job performance than any other variables that have been researched in this context, including (in descending order of average predictive validity) skill testing, reference checks, class rank or grade-point average, experience, interview, education, and interest measures.[22] In recent years, one personality constellation, characterized as “conscientiousness,” has emerged near the top of the list (just after general mental ability) as a predictor of occupational success.

 

reminds me that i ought to look into this field of psychology; it's called I/O psychology. some time back i talked with a phd (i think) on 4chan who studied that area. he said that if he had his way, he would just rely on g alone to predict job performance, training, etc. he recommended a textbook to me, which i found on the internet.

 

Psychology Applied to Work, An Introduction to Industrial and Organizational Psychology – Paul M. Muchinsky

 

it seems decent.

 

-

 

A person cannot perform a job successfully without the specific knowledge

required by the job. Possibly such job knowledge could be acquired on the job

after a long period of trial-and-error learning. For all but the very simplest jobs,

however, trial-and-error learning is simply too costly, both in time and in errors.

Job training inculcates the basic knowledge much more efficiently, provided that

later on-the-job experience further enhances the knowledge or skills acquired in

prior job training. Because knowledge and skill acquisition depend on learning,

and because the rate of learning is related to g, it is a reasonable hypothesis that

g should be an effective predictor of individuals’ relative success in any specific

training program.

 

The best studies for testing this hypothesis have been performed in the armed

forces. Many thousands of recruits have been selected for entering different

training programs for dozens of highly specialized jobs based on their performance on a variety of mental tests. As the amount of time for training is limited,

efficiency dictates assigning military personnel to the various training schools

so as to maximize the number who can complete the training successfully and

minimize the number who fail in any given specialized school. When a failed

trainee must be rerouted to a different training school better suited to his aptitude, it wastes time and money. Because the various schools make quite differing

demands on cognitive abilities, the armed services employ psychometric researchers to develop and validate tests to best predict an individual’s probability

of success in one or another of the various specialized schools.

 

 

one is tempted to say “common sense”, but apparently only the military dares to do such things.

 

-

 

A rough analogy may help to make the essential point. Suppose that for some

reason it was impossible to measure persons’ heights directly in the usual way,

with a measuring stick. However, we still could accurately measure the length

of the shadow cast by each person when the person is standing outdoors in the

sunlight. Provided everyone’s shadow is measured at the same time of day, at

the same day of the year, and at the same latitude on the earth’s surface, the

shadow measurements would show exactly the same correlations with persons’

weight, shoe size, suit or dress size, as if we had measured everyone directly

with a yardstick; and the shadow measurements could be used to predict per­

fectly whether or not a given person had to stoop when walking through a door

that is only 5½ feet high. However, if one group of persons’ shadows were measured at 9:00 a.m. and another group’s at 10:00 a.m., the pooled measurements would show a much smaller correlation with weight and other factors than if they were all measured at the same time, date, and place, and the measurements would have poor validity for predicting which persons could walk through a 5½-foot door without stooping. We would say, correctly, that these measurements are biased. In order to make them usefully accurate as predictors

of a person’s weight and so forth, we would have to know the time the person’s

shadow was measured and could then add or subtract a value that would adjust

the measurement so as to make it commensurate with measurements obtained

at some other specific time, date, and location. This procedure would permit the

standardized shadow measurements of height, which in principle would be as

good as the measurements obtained directly with a measuring stick.

 

Standardized IQs are somewhat analogous to the standardized shadow measurements of height, while the raw scores on IQ tests are more analogous to the

raw measurements of the shadows themselves. If we naively remain unaware

that the shadow measurements vary with the time of day, the day of the year,

and the degrees of latitude, our raw measurements would prove practically

worthless for comparing individuals or groups tested at different times, dates,

or places. Correlations and predictions could be accurate only within each unique

group of persons whose shadows were measured at the same time, date, and

place. Since psychologists do not yet have the equivalent of a yardstick for

measuring mental ability directly, their vehicles of mental measurement—IQ

scores—are necessarily “shadow” measurements, as in our height analogy, albeit with amply demonstrated practical predictive validity and construct validity

within certain temporal and cultural limits.

 

 

interesting. however, biologically based tests should allow for absolute measurement, say tests based on RT in ECTs, or tests based on the amount of myelination in the brain, brain pH levels, or brain size via brain imaging scans if we can make them better measurements of g, etc.
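the shadow analogy is basically within-group standardization. a toy version (the shadow-length model is invented for illustration): z-scoring within each measurement occasion restores the comparability that pooling the raw "shadows" destroys.

import numpy as np

rng = np.random.default_rng(2)
heights = rng.normal(170, 10, size=(2, 500))  # two groups, true heights in cm

# shadows measured at different times of day scale height by different factors (invented)
shadow_9am = heights[0] * 1.8
shadow_10am = heights[1] * 1.3

def zscore(x):
    return (x - x.mean()) / x.std()

true_height = np.concatenate([heights[0], heights[1]])
pooled_raw = np.concatenate([shadow_9am, shadow_10am])
pooled_std = np.concatenate([zscore(shadow_9am), zscore(shadow_10am)])

print(np.corrcoef(pooled_raw, true_height)[0, 1])  # attenuated by mixing the two scales
print(np.corrcoef(pooled_std, true_height)[0, 1])  # ~1.0 once standardized within occasion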

 

-

 

Many possible factors determine whether a person passes or fails a particular

test item. Does the person understand the item at all (e.g., “What is the sum of

all the latent roots of a 7 × 7 R matrix?”)? Has the person acquired the specific

knowledge called for by the item (e.g., “Who wrote Faust?”), or perhaps has

he acquired it in the past and has since forgotten it? Did the person really know

the answer, but just couldn’t recall it at the moment of being tested? Does the

item call for a cognitive skill the person either never acquired or has forgotten

through disuse (e.g., “How much of a whole apple is two-thirds of one-half of

the apple?”)? Does the person understand the problem and know how to solve

it, but is unable to do it within the allotted time limit (e.g., substituting the

corresponding letter of the alphabet for each of the numbers from one to twenty-six listed in a random order in one minute)? Or even when there is a liberal

time limit does the person give up on the item or just guess at the answer

prematurely, perhaps because the item looks too complicated at first glance (e.g.,

“If it takes six garden hoses, all running for three hours and thirty minutes to

fill a tank, how many additional hoses would be needed to fill the tank in thirty

minutes?”)?

 

1) dunno

2) Goethe

3) 2/3 × 1/2 = 4/6 × 3/6 = 12/36 = 1/3

4) #hoses × time = tank size

6 × 3.5 = 21, so the tank holds 21 hose-hours

21 = 0.5 × #hoses; solve for #hoses

#hoses = 42

42 − 6 = 36

36 more hoses
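the same hose-hours reasoning in code, as a quick check:

hoses, hours = 6, 3.5
tank = hoses * hours      # 21 hose-hours fills the tank
needed = tank / 0.5       # hoses needed to fill it in half an hour -> 42.0
print(needed - hoses)     # 36.0 additional hoses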

 

-

 

The only study I have found that investigated whether there has been a secular

change (over thirty years) in the heritability of g-loaded test scores concluded

that “the results revealed no unambiguous evidence for secular trends in the heritability of intelligence test scores.”[35] However, the heritability coefficients

(based on twenty-two same-age cohort samples of MZ and DZ male twins born

in Norway between 1930 and 1960) showed some statistically reliable nonlinear

trends over the thirty-year period, as shown in Figure 10.2. The overall trend

line goes equally down-up-down-up with heritability coefficients ranging from

slightly above .80 to slightly below .40. The heritability coefficient was the same

for the cohort born in 1930 as for the cohort born in 1960 (for both, h2 = .80).

The authors offer only weak ad hoc speculations about possible causes of this

erratic fluctuation of h2 across 22 points in time.

 

the hole in the data is the german occupation of norway. the data from the 30s make sense to me: the depression would result in civil unrest and a shake-up of society, and after such a period, heritabilities should stabilize again, as seen in the post-war period. i don't understand the 50s downswing in heritability.

 

so, i thought it might be something economic. i gathered GDP data and looked at it. nope, not true.

 

www.norges-bank.no/pages/77409/p1_c6.xlsx

 

data from 1901 to 2000 looks like this:

[figure: GDP of Norway, 1901–2000]

 

doesn't fit with the GDP hypothesis at all, except for the missing data during the war.
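for anyone wanting to redo the comparison, a rough sketch of what i did; the spreadsheet's column layout below is a guess and would need checking against the actual file.

import pandas as pd

gdp = pd.read_excel("p1_c6.xlsx")   # the Norges Bank file linked above
gdp.columns = ["year", "gdp"]       # assumed two-column layout; check the real sheet
gdp["growth"] = gdp["gdp"].pct_change()

cohorts = gdp[(gdp["year"] >= 1930) & (gdp["year"] <= 1960)]
print(cohorts[["year", "growth"]].to_string(index=False))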

 

i dunno, perhaps www.newsinenglish.no/2010/06/16/the-50s-in-norway-werent-so-nifty/

 

the authors of the study that found the drop in heritability also don't know: “We are, however, quite at a loss in explaining the dip from about 1950 to 1954. Thus, we feel that the best strategy at present is to leave the issue of secular trends open.”

On the question of secular trends in the heritability of intelligence scores: A study of Norwegian twins

-

 

Head Start. The federal preschool intervention known as Head Start, which

has been in continual existence now since 1964, is undoubtedly the largest-scale, though not the most intensive, educational intervention program ever undertaken, with an annual expenditure over $2 billion. The program is aimed at

improving the health status and the learning and social skills of preschoolers

from poor backgrounds so they can begin regular school more on a par with

children from more privileged backgrounds. The intervention is typically short-term, with various programs lasting anywhere from a few months to two years.

 

The general conclusion of the hundreds of studies based on Head Start data

is that the program has little, if any, effect on IQ or scholastic achievement that

endures beyond more than two to three years after exposure to Head Start. The

program does, however, have some potential health benefits, such as inoculations

of enrollees against common childhood diseases and improved nutrition (by

school-provided breakfast or lunch). The documented behavioral effects are less

retention-in-grade and lower dropout rates. The cause(s) of these effects are

uncertain. Because eligible children were not randomly enrolled in Head Start,

but were selected by parents and program administrators, these scholastic correlates of Head Start are uninterpretable from a causal standpoint. Selection,

rather than direct causation by the educational intervention itself, could be the

explanation of Head Start’s beneficial outcomes.

 

crazy amount of money spent for some slight health benefits. perhaps there is a cheaper way to get such benefits.

 

-

 

The Milwaukee Project. Aside from Head Start, this is the most highly

publicized of all intervention experiments. It was the most intensive and extensive educational intervention ever conducted for which the final results have been published.[55] It was also the most costly single experiment in the history of

psychology and education—over $14 million. In terms of the highest peak of

IQ gains for the seventeen children in the treatment condition (before the gains

began to vanish), the cost was an estimated $23,000 per IQ point per child.

 

holy shit. even though i think i've seen this figure before (in The g Factor by Chris Brand).

 

Jensen also doesn't mention the end of the project, but Wikipedia does:

en.wikipedia.org/wiki/Milwaukee_Project

 

The Milwaukee Project’s claimed success was celebrated in the popular media and by famous psychologists. However, later in the project Rick Heber, the principal investigator, was discharged from the University of Wisconsin–Madison and convicted and imprisoned for large-scale abuse of federal funding for private gain. Two of Heber’s colleagues in the project were also convicted for similar abuses. The project’s results were not published in any refereed scientific journals, and Heber did not respond to requests from colleagues for raw data and technical details of the study. Consequently, even the existence of the project as described by Heber has been called into question. Nevertheless, many college textbooks in psychology and education have uncritically reported the project’s results.[3][4]

 

this reminds me why open data is necessary in science.

 

-

 

[The Abecedarian Early Intervention Project.]

Both the T and C groups (each with about fifty subjects) were given age-appropriate mental tests (Bayley, Stanford-Binet, McCarthy, WPPSI) at

six-month intervals from age six months to sixty months. The important comparisons here are the mean T-C differences at each testing. (Because the test

scores do not have the same factor composition across this wide age range,

the absolute scores of the T group alone are not as informative of the efficacy

of the intervention as are the mean T-C differences.) At every testing from six

months to five years of age, the T group outperformed the C group, and the

overall average T-C difference (103.3 — 95.5 = 7.8 IQ points) was highly

significant (p < .001). Peculiarly, however, the largest T-C differences (averaging fifteen IQ points) occurred between eighteen and thirty-six months of

age and then declined during the last two years of intervention. At sixty

months, the average T-C difference was 7.5 IQ points. This decrease might

simply reflect the fact that with the children’s increasing age the tests become

increasingly more g-loaded. The tests used before two or three years of age measure mainly perceptual-motor functions that have relatively little g saturation. Only later does g become the predominant component of variance in

IQ. In follow-up studies at eight and twelve years of age, the T-C difference

on the WISC-R was about five IQ points,[57] a difference that has remained up

to age fifteen. At the last reported testing, the T-C difference was 4.6 IQ points, or a difference of 0.35σ. Scholastic achievement test scores showed a somewhat larger effect of the intervention up to age fifteen.[57] The intervention effect on other criteria of the project’s success was demonstrated by the

decreased percentage of children who repeated at least one grade by age

twelve (T = 28 percent, C = 55 percent) and the percentage of children with

borderline or retarded intelligence (IQ < 85) (T = 12.8 percent, C = 44.2 percent).[56]

 

Thus this five-year program of intensive intervention beginning in early infancy increased IQ (at age fifteen years) by about five points. Judging from a

comparable gain in scholastic achievement, the effect had broad transfer, suggesting that it probably raised the level of g to some extent. The finding that

the T subjects did better than the C subjects on a battery of Piaget’s tests of

conservation, which reflect important stages in mental development, is further

evidence. The Piagetian tests are not only very different in task demands from

anything in the conventional IQ tests used in the conventional assessments, but are also highly g loaded.[57] The mean T-C difference on the Piagetian conservation tests was equal to 0.33σ (equivalent to five IQ points). Assuming that

the instructional materials in the intervention program did not closely resemble

Piaget’s tests, it is a warranted conclusion that the intervention appreciably

raised the level of g.

 

i'm still skeptical as to the g effects. i'd like to see data on them as adults, and a larger sample size.

 

again, Wikipedia has more on the issue, both positive and negative:

en.wikipedia.org/wiki/Abecedarian_Early_Intervention_Project

Significant findings

Follow-up assessment of the participants involved in the project has been ongoing. So far, outcomes have been measured at ages 3, 4, 5, 6.5, 8, 12, 15, 21, and 30.[5] The areas covered were cognitive functioning, academic skills, educational attainment, employment, parenthood, and social adjustment. The significant findings of the experiment were as follows:[6][7]

Impact of child care/preschool on reading and math achievement, and cognitive ability, at age 21:

  • An increase of 1.8 grade levels in reading achievement
  • An increase of 1.3 grade levels in math achievement
  • A modest increase in Full-Scale IQ (4.4 points), and in Verbal IQ (4.2 points).

Impact of child care/preschool on life outcomes at age 21:

  • Completion of a half-year more of education
  • Much higher percentage enrolled in school at age 21 (42 percent vs. 20 percent)
  • Much higher percentage attended, or still attending, a 4-year college (36 percent vs. 14 percent)
  • Much higher percentage engaged in skilled jobs (47 percent vs. 27 percent)
  • Much lower percentage of teen-aged parents (26 percent vs. 45 percent)
  • Reduction of criminal activity

Statistically significant outcomes at age 30:

  • Four times more likely to have graduated from a four-year college (23 percent vs. 6 percent)
  • More likely to have been employed consistently over the previous two years (74 percent vs. 53 percent)
  • Five times less likely to have used public assistance in the previous seven years (4 percent vs. 20 percent)
  • Delayed becoming parents by average of almost two years

(Most recent information from Developmental Psychology, January 18, 2012, cited in uncnews.unc.edu, January 19, 2012)

The project concluded that high quality, educational child care from early infancy was therefore of utmost importance.

Other, less intensive programs, notably the Head Start Program, but also others, have not been as successful. It may be that they provided too little too late compared with the Abecedarian program.[4]

Criticisms

Some researchers have advised caution about the reported positive results of the project. Among other things, they have pointed out analytical discrepancies in published reports, including unexplained changes in sample sizes between different assessments and publications. It has also been noted that the intervention group’s reported 4.6 point advantage in mean IQ at age 15 was not statistically significant. Herman Spitz has noted that a mean IQ difference of similar magnitude to the final difference between the intervention and control groups was apparent already at age six months, indicating that “4 1/2 years of massive intervention ended with virtually no effect.” Spitz has suggested that the IQ difference between the intervention and control groups may have been present from the outset due to faulty randomization.[8]

 

not quite sure what to think. the sample sizes are still kind of small, and if Spitz is right in his criticism, the studies have not shown much.

 

the reason i'm skeptical to begin with is that modern twin studies show that shared environment, which is what these studies mostly change, has no effect on adult IQ.

 

in any case, if it requires such expensive spending to get slightly less dumb kids, it's hard to justify as public policy. at the very least, i'd like to see the calculation showing that this has a net positive benefit for society. it is possible, for instance, because crime rates are (supposedly) down and job retention is up, which leads to more taxes being paid, and so on.

 

-

 

Error distractors in multiple-choice answers are of interest as a method of

discovering bias. When a person fails to select the correct answer but instead

chooses one of the alternative erroneous responses (called “distractors”) offered

for an item in a multiple-choice test, the person’s incorrect choice is not random,

but is about as reliable as is the choice of the correct answer. In other words,

error responses, like correct responses, are not just a matter of chance, but reflect

certain information processes (or the failure of certain crucial steps in information processing) that lead the person to choose not just any distractor, but a

particular one. Some types of errors result from a solution strategy that is more

naive or less sophisticated than other types of errors. For example, consider the

following test item:

 

If you mix a pint of water at 50° temperature with two pints of water at 80°

measured on the same thermometer, what will be the temperature of the mixture? (a) 65°, (b) 70°, (c) 90°, (d) 130°, (e) Can’t say without knowing

whether the temperatures are Centigrade or Fahrenheit.

 

We see that the four distractors differ in the level of sophistication in mental processing that would lead to their choice. The most naive distractor, for example, is D, which is arrived at by simple addition of 50° and 80°. The answer A at least shows that the subject realized the necessity for averaging the temperatures. The answer 90° is the most sophisticated distractor, as it reveals that the subject had a glimmer of the necessity for a weighted average (i.e., 50° + 80°/2 = 90°) but didn’t know how to go about calculating it. (The correct answer, of course, is B, because the weighted average is [1 pint × 50° + 2 pints × 80°]/3 pints = 70°.) Preference for selecting different distractors changes across age groups, with younger children being attracted to the less sophisticated type of distractor, as indicated by comparing the percentage of children in different age groups that select each distractor. The kinds of errors made, therefore, appear to reflect something about the children’s level of cognitive development.

 

interesting.
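the item's arithmetic, with each distractor's presumed derivation, as a quick check:

pints = [1, 2]
temps = [50, 80]

weighted = sum(p * t for p, t in zip(pints, temps)) / sum(pints)
print(weighted)                  # 70.0 -> correct answer (b)
print(sum(temps) / 2)            # 65.0 -> distractor (a): unweighted average
print(temps[0] + temps[1] / 2)   # 90.0 -> distractor (c): a botched weighting
print(sum(temps))                # 130  -> distractor (d): simple addition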

 

-

 

What is termed a cline results where groups overlap at their fuzzy boundaries

in some characteristic, with intermediate gradations of the phenotypic characteristic, often making the classification of many individuals ambiguous or even

impossible, unless they are classified by some arbitrary rule that ignores biology.

The fact that there are intermediate gradations or blends between racial groups,

however, does not contradict the genetic and statistical concept of race. The

different colors of a rainbow do not consist of discrete bands but are a perfect

continuum, yet we readily distinguish different regions of this continuum as

blue, green, yellow, and red, and we effectively classify many things according

to these colors. The validity of such distinctions and of the categories based on

them obviously need not require that they form perfectly discrete Platonic categories.

 

while the rainbow analogy works to some extent, it is not that good. the reason is that with rainbows, all the colors (groups) are on a continuum in such a way that there isn't a blend between every two colors (groups). this is not how races work, as there is always the possibility of a blend between any two groups, even odd pairings such as amerindians and aboriginals.

 

-

 

Of the approximately 100,000 human polymorphic genes, about 50,000 are

functional in the brain and about 30,000 are unique to brain functions.[12] The

brain is by far the structurally and functionally most complex organ in the human

body and the greater part of this complexity resides in the neural structures of

the cerebral hemispheres, which, in humans, are much larger relative to total

brain size than in any other species. A general principle of neural organization

states that, within a given species, the size and complexity of a structure reflect

the behavioral importance of that structure. The reason, again, is that structure

and function have evolved conjointly as an integrated adaptive mechanism. But

as there are only some 50,000 genes involved in the brain’s development and

there are at least 200 billion neurons and trillions of synaptic connections in the

brain, it is clear that any single gene must influence some huge number of

neurons—not just any neurons selected at random, but complex systems of

neurons organized to serve special functions related to behavioral capacities.

 

It is extremely improbable that the evolution of racial differences since the

advent of Homo sapiens excluded allelic changes only in those 50,000 genes

that are involved with the brain.

 

the same point was made, although less technically, in Hjernevask. there is no good a priori reason to think that natural selection for some reason only worked on non-brain, non-behavioral genes. it simply makes no sense at all to suppose that.

 

-

 

Bear in mind that, from the standpoint of natural selection, a larger brain

size (and its corresponding larger head size) is in many ways decidedly disadvantageous. A large brain is metabolically very expensive, requiring a high-calorie diet. Though the human brain is less than 2 percent of total body weight,

it accounts for some 20 percent of the body’s basal metabolic rate (BMR). In

other primates, the brain accounts for about 10 percent of the BMR, and for

most carnivores, less than 5 percent. A larger head also greatly increases the

difficulty of giving birth and incurs much greater risk of perinatal trauma or

even fetal death, which are much more frequent in humans than in any other

animal species. A larger head also puts a greater strain on the skeletal and

muscular support. Further, it increases the chances of being fatally hit by an

enemy’s club or missile. Despite such disadvantages of larger head size, the

human brain, in fact, evolved markedly in size, with its cortical layer accommodating to a relatively lesser increase in head size by becoming highly convoluted in the endocranial vault. In the evolution of the brain, the effects of

natural selection had to have reflected the net selective pressures that made an

increase in brain size disadvantageous versus those that were advantageous. The

advantages obviously outweighed the disadvantages to some degree or the increase in hominid brain size would not have occurred.

 

this brain must have been very useful for something. if some of this use has to do with non-social things, like the environment, one would expect to see different levels of ‘brain adaptation’ due to relative differences in selection pressure in populations that evolved in different environments.

 

-

 

How then can the default hypothesis be tested empirically? It is tested exactly

as is any other scientific hypothesis; no hypothesis is regarded as scientific unless

predictions derived from it are capable of risking refutation by an empirical test.

Certain predictions can be made from the default hypothesis that are capable of

empirical test. If the observed result differs significantly from the prediction, the hypothesis is considered disproved, unless it can be shown that the tested prediction was an incorrect deduction from the hypothesis, or that there are artifacts

in the data or methodological flaws in their analysis that could account for the

observed result. If the observed result does in fact accord with the prediction,

the hypothesis survives, although it cannot be said to be proven. This is because

it is logically impossible to prove the null hypothesis, which states that there is

no difference between the predicted and the observed result. If there is an alternative hypothesis, it can also be tested against the same observed result.

 

For example, if we hypothesize that no tiger is living in the Sherwood Forest

and a hundred people searching the forest fail to find a tiger, we have not proved

the null hypothesis, because the searchers might have failed to look in the right

places. If someone actually found a tiger in the forest, however, the hypothesis

is absolutely disproved. The alternative hypothesis is that a tiger does live in

the forest; finding a tiger clearly proves the hypothesis. The failure of searchers

to find the tiger decreases the probability of its existence, and the more searching, the lower is the probability, but it can never prove the tiger’s nonexistence.

 

Similarly, the default hypothesis predicts certain outcomes under specified conditions. If the observed outcome does not differ significantly from the predicted outcomes, the default hypothesis is upheld but not proved. If the prediction differs significantly from the observed result, the hypothesis must be rejected. Typically, it is modified to accord better with the existing evidence, and then its modified predictions are empirically tested with new data. If it survives numerous tests, it conventionally becomes a “fact.” In this sense, for example, it is a “fact” that the earth revolves around the sun, and it is a “fact”

that all present-day organisms have evolved from primitive forms.

 

meh, mediocre or bad philosophy of science.

 

-

 

 

 

the problem with this data is that the women were not done having children. the data are from women aged 34. since especially smart women (and so more whites) have children later than that age, their fertility estimates are spuriously low. see also the data in Intelligence: A Unifying Construct for the Social Sciences (Richard Lynn and Tatu Vanhanen, 2012).

 

-

 

Whites perform significantly better than blacks on the subtests called Comprehension, Block Design, Object Assembly, and Mazes. The latter three tests

are loaded on the spatial visualization factor of the WISC-R. Blacks perform

significantly better than whites on Arithmetic and Digit Span. Both of these tests

are loaded on the short-term memory factor of the WISC-R. (As the test of

arithmetic reasoning is given orally, the subject must remember the key elements

of the problem long enough to solve it.) It is noteworthy that Vocabulary is the

one test that shows zero W-B difference when g is removed. Along with Information and Similarities, which even show a slight (but nonsignificant) advantage

for blacks, these are the subtests most often claimed to be culturally biased

against blacks. The same profile differences on the WISC-R were found in

another study[81b] based on 270 whites and 270 blacks who were perfectly

matched on Full Scale IQ.

 

seems inconsistent with typical environment-only theories.

 

-

 

 

Intelligence and semen quality are positively correlated

Human cognitive abilities inter-correlate to form a positive matrix, from which a large first
factor, called ‘Spearman’s g’ or general intelligence, can be extracted. General intelligence itself
is correlated with many important health outcomes including cardio-vascular function and
longevity. However, the important evolutionary question of whether intelligence is a fitness-
related trait has not been tested directly, let alone answered. If the correlations among cognitive
abilities are part of a larger matrix of positive associations among fitness-related traits, then
intelligence ought to correlate with seemingly unrelated traits that affect fitness—such as
semen quality. We found significant positive correlations between intelligence and 3 key
indices of semen quality: log sperm concentration (r=.15, p=.002), log sperm count (r=.19,
p<.001), and sperm motility (r=.14, p=.002) in a large sample of US Army Veterans. None
was mediated by age, body mass index, days of sexual abstinence, service in Vietnam, or use of
alcohol, tobacco, marijuana, or hard drugs. These results suggest that a phenotype-wide fitness
factor may contribute to the association between intelligence and health. Clarifying whether a
fitness factor exists is important theoretically for understanding the genomic architecture of
fitness-related traits, and practically for understanding patterns of human physical and
psychological health.

Odd.

Biological Sex Differences in the Workplace: Reports of the End of Men Are Greatly Exaggerated (As Are Claims of Women’s Continued Subordination)

decent paper about sex differences in jobs.

abstract:

Common examples of what is perceived as workplace
inequality–such as the “glass ceiling,” the “gender gap” in
compensation, and occupational segregation–cannot be well
understood if the explanation is limited exclusively to such social
causes as discrimination and sexist socialization. Males and females
have, on average, different sets of talents, tastes, and interests, which
cause them to select somewhat different occupations and exhibit
somewhat different workplace behaviors. Some of these sex
differences have biological roots. Temperamental sex differences are
found in competitiveness, dominance-seeking, risk-taking, and
nurturance, with females tending to be more “person-oriented” and
males more “thing-oriented.” The sexes also differ in a variety of
cognitive traits, including various spatial, verbal, mathematical, and
mechanical abilities. Although social influences can be important,
these social influences operate on (and were in fact created by)
sexually dimorphic minds.
It is almost axiomatic that substantial changes in the environment
of a complex organism will result in changes in its behavior.
Therefore, we should not be surprised when changes in the economy
or changes in the nature of work are followed by changes in
workforce behavior and hence changes in workplace outcomes. For
those keeping track of the “numbers,” these changes may be
characterized as either increasing or decreasing equality, depending
upon the particular definition of equality selected. Whether one views
a particular outcome as a harbinger of “the end of men” or a
reflection of continued sexual inequality of women may be a
consequence of whether the focus is on group averages or the tail end
of distributions, as it may turn out, for example, that even if women
may do better as a group on some measures, men may still dominate
at the top.

Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science – Alan Sokal, Jean Bricmont ebook download pdf free

 

The book contains the best single chapter on philosophy of science that i've come across. very much recommended, especially for those who don't like philosophers' accounts of things. a lot of the rest of the book is devoted to long quotes full of nonsense, with explanations of why it is nonsense (where possible), or just some explanatory remarks about the fields invoked (say, relativity).

 

as such, this book is a must-read for people who are interested in the study of pseudoscience, and those interested in meaningless language use. basically, it is a collection of case studies of that.

 

 

———-

 

 

[footnote] Bertrand Russell (1948, p. 196) tells the following amusing story: “I once received a

letter from an eminent logician, Mrs Christine Ladd Franklin, saying that she was a

solipsist, and was surprised that there were not others”. We learned this reference

from Devitt (1997, p. 64).

 

LOL!

 

-

 

The answer, of course, is that we have no proof; it is simply a perfectly reasonable hypothesis. The most natural way to explain the persistence of our sensations (in particular, the unpleasant ones) is to suppose that they are caused by agents outside our consciousness. We can almost always change at will the sensations that are pure products of our imagination, but we cannot stop a war, stave off a lion, or start a broken-down car by pure thought alone. Nevertheless—and it is important to emphasize this—this argument does not refute solipsism. If anyone insists that he is a “harpsichord playing solo” (Diderot), there is no way to convince him of his error. However, we have never met a sincere solipsist and we doubt that any exist.[52] This illustrates an important principle that we shall use several times in this chapter: the mere fact that an idea is irrefutable does not imply that there is any reason to believe it is true.

 

I wonder how that epistemological point (that arguments from ignorance are no good) works with intuitionism in math/logic?

 

-

 

The universality of Humean skepticism is also its weakness. Of course, it is irrefutable. But since no one is systematically skeptical (when he or she is sincere) with respect to ordinary knowledge, one ought to ask why skepticism is rejected in that domain and why it would nevertheless be valid when applied elsewhere, for instance, to scientific knowledge. Now, the reason why we reject systematic skepticism in everyday life is more or less obvious and is similar to the reason we reject solipsism. The best way to account for the coherence of our experience is to suppose that the outside world corresponds, at least approximately, to the image of it provided by our senses.54

 

54 This hypothesis receives a deeper explanation with the subsequent development of science, in particular of the biological theory of evolution. Clearly, the possession of sensory organs that reflect more or less faithfully the outside world (or, at least, some important aspects of it) confers an evolutionary advantage. Let us stress that this argument does not refute radical skepticism, but it does increase the coherence of the anti-skeptical worldview.

 

The authors are surprisingly sophisticated philosophically, and I agree very much with their reasoning.

 

-

 

For my part, I have no doubt that, although progressive changes are to be expected in physics, the present doctrines are likely to be nearer to the truth than any rival doctrines now before the world. Science is at no moment quite right, but it is seldom quite wrong, and has, as a rule, a better chance of being right than the theories of the unscientific. It is, therefore, rational to accept it hypothetically.

—Bertrand Russell, My Philosophical Development (1995 [1959], p. 13)

 

Yes, the analogy is this: science is LIKE a function with limit 1 [it approximates truth ever more closely] over time. At any given x it is not quite at y = 1 yet, but it gets closer. It might not be completely monotonic either (which does not break the limit: a function can converge without being monotonic).

 

plato.stanford.edu/entries/scientific-progress/#Tru

 

For a quick graphical illustration, try the function f(x) = 1 - (-1/x) on the interval [1;∞]. The truth line is f(x) = 1 on the interval [0;∞]. In reality the graph would be more unsteady and not completely monotonic, corresponding to the various theories as they come and go in science. It is not only a matter of evidence (which is not an infallible indicator of truth either), but it is primarily a function of that.
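To spell out the arithmetic of that example (my own gloss, not something from Sokal and Bricmont or the SEP entry): the suggested f(x) = 1 - (-1/x) simplifies to f(x) = 1 + 1/x, and

lim (x→∞) [1 + 1/x] = 1, with f(x) ≠ 1 for every finite x ≥ 1,

so the curve keeps approaching the truth line y = 1 without ever touching it. The non-monotonicity worry is harmless: a function such as g(x) = 1 + sin(x)/x wobbles around 1 and still has limit 1.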

 

-

 

Once the general problems of solipsism and radical skepticism have been set aside, we can get down to work. Let us suppose that we are able to obtain some more-or-less reliable knowledge of the world, at least in everyday life. We can then ask: To what extent are our senses reliable or not? To answer this question, we can compare sense impressions among themselves and vary certain parameters of our everyday experience. We can map out in this way, step by step, a practiced rationality. When this is done systematically and with sufficient precision, science can begin.

For us, the scientific method is not radically different from the rational attitude in everyday life or in other domains of human knowledge. Historians, detectives, and plumbers—indeed, all human beings—use the same basic methods of induction, deduction, and assessment of evidence as do physicists or biochemists. Modern science tries to carry out these operations in a more careful and systematic way, by using controls and statistical tests, insisting on replication, and so forth. Moreover, scientific measurements are often much more precise than everyday observations; they allow us to discover hitherto unknown phenomena; and they often conflict with "common sense". But the conflict is at the level of conclusions, not the basic approach.55 56

 

55 For example: Water appears to us as a continuous fluid, but chemical and physical experiments teach us that it is made of atoms.

 

56 Throughout this chapter, we stress the methodological continuity between scientific knowledge and everyday knowledge. This is, in our view, the proper way to respond to various skeptical challenges and to dispel the confusions generated by radical interpretations of correct philosophical ideas such as the underdetermination of theories by data. But it would be naive to push this connection too far. Science—particularly fundamental physics—introduces concepts that are hard to grasp intuitively or to connect directly to common-sense notions. (For example: forces acting instantaneously throughout the universe in Newtonian mechanics, electromagnetic fields "vibrating" in vacuum in Maxwell's theory, curved space-time in Einstein's general relativity.) And it is in discussions about the meaning of these theoretical concepts that various brands of realists and anti-realists (e.g., instrumentalists, pragmatists) tend to part company. Relativists sometimes tend to fall back on instrumentalist positions when challenged, but there is a profound difference between the two attitudes. Instrumentalists may want to claim either that we have no way of knowing whether "unobservable" theoretical entities really exist, or that their meaning is defined solely through measurable quantities; but this does not imply that they regard such entities as "subjective" in the sense that their meaning would be significantly influenced by extra-scientific factors (such as the personality of the individual scientist or the social characteristics of the group to which she belongs). Indeed, instrumentalists may regard our scientific theories as, quite simply, the most satisfactory way that the human mind, with its inherent biological limitations, is capable of understanding the world.

 

Right they are.

 

-

 

Having reached this point in the discussion, the radical skeptic or relativist will ask what distinguishes science from other types of discourse about reality—religions or myths, for example, or pseudo-sciences such as astrology—and, above all, what criteria are used to make such a distinction. Our answer is nuanced. First of all, there are some general (but basically negative) epistemological principles, which go back at least to the seventeenth century: to be skeptical of a priori arguments, revelation, sacred texts, and arguments from authority. Moreover, the experience accumulated during three centuries of scientific practice has given us a series of more-or-less general methodological principles—for example, to replicate experiments, to use controls, to test medicines in double-blind protocols—that can be justified by rational arguments. However, we do not claim that these principles can be codified in a definitive way, nor that the list is exhaustive. In other words, there does not exist (at least at present) a complete codification of scientific rationality, and we seriously doubt that one could ever exist. After all, the future is inherently unpredictable; rationality is always an adaptation to a new situation. Nevertheless—and this is the main difference between us and the radical skeptics—we think that well-developed scientific theories are in general supported by good arguments, but the rationality of those arguments must be analyzed case-by-case.60

60 It is also by proceeding on a case-by-case basis that one can appreciate the immensity of the gulf separating the sciences from the pseudo-sciences.

 

Sokal and Bricmont might soon become my new favorite philosophers of science.

 

-

 

Obviously, every induction is an inference from the observed to the unobserved, and no such inference can be justified using solely deductive logic. But, as we have seen, if this argument were to be taken seriously—if rationality were to consist only of deductive logic—it would imply also that there is no good reason to believe that the Sun will rise tomorrow, and yet no one really expects the Sun not to rise.

 

I'd like to add, as I have done many times before, that there is no reason to think that induction should be provable with deduction. Why require that? But now comes the interesting part: if one takes induction as the basis instead of deduction, one can inductively prove deduction, <prove> in the ordinary, non-mathematical/logical sense. The method is enumerative induction, which I have discussed before.

emilkirkegaard.dk/en/?p=3219

 

-

 

But one may go further. It is natural to introduce a hierarchy in the degree of credence accorded to different theories, depending on the quantity and quality of the evidence supporting them.95 Every scientist—indeed, every human being—proceeds in this way and grants a higher subjective probability to the best-established theories (for instance, the evolution of species or the existence of atoms) and a lower subjective probability to more speculative theories (such as detailed theories of quantum gravity). The same reasoning applies when comparing theories in natural science with those in history or sociology. For example, the evidence of the Earth's rotation is vastly stronger than anything Kuhn could put forward in support of his historical theories. This does not mean, of course, that physicists are more clever than historians or that they use better methods, but simply that they deal with less complex problems, involving a smaller number of variables which, moreover, are easier to measure and to control. It is impossible to avoid introducing such a hierarchy in our beliefs, and this hierarchy implies that there is no conceivable argument based on the Kuhnian view of history that could give succor to those sociologists or philosophers who wish to challenge, in a blanket way, the reliability of scientific results.

 

Sokal and Bricmont even get the epistemological point about the different fields right. Color me very positively surprised.

 

-

 

Bruno Latour and His Rules of Method

The strong programme in the sociology of science has found an echo in France, particularly around Bruno Latour. His works contain a great number of propositions formulated so ambiguously that they can hardly be taken literally. And when one removes the ambiguity—as we shall do here in a few examples—one reaches the conclusion that the assertion is either true but banal, or else surprising but manifestly false.

 

Sound familiar? It is the good old two-faced sentences again, the ones that Swartz and Bradley called Janus-sentences. They yield two different interpretations: one trivial and true, one nontrivial and false. Their apparent plausibility is due to this fact.

 

quoting from Possible Worlds:

 

Janus-faced sentences

The method of possible-worlds testing is not only an invaluable aid towards resolving ambiguity; it is also an effective weapon against a particular form of linguistic sophistry. Thinkers often deceive themselves and others into supposing that they have discovered a profound truth about the universe when all they have done is utter what we shall call a "Janus-faced sentence". Janus, according to Roman mythology, was a god with two faces who was therefore able to 'face' in two directions at once. Thus, by a "Janus-faced sentence" we mean a sentence which, like "In the evolutionary struggle for existence just the fittest species survive", faces in two directions. It is ambiguous insofar as it may be used to express a noncontingent proposition, e.g., that in the struggle for existence just the surviving species survive, and may also be used to express a contingent proposition, e.g., the generalization that just the physically strongest species survive.

If a token of such a sentence-type is used to express a noncontingently true proposition then, of course, the truth of that proposition is indisputable; but since, in that case, it is true in all possible worlds, it does not tell us anything distinctive about the actual world. If, on the other hand, a token of such a sentence-type is used to express a contingent proposition, then of course that proposition does tell us something quite distinctive about the actual world; but in that case its truth is far from indisputable. The sophistry lies in supposing that the indisputable credentials of the one proposition can be transferred to the other just by virtue of the fact that one sentence-token might be used to express one of these propositions and a different sentence-token of one and the same sentence-type might be used to express the other of these propositions. For by virtue of the necessary truth of one of these propositions, the truth of the other — the contingent one — can be made to seem indisputable, can be made to seem, that is, as if it "stands to reason" that it should be true.

 

-

 

We could be accused here of focusing our attention on an ambiguity of formulation and of not trying to understand what Latour really means. In order to counter this objection, let us go back to the section "Appealing (to) Nature" (pp. 94-100) where the Third Rule is introduced and developed. Latour begins by ridiculing the appeal to Nature as a way of resolving scientific controversies, such as the one concerning solar neutrinos[121]:

A fierce controversy divides the astrophysicists who calculate the number of neutrinos coming out of the sun and Davis, the experimentalist who obtains a much smaller figure. It is easy to distinguish them and put the controversy to rest. Just let us see for ourselves in which camp the sun is really to be found. Somewhere the natural sun with its true number of neutrinos will close the mouths of dissenters and force them to accept the facts no matter how well written these papers were. (Latour 1987, p. 95)

 

 

Why does Latour choose to be ironic? The problem is to know how many neutrinos are emitted by the Sun, and this question is indeed difficult. We can hope that it will be resolved some day, not because "the natural sun will close the mouths of dissenters", but because sufficiently powerful empirical data will become available. Indeed, in order to fill in the gaps in the currently available data and to discriminate between the currently existing theories, several groups of physicists have recently built detectors of different types, and they are now performing the (difficult) measurements.122 It is thus reasonable to expect that the controversy will be settled sometime in the next few years, thanks to an accumulation of evidence that, taken together, will indicate clearly the correct solution. However, other scenarios are in principle possible: the controversy could die out because people stop being interested in the issue, or because the problem turns out to be too difficult to solve; and, at this level, sociological factors undoubtedly play a role (if only because of the budgetary constraints on research). Obviously, scientists think, or at least hope, that if the controversy is resolved it will be because of observations and not because of the literary qualities of the scientific papers. Otherwise, they will simply have ceased to do science.

 

The footnote 121 is:

The nuclear reactions that power the Sun are expected to emit copious quantities of the subatomic particle called the neutrino. By combining current theories of solar structure, nuclear physics, and elementary-particle physics, it is possible to obtain quantitative predictions for the flux and energy distribution of the solar neutrinos. Since the late 1960s, experimental physicists, beginning with the pioneering work of Raymond Davis, have been attempting to detect the solar neutrinos and measure their flux. The solar neutrinos have in fact been detected; but their flux appears to be less than one-third of the theoretical prediction. Astrophysicists and elementary-particle physicists are actively trying to determine whether the discrepancy arises from experimental error or theoretical error, and if the latter, whether the failure is in the solar models or in the elementary-particle models. For an introductory overview, see Bahcall (1990).

 

This problem sounded familiar to me.

en.wikipedia.org/wiki/Solar_neutrino_problem:

The solar neutrino problem was a major discrepancy between measurements of the numbers of neutrinos flowing through the Earth and theoretical models of the solar interior, lasting from the mid-1960s to about 2002. The discrepancy has since been resolved by new understanding of neutrino physics, requiring a modification of the Standard Model of particle physics – specifically, neutrino oscillation. Essentially, as neutrinos have mass, they can change from the type that had been expected to be produced in the Sun’s interior into two types that would not be caught by the detectors in use at the time.
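To make the mechanism in that quote a bit more concrete (this is the standard two-flavour textbook formula, not something taken from the book or from the Wikipedia article): the probability that an electron neutrino is still detected as an electron neutrino after travelling a distance L with energy E is approximately

P(νe → νe) = 1 - sin²(2θ) · sin²(1.27 · Δm² · L / E), with Δm² in eV², L in km, and E in GeV,

so as long as the mixing angle θ and the mass splitting Δm² are non-zero, i.e. neutrinos have mass, part of the solar electron-neutrino flux arrives as the other flavours, which the detectors in use at the time could not catch.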

 

Science seems to be working. Sokal and Bricmont predicted that it would be resolved "in the next few years". This was written in 1997, about 5 years before the date Wikipedia gives for the resolution. I advise reading the Wiki article, as it is quite good.

 

-

 

In this quote and the previous one, Latour is playing constantly on the confusion between facts and our knowledge of them.123 The correct answer to any scientific question, solved or not, depends on the state of Nature (for example, on the number of neutrinos that the Sun really emits). Now, it happens that, for the unsolved problems, nobody knows the right answer, while for the solved ones, we do know it (at least if the accepted solution is correct, which can always be challenged). But there is no reason to adopt a "relativist" attitude in one case and a "realist" one in the other. The difference between these attitudes is a philosophical matter, and is independent of whether the problem is solved or not. For the relativist, there is simply no unique correct answer, independent of all social and cultural circumstances; this holds for the closed questions as well as for the open ones. On the other hand, the scientists who seek the correct solution are not relativist, almost by definition. Of course they do "use Nature as the external referee": that is, they seek to know what is really happening in Nature, and they design experiments for that purpose.

 

The footnote 123 is:

An even more extreme example of this confusion appears in a recent article by Latour in La Recherche, a French monthly magazine devoted to the popularization of science (Latour 1998). Here Latour discusses what he interprets as the discovery in 1976, by French scientists working on the mummy of the pharaoh Ramses II, that his death (circa 1213 B.C.) was due to tuberculosis. Latour asks: "How could he pass away due to a bacillus discovered by Robert Koch in 1882?" Latour notes, correctly, that it would be an anachronism to assert that Ramses II was killed by machine-gun fire or died from the stress provoked by a stock-market crash. But then, Latour wonders, why isn't death from tuberculosis likewise an anachronism? He goes so far as to assert that "Before Koch, the bacillus has no real existence." He dismisses the common-sense notion that Koch discovered a pre-existing bacillus as "having only the appearance of common sense". Of course, in the rest of the article, Latour gives no argument to justify these radical claims and provides no genuine alternative to the common-sense answer. He simply stresses the obvious fact that, in order to discover the cause of Ramses' death, a sophisticated analysis in Parisian laboratories was needed. But unless Latour is putting forward the truly radical claim that nothing we discover ever existed prior to its "discovery"—in particular, that no murderer is a murderer, in the sense that he committed a crime before the police "discovered" him to be a murderer—he needs to explain what is special about bacilli, and this he has utterly failed to do. The result is that Latour is saying nothing clear, and the article oscillates between extreme banalities and blatant falsehoods.

 

?!

 

-

 

A quote from one of the crazy people:

 

The privileging of solid over fluid mechanics, and indeed the inability of science to deal with turbulent flow at all, she attributes to the association of fluidity with femininity. Whereas men have sex organs that protrude and become rigid, women have openings that leak menstrual blood and vaginal fluids. Although men, too, flow on occasion—when semen is emitted, for example—this aspect of their sexuality is not emphasized. It is the rigidity of the male organ that counts, not its complicity in fluid flow. These idealizations are reinscribed in mathematics, which conceives of fluids as laminated planes and other modified solid forms. In the same way that women are erased within masculinist theories and language, existing only as not-men, so fluids have been erased from science, existing only as not-solids. From this perspective it is no wonder that science has not been able to arrive at a successful model for turbulence. The problem of turbulent flow cannot be solved because the conceptions of fluids (and of women) have been formulated so as necessarily to leave unarticulated remainders. (Hayles 1992, p. 17)

 

You can't make this shit up.

 

-

 

Over the past three decades, remarkable progress has been made in the mathematical theory of chaos, but the idea that some physical systems may exhibit a sensitivity to initial conditions is not new. Here is what James Clerk Maxwell said in 1877, after stating the principle of determinism ("the same causes will always produce the same effects"):

 

But that is not what determinism is. Their quote seems to be from Hume's Treatise.

 

en.wikipedia.org/wiki/Causality#After_the_Middle_Ages

 

It is mentioned in his discussion of causality, which is related to, but not the same as, determinism.

 

Wikipedia gives a fine definition of <determinism>: "Determinism is a philosophy stating that for everything that happens there are conditions such that, given those conditions, nothing else could happen."

 

Also SEP: "Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature."

 

-

 

[T]he first difference between science and philosophy is their respective attitudes toward chaos. Chaos is defined not so much by its disorder as by the infinite speed with which every form taking shape in it vanishes. It is a void that is not a nothingness but a virtual, containing all possible particles and drawing out all possible forms, which spring up only to disappear immediately, without consistency or reference, without consequence. Chaos is an infinite speed of birth and disappearance. (Deleuze and Guattari 1994, pp. 117-118, italics in the original)

 

???

 

-

 

For what it's worth, electrons, unlike photons, have a non-zero mass and thus cannot move at the speed of light, precisely because of the theory of relativity of which Virilio seems so fond.

 

I think the authors did not mean what they wrote here. Surely, relativity theory is not the reason why electrons cannot move at the speed of light. Relativity theory is an explanation of how nature works, in this case, how objects with mass and velocity/speed behave.
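For reference, the textbook relation behind the point (my gloss, not a quote from the book): a particle of rest mass m moving at speed v has energy E = γmc², where γ = 1/√(1 - v²/c²). As v → c, γ → ∞, so accelerating any particle with m > 0 to the speed of light would require unbounded energy, while a photon, with m = 0, always moves at exactly c. Relativity describes this behaviour of massive objects; whether that makes it "the reason" is a matter of phrasing.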

 

-

 

We met in Paris a student who, after having brilliantly finished his undergraduate studies in physics, began reading philosophy and in particular Deleuze. He was trying to tackle Difference and Repetition. Having read the mathematical excerpts examined here (pp. 161-164), he admitted he couldn't see what Deleuze was driving at. Nevertheless, Deleuze's reputation for profundity was so strong that he hesitated to draw the natural conclusion: that if someone like himself, who had studied calculus for several years, was unable to understand these texts, allegedly about calculus, it was probably because they didn't make much sense. It seems to us that this example should have encouraged the student to analyze more critically the rest of Deleuze's writings.

 

I think the epistemological conditions of this kind of inference are very interesting. Under which conditions should one conclude that a text is meaningless?

 

-

 

7. Ambiguity as subterfuge. We have seen in this book numerous ambiguous texts that can be interpreted in two different ways: as an assertion that is true but relatively banal, or as one that is radical but manifestly false. And we cannot help thinking that, in many cases, these ambiguities are deliberate. Indeed, they offer a great advantage in intellectual battles: the radical interpretation can serve to attract relatively inexperienced listeners or readers; and if the absurdity of this version is exposed, the author can always defend himself by claiming to have been misunderstood, and retreat to the innocuous interpretation.

 

More on Janus-sentences.

 

-

 

 

Exam paper for Danish and Languages of the world