From researchgate: www.researchgate.net/post/What_is_the_actual_difference_between_1st_order_and_higher_order_logic

What is the actual difference between 1st order and higher order logic?
Yes, I know. They say the 2nd order logic is more expressive, but it is really hard for me to see why. If we have a domain X, why can’t we define the domain X’ = X ∪ 2^X and, for elements x in X’, define predicates:
SET(x)
ELEMENT(x)
BELONGS_TO(x, y) – undefined (or false) when ELEMENT(y)
etc.
Now, we can express sentences about subsets of X in the 1st-order logic!
Similarly we can define FUNCTION(x), etc. and… we can express all 2nd-order sentences in the 1st order logic!
I’m obviously overlooking something, but what actually? Where have I made a mistake?

My answer:

In many cases one can reduce a higher-order formalization to a first-order one, but it comes at the price of a more complex formalization.

For instance, formalize the following argument in both first-order and second-order logic:
All things with personal properties are persons. Being kind is a personal property. Peter is kind. Therefore, Peter is a person.

One can do this with either first or second order, but it is easier in second-order.

First-order formalization:
1. (∀x)(PersonalProperty(x)→(∀y)(HasProperty(y,x)→Person(y)))
2. PersonalProperty(kind)
3. HasProperty(peter,kind)
⊢ 4. Person(peter)

Second-order formalization:
1. (∀Φ)(PersonalProperty(Φ)→(∀x)(Φx→Person(x)))
2. PersonalProperty(IsKind)
3. IsKind(peter)
⊢ 4. Person(peter)

where Φ is a second-order variable. Basically, whenever one uses first-order logic to formalize arguments like this, one has to use a predicate like “HasProperty(x,y)” so that one can quantify over properties indirectly, via first-order variables that range over them. This is unnecessary in second-order logic.

www.goodreads.com/book/show/2404700.The_IQ_Controversy_the_Media_and_Public_Policy

The IQ Controversy, the Media and Public Policy, by Stanley Rothman (323 pp., ISBN 0887381510)

 

I read this because I want to do a follow-up study like this, both analyzing media output and doing another expert survey.

I had been thinking about using PCA on political questions to see whether there is any obvious underlying structure. Basically, I want to do it OKCupid-style: gather lots of questions, have lots of people answer them, run PCA, and see what the results are.
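A minimal sketch of the idea in Python, on random placeholder data rather than real survey answers (variable names are mine; components are extracted via SVD of the centered response matrix):

```python
import numpy as np

# Hypothetical data: 200 respondents answering 6 questions on a 1-4
# agree/disagree scale (random placeholders, not real survey responses).
rng = np.random.default_rng(0)
answers = rng.integers(1, 5, size=(200, 6)).astype(float)

X = answers - answers.mean(axis=0)            # center each question
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)               # variance share per component
scores = X @ Vt.T                             # each person's position on each component
```

With real question data, a dominant first component with a large variance share would be the "obvious underlying structure" one is looking for.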

Political perspective was assessed in two ways. First, respondents stated their agreement or disagreement with a series of six political statements. The statements dealt with U.S. economic exploitation, the fairness of the private enterprise system, affirmative action, the desirability of socialism, alienation caused by the structure of society, and the propriety of extramarital sexual relations. Responses to these statements were discovered, in a previous investigation incorporating many more such statements, to load highly on a factor representing overall political perspective.60 Agreement was assessed on a 4-point scale, where 1 was “Strongly agree” and 4 was “Strongly disagree.” For four of the six statements, the mean response is approximately at indifference. Respondents are somewhat more likely to disagree that “The United States would be better off if it moved toward socialism” and that “The structure of our society causes most people to feel alienated.” The second measure of political perspective asked experts to indicate their global political perspective on a 7-point scale, where 1 was “Very liberal” and 7 was “Very conservative.” Mean self-assessment on this scale is 3.19 (s.d.: 1.28, r.r.: 95.6%), putting this expert population slightly to the left of center.

Factor analysis of responses to the six statements and the global rating reveals that all questions, with the exception of the statement about extramarital affairs, load highly on a single factor (i.e., are highly correlated). The five statements and the global rating were therefore normalized and combined to form a political perspective supervariable. It is this variable that is used as a measure of overall political perspective. Note that the liberal position on the five included statements (e.g., belief in socialism, affirmative action, economic exploitation) can all be characterized as placing a higher value on equality of outcome than on economic efficiency.

This tactic has been used before, even if only on a limited set of political opinions.
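The supervariable construction the quote describes (z-score each item, then average across items) can be sketched on made-up data:

```python
import numpy as np

# Made-up responses: rows = respondents, columns = the five statements
# plus the global rating (values on their original scales).
items = np.array([[1, 2, 2, 1, 3, 2],
                  [4, 3, 4, 4, 2, 6],
                  [2, 2, 3, 2, 3, 3],
                  [3, 4, 3, 4, 1, 5]], dtype=float)

z = (items - items.mean(axis=0)) / items.std(axis=0)  # normalize each item
supervariable = z.mean(axis=1)                        # one overall score per respondent
```

Normalizing first puts the 4-point items and the 7-point global rating on the same scale before averaging.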

-

While few would argue that intelligence and aptitude test scores do not affect self-esteem and motivation, the magnitude of this influence is difficult to measure. There have been many reports of significant positive correlations between test scores and self-concept, motivation, or expectancy, but causality remains ambiguous.15 The evidence seems to indicate, however, that the influence of test scores on these affective variables is probably not large. (Causation in the opposite direction may not be very significant either, as the correlation may reflect the influence of a third variable, students’ actual level of ability and success in school.) Brim and his associates found that high school students tended to greatly overestimate their own intelligence, as measured by test scores. This was particularly true of students with low scores. Fifty percent of students thought their scores were too low relative to their actual level of ability, while 45 percent thought their scores were accurate. Only 7 percent of the students reported lowering their self-estimates of intelligence as a result of their test scores, while 24 percent raised their estimates.16

Dunning-Kruger, but much earlier.

Reference 16 is: Orville G. Brim, Jr., “American Attitudes Towards Intelligence Tests,” American Psychologist 20 (1965): 125-130; Brim et al. Reference 17 is: Goslin, p. 133.

openpsych.net/ODP/2014/04/criminality-among-norwegian-immigrant-populations/

Abstract
A previous study found that criminality among immigrant groups in Denmark was highly predictable by their countries of origin’s prevalence of Muslims, IQ, GDP and height. This study replicates the study for Norway with similar results.

Keywords: Crime, national IQ, group differences, country of origin

Download paper.
Forum thread and supplementary material.

www.goodreads.com/book/show/19824663-an-introduction-to-toxicology

 

gen.lib.rus.ec/book/index.php?md5=3b495b4fd548e7574a0ec998f22afac6&open=0

 

A friend of mine wants me to write a book chapter or two in his book advocating nuclear energy. Specifically, a chapter about toxicology and dose-response models. I felt less than competent, and so I decided to increase my toxicological knowledge with a textbook. I wanted an up-to-date one, so I searched libgen for “toxicology” and sorted by year of publication. Then I found this one. Googling it did not reveal any obvious evidence of low quality, so it seemed worth reading.

 

As it turned out, this book did not deal much with dose-response models! It focused on mechanistic toxicology. My chemistry knowledge was not sufficient for studying this, so there was some content I did not fully understand.

 

-

For reasons that are not entirely clear, two disreputable businessmen in Boston, Harry Gross and Max Reisman, hit upon the idea of adulterating their Ginger Jake product with the plasticiser tri-O-cresyl phosphate (TOCP), then manufactured by the Eastman Kodak company for use in lacquers and varnishes. Unaware of its toxic properties, Gross and Reisman purchased 135 gal of TOCP and added it to Ginger Jake batches that were used to fill hundreds of thousands of bottles. The product was then sold throughout the continental USA. The resulting delayed-onset neurotoxic syndrome seen in users of the product was nicknamed ‘Jake Walk’ due to the paralysing loss of leg muscle tone that progressed to the point where victims’ feet flopped like those of a marionette (Fig. 1.5). Nationwide, around 40,000–50,000 people were affected in a disaster that unfolded rapidly: in Wichita, Kansas, around 500 patients manifested signs of TOCP intoxication in a single night alone. Although partial recovery sometimes occurred, many victims were permanently incapacitated, spending the remainder of their lives in charitable institutions or county asylums. The epidemic also left its stamp on Southern popular culture, with at least a dozen references to ‘Jake Walk’ in commercial phonograph recordings by jazz musicians of the time.

 

Neat illustration of a black market effect on alcohol.

 

en.wikipedia.org/wiki/TOCP

 

-

 

In addition to synthetic substances, the term xenobiotic covers naturally occurring chemicals to which humans are regularly exposed via consumption of plant-based foodstuffs, botanical beverages and herbal remedies. While many of these substances are likely harmless or even beneficial to human health, some xenobiotics of natural origin can be very harmful indeed. As a rule, modern toxicology does not concur with the popular belief that foreign or synthetic chemicals are inherently more toxic than naturally occurring substances or even endobiotics. Many of the most toxic substances known to toxicology are of natural origin – a point that will be reinforced throughout this book. Nevertheless, synthetic chemicals of human origin typically attract the greatest attention in modern toxicology simply because they are used on a vast scale in today’s industrial societies. So while nature may produce some highly potent toxins, they are rarely produced on a comparable scale to modern synthetic substances. Another factor that maximises interest in synthetic xenobiotics is their frequent possession of physicochemical features that ensure they are long lived within biological systems or the wider environment. Since we have been exposed to natural chemicals throughout human history, our bodies are better adapted to coping with their presence compared to some synthetic substances of modern origin that may contain unusual chemical properties that render them resistant to metabolism.

 

Although it is handy to classify chemicals according to whether they are of natural or synthetic origin, this distinction is often artificial. With the development of sensitive analytical instruments for the detection and quantitation of chemicals in body fluids or tissues, we now know that many chemicals – even some we once assumed were entirely of synthetic origin and would only be encountered in the factory or industrial workplace – are actually formed at low levels within the body. Acrolein, for example, is a highly toxic carbonyl compound used during the manufacture of plastics and other synthetic chemicals (Fig. 2.1). It is also a major environmental pollutant, formed during the combustion of organic matter including tobacco, fossil fuels and forest vegetation. Acrolein also forms during cooking processes and can attain high airborne concentrations in kitchens if deep fried foods are prepared over a poorly ventilated stovetop. Yet in recent decades, our assumption that acrolein is mainly ingested from these foreign sources has been overturned by the discovery that it forms endogenously via diverse biochemical processes, including a phenomenon termed lipid peroxidation which we will examine in Chap. 4 (Sect. 4.4.4). Some scientists suspect that endogenous acrolein participates in such degenerative diseases of old age as Alzheimer’s dementia. This remains to be fully proven, and ongoing research is assessing the health significance of these endogenous exposures. It could well be that for some endogenous exposures, the high sensitivity of our modern analytical instruments leads us to overestimate their importance. Nevertheless, the fact that we are exposed to noxious substances from both external and internal sources poses a conceptual problem: should we categorise a substance like acrolein as a xenobiotic, an endobiotic, or both (Fig. 2.1)?

 

A handy reference for the appeal-to-nature fallacy.

 

-

 

Idiosyncratic sensitivity sometimes occurs because individuals express mutated or polymorphic versions of enzymes that cannot properly metabolise toxicants to facilitate their bodily elimination. In some ethnic populations, mutant xenobiotic-metabolising genes are so prevalent that they influence prescribing decisions by physicians. A famous example of this phenomenon involves the tuberculosis drug isoniazid, which causes liver damage in ~1% of patients. The conjugative enzyme N-acetyl transferase 2 (NAT2) plays an important role in isoniazid metabolism, and studies in a variety of ethnic groups have associated a genetic deficiency in NAT2 (known as ‘slow acetylators’ due to their reduced ability to metabolise isoniazid and other xenobiotics) with an increased susceptibility to liver injury.

 

-

 

Historically, much attention has been directed to CYP2D6 polymorphisms, due to the early discovery of patient subgroups that display exaggerated responses to the cardiovascular drugs debrisoquine and sparteine. The inability to metabolise debrisoquine was linked to a 2D6 polymorphism that was found to vary in its prevalence in different ethnic groups (e.g. 5–10% of Caucasians are ‘poor metabolisers (PM)’, while the incidence in Asian populations is ~1%). Using such techniques as restriction fragment length polymorphism, PCR and gene sequencing, over 110 polymorphisms were subsequently identified in the CYP2D6 gene. Genetic variants that exist at the same chromosomal locus are termed alleles. Although the number of 2D6 alleles is unusually large, allele numbers are typically high for most xenobiotic biotransformation genes compared to other genetic loci.

 

Note the importance of genomic sequencing, and of attention to racial groups, as these provide a proxy for such genetic variants.

 

www.theatlantic.com/international/archive/2014/03/europes-latest-secession-movement-venice/284562/

Venice seems to be tired of Italy. It is a bad economic trade-off for them. They want to return to their former glory. Good! We need more decentralization of power.

There was a vote:

Last week, in a move overshadowed by the international outcry over Russia’s annexation of Crimea, Plebiscito.eu, an organization representing a coalition of Venetian nationalist groups, held an unofficial referendum on breaking with Rome. Voters were first asked the main question—”Do you want Veneto to become an independent and sovereign federal republic?”—followed by three sub-questions on membership in the European Union, NATO, and the eurozone. The region’s 3.7 million eligible voters used a unique digital ID number to cast ballots online, and organizers estimate that more than 2 million voters ultimately participated in the poll.

On Friday night, people waving red-and-gold flags emblazoned with the Lion of St. Mark filled the square of Treviso, a city in the Veneto region, as the referendum’s organizers announced the results: 2,102,969 votes in favor of independence—a whopping 89 percent of all ballots cast—to 257,266 votes against. Venetians also said yes to joining NATO, the EU, and the eurozone. The overwhelming victory surprised even ardent supporters of the initiative, as most polls before the referendum estimated only about 65 percent of the region’s voters supported independence.

Someone in the comments makes the following argument:

I don’t understand why it’s so surprising that 89% of respondents in an online, unofficial poll organized by Venetian nationalist groups voted that way. As a proportion of all eligible voters, that comes out to 55-60%, much closer to what you’d expect from neutral sampling.
Self-selection bias is a huge problem with online polling, and I expect that given the methodology of the referendum, that would explain a large part of the discrepancy between the predicted and observed outcomes.

My response:

You are assuming that the entire set of nonvoting citizens would be against it. While there is likely some self-selection, it is NOT likely to be 100%.

I did the math for every 10% increment. If the nonvoters had all voted “yes” or all voted “no”, the total outcome range is [56.84%, 93.05%], a clear majority in any case.

Even given a very strong self-selection effect such that nonvoters are 70% against, the outcome is 67.7% “yes”.

I did the math, and it is here: docs.google.com/spreadsheet/ccc?key=0AoYWmgpqFzdsdDZUSWhOOEctRnFhakVLUjFsbFpWUHc#gid=0

Here’s the takeaway. Venice wants to be independent and it is not a narrow decision, even assuming implausible self-selection.
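The spreadsheet arithmetic can be reproduced in a few lines, using the counts from the article quoted above (2,102,969 yes, 257,266 no, 3.7 million eligible voters):

```python
eligible = 3_700_000
yes, no = 2_102_969, 257_266
nonvoters = eligible - (yes + no)

def yes_share(frac_nonvoters_yes):
    """Final 'yes' share if a given fraction of the nonvoters had voted yes."""
    return (yes + frac_nonvoters_yes * nonvoters) / eligible

low  = yes_share(0.0)              # all nonvoters vote no  -> ~56.8%
high = yes_share(1.0)              # all nonvoters vote yes -> ~93.0%
strong_selection = yes_share(0.3)  # nonvoters 70% against  -> ~67.7%
```

Sweeping `frac_nonvoters_yes` over 0.0, 0.1, ..., 1.0 reproduces the full 10%-increment table.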

Abstract
Criminality rates and fertility vary wildly among Danish immigrant populations by their country of origin. Correlational and regression analyses show that these are very predictable (R’s about .85 and .5) at the group level with national IQ, Islam belief, GDP and height as predictors.

Published in our new journal for psychology.

openpsych.net/index.php/diff/article/view/7

Peer review is here: openpsych.net/forum/showthread.php?tid=2&action=lastpost

I was asked to comment on this Reddit thread: www.reddit.com/r/netsec/comments/s1t2c/netsec_how_would_you_design_an_electronic_voting/

 

This post is written with the assumption that a bitcoin-like system is used.

 

Nirvana / perfect solution fallacy

I agree. I don’t think an electronic system needs to solve every problem present in a paper system, it just needs to be better. Right now, for example, one could buy an absentee ballot and be done with it. I think a system that makes it less practical to do something similar is an improvement.

 

As always when considering options, one should choose the best solution, not stubbornly refuse any change that will not give a perfect situation. Paper voting is not perfect either.

 

-

 

Threatening scenarios

The instant you let people vote from remote locations, everything else is up in the air. It doesn’t matter if the endpoints are secure.
Say you can vote by phone. I have my goons “canvass” the area knocking on doors. “Hey, have you voted for Smith yet? You haven’t? Well, go get your phone, we will help you do it right now.”
If you are trying to do secure voting over the Internet, you have already lost.

 

While one cannot bring goons right into the voting booths, it is quite clearly possible to threaten people into voting a particular way right now. The reason it is not generally done is that every single vote has very little power, and the costs are therefore absurdly high for anyone trying scare tactics.

 

It is also easy to solve by making it possible to change votes after they have been cast. This is clearly possible with computer technology but hard with paper.

 

-

 

Viruses that target voting software

This is clearly an issue. However, people can easily check that their votes are correct in the votechain (blockchain analogy). A sophisticated virus might wait until the last minute and then vote, but this can easily be prevented by turning off the computers used.

 

Furthermore, I imagine that one would use specialized software for voting, especially a Linux system designed specifically for security and voting, rigorously tested by thousands of independent coders. One might also create specialized hardware for voting, i.e. special-purpose computers. Specifically, one can have read-only memory, which makes it impossible to install malicious software on the system. For instance, the hardware might have built-in voting software and a camera for scanning a QR code with one’s private key(s).

 

Lastly, one can use 2FA to enhance security, just as one does everywhere else on the web where extra safety is needed.

 

-

 

Anonymous and verifiable voting

You can either have a system where people can verify their vote and take some type of receipt to prove the system recorded their vote wrong, or you can have anonymous voting. You cannot have verifiable voting AND anonymous voting. Someone somewhere has to be able to decrypt or access whatever keys or pins or you are holding a meaningless or login or hash that can’t prove you aren’t lying or didn’t change your vote etc.

 

Yes you can, with pseudonymous voting in a bitcoin-like system. Everybody can verify that no more votes are cast than there are eligible voters, but the individuals who control the addresses are not identifiable from the code alone. They can choose to announce their address publicly so that people can connect the two. This will of course be done by public persons.

 

-

 

Selling votes

This is already possible. It is already possible to verify this as well, as one can easily film the process of voting. This is not generally illegal either.

 

The reason why people do not generally buy or sell votes is that single votes have basically no power and hence are worth nothing.

 

As pointed out in the thread, this is already possible with mail-voting.

 

Lastly, buying and selling votes is generally thought to be evil or wrong, but only when done directly. It is clearly legal indirectly, and even if not de jure legal, it is de facto legal. In every modern democracy, it is common for politicians to offer certain wealth or income redistribution policies. If the people who would benefit from these policies vote for those politicians, they are indirectly receiving money for voting for a given politician/party. For this reason, the buying and selling of votes is a non-issue.

 

-

 

The ease of digital attacks

It seems to me that the real problem is the scalability of the attacks in the digital sphere. Changing votes in our regular system of several thousand human ballot counters looking at pieces of paper is rather costly. A well-planned digital attack can be virtually free of cost (not counting the time it takes to figure out the attack).

 

This is a concern, and that is why one will need tough security and verification technologies. I have suggested several above.

 

-

 

Interceptions of the signal

Whatever, VPN, custom software, browser. It’s the same thing. Malware or even an ISP could intercept and manipulate what is displayed or recorded. The software on the receiving end can also be manipulated but more likely to have some controls of the hardware and software, but again, who inspects this?

 

This could be a problem. It can be reduced by having a nationally free, encrypted VPN/proxy for voting purposes.

 

-

 

Others who were faster than me

Voting could not be more further from any of the simplest banking. The idea behind banking or any “secure” online transaction is that it is not anonymous. Bitcoin might be the only viable anonymous type online voting.

 

-

 

The bitcoin protocol would actually be fantastic for this. I should explain for those unaware: Bitcoin is actually two different things. One: A protocol, and Two: A software implementing the protocol to send ‘coins’ like money to others. I’ll do a writeup a little later, but the gist of it is: the votes would be public for anyone to view, impossible to fake/forge, and still anonymous. This would be done by embedding the voting information into the blockchain.

 

-

 

Strong encryption with distributed verification a la bitcoin. You don’t have to trust the clients; you trust the math. I’m by no means a crypto expert, so don’t look to me for design tips, but I suspect you could map a private key to each valid voter’s SSN then generate a vote (hash) that could be verified by the voter pool.

 

These posts date to “1 year ago” according to Reddit. Clearly, I was not the first to think of the obvious.

 

-

 

Who is going to mine votecoins?

So unless you are actually piggy-backing voting ontop of another currency (like the main bitcoin blockchain), there’s no incentive for ordinary citizens to participate and validate/process the blockchain. What are they mining? More votes?? That seems weird/illegitimate. If you say “well, some government agency can just do all the mining and distribute coins to voters” this would seem to offer no improvement over a straightforward centralized system, and only introduces extra questions like

 

The government and the users who want to help out. Surely citizens have some self interest in getting the election over with. This is a non-issue.

 

If the government started the block chain, mined the correct number of coins, and then put it in the “no more coins mode” then we would have the setup for it. If they could convince one of the major pools to do merged mining with them (i’m not sure what they would exchange for this, but it would only have to be for a week/month) if hiring a pool is out of the question then just realize that the govt spends millions routinely on elections, and $10M should be more than enough to beat most mafias (~9Thash/s which is roughly what the current bitcoin rate is). If someone like the coke brothers tried to overpower this it would be very obvious.

 

Yes, this is the same solution I suggested. Code the system so that the first block issues all the votecoins.

 

Another option is making a dual currency system, such that one can help mine votecoins and only get rewarded in rewardcoins. That way the counting is distributed to whoever wants the job.

 

-

 

The prize for the least imagination

The simple answer is that I would not. The risks and downsides of such a system are inherently not worth the only benefit which I can think of (faster results). This should also answer your last question. This hasn’t been done simply because there is no good reason to do it.

 

No other benefits? Like… an infinite variety of other voting systems???

 

-

 

The price of online voting

You’re assuming the cost of an electronic voting system and the time it will take for people to be comfortable using them will outpace paper and pen, which if you ask me is a pretty damn big assumption. Maybe someday, but until a grandma can easily understand and use electronic voting I am loathe to even think about implementing it. A voting system needs to be transparent and easy to understand.

 

In Denmark it costs about 100 million DKK to hold a vote. Is he really suggesting this cannot be done more cheaply with computers? I can’t take that seriously.

 

-

 

 

I have long had an idea that one needs to be able to generate problems for IQ batteries automatically.

Some things are very easy to generate. Digit spans just require the computer to generate random numbers one at a time and ask the user to input them again.
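For instance, a minimal digit span generator (a sketch; the function names are mine):

```python
import random

def digit_span_item(length, rng=None):
    """Generate a random digit sequence of the given span length."""
    rng = rng or random.Random()
    return [rng.randint(0, 9) for _ in range(length)]

def score(presented, response):
    """Pass/fail: the response must reproduce the sequence exactly."""
    return presented == response

seq = digit_span_item(5)
```

A real test would present the digits one at a time and increase the span until the user fails.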

Others are harder to generate. So far I have figured out how to generate two of the harder ones: 1) vocabulary test, 2) number series tests.

Vocab test

First, one needs a list of words ranked by frequency. Such lists can be found for most languages. They can also be generated quickly by taking a large body of text and analyzing it: e.g., download a book, like Harry Potter, count the occurrences of every word, and sort the list.

The difficulty of a word is its rank on the frequency list: the more uncommon words are harder. For testing, one can choose a word at random from the interval of the 100-1000 most common words, the 1000-1500 most common, the 1500-2000 most common, etc., until one gets to perhaps words in the 30k range, which are pretty rare. Or however far one wants to go.

Second, one needs a dictionary with meanings of words. There are lots of online ones for this purpose, e.g. Wiktionary.

To generate a problem, choose N random words in the difficulty category and get all their definitions from the dictionary. Now you have N words and N definitions. There are multiple ways to use them; one simple way is to select one word at random and ask the user to pick the correct meaning from the N available.

To make things harder, one can restrict the choices to words from the same grammatical category (noun, verb, adverb, adjective).

One can do this for any language where one can find a minable online dictionary and a frequency list (or just make one).
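A sketch of this generator, with a tiny made-up lexicon standing in for the real frequency list and dictionary:

```python
import random

# Tiny placeholder lexicon, ordered from most to least frequent.
# A real test would use a corpus-derived frequency list and a
# dictionary such as Wiktionary.
lexicon = [
    ("house", "a building for people to live in"),
    ("quick", "moving fast"),
    ("ponder", "to think about something carefully"),
    ("lucid", "clearly expressed and easy to understand"),
    ("obdurate", "stubbornly refusing to change one's opinion"),
]

def make_item(band, n_choices=3, rng=None):
    """Pick a target word from a rank band (a slice of the frequency
    list) and present its definition among n_choices definitions."""
    rng = rng or random.Random()
    word, correct = rng.choice(lexicon[band])
    distractors = [d for w, d in lexicon if d != correct]
    options = rng.sample(distractors, n_choices - 1) + [correct]
    rng.shuffle(options)
    return word, options, correct

# Draw an item from the rarer half of the list.
word, options, correct = make_item(slice(2, 5))
```

Restricting the band to one grammatical category, as suggested above, just means filtering the lexicon before slicing.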

Number series test

Everybody knows these problems. E.g.: list = [1,2,3,4,?]. The next number is 5, of course. list = [1,3,6,10,?]; the next is 15.

I have succeeded in finding an analytic solution to one kind of these problems, the additive ones at any depth.

Take the second series above. The analysis is to find the difference between any two adjacent numbers. Repeat this all the way down.

For the above, it goes: [[1, 3, 6, 10], [2, 3, 4], [1, 1], [0]]. 3-1=2, 6-3=3, 10-6=4. Then do it for the result too. 3-2=1, 4-3=1. 1-1=0.

When one finds a line with the same number repeated, it means that one has found the depth for this type of problem. The above problem is a 3rd level problem because the repetition is at the third level. For the first problem above: [[1, 2, 3, 4], [1, 1, 1], [0, 0], [0]]. The depth is 2. For the ultra easy, the depth is 1: [[3, 3, 3, 3], [0, 0, 0], [0, 0], [0]].
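The analysis just described can be written directly as code: build the table of repeated differences, and read off the depth as the first level whose entries are all equal.

```python
def difference_table(series):
    """Rows of repeated differences, down to a single element."""
    rows = [list(series)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows

def depth(series):
    """First level (1-indexed) whose entries are all equal."""
    for level, row in enumerate(difference_table(series), start=1):
        if len(set(row)) == 1:
            return level
```

For instance, `difference_table([1, 3, 6, 10])` gives `[[1, 3, 6, 10], [2, 3, 4], [1, 1], [0]]`, and `depth([1, 3, 6, 10])` is 3, matching the worked example above.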

From this, I have worked out which information is necessary to generate these from the bottom up. One needs: 1) the length of the series, 2) the depth of the repetition, 3) the initial number at each level. For instance, let’s say we choose the seeds [4,5,6], depth 3 and length 5. Then we get:

4,x,x,x,x

5,x,x,x

6,x,x

Since we know the depth is 3, we know that the initial must be repeated:

4,x,x,x,x

5,x,x,x

6,6,6

Then we can calculate the first unknown x above. It’s 5+6=11:

4,x,x,x,x

5,11,x,x

6,6,6

Then we can calculate the next x. It’s 11+6=17. And so on.

4,9,20,37,60

5,11,17,23

6,6,6

The final problem is then: 4, 9, 20, 37, ? with correct answer 60 (= 37 + 23).
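The bottom-up generation just walked through can be sketched as follows (the function name is mine): the seeds give the first number at each level, the deepest level is constant, and each level is filled in by cumulative addition of the level below.

```python
def generate_series(seeds, length):
    """Build an additive number series from one seed per level."""
    levels = len(seeds)
    rows = [[s] for s in seeds]
    # The deepest level just repeats its seed.
    rows[-1] = [seeds[-1]] * (length - levels + 1)
    # Fill each higher level left to right: next = previous + value below.
    for lvl in range(levels - 2, -1, -1):
        while len(rows[lvl]) < length - lvl:
            rows[lvl].append(rows[lvl][-1] + rows[lvl + 1][len(rows[lvl]) - 1])
    return rows[0]
```

`generate_series([4, 5, 6], 5)` reproduces the worked example: the differences row comes out [5, 11, 17, 23], so the series is [4, 9, 20, 37, 60].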

The problems can be made at an arbitrary difficulty level:

What is the next number? [-6, -5, -13, -20, -6, ?] (length = 6, depth = 5).

If you didn’t solve it, here’s the analysis: [[-6, -5, -13, -20, -6, 59], [1, -8, -7, 14, 65], [-9, 1, 21, 51], [10, 20, 30], [10, 10]]

They can be made impossibly hard to anyone not familiar with this analysis:

What is the next number? [-6, -15, -16, -10, 12, 51, 97, 126, 82, -171]

Further, one can vary the number range of the random numbers. Negative numbers are harder to think about, and it’s even worse when they cross back and forth around 0. The above problem is really hard. I doubt many could solve it even given unlimited time if they didn’t know the analysis.

The code is here: algorithm

I would have uploaded it to Github, but apparently finding out how to upload files to GitHub was harder than figuring out how to disable the filetype security on my WordPress blog. Fail.

Website: openpsych.net

-

-

From reddit www.reddit.com/r/Khan/comments/1znhcx/khan_academy_gets_rare_partnership_to_close/

Me:

Test prepping does not work very well, so it is a minor issue. The SAT and ACT are mainly tests of g, and one cannot train g.

Him:

That’s a common misconception, but they’re not general intelligence tests. SAT and ACT test very specific material that can be studied, so test prep actually makes a huge difference in scores.

Me:

No. Try these: pss.sagepub.com/content/15/6/373.short infoproc.blogspot.dk/2012/02/test-preparation-and-sat-scores.html

Him:

I’m a little too tired to look up studies and pick apart the methodology, so I’m just going to make a couple general points. First and foremost, why are the writers of the sat revising the test to make it less susceptible to test preparation (by their own admission) if it’s not affected by test preparation? Why change it at all if it’s an accurate test of general intelligence? Why is family income so strongly correlated to sat scores if the test score can’t be affected by other factors? Why would so many elite colleges be turning sat optional if the score actually represented human intelligence? Are native English speakers naturally smarter than not native speakers since native speakers have higher average scores?

Me:

I’m a little too tired to look up studies and pick apart the methodology, so I’m just going to make a couple general points. First and foremost, why are the writers of the sat revising the test to make it less susceptible to test preparation (by their own admission) if it’s not affected by test preparation? Why change it at all if it’s an accurate test of general intelligence?

Maybe because people like you think like this.

Why is family income so strongly correlated to sat scores if the test score can’t be affected by other factors?

Because high g parents have high g children and high g parents earn more money. Common cause.

You should drop the straw man. I did not say that it was not affected by other causes.
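The common-cause point can be illustrated with a small simulation (a sketch with made-up coefficients, not real data): parental g influences both family income and child g, child g drives the SAT score, and income never enters the score equation, yet income and scores still correlate.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 10_000
parent_g = [random.gauss(0, 1) for _ in range(n)]
# Income depends on parental g plus noise; child g is partly inherited.
income  = [0.4 * g + random.gauss(0, 1) for g in parent_g]
child_g = [0.5 * g + random.gauss(0, 1) for g in parent_g]
# The score depends only on child g -- income never enters the equation.
sat     = [0.9 * g + random.gauss(0, 0.5) for g in child_g]

print(round(pearson(income, sat), 2))  # positive despite no causal link
```

The printed correlation is positive purely because parental g is a common cause of both variables, which is the confound the reply above points at.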

Why would so many elite colleges be turning sat optional if the score actually represented human intelligence?

For political reasons? To meet racial quotas more easily?

Are native English speakers naturally smarter than not native speakers since native speakers have higher average scores?

If you use a verbal test on non-natives then you get biased results. That’s why one uses non-verbal tests instead.

Here’s a study for ACT. www.sciencedirect.com/science/article/pii/S0160289607000487

But really, it has been known for decades that the SAT/ACT are mostly measures of g.

There’s a replication of the 2004 study here: www.sciencedirect.com/science/article/pii/S0191886906000869

Him:

Maybe because people like you think like this.

Not sure what you mean. People with relevant expertise?

You should drop the straw man. I did not say that it was not affected by other causes

It’s not a straw man, I’m challenging an assumption necessary to your argument. You predicated your claim that sat scores can’t be changed by prep on the (dubious) premise that the sat is a test of general intelligence, thereby assuming that tests of g can’t be improved by prep. I’m demonstrating that the scores can be affected by other factors, and positing that a test that can be affected by myriad other factors can be influenced by targeted practice.

All of this boils down to the simple truth that I know for a fact that students’ scores improve with practice. It doesn’t matter how many tangentially related studies you cite, I’ve seen the unequivocal reality hands on. I’m working with 3 students right now. One has gone up 200 points on the sat, one has gone up 300, and the last has gone up 4 points on the act.

I really gotta ask, have you ever taken either test? What did you get? It’s hard for me to imagine that someone who knows the question types actually thinks the sat tests human intelligence. (I won’t go so far as to say that about the act, but it’s still on material that can be practiced)

Me:

It’s not a straw man, I’m challenging an assumption necessary to your argument. You predicated your claim that sat scores can’t be changed by prep on the (dubious) premise that the sat is a test of general intelligence, thereby assuming that tests of g can’t be improved by prep. I’m demonstrating that the scores can be affected by other factors, and positing that a test that can be affected by myriad other factors can be influenced by targeted practice.

It is a straw man. I did not claim that one cannot train SAT scores; I specifically said one can, but not by much.

One can improve IQ scores (manifest variable) but not g (latent variable). See: www.sciencedirect.com/science/article/pii/S0160289606000778

All of this boils down to the simple truth that I know for a fact that students’ scores improve with practice. It doesn’t matter how many tangentially related studies you cite, I’ve seen the unequivocal reality hands on. I’m working with 3 students right now. One has gone up 200 points on the sat, one has gone up 300, and the last has gone up 4 points on the act.

en.wikipedia.org/wiki/Anecdotal_evidence

Him:

You’re the only one here talking about g. This is an article about the SAT, not g.

And are you really claiming that the universal consensus of everyone exposed to test prep is simply anecdotal evidence that you, a dilettante, can see right through? Again, ETS, the writers of the SAT, have just stated that it’s too influenced by test preparation. You know more about it than they do?

Me:

You’re the only one here talking about g. This is an article about the SAT, not g.

You’ve also been talking about it above.

And are you really claiming that the universal consensus of everyone exposed to test prep is simply anecdotal evidence that you, a dilettante, can see right through? Again, ETS, the writers of the SAT, have just stated that it’s too influenced by test preparation. You know more about it than they do?

No evidence for any universal consensus has been presented. One can see that this isn’t so on Wikipedia as well. en.wikipedia.org/wiki/SAT#Preparations

I already stated that one can increase SAT scores by training, but not by much.

According to the systematic review mentioned indirectly on Wikipedia: onlinelibrary.wiley.com/doi/10.1111/j.1468-2397.2011.00812.x/abstract

The mean gains on the Verbal and Math sections were 24 and 33 points. Compared with the published standard deviations of the subtests, around 114-118, you can see that this training did not accomplish much: roughly a .25 SD increase, equivalent to about 3.75 IQ points (SD 15).

media.collegeboard.com/digitalServices/pdf/research/SAT-Percentile-Ranks-2013.pdf
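The effect-size arithmetic in that reply can be checked directly with the gains and subtest SDs quoted above:

```python
# Mean coaching gains (Verbal, Math) from the review cited above,
# and the approximate published SDs of the subtests.
gains = {"verbal": 24, "math": 33}
sds   = {"verbal": 114, "math": 118}

d = {k: gains[k] / sds[k] for k in gains}   # gain in SD units per subtest
mean_d = sum(d.values()) / len(d)           # ~0.25 SD overall
iq_equiv = mean_d * 15                      # expressed on an IQ scale (SD 15)

print(round(d["verbal"], 2), round(d["math"], 2))  # 0.21 0.28
print(round(mean_d, 2), round(iq_equiv, 1))        # 0.25 3.7
```

The exact value comes out near 3.7 IQ points; the 3.75 figure follows from rounding the SD gain to .25 before multiplying by 15.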

Him:

The Wiley link above is the most recent article you’ve mentioned, and it still implies that data quality in the field is poor. And while I would dispute that the figures they give would be accurate for my work, they still called the gains from test preparation significant. From the article:

“As long as coaching remains inaccessible to some students, we urge universities to reconsider the weight given to SAT scores in the undergraduate admissions process. We challenge the designers of the SAT to redesign the examination to eliminate the possibility of score gains from coaching. Finally, we call for researchers to increase the production of high-quality data in this field to ensure accurate estimates of coaching’s effects are made available to all.”

And I gotta come back to this. Have you ever even seen an SAT?

At this point it seems like he had given up trying to argue, and merely wanted to talk about other stuff.

One can recap it in terms of references given:

Me – 9

Him – 0