Predatory journals

I had my first Twitter controversy. So:

I pointed out in the reply to this that they don't actually charge that much normally. The comparison is here. The prices are around 500-3000 USD, with an eyeballed average around 2500 USD.

Now, this is just a factual error, so not so bad. However…

If anyone is wondering why he is so emotional, he gave the answer himself:

A very brief history of journals and science

  • Science starts out involving few individuals.
  • They need a way to communicate ideas.
  • They set up journals to distribute the ideas on paper.
  • Printing costs money, so they cost money to buy.
  • Due to limitations of paper space, there needs to be some selection in what gets printed; the resulting system is peer review. In this system, academics write the papers, edit them, and review them. All for free.
  • Fast forward and what happens is that big business takes over the running of the journals so academics can focus on science. As it does, the prices rise becus of monetary interests.
  • Academics are reluctant to give up publishing in and buying journals becus their reputation system is built on publishing in said journals. I.e. the system is inherently conservatively biased (Status quo bias). It is perfect for business to make money from.
  • Now along comes the internet which means that publishing does not need to rely on paper. This means that marginal printing cost is very close to 0. Yet the journals keep demanding high prices becus academia is reliant on them becus they are the source of the reputation system.
  • There is a growing movement in academia holding that this is a bad situation for science, and that publications shud be openly available (the open access movement). New OA journals are set up. However, since they are also either for-profit or crypto for-profit, in order to make money they charge outrageous amounts (say, anything above 100 USD) to publish some text and figures on a website. Academics still provide nearly all the work for free, yet they have to pay enormous amounts of money to publish, while the publisher provides a mere website (and perhaps some copyediting etc.).

Who thinks that is a good solution? It is clearly a smart business move. For instance, the popular OA metajournal Frontiers is partly owned by Nature Publishing Group. This company thus very neatly makes money off both its legacy journals and the new challenger journals.

The solution is to set up journals run by academics again, now that the internet makes this rather easy and cheap. The profit motive is bad for science and just results in even worse journals.

As for my claim, I stand by it. Altho in retrospect, the more correct term is parasitic: publishers are middlemen exploiting the fact that academia relies on established journals for reputation.

Review: The Digital Scholar: How Technology is Transforming Academic Practice (Martin Weller)

www.goodreads.com/book/show/12582388-the-digital-scholar

gen.lib.rus.ec/book/index.php?md5=5343D586EEEFB7BB24DE5B71FBD07C32

Someone posted a nice collection of books dealing with the on-going revolution in science:

So I decided to read some of them. Ironically, many of them are not available for free (contrary to the general idea of openness in them).

The book is short at 200 pages, with 14 chapters covering most aspects of the changing educational system. It is at times long-winded; it shud probably have been 20-50 pages shorter. However, it seems fine as a general introduction to the area. The author shud have used more grafs, figures etc. to make his points. There are plenty of good figures for these things (e.g. journal revenue increases).

Review: G Is for Genes: The Impact of Genetics on Education and Achievement (Kathryn Asbury, Robert Plomin)

www.goodreads.com/book/show/17015094-g-is-for-genes

gen.lib.rus.ec/book/index.php?md5=97ac0ec914522d3c888679e9c02291c6

So I kept finding references to this book in papers, so I decided to read it. It is a quick read introducing behavior genetics and its results to lay readers and perhaps policy makers. The book is overly long (200 pages) for its content; it cud easily have been cut by 30 pages. The book itself contains little new to people familiar with the field (i.e. me), but there are some references that were interesting and unknown to me. It may pay for the expert to simply skim the reference lists for each chapter and read those papers instead.

The main thrust of the book is what policies we shud implement becus of our 'new' behavioral genetic knowledge. Basically, the authors think that we need to add more choice to schools becus everybody is different, and that we want to use gene-environment correlations to improve results. It is hard to disagree with this. They go on about how labeling is bad, but obviously labeling is useful for talking about things.

If one is interested in school policy, then reading this book may be worth it, especially if one is a layman. If one is interested in learning behavior genetics, read something else (e.g. Plomin's 2012 textbook).

Review: Unlearning Liberty (Greg Lukianoff)

www.goodreads.com/book/show/13587028-unlearning-liberty

gen.lib.rus.ec/book/index.php?md5=21a52e674d9a13997e55b8a12581f4a4

I just have to blog about this book. Normally when I review a book, I have a number of quotes from it. Not this one. It is not becus it isnt quotable, but becus I wud have to quote excessive amounts from it. Very briefly, this book is a discussion of a number of case studies in freedom of speech related to US universities. Surprisingly, even tho the constitution and the supreme court are rather clear about free speech (compared with Denmark), universities constantly break the law in this area. This book was definitely an eye-opener as to what is going on in US universities, and it provides the explanation for why there is so much bias in academia and its products.

Costs and benefits of publishing in legacy journals vs. new journals

I recently published a paper in Open Differential Psychology. After it was published, I decided to tell some colleagues about it so that they would not miss it, since it is not published in either of the two primary journals in the field: Intelligence and Personality and Individual Differences (Intell and PAID). My email is this:

Dear colleagues,

I wish to inform you about my paper which has just been published in Open Differential Psychology.

Abstract
Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

You can read the full paper at the journal website: openpsych.net/ODP/2014/09/the-international-general-socioeconomic-factor-factor-analyzing-international-rankings/

Regards,
Emil

One researcher responded with:

Dear Emil,
Thanks for your paper.
Why not publishing in standard well established well recognized journals listed in Scopus and Web of Science benefiting from review and increasing your reputation after publishing there?
Go this way!
Best,
NAME

This concerns the decision of where to publish. I discussed this in a blog post back in March, before setting up OpenPsych. To be very brief, the benefits of publishing in legacy journals are: 1) recognition, 2) indexing in proprietary indexes (SCOPUS, WoS, etc.), 3) perhaps better peer review, 4) perhaps a fancier appearance of the final paper. The first is very important if one is an up-and-coming researcher (like me) because one will need recognition from university people to get hired.

I nevertheless decided NOT to publish (much) in legacy journals. In fact, the reason I got into publishing studies so late is that I dislike the legacy journals in this field (and most other fields). Why dislike legacy journals? I made an overview here, but to sum it up: 1) either not open access or extremely pricey, 2) no data sharing, 3) an opaque peer review system, 4) very slow peer review (~200 days on average in the case of Intell and PAID), 5) you're supporting companies that add little value to science and charge insane amounts of money for it (for Elsevier, see e.g. Wikipedia; TechDirt has a large number of posts concerning that company alone).

As a person who strongly believes in open science (data, code, review, access), there is no way I can defend a decision to publish in Elsevier journals. Their practices are clearly antithetical to science. I also signed The Cost of Knowledge petition not to publish or review for them. Elsevier has a strong economic interest in keeping up their practices and I’m sure they will. The only way to change science for the better is to publish in other journals.

Non-Elsevier journals

Aside from Elsevier journals, one could publish in PLoS or Frontiers journals. They are open access, right? Yes, and that's a good improvement. However, they are also predatory, because they charge exorbitant fees to publish: 1600 € (Frontiers) or 1350 US$ (PLoS). One might as well publish with Elsevier as open access, for which they charge 1800 US$.

So are there any open access journals without publication fees in this field? There is only one as far as I know: the newly established Journal of Intelligence. However, the journal site states that the lack of a publication fee is a temporary state of affairs, so there seems to be no reason to help them get established by publishing in their journal. After realizing this, I began work on starting a new journal. I knew that there was a lot of talent in the blogosphere with a similar mindset to mine who could probably be convinced to review for and publish in the new journal.

Indexing

But what about indexing? Web of Science and SCOPUS are both proprietary, not freely available to anyone with an internet connection. But there is a fast-growing alternative: Google Scholar. Scholar is improving rapidly compared with the legacy indexers and is arguably already better, since it indexes a host of grey literature sources that the legacy indexers don't cover. A recent article compared Scholar to WoS. I quote:

Abstract Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: "The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed". GS's coverage of WoS could in principle become complete in which case WoS could become a subset of GS that could be selected via a GS option "Select WoS-indexed journals and conferences only". Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews. [source]

In other words, Scholar covers almost all the articles that WoS covers already and is quickly catching up on the older studies too. In a few years Scholar will cover close to 100% of the articles in legacy indexers and they will be nearly obsolete.

Getting noticed

One thing related to the above is getting noticed by other researchers. Since many researchers read legacy journals, simply being published in them is likely sufficient to get some attention (and citations!). It is however not the only way. The internet has changed the situation completely: there are now lots of different ways to get noticed: 1) Twitter, 2) ResearchGate, 3) Facebook/Google+, 4) Reddit, 5) Google Scholar, which will inform you about any new research by anyone you have cited previously, 6) blogs (own or others') and 7) emails to colleagues (as above).

Peer review

Peer review in OpenPsych is innovative in two ways: 1) it is forum-style instead of email-based, which is better suited for communication among more than two persons, and 2) it is openly visible, which works against biased reviewing. Aside from this, it is also much faster, currently averaging 20 days in review.

Reputation and career

There is clearly a drawback here to publishing in OpenPsych journals compared with legacy journals. Any new journal is likely to be viewed as not serious by many researchers. Most people dislike change, academics included (perhaps especially?). Publishing there will not improve one's chances of getting hired as much as publishing in the primary journals will. So one must weigh what is most important: science or career?

Venice wants to be independent: analysis of possible outcomes given self-selection of the voting population

www.theatlantic.com/international/archive/2014/03/europes-latest-secession-movement-venice/284562/

Venice seems to be tired of Italy. It's a bad economic trade-off for them. They want to return to their former glory. Good! We need more decentralization of power.

There was a vote:

Last week, in a move overshadowed by the international outcry over Russia’s annexation of Crimea, Plebiscito.eu, an organization representing a coalition of Venetian nationalist groups, held an unofficial referendum on breaking with Rome. Voters were first asked the main question—”Do you want Veneto to become an independent and sovereign federal republic?”—followed by three sub-questions on membership in the European Union, NATO, and the eurozone. The region’s 3.7 million eligible voters used a unique digital ID number to cast ballots online, and organizers estimate that more than 2 million voters ultimately participated in the poll.

On Friday night, people waving red-and-gold flags emblazoned with the Lion of St. Mark filled the square of Treviso, a city in the Veneto region, as the referendum’s organizers announced the results: 2,102,969 votes in favor of independence—a whopping 89 percent of all ballots cast—to 257,266 votes against. Venetians also said yes to joining NATO, the EU, and the eurozone. The overwhelming victory surprised even ardent supporters of the initiative, as most polls before the referendum estimated only about 65 percent of the region’s voters supported independence.

Someone in the comments makes the following argument:

I don’t understand why it’s so surprising that 89% of respondents in an online, unofficial poll organized by Venetian nationalist groups voted that way. As a proportion of all eligible voters, that comes out to 55-60%, much closer to what you’d expect from neutral sampling.
Self-selection bias is a huge problem with online polling, and I expect that given the methodology of the referendum, that would explain a large part of the discrepancy between the predicted and observed outcomes.

My response:

You are assuming that the entire set of nonvoting citizens would be against it. While there is likely some self-selection, it is NOT likely to be 100%.

I did the math for every 10% increment. If the nonvoters had voted either all "yes" or all "no", the total outcome range is [56.84%, 93.05%], a clear majority in any case.

Even given a very strong self-selection effect such that nonvoters are 70% against, the outcome is 67.7% “yes”.

I did the math, and it is here: docs.google.com/spreadsheet/ccc?key=0AoYWmgpqFzdsdDZUSWhOOEctRnFhakVLUjFsbFpWUHc#gid=0
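
For transparency, here is a minimal Python sketch, using the vote totals quoted above and the 3.7 million eligible-voter figure, that reproduces the spreadsheet's numbers:

```python
# Overall "yes" share as a function of how strongly the nonvoters are
# assumed to oppose independence. Figures are from the article above.
eligible = 3_700_000      # eligible voters in Veneto
yes_votes = 2_102_969     # "yes" ballots cast
no_votes = 257_266        # "no" ballots cast
nonvoters = eligible - yes_votes - no_votes

for pct_against in range(0, 101, 10):
    extra_yes = nonvoters * (100 - pct_against) / 100
    share = (yes_votes + extra_yes) / eligible
    print(f"nonvoters {pct_against:3d}% against -> {share:.2%} yes overall")
# 100% against gives 56.84% yes; 0% against gives 93.05% yes;
# 70% against gives 67.70% yes.
```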

Here's the takeaway: Venice wants to be independent, and it is not a narrow decision, even assuming implausibly strong self-selection.

Some thoughts on online voting

I was asked to comment on this Reddit thread: www.reddit.com/r/netsec/comments/s1t2c/netsec_how_would_you_design_an_electronic_voting/

This post is written with the assumption that a bitcoin-like system is used.

Nirvana / perfect solution fallacy

I agree. I don’t think an electronic system needs to solve every problem present in a paper system, it just needs to be better. Right now, for example, one could buy an absentee ballot and be done with it. I think a system that makes it less practical to do something similar is an improvement.

As always when considering options, one should choose the best solution, not stubbornly refuse any change that will not give a perfect situation. Paper voting is not perfect either.

Threatening scenarios

The instant you let people vote from remote locations, everything else is up in the air. It doesn’t matter if the endpoints are secure.
Say you can vote by phone. I have my goons “canvass” the area knocking on doors. “Hey, have you voted for Smith yet? You haven’t? Well, go get your phone, we will help you do it right now.”
If you are trying to do secure voting over the Internet, you have already lost.

While one cannot bring goons right into the voting booth, it is quite clearly possible to threaten people into voting a particular way right now. The reason it is not generally done is that every single vote has very little power, so the costs are absurdly high for anyone trying scare tactics.

It is also easy to solve by making it possible to change votes after they have been cast. This is clearly possible with computer technology but hard with paper.
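
To illustrate the mechanism, here is a minimal sketch of a re-votable tally; the keys, choices, and ledger format are all hypothetical:

```python
# Ballots are appended to a public ledger; a voter's most recent ballot
# supersedes earlier ones, so a coerced vote can be quietly re-cast later.
from collections import Counter

ballots = [
    # (pseudonymous voter key, choice, position in the ledger)
    ("key_alice", "Smith", 1),
    ("key_bob",   "Smith", 2),  # cast with the goons watching...
    ("key_bob",   "Jones", 9),  # ...and changed after they leave
]

latest = {}
for voter, choice, seq in sorted(ballots, key=lambda b: b[2]):
    latest[voter] = choice  # later ballots overwrite earlier ones

print(Counter(latest.values()))  # Counter({'Smith': 1, 'Jones': 1})
```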

Viruses that target voting software

This is clearly an issue. However, people can easily check that their votes are recorded correctly in the votechain (blockchain analogy). A sophisticated virus might wait until the last minute and then vote, but this can easily be prevented by turning off the computers used.

Furthermore, I imagine that one would use specialized software for voting, e.g. a Linux system designed specifically for security and voting, rigorously tested by thousands of independent coders. One might also create specialized hardware for voting, i.e. special computers. Specifically, one can have read-only memory, which makes it impossible to install malicious software on the system. For instance, the hardware might have built-in voting software and a camera for scanning a QR code with one's private key(s).

Lastly, one can use 2FA to enhance security, just as one does everywhere else on the web where extra safety is needed.

Anonymous and verifiable voting

You can either have a system where people can verify their vote and take some type of receipt to prove the system recorded their vote wrong, or you can have anonymous voting. You cannot have verifiable voting AND anonymous voting. Someone somewhere has to be able to decrypt or access whatever keys or pins or you are holding a meaningless or login or hash that can’t prove you aren’t lying or didn’t change your vote etc.

Yes you can, with pseudonymous voting in a bitcoin-like system. Everybody can verify that no more votes are cast than there are eligible voters, but the individuals who control the addresses are not identifiable from the code alone. They can choose to announce their address publicly so that people can connect the two. This will ofc be used by public persons.
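
A minimal sketch of what this public verification could look like, assuming a published registry of eligible pseudonymous keys (all names and data here are invented):

```python
# Anyone can check that every ballot comes from a registered key, that no
# key voted twice, and that there are no more votes than eligible voters,
# without learning who holds which key.
eligible_keys = {"key_001", "key_002", "key_003"}    # published registry
votechain = [("key_001", "yes"), ("key_003", "no")]  # public ledger

keys_used = [key for key, _ in votechain]
assert all(key in eligible_keys for key in keys_used), "unregistered key"
assert len(keys_used) == len(set(keys_used)), "a key voted twice"
assert len(keys_used) <= len(eligible_keys)

yes = sum(1 for _, choice in votechain if choice == "yes")
print(f"verified tally: {yes} yes, {len(votechain) - yes} no")
```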

Selling votes

This is already possible. It is already possible to verify this as well, as one can easily film the process of voting. This is not generally illegal either.

The reason why people do not generally buy or sell votes is that single votes have basically no power and hence are worth nothing.

As pointed out in the thread, this is already possible with mail-voting.

Lastly, buying and selling votes is generally thought to be evil or wrong, but only when done directly. It is clearly legal indirectly, and even if not de jure legal, it is de facto legal. In every modern democracy it is common for politicians to offer certain wealth or income redistribution policies. When people who would benefit from these policies vote for such politicians, they are indirectly receiving money for voting for a given politician/party. For this reason, the buying and selling of votes is a non-issue.

The ease of digital attacks

It seems to me that the real problem is the scalability of the attacks in the digital sphere. Changing votes in our regular system of several thousand human ballot counters looking a pieces of paper is rather costly. A well-planned digital attack can be virtually free of cost (not counting the time it takes to figure out the attack).

This is a concern, and that is why one will need tough security and verification technologies. I have suggested several above.

Interceptions of the signal

Whatever, VPN, custom software, browser. It’s the same thing. Malware or even an ISP could intercept and manipulate what is displayed or recorded. The software on the receiving end can also be manipulated but more likely to have some controls of the hardware and software, but again, who inspects this?

This could be a problem. It can be reduced by having a nationally free, encrypted VPN/proxy for voting purposes.

Others who were faster than me

Voting could not be more further from any of the simplest banking. The idea behind banking or any “secure” online transaction is that it is not anonymous. Bitcoin might be the only viable anonymous type online voting.

-

The bitcoin protocol would actually be fantastic for this. I should explain for those unaware: Bitcoin is actually two different things. One: A protocol, and Two: A software implementing the protocol to send ‘coins’ like money to others. I’ll do a writeup a little later, but the gist of it is: the votes would be public for anyone to view, impossible to fake/forge, and still anonymous. This would be done by embedding the voting information into the blockchain.

-

Strong encryption with distributed verification a la bitcoin. You don’t have to trust the clients; you trust the math. I’m by no means a crypto expert, so don’t look to me for design tips, but I suspect you could map a private key to each valid voter’s SSN then generate a vote (hash) that could be verified by the voter pool.

These posts date to "1 year ago" according to Reddit. Clearly, I was not the first to think of the obvious.

Who is going to mine votecoins?

So unless you are actually piggy-backing voting ontop of another currency (like the main bitcoin blockchain), there’s no incentive for ordinary citizens to participate and validate/process the blockchain. What are they mining? More votes?? That seems weird/illegitimate. If you say “well, some government agency can just do all the mining and distribute coins to voters” this would seem to offer no improvement over a straightforward centralized system, and only introduces extra questions like

The government, and the users who want to help out. Surely citizens have some self-interest in getting the election over with. This is a non-issue.

If the government started the block chain, mined the correct number of coins, and then put it in the “no more coins mode” then we would have the setup for it. If they could convince one of the major pools to do merged mining with them (i’m not sure what they would exchange for this, but it would only have to be for a week/month) if hiring a pool is out of the question then just realize that the govt spends millions routinely on elections, and $10M should be more than enough to beat most mafias (~9Thash/s which is roughly what the current bitcoin rate is). If someone like the coke brothers tried to overpower this it would be very obvious.

Yes, this is the same solution I suggested: code the system so that the first block issues all the votecoins.

Another option is a dual-currency system, such that one can help mine votecoins but only get rewarded in rewardcoins. That way the counting is distributed to whoever wants the job.
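
A minimal sketch of this setup under the stated assumptions (a fixed votecoin supply created in the first block, with miners paid in a separate reward token; the block format is invented):

```python
# The genesis block issues exactly one votecoin per eligible voter; any
# later block that mints votecoins is rejected, so miners only order
# ballots and are compensated in rewardcoins.
N_VOTERS = 1_000_000

def make_genesis(n_voters):
    return {"height": 0, "minted_votecoins": n_voters, "txs": []}

def validate(block):
    if block["height"] > 0 and block.get("minted_votecoins", 0) > 0:
        raise ValueError("votecoin inflation rejected")
    return True

chain = [make_genesis(N_VOTERS)]
next_block = {"height": 1, "minted_votecoins": 0, "txs": [],
              "rewardcoins_minted": 50}  # miner pay, separate from votes
assert validate(next_block)
print("fixed supply:", chain[0]["minted_votecoins"], "votecoins")
```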

The prize for the least imagination

The simple answer is that I would not. The risks and downsides of such a system are inherently not worth the only benefit which I can think of (faster results). This should also answer your last question. This hasn’t been done simply because there is no good reason to do it.

No other benefits? Like… an infinite variety of other voting systems???

The price of online voting

You’re assuming the cost of an electronic voting system and the time it will take for people to be comfortable using them will outpace paper and pen, which if you ask me is a pretty damn big assumption. Maybe someday, but until a grandma can easily understand and use electronic voting I am loathe to even think about implementing it. A voting system needs to be transparent and easy to understand.

In Denmark it costs about 100 million DKK to hold a vote. Is he really suggesting this cannot be done more cheaply with computers? I can't take that seriously.

“Why is humankind doomed without eugenics?” #2

From Reddit: www.reddit.com/r/genetics/comments/1z1tli/design_your_own_baby_a_genetic_ethics_dilemma/cfqrlol

Zorander22 writes:

1) I would wager a guess that most people are capable of far more than they’re current employment situations might indicate. The idea that machines are taking over increasingly complex tasks is an important one… which, depending on how wealth gets distributed, could ensure an easy future for many people, rather than spelling the doom of humanity. If machines and computers end up being able to do increasingly complex tasks without limit, it seems like they would soon outstrip people, even with substantial eugenics programs or genetic engineering in place.

2) People are still under selection processes. Many of these likely happen before birth (wombs and women may have built-in systems to stop supporting fetuses if there are signs there may be serious genetic problems). People are still dying in a non-random manner… and moreover, people are having children in a non-random matter, so sexual selection may play an important role. While there may be trends regarding intelligence and birth rates, it is likely that there are many other factors influencing birth rates and the success of offspring. Low intelligence may increase birth rates through poor implementation of birth control methods or planning, but there could be other hidden effects with high intelligence leading to more resources available for raising more children. As birth control gets easier to implement, you might soon see more intelligent people having more kids on average than less intelligent people.

What we are undergoing right now is an expansion in the variability within our gene pool. We have a huge number of organisms with new mutations cropping up. Far from being a bad thing, this variability is one of the key ingredients for evolution to take place – evolution doesn’t happen consistently throughout time, it often happens in response to changed environmental factors. For some organisms to have better success due to a changing environment, there needs to be a large amount of variability within the population, so that there are lots of phenotypes expressed, some of which will perform better than others. This increase in our genetic variability will serve us well if there’s ever a dramatic change in our environment.

Deleetdk writes:

I would wager a guess that most people are capable of far more than they’re current employment situations might indicate.

No. This is a core belief of educational romanticism which Charles Murray talks about[1] .

More yes, not "far more". There are limits. The primary area, I think, where talent is not being used is with the gifted children. There is an extreme lack of gifted programs in many countries. Khan Academy is changing this. The future is bright in this area. :)

The idea that machines are taking over increasingly complex tasks is an important one… which, depending on how wealth gets distributed, could ensure an easy future for many people, rather than spelling the doom of humanity.

Let’s say we’re 30 years into the future and no eugenics has been used for g. Now, maybe 30% of the working age population is leeching (e.g. via a basic income policy[2] ), which raises taxes further for the working part of the population. Keep also in mind that people are having fewer children, so the non-working age population is also much larger (subreplacement fertility[3] is a huge economic problem in the near future). Let’s say that in total 30% of the population is working, while the rest is leeching. Why would the workers pay so much of their income? Keep in mind that crypto-currencies will make it more or less impossible to effectively force them if they don’t want to. Do you think this is a bright future? I don’t. One solution would be artificial wombs[4] , but that technology might not be ready yet by then. I don’t know.

If machines and computers end up being able to do increasingly complex tasks without limit, it seems like they would soon outstrip people, even with substantial eugenics programs or genetic engineering in place.

Yes, nonbiological computers will eventually outperform biological computers no matter how much we use eugenics for g. My idea is that we need to get MUCH smarter before allowing this to happen. I think we can make it work, but the world population needs to improve, say, 5 SD in g first.

People are still under selection processes. Many of these likely happen before birth (wombs and women may have built-in systems to stop supporting fetuses if there are signs there may be serious genetic problems). People are still dying in a non-random manner…

Yes, but this selection force is very weak compared to the constant influx of de novo mutations. Welfare systems without eugenics are unstable, since they lead directly to dysgenics that will sooner or later make the welfare system economically untenable.

people are having children in a non-random matter, so sexual selection may play an important role.

I agree. This selection force is likely to be stronger in the future due to increased assortative mating from online dating like OKCupid[5] (this is an interesting research question: do people who met over netdating show stronger assortative mating than those who didn't? AFAIK, no one knows!). This might itself increase dysgenics for g though. It depends on how fertility varies as a function of g. If the effect is multiplicative rather than additive, then bright people will have a very low fertility indeed. I currently don't know the answer to this question.
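
A toy illustration of the additive vs. multiplicative point; all the numbers here are invented:

```python
# Fertility as an assumed function of g under two effect types. With a
# multiplicative effect, fertility at high g falls much faster.
base = 2.0      # assumed fertility at the population mean (g = 0)
penalty = 0.25  # assumed fertility penalty per SD of g

for g_sd in range(0, 4):
    additive = base - penalty * g_sd
    multiplicative = base * (1 - penalty) ** g_sd
    print(f"g = +{g_sd} SD: additive {additive:.2f}, "
          f"multiplicative {multiplicative:.2f}")
# At +3 SD: additive 1.25 vs. multiplicative 0.84 children.
```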

While there may be trends regarding intelligence and birth rates, it is likely that there are many other factors influencing birth rates and the success of offspring. Low intelligence may increase birth rates through poor implementation of birth control methods or planning, but there could be other hidden effects with high intelligence leading to more resources available for raising more children. As birth control gets easier to implement, you might soon see more intelligent people having more kids on average than less intelligent people.

No. The trend has been going for 100 years or more, and there is no sign of change in this trend for the future. See: Dysgenics: Genetic Deterioration in Modern Populations (Richard Lynn)[6]. PDF[7].

What we are undergoing right now is an expansion in the variability within our gene pool. We have a huge number of organisms with new mutations cropping up. Far from being a bad thing, this variability is one of the key ingredients for evolution to take place – evolution doesn’t happen consistently throughout time, it often happens in response to changed environmental factors. For some organisms to have better success due to a changing environment, there needs to be a large amount of variability within the population, so that there are lots of phenotypes expressed, some of which will perform better than others. This increase in our genetic variability will serve us well if there’s ever a dramatic change in our environment.

Agreed about the variation (due to increased assortative mating, which increases variation). Some evolution is more or less constant: selection for polygenic traits (height, g, weight, personality, etc.) is probably more or less constant and not 'punctuated' (in the Gouldian sense).

There is plenty of variation currently in the human gene pools for evolution of more g. See also Steve Hsu on genetics of g[8] .

“Why is humankind doomed without eugenics?”

From Reddit.

Two reasons.

1) Technological unemployment. This is going fast right now. Already a large part of the population is useless and can only leech on society economically. This percentage is due to increase quickly when automated cars become mainstream, which will shortly make most drivers workless. There are thousands of people who cannot handle complex work, and the simple work is going away.

See e.g.: www.etla.fi/en/publications/computerization-threatens-finnish-employment/, skills.oecd.org/skillsoutlook.html Figure 1.6.

2) Dysgenics. First off, the less intelligent are having more children, compounding the problem above. Second, constant de novo mutations are accumulating in the human gene pool. There is almost no natural selection to sort them away. This means that over time humans will become weaker, with a high rate of various genetic diseases.

The only future is with eugenics, so people will have to overcome their guilt-by-association fallacy[3] linking it with Nazism, just as they did for vegetarianism and anti-smoking (Hitler was a vegetarian and the Nazis were the first to introduce anti-smoking campaigns).

Review: Philosophy of Science: A Very Short Introduction

I had low expectations for this book. It was assigned for a humanities class I'm taking (Studium generale). However, the book is a quite decent introduction to the field. I was happily surprised.

libgen.org/book/index.php?md5=7d804c1413f8993654ecc933170a5141

The first two statements are called the premisses of the inference, while the third statement is called the conclusion. This is a deductive inference because it has the following property: if the premisses are true, then the conclusion must be true too. In other words, if it's true that all Frenchmen like red wine, and if it's true that Pierre is a Frenchman, it follows that Pierre does indeed like red wine. This is sometimes expressed by saying that the premisses of the inference entail the conclusion. Of course, the premisses of this inference are almost certainly not true – there are bound to be Frenchmen who do not like red wine. But that is not the point. What makes the inference deductive is the existence of an appropriate relation between premisses and conclusion, namely that if the premisses are true, the conclusion must be true too. Whether the premisses are actually true is a different matter, which doesn't affect the status of the inference as deductive.

This distinction is not a good idea: it makes the existence of a deductive but invalid argument impossible. I wrote about this area years ago, but apparently never finished my essay or published it. It is still on my desktop.
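
To spell the point out in my own notation (not the book's): if being deductive is defined by the entailment property,

$$\mathrm{Deductive}(P_1, \dots, P_n \therefore C) \;\Leftrightarrow\; P_1, \dots, P_n \models C,$$

then a "deductive but invalid argument" would have to satisfy both $P_1, \dots, P_n \models C$ and $P_1, \dots, P_n \not\models C$ at once. Invalid deductive arguments become contradictions in terms rather than arguments that fail on assessment.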

Philosophers of science are interested in probability for two main reasons. The first is that in many branches of science, especially physics and biology, we find laws and theories that are formulated using the notion of probability. Consider, for example, the theory known as Mendelian genetics, which deals with the transmission of genes from one generation to another in sexually reproducing populations. One of the most important principles of Mendelian genetics is that every gene in an organism has a 50% chance of making it into any one of the organism's gametes (sperm or egg cells). Hence there is a 50% chance that any gene found in your mother will also be in you, and likewise for the genes in your father. Using this principle and others, geneticists can provide detailed explanations for why particular characteristics (e.g. eye colour) are distributed across the generations of a family in the way that they are. Now 'chance' is just another word for probability, so it is obvious that our Mendelian principle makes essential use of the concept of probability. Many other examples could be given of scientific laws and principles that are expressed in terms of probability. The need to understand these laws and principles is an important motivation for the philosophical study of probability.

The author forgot about sex-linked genes, which complicate matters.

Modern science can explain a great deal about the world we live in. But there are also numerous facts that have not been explained by science, or at least not explained fully. The origin of life is one such example. We know that about 4 billion years ago, molecules with the ability to make copies of themselves appeared in the primeval soup, and life evolved from there. But we do not understand how these self-replicating molecules got there in the first place. Another example is the fact that autistic children tend to have very good memories. Numerous studies of autistic children have confirmed this fact, but as yet nobody has succeeded in explaining it.

en.wikipedia.org/wiki/Autism_and_working_memory

Wiki seems to be of the exact opposite opinion.

Since the realism/anti-realism debate concerns the aim of science, one might think it could be resolved by simply asking the scientists themselves. Why not do a straw poll of scientists asking them about their aims? But this suggestion misses the point – it takes the expression 'the aim of science' too literally. When we ask what the aim of science is, we are not asking about the aims of individual scientists. Rather, we are asking how best to make sense of what scientists say and do – how to interpret the scientific enterprise. Realists think we should interpret all scientific theories as attempted descriptions of reality; anti-realists think this interpretation is inappropriate for theories that talk about unobservable entities and processes. While it would certainly be interesting to discover scientists' own views on the realism/anti-realism debate, the issue is ultimately a philosophical one.

Good idea. Is that a case for experimental filosofy?

Cudnt find any data from a quick google.

Cladists argue that their way of classifying is 'objective' while that of the pheneticists is not. There is certainly some truth in this charge. For pheneticists base their classifications on the similarities between species, and judgements of similarity are invariably partly subjective. Any two species are going to be similar to each other in some respects, but not in others. For example, two species of insect might be anatomically quite similar, but very diverse in their feeding habits. So which 'respects' do we single out, in order to make judgements of similarity? Pheneticists hoped to avoid this problem by defining a measure of 'overall similarity', which would take into account all of a species' characteristics, thus permitting fully objective classifications to be constructed. But though this idea sounds nice, it did not work, not least because there is no obvious way to count characteristics. Most people today believe that the very idea of 'overall similarity' is philosophically suspect. Phenetic classifications do exist, and are used in practice, but they are not fully objective. Different similarity judgements lead to different phenetic classifications, and there is no obvious way to choose between them.

Surely someone has tried factor analysis to find this overall similarity factor, if there is one? It's not that hard to find out. Make a huge list of things to measure about species, measure them all in, say, 1000 species, and then factor analyze the results. Is there an overall factor similar to g? If not, then the hypothesis is disconfirmed.
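
A minimal sketch of the proposed check, with made-up trait data standing in for real measurements:

```python
# Measure many characteristics across many species, extract the first
# factor of the trait correlation matrix, and see how much variance it
# accounts for. The data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_species, n_traits = 1000, 20
general = rng.normal(size=(n_species, 1))  # latent 'overall' factor
traits = 0.6 * general + 0.8 * rng.normal(size=(n_species, n_traits))

corr = np.corrcoef(traits, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # largest first
print(f"first factor explains {eigvals[0] / n_traits:.1%} of trait variance")
# A dominant first eigenvalue would support an 'overall similarity'
# factor; its absence would disconfirm the hypothesis.
```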

I checked. Yes, someone did this. ib.berkeley.edu/courses/ib200a/lect/ib200a_lect09_Lindberg_phenetics.pdf

Seems to be common practice. So this can avoid the charge of arbitrary classifications.

A similar issue arises regarding the relation between the natural sciences and the social sciences. Just as philosophers sometimes complain of 'science worship' in their discipline, so social scientists sometimes complain of 'natural science worship' in theirs. There is no denying that the natural sciences – physics, chemistry, biology, etc. – are in a more advanced state than the social sciences – economics, sociology, anthropology, etc. A number of people have wondered why this is so. It can hardly be because natural scientists are smarter than social scientists. One possible answer is that the methods of the natural sciences are superior to those of the social sciences. If this is correct, then what the social sciences need to do to catch up is to ape the methods of the natural sciences. And to some extent, this has actually happened. The increasing use of mathematics in the social sciences may be partly a result of this attitude. Physics made a great leap forward when Galileo took the step of applying mathematical language to the description of motion; so it is tempting to think that a comparable leap forward might be achievable in the social sciences, if a comparable way of 'mathematicizing' their subject matter can be found.

Ofc it can! All data confirm this, e.g. emilkirkegaard.dk/en/?p=3925

Social science has the triple disadvantage of having 1) less smart researchers, 2) a more complex field, 3) fewer experimental options (due to ethical and monetary problems).

To be fair to the creation scientists, they do offer arguments that are specific to the theory of evolution. One of their favourite arguments is that the fossil record is extremely patchy, particularly when it comes to the supposed ancestors of Homo sapiens. There is some truth in this charge. Evolutionists have long puzzled over the gaps in the fossil record. One persistent puzzle is why there are so few 'transition fossils' – fossils of creatures intermediate between two species. If later species evolved from earlier ones as Darwin's theory asserts, surely we would expect transition fossils to be very common? Creationists take puzzles of this sort to show that Darwin's theory is just wrong. But the creationist arguments are uncompelling, notwithstanding the real difficulties in understanding the fossil record. For fossils are not the only or even the main source of evidence for the theory of evolution, as creationists would know if they had read The Origin of Species. Comparative anatomy is another important source of evidence, as are embryology, biogeography, and genetics. Consider, for example, the fact that humans and chimpanzees share 98% of their DNA. This and thousands of similar facts make perfect sense if the theory of evolution is true, and thus constitute excellent evidence for the theory. Of course, creation scientists can explain such facts too. They can claim that God decided to make humans and chimpanzees genetically similar, for reasons of His own. But the possibility of giving 'explanations' of this sort really just points to the fact that Darwin's theory is not logically entailed by the data. As we have seen, the same is true of every scientific theory. The creationists have merely highlighted the general methodological point that data can always be explained in a multitude of ways. This point is true, but shows nothing special about Darwinism.

The author is confused about transitional fossils. All fossils are transitional. There is no point at which one form stops being intermediate between earlier and later forms.

Human sociobiologists (henceforth simply 'sociobiologists') believe that many behavioural traits in humans can be given adaptationist explanations. One of their favourite examples is incest-avoidance. Incest – or sexual relations between members of the same family – is regarded as taboo in virtually every human society, and subject to legal and moral sanctions in most. This fact is quite striking, given that sexual mores are otherwise quite diverse across human societies. Why the prohibition on incest? Sociobiologists offer the following explanation. Children born of incestuous relationships often have serious genetic defects. So in the past, those who practised incest would have tended to leave fewer viable offspring than those who didn't. Assuming that the incest-avoiding behaviour was genetically based, and thus transmitted from parents to their offspring, over a number of generations it would have spread through the population. This explains why incest is so rarely found in human societies today.

See: en.wikipedia.org/wiki/Westermarck_effect

If this response is correct, it means we should sharply distinguish the 'scientific' objections to sociobiology from the 'ideological' objections. Reasonable though this sounds, there is one point it doesn't address: advocates of sociobiology have tended to be politically right-wing, while its critics have tended to come from the political left. There are many exceptions to this generalization, especially to the first half of it, but few would deny the trend altogether. If sociobiology is simply an impartial enquiry into the facts, what explains the trend? Why should there be any correlation at all between political opinions and attitudes towards sociobiology? This is a tricky question to answer. For though some sociobiologists may have had hidden political agendas, and though some of sociobiology's critics have had opposing agendas of their own, the correlation extends even to those who debate the issue in apparently scientific terms. This suggests, though does not prove, that the 'ideological' and 'scientific' issues may not be quite so easy to separate after all. So the question of whether sociobiology is a value-free science is less easy to answer than might have been supposed.

This typical claim has been found to be wrong. It also doesn't fit with other facts, such as the fact that Wilson, the father of sociobiology, is a socialist! Dawkins has also expressed leftist beliefs.

link.springer.com/article/10.1007/s12110-007-9024-y/fulltext.html

Critics of evolutionary psychology and sociobiology have advanced an adaptationists-as-right-wing-conspirators (ARC) hypothesis, suggesting that adaptationists use their research to support a right-wing political agenda. We report the first quantitative test of the ARC hypothesis based on an online survey of political and scientific attitudes among 168 US psychology Ph.D. students, 31 of whom self-identified as adaptationists and 137 others who identified with another non-adaptationist meta-theory. Results indicate that adaptationists are much less politically conservative than typical US citizens and no more politically conservative than non-adaptationist graduate students. Also, contrary to the “adaptationists-as-pseudo-scientists” stereotype, adaptationists endorse more rigorous, progressive, quantitative scientific methods in the study of human behavior than non-adaptationists.

emilkirkegaard.dk/en/wp-content/uploads/Testing_the_Controversy.pdf