Predatory journals

I had my first Twitter controversy. So:

I pointed out in my reply that they don’t actually charge that much normally. The comparison is here. The prices are around 500–3,000 USD, with an eyeballed average of about 2,500 USD.

Now, this is just a factual error, so not so bad. However…

If anyone is wondering why he is so emotional, he gave the answer himself:

A very brief history of journals and science

  • Science starts out involving few individuals.
  • They need a way to communicate ideas.
  • They set up journals to distribute the ideas on paper.
  • Printing costs money, so they cost money to buy.
  • Due to the limitations of paper space, there needs to be some selection of what gets printed; the resulting system is peer review. In this system, academics write the papers, edit them, and review them, all for free.
  • Fast forward, and what happens is that big business takes over the running of the journals so academics can focus on science. As it does, prices rise because of monetary interests.
  • Academics are reluctant to give up publishing in and buying journals because their reputation system is built on publishing in said journals. I.e., the system is inherently conservatively biased (status quo bias). It is perfect for business to make money from.
  • Now along comes the internet, which means that publishing no longer needs to rely on paper. The marginal cost of distribution is thus very close to 0. Yet the journals keep demanding high prices because academia is reliant on them: they are the source of the reputation system.
  • There is a growing movement in academia that holds that this is a bad situation for science and that publications should be openly available (the open access movement). New OA journals are set up. However, since they are also either for-profit or covertly for-profit, in order to make money they charge outrageous amounts (say, anything above 100 USD) to publish some text and figures on a website. Academics still provide nearly all the work for free, yet they have to pay enormous amounts of money to publish, while the publisher provides a mere website (and perhaps some copyediting, etc.).

Who thinks that is a good solution? It is clearly a smart business move. For instance, the popular OA publisher Frontiers is partly owned by Nature Publishing Group, which invested in it in 2013. This company thus very neatly makes money off both its legacy journals and the new challenger journals.

The solution is to set up journals run by academics again now that the internet makes this rather easy and cheap. The profit motive is bad for science and just results in even worse journals.

As for my claim, I stand by it. Although in retrospect, the more correct term is parasitic: publishers are middlemen exploiting the fact that academia relies on established journals for reputation.

Review: Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (Joshua D. Greene)

Years ago, when I used to study philosophy, I came across Joshua’s website. On the site I found his PhD thesis, which I read. It is probably the best meta-ethics writing I’ve come across. He seems to have removed it from the site (“available by request”), but I still have it: Greene, J. D. (2002). The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. Anyway, this thesis is apparently what turned into the book. The book is clearly written for a mass market, so it has only a few notes and is very light on statistics. I think it is basically sound. The later chapters were somewhat annoying to read due to excessive repetition and unclear language. I suppose he added this to appeal more to laymen and confused people.

In the introduction, he is kind enough to lay out the book:

In part 1 (“Moral Problems”), we’ll distinguish between the two major kinds of moral problems. The first kind is more basic. It’s the problem of Me versus Us: selfishness versus concern for others. This is the problem that our moral brains were designed to solve. The second kind of moral problem is distinctively modern. It’s Us versus Them: our interests and values versus theirs. This is the Tragedy of Commonsense Morality, illustrated by this book’s first organizing metaphor, the Parable of the New Pastures. (Of course, Us versus Them is a very old problem. But historically it’s been a tactical problem rather than a moral one.) This is the larger problem behind the moral controversies that divide us. In part 1, we’ll see how the moral machinery in our brains solves the first problem (chapter 2) and creates the second problem (chapter 3).

In part 2 (“Morality Fast and Slow”), we’ll dig deeper into the moral brain and introduce this book’s second organizing metaphor: The moral brain is like a dual-mode camera with both automatic settings (such as “portrait” or “landscape”) and a manual mode. Automatic settings are efficient but inflexible. Manual mode is flexible but inefficient. The moral brain’s automatic settings are the moral emotions we’ll meet in part 1, the gut-level instincts that enable cooperation within personal relationships and small groups. Manual mode, in contrast, is a general capacity for practical reasoning that can be used to solve moral problems, as well as other practical problems. In part 2, we’ll see how moral thinking is shaped by both emotion and reason (chapter 4) and how this “dual-process” morality reflects the general structure of the human mind (chapter 5).

In part 3, we’ll introduce our third and final organizing metaphor: Common Currency. Here we’ll begin our search for a metamorality, a global moral philosophy that can adjudicate among competing tribal moralities, just as a tribe’s morality adjudicates among the competing interests of its members. A metamorality’s job is to make trade-offs among competing tribal values, and making trade-offs requires a common currency, a unified system for weighing values. In chapter 6, we’ll introduce a candidate metamorality, a solution to the Tragedy of Commonsense Morality. In chapter 7, we’ll consider other ways of establishing a common currency, and find them lacking. In chapter 8, we’ll take a closer look at the metamorality introduced in chapter 6, a philosophy known (rather unfortunately) as utilitarianism. We’ll see how utilitarianism is built out of values and reasoning processes that are universally accessible and, thus, how it gives us the common currency that we need.*

Over the years, philosophers have made some intuitively compelling arguments against utilitarianism. In part 4 (“Moral Convictions”), we’ll reconsider these arguments in light of our new understanding of moral cognition. We’ll see how utilitarianism becomes more attractive the better we understand our dual-process moral brains (chapters 9 and 10).

Finally, in part 5 (“Moral Solutions”), we return to the new pastures and the real-world moral problems that motivate this book. Having defended utilitarianism against its critics, it’s time to apply it—and to give it a better name. A more apt name for utilitarianism is deep pragmatism (chapter 11). Utilitarianism is pragmatic in the good and familiar sense: flexible, realistic, and open to compromise. But it’s also a deep philosophy, not just about expediency. Deep pragmatism is about making principled compromises. It’s about resolving our differences by appeal to shared values—common currency.

So, TL;DR: morality is an evolved mechanism to facilitate cooperation. It does this well, but not always. Typical moral disagreements are confused because they rely on rights-talk, which is fundamentally useless, even counterproductive, for resolving conflicts. Utilitarianism (aka cost-benefit analysis in moral language) is the only game in town, so even if it is not technically true, it is still the most useful approach to moralizing.

First and second-order logic formalizations

From researchgate:

What is the actual difference between 1st order and higher order logic?
Yes, I know. They say, the 2nd order logic is more expressive, but it is really hard to me to see why. If we have a domain X, why can’t we define the domain X’ = X u 2^X and for elements of x in X’ define predicates:
BELONGS_TO(x, y) – undefined (or false) when ELEMENT(y)
Now, we can express sentences about subsets of X in the 1st-order logic!
Similarly we can define FUNCTION(x), etc. and… we can express all 2nd-order sentences in the 1st order logic!
I’m obviously overlooking something, but what actually? Where have I made a mistake?

My answer:

In many cases one can reduce a higher-order formalization to a first-order one, but it comes at the price of a more complex formalization.

For instance, formalize the following argument in both first-order and second-order logic:
All things with personal properties are persons. Being kind is a personal property. Peter is kind. Therefore, Peter is a person.

One can do this with either first or second order, but it is easier in second-order.

First-order formalization:
1. (∀x)(PersonalProperty(x) → (∀y)(HasProperty(y,x) → Person(y)))
2. PersonalProperty(kind)
3. HasProperty(peter,kind)
⊢ 4. Person(peter)

Second-order formalization:
1. (∀Φ)(PersonalProperty(Φ)→(∀x)(Φx→Person(x)))
2. PersonalProperty(IsKind)
3. IsKind(peter)
⊢ 4. Person(peter)

where Φ is a second-order variable. Basically, whenever one uses first-order logic to formalize arguments like this, one has to use a predicate like “HasProperty(x,y)” so that one can treat properties as objects indirectly. This is unnecessary in second-order logic.
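
The second-order version can also be checked mechanically. Here is a sketch in Lean 4, where quantifying over predicates is native; all names mirror the formalization above:

```lean
variable {α : Type}
variable (Person IsKind : α → Prop)
variable (PersonalProperty : (α → Prop) → Prop)
variable (peter : α)

-- Premises 1–3 as hypotheses; the conclusion follows by instantiating
-- the second-order premise h1 with the predicate IsKind.
example
    (h1 : ∀ Φ : α → Prop, PersonalProperty Φ → ∀ x, Φ x → Person x)
    (h2 : PersonalProperty IsKind)
    (h3 : IsKind peter) :
    Person peter :=
  h1 IsKind h2 peter h3
```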

Some thoughts on online voting

I was asked to comment on this Reddit thread:


This post is written with the assumption that a bitcoin-like system is used.


Nirvana / perfect solution fallacy

I agree. I don’t think an electronic system needs to solve every problem present in a paper system, it just needs to be better. Right now, for example, one could buy an absentee ballot and be done with it. I think a system that makes it less practical to do something similar is an improvement.


As always when considering options, one should choose the best solution, not stubbornly refuse any change that will not give a perfect situation. Paper voting is not perfect either.



Threatening scenarios

The instant you let people vote from remote locations, everything else is up in the air. It doesn’t matter if the endpoints are secure.
Say you can vote by phone. I have my goons “canvass” the area knocking on doors. “Hey, have you voted for Smith yet? You haven’t? Well, go get your phone, we will help you do it right now.”
If you are trying to do secure voting over the Internet, you have already lost.


While one cannot bring goons right into the voting booths, it is quite clearly possible to threaten people into voting a particular way right now. The reason it is not generally done is that every single vote has very little power, and the costs are therefore absurdly high for anyone trying such scare tactics.


It is also easy to solve by making it possible to change votes after they have been cast. This is clearly possible with computer technology but hard with paper.



Viruses that target voting software

This is clearly an issue. However, people can easily check that their votes are correct in the votechain (blockchain analogy). A sophisticated virus might wait until the last minute and then vote, but this can easily be prevented by turning off the computers used.
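
The check described here can be illustrated with a toy hash chain. This is only a sketch of the idea, not a real voting protocol; the voter IDs and choices below are invented:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_vote(chain: list, voter_id: str, choice: str) -> None:
    """Append a vote block linked to the current chain tip."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "voter": voter_id, "choice": choice}
    block["hash"] = block_hash({"prev": prev, "voter": voter_id, "choice": choice})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any tampering breaks the chain."""
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"prev": block["prev"], "voter": block["voter"],
                               "choice": block["choice"]})
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

Anyone holding a copy of the chain can rerun `verify` and confirm their own vote is in it unchanged, which is the point of the “votechain” analogy.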


Furthermore, I imagine that one would use specialized software for voting, especially a Linux system designed specifically for security and voting, rigorously tested by thousands of independent coders. One might also create specialized hardware for voting, i.e. special computers. Specifically, one can use read-only memory, which makes it impossible to install malicious software on the system. For instance, the hardware might have built-in voting software and a camera for scanning a QR code with one’s private key(s).


Lastly, one can use 2FA to enhance security, just as one does everywhere else on the web where extra safety is needed.



Anonymous and verifiable voting

You can either have a system where people can verify their vote and take some type of receipt to prove the system recorded their vote wrong, or you can have anonymous voting. You cannot have verifiable voting AND anonymous voting. Someone somewhere has to be able to decrypt or access whatever keys or pins or you are holding a meaningless or login or hash that can’t prove you aren’t lying or didn’t change your vote etc.


Yes you can, with pseudonymous voting in a bitcoin-like system. Everybody can verify that no more votes are cast than there are eligible voters, but the individuals who control the addresses are not identifiable from the code alone. They can choose to announce their addresses publicly so that people can connect the two; this will of course be done by public persons.



Selling votes

This is already possible. It is already possible to verify this as well, as one can easily film the process of voting. This is not generally illegal either.


The reason why people do not generally buy or sell votes is that single votes have basically no power and hence are worth nothing.


As pointed out in the thread, this is already possible with mail-voting.


Lastly, buying and selling votes is generally thought to be evil or wrong, but only when done directly. Indirectly it is clearly legal, and even where it is not de jure legal, it is de facto legal. In every modern democracy, politicians commonly offer certain wealth or income redistribution policies. If the people who would benefit from these policies vote for those politicians, they are indirectly receiving money for voting for a given politician/party. For this reason, the buying and selling of votes is a non-issue.



The ease of digital attacks

It seems to me that the real problem is the scalability of the attacks in the digital sphere. Changing votes in our regular system of several thousand human ballot counters looking a pieces of paper is rather costly. A well-planned digital attack can be virtually free of cost (not counting the time it takes to figure out the attack).


This is a concern, and that is why one will need tough security and verification technologies. I have suggested several above.



Interceptions of the signal

Whatever, VPN, custom software, browser. It’s the same thing. Malware or even an ISP could intercept and manipulate what is displayed or recorded. The software on the receiving end can also be manipulated but more likely to have some controls of the hardware and software, but again, who inspects this?


This could be a problem. It can be reduced by providing a free national encrypted VPN/proxy for voting purposes.



Others who were faster than me

Voting could not be more further from any of the simplest banking. The idea behind banking or any “secure” online transaction is that it is not anonymous. Bitcoin might be the only viable anonymous type online voting.




The bitcoin protocol would actually be fantastic for this. I should explain for those unaware: Bitcoin is actually two different things. One: A protocol, and Two: A software implementing the protocol to send ‘coins’ like money to others. I’ll do a writeup a little later, but the gist of it is: the votes would be public for anyone to view, impossible to fake/forge, and still anonymous. This would be done by embedding the voting information into the blockchain.




Strong encryption with distributed verification a la bitcoin. You don’t have to trust the clients; you trust the math. I’m by no means a crypto expert, so don’t look to me for design tips, but I suspect you could map a private key to each valid voter’s SSN then generate a vote (hash) that could be verified by the voter pool.


These posts date to “1 year ago” according to Reddit. Clearly, I was not the first to think the obvious.



Who is going to mine votecoins?

So unless you are actually piggy-backing voting ontop of another currency (like the main bitcoin blockchain), there’s no incentive for ordinary citizens to participate and validate/process the blockchain. What are they mining? More votes?? That seems weird/illegitimate. If you say “well, some government agency can just do all the mining and distribute coins to voters” this would seem to offer no improvement over a straightforward centralized system, and only introduces extra questions like


The government and the users who want to help out. Surely citizens have some self-interest in getting the election over with. This is a non-issue.


If the government started the block chain, mined the correct number of coins, and then put it in the “no more coins mode” then we would have the setup for it. If they could convince one of the major pools to do merged mining with them (i’m not sure what they would exchange for this, but it would only have to be for a week/month) if hiring a pool is out of the question then just realize that the govt spends millions routinely on elections, and $10M should be more than enough to beat most mafias (~9Thash/s which is roughly what the current bitcoin rate is). If someone like the coke brothers tried to overpower this it would be very obvious.


Yes, this is the same solution I suggested. Code the system so that the first block gives all votecoins.


Another option is making a dual currency system, such that one can help mine votecoins and only get rewarded in rewardcoins. That way the counting is distributed to whoever wants the job.



The prize for the least imagination

The simple answer is that I would not. The risks and downsides of such a system are inherently not worth the only benefit which I can think of (faster results). This should also answer your last question. This hasn’t been done simply because there is no good reason to do it.


No other benefits? Like… an infinite variety of other voting systems???



The price of online voting

You’re assuming the cost of an electronic voting system and the time it will take for people to be comfortable using them will outpace paper and pen, which if you ask me is a pretty damn big assumption. Maybe someday, but until a grandma can easily understand and use electronic voting I am loathe to even think about implementing it. A voting system needs to be transparent and easy to understand.


In Denmark it costs about 100 million DKK to hold an election. Is he really suggesting this cannot be done more cheaply with computers? I can’t take that seriously.




Murray’s aesthetic realism

Murray (in Human Accomplishment) claims that knowledge of a field and judgement of the quality of items in that field follow each other. That’s testable.


What about this:

– Get a community sample.

– Divide into three groups.

– Teach group one about music, teach group two about paintings, and teach group three about chess (or nothing).

– Make up a test of knowledge of the things taught to the groups.

– Have the groups evaluate a lot of items from the two areas: classical music and paintings. The items should be unnamed, unknown to the participants to begin with (except for chance happenings), and not covered in the teaching.


If Murray is right, we should see that the higher-knowledge group, i.e. the one that was taught about the relevant field, has different views about which items are good and is more in agreement.


The point of having three groups is to see if there is a carry-over effect from one aesthetic field to another (classical music to painting and vice versa). There should be no effect from chess theory.
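
Murray’s prediction could be scored by comparing within-group agreement, e.g. as the mean pairwise Pearson correlation of raters’ item scores. A minimal sketch; the ratings below are invented purely for illustration:

```python
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mean_pairwise_agreement(ratings):
    """ratings: one list of item scores per rater."""
    pairs = list(combinations(ratings, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Made-up data: a trained group rating 4 items similarly,
# an untrained group rating them haphazardly.
trained = [[5, 3, 4, 1], [5, 2, 4, 1], [4, 3, 5, 2]]
controls = [[1, 5, 2, 4], [5, 1, 4, 2], [3, 3, 3, 4]]
print(mean_pairwise_agreement(trained) > mean_pairwise_agreement(controls))  # True
```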



The typical social science theory

Perhaps the easiest way to convince yourself is by scanning the literature of soft psychology over the last 30 years and noticing what happens to theories. Most of them suffer the fate that General MacArthur ascribed to old generals—They never die, they just slowly fade away. In the developed sciences, theories tend either to become widely accepted and built into the larger edifice of well-tested human knowledge or else they suffer destruction in the face of recalcitrant facts and are abandoned, perhaps regretfully as a “nice try.” But in fields like personology and social psychology, this seems not to happen. There is a period of enthusiasm about a new theory, a period of attempted application to several fact domains, a period of disillusionment as the negative data come in, a growing bafflement about inconsistent and unreplicable empirical results, multiple resort to ad hoc excuses, and then finally people just sort of lose interest in the thing and pursue other endeavors.


A Galton quote of interest

General impressions are never to be trusted. Unfortunately when they are of long standing they become fixed rules of life and assume a prescriptive right not to be questioned. Consequently those who are not accustomed to original inquiry entertain a hatred and horror of statistics. They cannot endure the idea of submitting sacred impressions to cold-blooded verification. But it is the triumph of scientific men to rise superior to such superstitions, to desire tests by which the value of beliefs may be ascertained, and to feel sufficiently masters of themselves to discard contemptuously whatever may be found untrue.

Cited in: Modgil, Sohan, and Celia Modgil, eds. Arthur Jensen: Consensus and Controversy. Vol. 4. Routledge, 1987.

Can be found here:

Review of Expert Political Judgment (Philip E. Tetlock)


Very interesting book!


Game Theorists. The rivalry between Sherlock Holmes and the evil genius Professor Moriarty illustrates how indeterminacy can arise as a natural by-product of rational agents second-guessing each other. When the two first met, Moriarty was eager, too eager, to display his capacity for interactive thinking by announcing: “All I have to say has already crossed your mind.” Holmes replied: “Then possibly my answer has crossed yours.” As the plot unfolds, Holmes uses his superior “interactive knowledge” to outmaneuver Moriarty by unexpectedly getting off the train at Canterbury, thwarting Moriarty who had calculated that Paris was Holmes’s rational destination. Convoluted though it is, Moriarty failed to recognize that Holmes had already recognized that Moriarty would deduce what a rational Holmes would do under the circumstances, and the odds now favored Holmes getting off the train earlier than once planned.23


Indeterminacy problems of this sort are the bread and butter of behavioral game theory. In the “guess the number” game, for example, contestants pick a number between 0 and 100, with the goal of making their guess come as close as possible to two-thirds of the average guess of all the contestants.24 In a world of only rational players—who base their guesses on the maximum number of levels of deduction—the equilibrium is 0. However, in a contest run at Richard Thaler’s prompting by the Financial Times,25 the most popular guesses were 33 (the right guess if everyone else chooses a number at random, producing an average guess of 50) and 22 (the right guess if everyone thinks through the preceding argument and picks 33). Dwindling numbers of respondents carried the deductive logic to the third stage (picking two-thirds of 22) or higher, with a tiny hypereducated group recognizing the logically correct answer to be 0. The average guess was 18.91 and the winning guess, 13, which suggests that, for this newspaper’s readership, a third order of sophistication was roughly optimal.
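
The sequence of guesses at each depth of reasoning can be reproduced by iterating the two-thirds rule from a level-0 mean of 50. A quick sketch (the function name is mine):

```python
def guess_at_depth(k: int, level0_mean: float = 50.0) -> float:
    """Guess of a player doing k levels of 'two-thirds of the average' reasoning."""
    guess = level0_mean
    for _ in range(k):
        guess *= 2 / 3
    return guess

for k in range(5):
    print(k, round(guess_at_depth(k), 1))
# 0 50.0, 1 33.3, 2 22.2, 3 14.8, 4 9.9
```

The winning guess of 13 falls between depths 3 and 4, matching Tetlock’s “third order of sophistication” reading.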





Our reluctance to acknowledge unpredictability keeps us looking for predictive cues well beyond the point of diminishing returns.39 I witnessed a demonstration thirty years ago that pitted the predictive abilities of a classroom of Yale undergraduates against those of a single Norwegian rat. The task was predicting on which side of a T-maze food would appear, with appearances determined—unbeknownst to both the humans and the rat—by a random binomial process (60 percent left and 40 percent right). The demonstration replicated the classic studies by Edwards and by Estes: the rat went for the more frequently rewarded side (getting it right roughly 60 percent of the time), whereas the humans looked hard for patterns and wound up choosing the left or the right side in roughly the proportion they were rewarded (getting it right roughly 52 percent of the time). Human performance suffers because we are, deep down, deterministic thinkers with an aversion to probabilistic strategies that accept the inevitability of error. We insist on looking for order in random sequences. Confronted by the T-maze, we look for subtle patterns like “food appears in alternating two left/one right sequences, except after the third cycle when food pops up on the right.” This determination to ferret out order from chaos has served our species well. We are all beneficiaries of our great collective successes in the pursuit of deterministic regularities in messy phenomena: agriculture, antibiotics, and countless other inventions that make our comfortable lives possible. But there are occasions when the refusal to accept the inevitability of error—to acknowledge that some phenomena are irreducibly probabilistic—can be harmful.
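
The two accuracy figures drop out of a one-line calculation: always choosing the 60 percent side yields accuracy p, while probability matching (choosing each side as often as it pays off) yields p² + (1 − p)²:

```python
# Expected accuracy in a biased (p = 0.6) T-maze:
# - "maximize": always pick the more frequent side  -> accuracy p
# - "match": pick each side as often as it rewards  -> accuracy p^2 + (1-p)^2
p = 0.6
maximize = p
match = p ** 2 + (1 - p) ** 2
print(maximize, round(match, 2))  # 0.6 0.52
```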


Indeed, but generally it is wise not to accept the unpredictability hypothesis about a given phenomenon. Many things that were thought unpredictable for centuries turned out to be predictable after all, or at least to some degree. I am confident we will see the same for earthquakes, weather systems, and the like in the future as well.


The predictability hypothesis (and the related determinism hypothesis) are good working hypotheses, even if they turn out to be wrong sometimes.


This is what I wrote about years ago on my Danish blog here. Basically, it’s a 2×2 table:


| What we think \ What is true | Determinism | Indeterminism |
| --- | --- | --- |
| Determinism | We keep looking for explanations of phenomena, and over time we find regularities and explanations. | We waste time looking for patterns that aren’t there. |
| Indeterminism | We don’t spend time looking for patterns, but there actually are patterns we could use to predict the future, and hence we lose out on possible advances in science. | We don’t waste time looking for patterns that aren’t there. |


The above assumes that indeterminism implies total unpredictability. This isn’t true, but in the simplified case where we are dealing with completely random phenomena versus completely predictable phenomena, it is a reasonable way of looking at it. IMO, it is much better to waste time looking for explanations of things that turn out not to be orderly (after all) than to risk not spotting real patterns in nature.


Finally, regardless of whether it is rash to abandon the meliorist search for the Holy Grail of good judgment, most of us feel it is. When we weigh the perils of Type I errors (seeking correlates of good judgment that will prove ephemeral) against those of Type II errors (failing to discover durable correlates with lasting value), it does not feel like a close call. We would rather risk anointing lucky fools over ignoring wise counsel. Radical skepticism is too bitter a doctrinal pill for most of us to swallow.





But betting is one thing, paying up another. Focusing just on reactions to losing reputational bets, figure 4.1 shows that neither hedgehogs nor foxes changed their minds as much as Reverend Bayes says they should have. But foxes move more in the Bayesian direction than do hybrids and hedgehogs. And this greater movement is all the more impressive in light of the fact that the Bayesian updating formula demanded less movement from foxes than from other groups. Foxes move 59 percent of the prescribed amount, whereas hedgehogs move only 19 percent of the prescribed amount. Indeed, in two regional forecasting exercises, hedgehogs move their opinions in the opposite direction to that prescribed by Bayes’s theorem, and nudged up their confidence in their prior point of view after the unexpected happens. This latter pattern is not just contra-Bayesian; it is incompatible with all normative theories of belief adjustment.8
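
The “prescribed amount” is simply the gap between a forecaster’s prior confidence and the Bayesian posterior after a losing bet. A toy example, with all probabilities invented for illustration and not taken from Tetlock’s data:

```python
# Toy reputational bet. Prior confidence in one's worldview: 0.8.
# That worldview assigned the observed outcome probability 0.1;
# the rival view assigned it 0.7. The outcome then happened.
prior = 0.8
p_outcome_given_mine = 0.1
p_outcome_given_rival = 0.7

# Bayes's theorem gives the confidence one *should* hold afterwards.
posterior = (prior * p_outcome_given_mine) / (
    prior * p_outcome_given_mine + (1 - prior) * p_outcome_given_rival
)
prescribed_move = prior - posterior  # what Bayes demands

# A hedgehog moving only 19% of the prescribed amount ends up here:
hedgehog_final = prior - 0.19 * prescribed_move
print(round(posterior, 3), round(prescribed_move, 3), round(hedgehog_final, 3))
# 0.364 0.436 0.717
```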