Costs and benefits of publishing in legacy journals vs. new journals

I recently published a paper in Open Differential Psychology. After it was published, I decided to tell some colleagues about it so that they would not miss it, since it was not published in either of the two primary journals in the field: Intell or PAID (Intelligence; Personality and Individual Differences). My email was this:

Dear colleagues,

I wish to inform you about my paper which has just been published in Open Differential Psychology.

Abstract
Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

You can read the full paper at the journal website: openpsych.net/ODP/2014/09/the-international-general-socioeconomic-factor-factor-analyzing-international-rankings/

Regards,
Emil

One researcher responded with:

Dear Emil,
Thanks for your paper.
Why not publishing in standard well established well recognized journals listed in Scopus and Web of Science benefiting from review and increasing your reputation after publishing there?
Go this way!
Best,
NAME

This concerns the decision of where to publish. I discussed this in a blog post back in March, before setting up OpenPsych. To be brief, the benefits of publishing in legacy journals are: 1) recognition, 2) indexing in proprietary indexes (SCOPUS, WoS, etc.), 3) perhaps better peer review, 4) perhaps a fancier appearance of the final paper. The first is very important if one is an up-and-coming researcher (like me), because one needs recognition from university people to get hired.

I nevertheless decided NOT to publish (much) in legacy journals. In fact, the reason I got into publishing studies so late is that I dislike the legacy journals in this field (and most other fields). Why? I made an overview here, but to sum it up: 1) they are either not open access or extremely pricey, 2) no data sharing, 3) an opaque peer review system, 4) very slow peer review (~200 days on average in the case of Intell and PAID), 5) you're supporting companies that add little value to science and charge insane amounts of money for it (for Elsevier, see e.g. Wikipedia; TechDirt has a large number of posts concerning that company alone).

As a person who strongly believes in open science (data, code, review, access), there is no way I can defend a decision to publish in Elsevier journals. Their practices are clearly antithetical to science. I also signed The Cost of Knowledge petition not to publish or review for them. Elsevier has a strong economic interest in keeping up their practices and I’m sure they will. The only way to change science for the better is to publish in other journals.

Non-Elsevier journals

Aside from Elsevier journals, one could publish in PLoS or Frontiers journals. They are open access, right? Yes, and that's a good improvement. However, they are also predatory in that they charge exorbitant publication fees: €1600 (Frontiers) or US$1350 (PLoS). One might as well publish open access in an Elsevier journal, for which they charge US$1800.

So are there any open access journals without publication fees in this field? As far as I know there is only one, the newly established Journal of Intelligence. However, the journal's site states that the lack of a publication fee is a temporary state of affairs, so there seems to be little reason to help them get established by publishing there. After realizing this, I began work on starting a new journal. I knew that there was a lot of talent in the blogosphere with a mindset similar to mine, who could probably be convinced to review for and publish in the new journal.

Indexing

But what about indexing? Web of Science and SCOPUS are both proprietary and not freely available to anyone with an internet connection. But there is a fast-growing alternative: Google Scholar. Scholar is improving rapidly compared to the legacy indexers and is arguably already better, since it indexes a host of grey literature sources that the legacy indexers don't cover. A recent article compared Scholar to WoS. I quote:

Abstract Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: "The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed". GS's coverage of WoS could in principle become complete, in which case WoS could become a subset of GS that could be selected via a GS option "Select WoS-indexed journals and conferences only". Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews. [source]

In other words, Scholar already covers almost all of the articles that WoS covers and is quickly catching up on the older studies too. In a few years Scholar will cover close to 100% of the articles in the legacy indexers, and they will be nearly obsolete.

Getting noticed

Related to the above is getting noticed by other researchers. Since many researchers read legacy journals, simply being published in them is likely sufficient to get some attention (and citations!). It is, however, not the only way. The internet has changed the situation completely in that there are now lots of different ways to get noticed: 1) Twitter, 2) ResearchGate, 3) Facebook/Google+, 4) Reddit, 5) Google Scholar, which will inform you about any new research by anyone you have cited previously, 6) blogs (one's own or others'), and 7) emails to colleagues (as above).

Peer review

Peer review in OpenPsych is innovative in two ways: 1) it is forum-style instead of email-based, which is better suited for communication among more than two people, and 2) it is openly visible, which works against biased reviewing. Aside from this, it is also much faster, currently averaging 20 days in review.

Reputation and career

There is clearly a drawback here for publishing in OpenPsych journals compared with legacy journals. Any new journal is likely to be viewed as not serious by many researchers. Most people dislike change, academics included (perhaps especially so). Publishing there will not improve one's chances of getting hired as much as publishing in the primary journals will. So one must weigh what is most important: science or career?

How good is Google Scholar?

I found this paper: The expansion of Google Scholar versus Web of Science: a longitudinal study. See also other interesting papers by the same author, Joost de Winter.

Abstract Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citations counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: "The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed". GS's coverage of WoS could in principle become complete, in which case WoS could become a subset of GS that could be selected via a GS option "Select WoS-indexed journals and conferences only". Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews.

This is relevant to me because I mostly publish in my own journals, which rely only on indexing engines like GS to get noticed, aside from what the author does himself (e.g. through ResearchGate). Given the above findings, GS is already a mature tool for e.g. meta-analytic purposes.

The g factor in autistic persons?

Check www.ncbi.nlm.nih.gov/pubmed/19572193

Eyeballing their figure seems to indicate that the g factor is much weaker in these children. A quick search on Scholar didn't reveal any studies that have investigated this idea.

If someone can obtain subtest data from autism samples, that would be useful. The methods I used in my recent paper (section 12) can estimate the strength of the general factor in a sample. If g is weaker in autistic samples, this should be reflected in these measures.
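
As a hedged sketch (not necessarily the exact measures used in the paper), one such measure is the share of variance a single common factor explains:

library(psych)
set.seed(1)
g <- rnorm(500)                                            # latent general factor
subtests <- sapply(1:8, function(i) 0.7 * g + rnorm(500))  # 8 simulated g-loaded subtests
f1 <- fa(subtests, nfactors = 1)                           # single-factor model
f1$Vaccounted["Proportion Var", ]                          # strength of the general factor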

I will write to some authors to see if they will let me have the subtest data.

New paper out: The international general socioeconomic factor: Factor analyzing international rankings

openpsych.net/ODP/2014/09/the-international-general-socioeconomic-factor-factor-analyzing-international-rankings/

Abstract
Many studies have examined the correlations between national IQs and various country-level indexes of well-being. The analyses have been unsystematic and not gathered in one single analysis or dataset. In this paper I gather a large sample of country-level indexes and show that there is a strong general socioeconomic factor (S factor) which is highly correlated (.86-.87) with national cognitive ability using either Lynn and Vanhanen’s dataset or Altinok’s. Furthermore, the method of correlated vectors shows that the correlations between variable loadings on the S factor and cognitive measurements are .99 in both datasets using both cognitive measurements, indicating that it is the S factor that drives the relationship with national cognitive measurements, not the remaining variance.

This one took a while to do: I had to learn a lot of programming (R) and do lots of analyses, and it spent 50 days in peer review. Perhaps my most important paper so far.

Comments on Learning Statistics with R

So I found a textbook for learning both elementary statistics (much of which I knew, but had not read a textbook about) and R.

The book is legally free at health.adelaide.edu.au/psychology/ccs/teaching/lsr/

www.goodreads.com/book/show/18142866-learning-statistics-with-r

Numbers refer to the page number in the book. The book is in an early version ("0.4"), so many of these are small errors I stumbled upon while going through virtually all the commands in the book in my own R window.

120:

The modeOf() and maxFreq() calls do not work. This is because afl.finalists is a factor, while these functions demand a vector. One can use as.vector() to make them work, as shown below.
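
A minimal sketch, assuming the book's lsr package and its aflsmall.Rdata data file:

library(lsr)
load("aflsmall.Rdata")               # book's data file containing afl.finalists
# modeOf(afl.finalists)              # fails: the input is a factor
modeOf(as.vector(afl.finalists))     # works after coercing to a plain vector
maxFreq(as.vector(afl.finalists))    # likewise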

 

131:

Worth noting that summary() gives the same output as quantile(), except that it also includes the mean.
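
For example:

x <- c(1, 3, 5, 7, 9, 11)
quantile(x)   # minimum, 25%, median, 75%, maximum
summary(x)    # the same five numbers plus the mean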

 

151:

Actually, the output of describe() does not tell us the number of NAs. It is only because the author assumes that there are 100 total cases that he can compute 100 - n and get the number of NAs for each variable.
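
A minimal sketch with made-up data, showing how to get the NA counts without that assumption:

library(psych)
df <- data.frame(a = c(1, 2, NA, 4), b = c(NA, NA, 3, 4))
nrow(df) - describe(df)$n   # NAs per variable, recovered from the row count
colSums(is.na(df))          # or simply count them directly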

 

220:

The data in cakes.Rdata is already transposed.

240:

as.logical() also converts numeric 0 and 1 to FALSE and TRUE. However, oddly, it does not understand "0" and "1".
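
For example:

as.logical(c(0, 1))          # FALSE TRUE
as.logical(c("0", "1"))      # NA NA: character "0"/"1" are not recognized
as.logical(c("T", "FALSE"))  # TRUE FALSE: these strings are understood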

 

271:

Actually, P = 0 is not equivalent to impossible: for a continuous variable, every single exact value has probability zero, yet one of them still occurs. See: en.wikipedia.org/wiki/Almost_surely

278:

Actually, 100 simulations with N = 20 will generally not result in a histogram like the one shown. Perhaps it is better to change the command to K = 1000. And why not wrap it in hist() so that it can be visually compared to the theoretical one?


> hist(rbinom(n = 1000, size = 20, prob = 1/6))

298:

It would be nice if the code for making these simulations were shown; a guess at it is sketched below.
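
A guess at the kind of simulation being plotted (a sampling distribution of the mean for IQ-like scores); the book's actual code may differ:

set.seed(1)
means <- replicate(1000, mean(rnorm(n = 20, mean = 100, sd = 15)))
hist(means)   # distribution of 1000 sample means, N = 20 per sample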

 

299:

"This is just bizarre: σ̂² is and unbiased estimate of the population variance"

Typo: "and" should be "an".

327:

Typo in the Figure 11.6 caption: "Notice that when θ actually is equal to .05 (plotted as a black dot)". Presumably ".05" should be ".5".

344:

Typo ("is" should be "us"):

"That is, what values of X2 would lead is to reject the null hypothesis."

379:

It is most annoying that the author doesn't provide the code for reproducing his plots. I spent 15 minutes trying to find a function to create histograms by group; one way is sketched below.
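
One way to do it, using the lattice package (shipped with R) and made-up data; I don't know which function the book's author actually used:

library(lattice)
df <- data.frame(score = rnorm(200),
                 group = rep(c("A", "B"), each = 100))
histogram(~ score | group, data = df)   # one histogram panel per group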

 

385:

Typo ("testsm" should be "tests,"):

"It works for t-tests, but it wouldn't be meaningful for chi-square testsm F -tests or indeed for most of the tests I talk about in this book."

391:

"we see that it is 95% certain that the true (population-wide) average improvement would lie between 0.95% and 1.86%."

This wording is dangerous because the percent sign has two readings. On the relative reading the claim is wrong; the author means absolute percentage points.

400:

The code has +'s in it (R's continuation prompt), which means it cannot simply be copied and run; see below. This usually isn't the case, but it happens a few times in the book.
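
That is, console output like the following has to have the prompts stripped before it will run:

# as printed (cannot be pasted as-is):
# > mean( c(1, 2,
# +         3, 4) )
# stripped version that runs:
mean(c(1, 2,
       3, 4))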

 

408+410:

In the description of the test, we are told to put a tick when the values are larger than the comparison value. However, in the one-sample version, the author puts a tick when the value is equal to it. I guess this means that we tick when it is equal to or larger than.

442:

This command doesn't work because the data frame isn't attached as the author assumes:

> mood.gain <- list( placebo, joyzepam, anxifree)
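
A minimal fix, assuming the book's clin.trial data frame from clinicaltrial.Rdata (names as in the book); split() builds the same list without relying on attach():

load("clinicaltrial.Rdata")   # assumed file name
mood.gain <- split(clin.trial$mood.gain, clin.trial$drug)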

 

457:

First the author says he wants to use the unadjusted R², but then in the text he uses the adjusted value.

464:

Typo: "Unless" is capitalized mid-sentence.

493:

“(3.45 for drug and 0.92 for therapy),”

He must mean .47 for therapy; .92 is the number for the residuals.

497:

In the alternative hypothesis, the author uses "u_ij" instead of the "u_rc" used in the null hypothesis. I'm guessing the null hypothesis is the right one.

514:

As earlier, it is ambiguous whether the increases in percent are relative or absolute. Again, in this case they are absolute. The author should write "percentage points" or similar to avoid confusion: going from 10% to 15%, say, is an absolute increase of 5 percentage points but a relative increase of 50%.

538:

Quoting:

“I find it amusing to note that the default in R is Type I and the default in SPSS is Type III (with Helmert contrasts). Neither of these appeals to me all that much. Relatedly, I find it depressing that almost nobody in the psychological literature ever bothers to report which Type of tests they ran, much less the order of variables (for Type I) or the contrasts used (for Type III). Often they don’t report what software they used either. The only way I can ever make any sense of what people typically report is to try to guess from auxiliary cues which software they were using, and to assume that they never changed the default settings. Please don’t do this… now that you know about these issues, make sure you indicate what software you used, and if you’re reporting ANOVA results for unbalanced data, then specify what Type of tests you ran, specify order information if you’ve done Type I tests and specify contrasts if you’ve done Type III tests. Or, even better, do hypotheses tests that correspond to things you really care about, and then report those!”

 

An example of the necessity of open methods along with open data. Science must be reproducible. Best is to simply share the exact source code for the analyses in a paper.

Review: Is there anything good about men? (Roy F. Baumeister)

www.goodreads.com/book/show/8765372-is-there-anything-good-about-men

gen.lib.rus.ec/book/index.php?md5=B21C5698CE12510CDEDBE940259BDF6F

If you have read the original essay, there is not much to recommend about the book. It taught me very little: it has no data tables, no plots, no figures. Numbers are only mentioned in the text, and sources are only given in the back of the book. There were a few interesting works mentioned, but basically the book is just a longer and more repetitive version of the essay.

Hard to say whether to give this 2 or 3 stars. Generally the author has truth on his side. Perhaps 3 then.

Review: Understanding human history (Michael H. Hart)

www.goodreads.com/book/show/1737823.Understanding_Human_History

gen.lib.rus.ec/search.php?req=Understanding+Human+History&open=0&view=simple&column=def

I think Elijah mentioned this book somewhere. I can’t find where.

The basic idea of the book is to write a history book that does take known population differences into account; normal history books don't. Generally, the chapters are only very broad sketches of some period or pattern. Much of it is plausible but not too well argued. If one looks at the sources given in the references, one sees that a large number of them are to some 1985 edition of Encyclopedia Britannica. Very odd: this is a post-Wikipedia age, folks, and finding primary literature on some topic is really easy. Just search Wikipedia and read its sources. The book is certainly flawed due to its inadequate referencing of claims, and many claims that need references don't have any at all.

On the positive side, there are some interesting ideas in it. The simulations of population IQs in different regions are clearly a necessary beginning of a hard task.

You should probably only read this book if you are interested in history, population genetics and differential psychology beyond a superficial pop-science level.

The author is an interesting fellow. en.wikipedia.org/wiki/Michael_H._Hart

 

Pun #4923

If a person is waiting to be treated at a hospital and he complains about waiting too long… is he being impatient?

-

[14:40:05] Emil – Deleet: is it funny to talk about a sex division of labor?
[14:40:39] Emil – Deleet: meaning #1: Effort expended on a particular task; toil, work.
meaning #2: The act of a mother giving birth.

-

 

So I tried Linux again

Every few years I try Linux, just to see how it has improved since the last time. So far I have not migrated permanently to Linux on my desktop; simply put, Windows (7) is better for my purposes.

Whenever I try Linux, I pick the most popular distro. This time it was Mint (overview here). The reason to pick the most mainstream one is that it is the one likely to have the best driver support, the fewest problems, the most features, the easiest support for programs and so on. Basically, I'm picking the best Linux distro to compare with Windows.

The first problem after installing was that I could not make full use of my dual-screen setup. In Windows I use the program UltraMon so that I can have a taskbar on the second screen as well; very useful when one has lots of programs open. After googling it, this feature is apparently not available in the default Cinnamon desktop. It's been an open issue for 2 years.

So the solution was to install some other desktop environment. A few people mentioned that this could be done in KDE. So I tried installing KDE through the standard Software Manager. However, it only worked halfway or so. Asking my Linux-expert roommate, he told me that the Software Manager is dumb and doesn't install the necessary dependencies. Why would anyone make the default program so stupid? Anyway, I then did it with Synaptic (another Software Manager-ish program, also built in). I logged into KDE, and it was possible to get a working taskbar on the second screen, although it was unintuitive and kind of complicated (so complicated one needs a guide even if one is considered a computer expert). Hurray!

The next annoyance was changing the date format and similar settings; getting KDE to display a 24-hour system clock was especially difficult. But again, with guides, I managed it.

Then there was the very annoying thing that KDE opens items with 1 click instead of 2 clicks. This was easily solvable, though.

A larger pain is that Linux still does not have a proper Winamp alternative. None of the alternatives I have tried (>10) has a specific library-indexing feature that Winamp has. If one has a huge library full of compilations, one will automatically have thousands of artists, most of them with only 1 or 2 tracks. All the other programs offer only alphabetical sorting of artist names. This is useless. What is needed is sorting by the number of tracks per artist, which Winamp has. One could run Winamp through Wine, but it is silly that this feature is still missing after so many years.

There is of course also the usual issue with gaming. Few games work well on Linux. Dota 2 runs at an unplayable 15-30 fps on Linux; with the same settings, it runs at 60 on Windows. Not strictly Linux's fault, but due to Microsoft's monopoly with DirectX it is still a problem.

Another issue was that there were no useful hotkeys by default in KDE: no hotkey for minimizing all windows, no hotkey for opening the application launcher (the start menu equivalent). Worse, one could not assign the WIN key for this purpose in KDE, since it is apparently treated purely as a modifier (dead) key. In Windows and Cinnamon, the WIN key is treated specially in that it can be both a modifier and a key in itself. Fortunately, there was a hack to fix this problem.

What linux needs

For Linux to become decent for mainstream use, there are some obvious requirements. First, it must never be necessary for normal users to use the terminal or any other non-GUI app to do anything; everything must be doable via GUI. Linux is clearly not ready.

Some good things

I noticed some good things too. Booting is much faster. The system is lighter, which is especially important for my shitty laptop (which still runs Linux and will continue to do so). Important working programs like R and LaTeX work mostly fine. In general, Cinnamon is good. They really have to fix that obvious problem with using dual monitors effectively.