How good is Google Scholar?

I found this paper: The expansion of Google Scholar versus Web of Science: a longitudinal study. See also other interesting papers by the same author, Joost de Winter.

Abstract: Web of Science (WoS) and Google Scholar (GS) are prominent citation services with distinct indexing mechanisms. Comprehensive knowledge about the growth patterns of these two citation services is lacking. We analyzed the development of citation counts in WoS and GS for two classic articles and 56 articles from diverse research fields, making a distinction between retroactive growth (i.e., the relative difference between citation counts up to mid-2005 measured in mid-2005 and citation counts up to mid-2005 measured in April 2013) and actual growth (i.e., the relative difference between citation counts up to mid-2005 measured in April 2013 and citation counts up to April 2013 measured in April 2013). One of the classic articles was used for a citation-by-citation analysis. Results showed that GS has substantially grown in a retroactive manner (median of 170 % across articles), especially for articles that initially had low citation counts in GS as compared to WoS. Retroactive growth of WoS was small, with a median of 2 % across articles. Actual growth percentages were moderately higher for GS than for WoS (medians of 54 vs. 41 %). The citation-by-citation analysis showed that the percentage of citations being unique in WoS was lower for more recent citations (6.8 % for citations from 1995 and later vs. 41 % for citations from before 1995), whereas the opposite was noted for GS (57 vs. 33 %). It is concluded that, since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS. A discussion is provided on quantity versus quality of citations, threats for WoS, weaknesses of GS, and implications for literature research and research evaluation.
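To make the two growth measures concrete, here is how I read the definitions (my own notation, not the paper's), with C_t(≤ s) denoting the citation count up to cutoff s as measured at date t:

\[ \text{retroactive growth} = \frac{C_{\text{Apr 2013}}(\le \text{mid-2005}) - C_{\text{mid-2005}}(\le \text{mid-2005})}{C_{\text{mid-2005}}(\le \text{mid-2005})} \]

\[ \text{actual growth} = \frac{C_{\text{Apr 2013}}(\le \text{Apr 2013}) - C_{\text{Apr 2013}}(\le \text{mid-2005})}{C_{\text{Apr 2013}}(\le \text{mid-2005})} \]

On that reading, GS's median retroactive growth of 170 % means that, for the typical article, the pre-mid-2005 citation count GS reported in April 2013 was roughly 2.7 times the count it reported in mid-2005.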

A second threat for WoS is that in the future, GS may cover all works covered by WoS. We found that for the period 1995–2013, 6.8 % of the citations to Garfield (1955) were unique in WoS, indicating that a very large share of works indexed in WoS is now also retrievable by GS. In line with this observation, based on an analysis of 29 systematic reviews in the medical domain, Gehanno et al. (2013) recently concluded that: "The coverage of GS for the studies included in the systematic reviews is 100 %. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed". GS's coverage of WoS could in principle become complete, in which case WoS could become a subset of GS that could be selected via a GS option "Select WoS-indexed journals and conferences only". Together with its full-text search and its searching of the grey literature, it is possible that GS becomes the primary literature source for meta-analyses and systematic reviews.

This is relevant to me because I mostly publish in my own journals, which rely only on indexing engines like GS to get noticed (aside from whatever promotion the author does himself, e.g. through ResearchGate). Given the above findings, GS is already a mature tool for e.g. meta-analytic purposes.

Human Accomplishment (Charles Murray)

www.goodreads.com/book/show/282085.Human_Accomplishment

gen.lib.rus.ec/book/index.php?md5=735E143F5E5C9A49FD1188DA457E8287&open=0

 

This book was very interesting much of the time, somewhat interesting some of the time, and dumb some of the time. However, the first part was much larger than the other two, so I think it's a great book. The chapters where Murray speculates beyond the data are the worst ones IMO.

 

 

Chinese medicine, unlike Chinese science, was backed by abundant theory, but that theory is so alien to the Western understanding of physiology and pharmacology that Western scientists even today are only beginning to understand the degree to which Chinese medicine is coordinate with modern science.42 It worked, however, for a wide range of ailments. If you were going to be ill in 12C and were given a choice of living in Europe or China, there is no question about the right decision. Western medicine in 12C had forgotten most of what had been known by the Greeks and Romans. Chinese physicians of 12C could alleviate pain more effectively than Westerners had ever been able to do—acupuncture is a Chinese medical technique that Western physicians have learned to take seriously—and could treat their patients effectively for a wide variety of serious diseases.

 

Murray is being way too nice to the Chinese here. Their theories are crap and their treatments generally don't work.

 

 

The second blind spot is the tendency to confuse that which has been achieved with that which must inevitably have been achieved. It is easy to assume that someone like Aristotle was not so much brilliant as fortunate in being born when he was. A number of basic truths were going to be figured out early in mankind's intellectual history, and Aristotle gave voice to some of them first. If he hadn't, someone else soon would have. But is that really true? Take as an example the discovery of formal logic in which Aristotle played such a crucial role. Nobody had discovered logic (that we know of) in the civilizations of the preceding five millennia. Thinkers in the non-Western world had another two millennia after Aristotle to discover formal logic independently, but they didn't. Were we in the West "bound" to discover logic because of some underlying aspect of Western culture? Maybe, but what we know for certain is that the invention of logic occurred in only one time and one place, that it was done by a handful of individuals, and that it changed the history of the world. Saying that a few ancient Greeks merely got there first isn't adequate acknowledgment of their leap of imagination and intellect.

 

Murray is wrong again: en.wikipedia.org/wiki/Indian_logic

 

But yes, many cultures never invented logic, or much else.

 

 

en.wikipedia.org/wiki/Lotka%27s_law

 

I had been looking for this!

 

 

The earliest and most commonsensical explanation for the "something else" is that the source of great accomplishment is multidimensional—it does not appear just because a person is highly intelligent or highly creative or highly anything else. Several traits have to appear in combination. The pioneer of this view was British polymath Francis Galton in the late 1800s. Even though he had been instrumental in creating the modern concept of intelligence, Galton argued that intelligence alone was not enough to explain genius. Rather, he appealed to "the concrete triple event, of ability combined with zeal and with capacity for hard labour."13 Ninety years later, William Shockley specified how the individual components of human accomplishment, normally distributed, can in combination produce the type of hyperbolic distribution—highly skewed right, with an elongated tail—exemplified by the Lotka curve.14
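A quick way to see Shockley's point is to simulate it. The sketch below is my own toy illustration, not Shockley's actual model: a few independent, normally distributed "traits" (arbitrary IQ-like numbers) are combined multiplicatively, and the resulting "output" is clearly right-skewed even though each component is symmetric.

# Toy illustration of the multiplicative argument (my assumptions, arbitrary numbers):
# several symmetric, normally distributed traits, multiplied together,
# produce a right-skewed distribution of output.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_traits = 100_000, 4

# Each trait ~ Normal(mean=100, sd=15), clipped at 1 to keep products positive.
traits = np.clip(rng.normal(100, 15, size=(n_people, n_traits)), 1, None)
output = traits.prod(axis=1)

def skew(x):
    # Simple sample skewness: E[(x - mean)^3] / sd^3.
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print("skewness of a single trait:", round(skew(traits[:, 0]), 2))   # ~0
print("skewness of the product:   ", round(skew(output), 2))         # clearly positive
print("mean/median ratio of product:", round(float(output.mean() / np.median(output)), 3))

With more traits, or traits with larger relative spread, the skew gets much more extreme, which is the direction of Murray's Lotka-curve point.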

 

Galton <3

 

 

Establishing the outer boundaries of the population is easy. Modern scholars have helpfully produced large and comprehensive biographical dictionaries with the avowed purpose of containing everyone who is worth mentioning in their particular field. For the sciences, an international consortium of scholars has been laboring for more than four decades on the Dictionary of Scientific Biography, now up to 18 volumes.1 In philosophy, we have the Encyclopédie Philosophique Universelle,2 only two volumes, but fat ones. For Western art, we may turn to the 17-volume Enciclopedia Universale dell'Arte compiled by the Istituto per la Collaborazione Culturale. At least one such encyclopedic reference work is among the sources for every inventory.

 

Old book. No Wikipedia!

 

 

That the basic ideas were in the air for so long without being developed suggests how complex and mind-stretching the change was. Indeed, a major continuing issue in the history of science is the degree to which it is appropriate to talk of a scientific method as a body of principles and practice that has clear, bright lines distinguishing it from science practiced by other means. It is not a debate that I am about to adjudicate here. In claiming the scientific method as a meta-invention, or a collection of synergistic meta-inventions, I am associating myself with the position that, incremental as the process may have been, a fundamental change occurred in post-medieval Europe in the way human beings went about accumulating and verifying knowledge. The common-sense understanding of the phrase scientific method labels the aggregate of those changes. I use the phrase to embrace the concepts of hypothesis, falsification, and parsimony; the techniques of the experimental method; the application of mathematics to natural phenomena; and a system of intellectual copyright and dissemination.

 

COPYRIGHT?! Unfortunately, Murray does not expand on it.

 

 

DID GALILEO MAKE UP HIS DATA?

In De Motu, Galileo reported that the lighter body falls faster at the beginning, then the heavier body catches up and arrives at the ground slightly before the lighter one. Since this should not be true of the objects that Galileo used, a wooden sphere and an iron one, if they are released simultaneously, it has been inferred that Galileo was either a poor observer or making up his data. But in replications of Galileo's procedure, it has been found that when a light wooden sphere and a heavy iron one are dropped by hand, the lighter wooden sphere does start out its journey a bit ahead—a natural, if misleading, consequence of the need to clutch the heavier iron ball more firmly than the wooden one. This causes the iron ball to be released slightly after the wooden ball even though the experimenter has the impression that he is opening his hands at the same time. Then, because of the differential effects of air resistance on objects of different weight, the iron ball catches up with and passes the wooden ball, just as Galileo reported. There is a satisfying irony in this finding. The modern critics of Galileo were making the same mistake that the ancients made, criticizing results on the basis of what "must be true" rather than going out and doing the work to find out what is true.35

 

Interesting story.

 

 

In recognizing how thoroughly non-European science and technology have been explored, let's also give credit where credit is due: By and large, it has not been Asian or Arabic scholars, fighting for recognition against European indifference, who are responsible for piecing together the record of accomplishment by non-European cultures, but Europeans themselves. Imperialists they may have been, but one of the by-products of that imperialism was a large cadre of Continental, British, and later American scholars, fascinated by the exotic civilizations of Arabia and East Asia, who set about uncovering evidence of their accomplishments that inheritors of those civilizations had themselves neglected. Joseph Needham's seven-volume history of Chinese science and technology is a case in point.[10] Another is George Sarton's Introduction to the History of Science, in five large volumes published from 1927–1948, all of which is devoted to science before the end of 14C, with the bulk of it devoted to the period when preeminence in science was to be found in the Arab world, India, and China.

 

The irony... :)

 

 

 

Why Open Access is a moral imperative

I have written about it before, but here is some more I came across.
Found via:
www.techdirt.com/articles/20130117/03040821712/scientist-explains-why-putting-research-behind-paywall-is-immoral.shtml

For general public:
www.guardian.co.uk/science/blog/2013/jan/17/open-access-publishing-science-paywall-immoral
www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=417576&c=1
www.guardian.co.uk/science/2012/jul/15/free-access-british-scientific-research

Background material for the curious:
Blog series:
svpow.com/2009/06/01/choosing-a-journal-for-the-neck-posture-paper-why-open-access-is-important/
svpow.com/2011/09/29/researchers-stop-doing-free-work-for-non-open-journals/
svpow.com/2011/10/17/collateral-damage-of-the-non-open-reviewing-boycott/
svpow.com/2012/01/30/what-is-a-private-sector-research-work/
svpow.com/2012/03/30/my-rcuk-submission/
svpow.com/2012/05/17/see-this-is-why-publishers-irritate-me-so-much/
svpow.com/2012/05/18/publishers-versus-everyone/
svpow.com/2012/10/16/publish-means-make-public-paywalls-are-the-opposite-of-publishing/

Other:
drvector.blogspot.dk/2007/02/reason-317-to-be-depressed-journal.html
www.scottaaronson.com/writings/journal.html
www.michaeleisen.org/blog/?p=911

Proposal: A graphical tool to explore relationships between academic papers

Background

Lately I've been interested in cluster analysis and factor analysis. These two families of analyses have a great many practical, data-related uses. So far I've begun cluster analyzing Wikipedia to get an overall idea about the structure of human knowledge (how cool is that?). I've also read Arthur Jensen's The g Factor to get an idea about how factor analysis works with regard to intelligence testing, and other psychometrics and biometrics (like the proposed f factor).

Today I was reading a book about the future of schooling, Salman Khan’s The One World Schoolhouse (I will post my review soon). In the book he mentions some stuff about homework. I was curious and looked up his sources. That got me reading a meta-analysis (another kind of analysis! I love analysis) about the effects of homework. While reading that I got a new idea for an analysis.

The idea

Citation indexes already exist. With such an index, one can look up a particular paper and find other papers that cite it. Or one can look up an author and see which papers he has published, who cites those papers, and so on. However, these tools have no, or only poor, graphical representations of the data. That is a shame, since graphical representations of data are so much more useful and cool. One need only watch a couple of TED talks about the subject to be convinced.



There are various things that one can show graphically in a very illustrative way. My idea is to have each paper as a node and lines between them that indicate which paper cites which. These lines would normally be one-directional, since it is difficult to cite a paper that will be published in the future (though papers do sometimes cite other papers that are "in press", so in a sense it's not unheard of). My idea is that on the y-axis (or x-axis, if one prefers) time is shown. In this way one can follow the citations of a paper over time. More interestingly, one can follow the citations between the other papers that cite the first paper over time: a web that becomes more complex over time, or perhaps dies off if the academic community loses interest in that particular subject (academic interest is a bit like fashion).
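To make the structure concrete, here is a minimal sketch in Python using networkx and matplotlib (one possible tool choice, not a requirement of the proposal). The paper names, years, and citations are made up purely for illustration: papers are nodes, a directed edge runs from the citing paper to the cited paper, and the publication year is used as the vertical coordinate so that time runs along one axis.

# Minimal sketch of the proposed citation graph (fictive data).
import networkx as nx
import matplotlib.pyplot as plt

papers = {"Smith2001": 2001, "Jones2004": 2004, "Lee2006": 2006,
          "Kim2008": 2008, "Wu2011": 2011}          # paper id -> publication year
citations = [("Jones2004", "Smith2001"), ("Lee2006", "Smith2001"),
             ("Kim2008", "Jones2004"), ("Wu2011", "Lee2006"),
             ("Wu2011", "Kim2008")]                 # (citing paper, cited paper)

G = nx.DiGraph()
G.add_nodes_from(papers)
G.add_edges_from(citations)

# Put the publication year on the y-axis; spread same-year papers along x.
seen_per_year = {}
pos = {}
for paper, year in papers.items():
    x = seen_per_year.get(year, 0)
    seen_per_year[year] = x + 1
    pos[paper] = (x, year)

nx.draw_networkx(G, pos, arrows=True, node_size=900,
                 node_color="lightsteelblue", font_size=8)
plt.ylabel("publication year")
plt.show()

A real tool would of course pull the papers and citations from a citation index instead of hard-coding them, but the node/edge/time-axis structure would be the same.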

Here’s a fictive example that I have made to show off the general idea:

[Figure: fictive example of the proposed citation graph]

In the example above, there are 20 papers marked for interest. All the citations between them are then found and shown with lines. Optimally, the direction of each relationship should also be shown, perhaps by small arrows on the lines. Also optimally, the authors or titles of the papers (or both) should be shown in a very small font on top of the nodes, enlarged when the mouse hovers over a node, with links to the actual papers and the abstract ready to be read.

It is also possible to color the nodes by author or research group. In the example above, there are two lines of authors, or research groups, or research programs. The left one publishes more papers than the right one. One can employ various coloring schemes to make such features salient in the graphical representation. One can also see how the two lines interrelate; they do cite each other's papers, just not as frequently as they cite their own.

One can also vary the nodes with respect to information other than authorship. One can scale their size by each paper's citation count, for instance. This makes it easier for an outsider to locate the papers that have gathered the most citations (either in general, or within the pool of papers of interest), and hence, most likely, the most interest from fellow researchers. If one wants, one can also do the opposite and look for hidden gems of insight in the literature that have been missed by other authors.

Even better, given the problems with replications, especially direct replications, in some fields of science, especially psychology, one can color nodes according to whether or not they are replications of previous papers. One could also have special arrows for replications. Similarly, literature reviews, meta-reviews, and systematic reviews could have their own node shape or color so that one can locate them more easily. Surely, something like this is the proper way of evaluating the influence of scientific papers.
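Continuing the sketch from above, node styling can carry this extra information. The attribute names and values below are my own invention, purely to illustrate the idea: node size scales with how often a paper is cited within the pool (its in-degree), color marks the research group, and shape marks whether a paper is an original study, a replication, or a review.

# Continuation of the earlier sketch (same fictive data, hypothetical attributes).
import networkx as nx
import matplotlib.pyplot as plt

papers = {"Smith2001": 2001, "Jones2004": 2004, "Lee2006": 2006,
          "Kim2008": 2008, "Wu2011": 2011}
citations = [("Jones2004", "Smith2001"), ("Lee2006", "Smith2001"),
             ("Kim2008", "Jones2004"), ("Wu2011", "Lee2006"),
             ("Wu2011", "Kim2008")]
group = {"Smith2001": "A", "Jones2004": "A", "Kim2008": "A",
         "Lee2006": "B", "Wu2011": "B"}             # research group (invented)
kind = {"Smith2001": "original", "Jones2004": "replication",
        "Lee2006": "original", "Kim2008": "review", "Wu2011": "original"}

G = nx.DiGraph(citations)
pos = {p: (i, y) for i, (p, y) in enumerate(papers.items())}  # crude layout, year on y

group_color = {"A": "tab:blue", "B": "tab:orange"}
kind_shape = {"original": "o", "replication": "s", "review": "^"}

# One pass per paper type so each gets its own marker; color by group;
# size grows with citations received within the pool (arbitrary scaling).
for k, marker in kind_shape.items():
    nodes = [p for p in G if kind[p] == k]
    nx.draw_networkx_nodes(G, pos, nodelist=nodes, node_shape=marker,
                           node_color=[group_color[group[p]] for p in nodes],
                           node_size=[300 + 400 * G.in_degree(p) for p in nodes])
nx.draw_networkx_edges(G, pos, arrows=True)
nx.draw_networkx_labels(G, pos, font_size=8)
plt.ylabel("publication year")
plt.show()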

What next?

Two things. Improve the ideas, and add to them. Then: 1) find programmers and convince them that the project is cool and that they should invest their time in it; 2) find other people who have more prestige and, hopefully, access to funding that can be used to hire programmers to turn the ideas into reality.