Sunday, 12 April 2009 - 01:59 UTC
Barry Hudson
Few scientists could honestly claim never to have checked the impact factor (IF) of a journal before submitting an article, even while declaring the IF a flawed system. A recent article in the BMJ highlights another aspect of the IF: differences in the impact factors of journals publishing industry-funded versus government-funded research. Essentially, publication in “prestigious journals” was found to be associated with industry funding in studies assessing the effects of influenza vaccines. In other words, industry-funded research ended up in journals with a higher impact factor.
However, ask any scientist how this metric is calculated and how it is regulated, and you will be met with largely blank stares. So what is the IF and what does it really mean?
The idea of the IF was first proposed in 1955 by Eugene Garfield as a quantitative means of assessing the impact of a particular article on the scientific community as a whole; in other words, the IF was to measure the popularity of a paper. This system, however, was later adapted into a means of scoring a scientific journal as a whole. The IF for a journal (taking 2007 as an example) is calculated as:
IF (2007) = (citations in 2007 to articles published in 2005-2006) / (citable items published in 2005-2006)
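To make the arithmetic concrete, here is a minimal sketch in Python using entirely invented citation and article counts (not real data for any journal):

```python
# Hypothetical impact factor calculation for 2007.
# Both counts below are invented for illustration only.

citations_2007_to_2005_2006 = 10_500  # citations received in 2007 by items published in 2005-2006
citable_items_2005_2006 = 400         # citable items (articles, reviews) published in 2005-2006

impact_factor_2007 = citations_2007_to_2005_2006 / citable_items_2005_2006
print(f"2007 impact factor: {impact_factor_2007:.3f}")  # prints 26.250
```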
On this basis, here are the top 10 journals for 2007:
1 CA-A Cancer Journal for Clinicians 69.026
2 New England Journal of Medicine 52.589
3 Annual Review of Immunology 47.981
4 Reviews of Modern Physics 38.403
5 Nature Reviews Molecular Cell Biology 31.921
6 Annual Review of Biochemistry 31.190
7 Cell 29.887
8 Physiological Reviews 29.600
9 Nature Reviews Cancer 29.190
10 Nature 28.751
A quick look at this list shows that seven of the top ten journals publish only review articles. How, then, can review articles with no original scientific content have a higher impact than original research articles in Nature, Science (14th on the list), PNAS and JBC (neither of which makes the top 100)?
This alone suggests there are limitations to the IF. It can be deduced that a review journal publishing a small number of articles per issue is more likely to achieve a high citation rate than a journal publishing numerous primary research articles on diverse topics.
There are numerous other criticisms of the IF system including:
• Differences in citation rates between scientific fields
• Inclusion of letters and commentaries in the citation count, but not in the count of citable items
• The bias of self-citations by authors
• The timing of publication within the year (articles published early in the year have longer to accumulate citations within the two-year window)
• Skewed citation data from “landmark” papers (it is estimated that ~20% of the articles in a journal may account for ~80% of its citations); a toy example of this skew follows the list below
• Short time-frame of calculating the citation level of publications
• Citations of flawed studies
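To see how a handful of “landmark” papers can dominate a journal-level average, here is a toy example with entirely invented citation counts:

```python
from statistics import mean, median

# Invented citation counts for 10 articles in a hypothetical journal:
# two "landmark" papers followed by eight ordinary ones.
citations = [150, 90, 12, 9, 7, 5, 4, 3, 2, 1]

landmark_share = sum(citations[:2]) / sum(citations)
print(f"Mean citations per article:   {mean(citations):.1f}")    # pulled up by the two landmark papers
print(f"Median citations per article: {median(citations):.1f}")  # what a typical article receives
print(f"Top 2 articles account for {landmark_share:.0%} of all citations")
```

The mean (which is effectively what the IF reports) sits far above what a typical article in the journal actually achieves.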
As this system is in some cases used to assess individuals for funding and promotion, and to assess research institutes as a whole, what alternatives are out there? Other models of assessment include the Hirsch h-index (an assessment of the individual scientist), the SCImago Journal Rank, the Prestige Factor and PageRank. In the modern information-driven age, the Google PageRank system is an interesting approach because it evaluates not only the number of incoming links (popularity) but also the quality of the referring sites (prestige). Bollen and colleagues (Scientometrics, 2006) demonstrated that PageRank generates a very different “top 10” from the IF list. Taking this a step further, they combined the impact factor with a weighted PageRank into a single score (the Y-factor) and came up with another “top ten” journal list (a rough sketch of such a combined score follows the list below):
1. Nature
2. Science
3. New Engl J Med
4. Cell
5. PNAS
6. J Biol Chem
7. JAMA
8. Lancet
9. Nat Genet
10. Phys Rev Lett
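As a rough sketch of how such a combined score might be computed (my own illustration with an invented three-journal citation matrix, not Bollen and colleagues’ actual data or implementation), a weighted PageRank can be multiplied by the impact factor:

```python
import numpy as np

# Toy journal-to-journal citation matrix (entirely invented):
# C[i, j] = citations from journal j to journal i.
journals = ["Journal A", "Journal B", "Journal C"]
C = np.array([
    [0.0, 30.0, 20.0],
    [10.0, 0.0, 40.0],
    [5.0, 15.0, 0.0],
])

# Hypothetical impact factors for the same three journals.
impact_factor = np.array([30.0, 10.0, 5.0])

# Normalise each column so a journal's outgoing citations act as link weights.
M = C / C.sum(axis=0)

# Weighted PageRank by power iteration with the usual 0.85 damping factor.
damping, n = 0.85, len(journals)
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * M @ rank

# A Y-factor-style score: popularity (impact factor) times prestige (PageRank).
y_score = impact_factor * rank
for name, score in sorted(zip(journals, y_score), key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```

The point of the sketch is only to show the two ingredients being combined; the real calculation runs over the full journal citation network.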
This combined approach seems to produce the list closest to the perceived top scientific journals. Although devising an ideal system to measure the quality of an individual scientist’s work is difficult, combining the IF and PageRank seems a reasonable approach. However, one major flaw of the IF system is that the IF data are generated and owned by a private company (Thomson Scientific). In the same way that genome databases have been made freely available to all, a similarly open approach is needed for assessing the status of publications, journals and individual scientists. Perhaps it is up to scientists themselves to solve this problem by forming an international committee and a set of standards. Although Dr Garfield himself (Chairman Emeritus, Thomson Scientific) suggests of the IF that “there is nothing better and it has the advantage of already being in existence”, scientists should be aware not only that there are other measures of quality, but also of the limits of the impact factor.