### Impact Factors and Citations in Mathematics

30Aug10

All last week I was occupied with a report on the achievements and future directions of the laboratory I am currently heading. Naturally, the main issue is publications, and among the performance indicators sought was the Cumulative Impact Factor (CIF), obtained by summing the impact factors of the journals in which the lab/institute publishes. The impact factor $IF_A(T)$ in year $T$ measures how many citations a particular journal $A$ receives within a two-year window. Its precise formula is as follows:

$IF_A(T) = \cfrac{C_A(T-1, T-2)}{P_A(T-1,T-2)}\quad ,$

where $C_A(T-1,T-2)$ is the number of citations received in year $T$ by articles published in $A$ in the years $T-1$ and $T-2$, while $P_A(T-1,T-2)$ is the number of articles published in $A$ in those two years. The CIF then gives some indication of the "quality" of the lab's productivity. Unfortunately, the IF itself is a rather crude measure whose real significance is not very clear, and it is probably unfair when used to compare across disciplines.
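As a quick sketch of the formula above (the journal counts here are entirely hypothetical), the impact factor is just a ratio:

```python
def impact_factor(citations_prev_two_years, papers_prev_two_years):
    """Impact factor in year T: citations received in T by items published
    in T-1 and T-2, divided by the number of items published in T-1 and T-2."""
    if papers_prev_two_years == 0:
        raise ValueError("journal published no articles in the window")
    return citations_prev_two_years / papers_prev_two_years

# Hypothetical journal: 180 citations to the 240 articles it published
# over the two preceding years.
print(impact_factor(180, 240))  # 0.75
```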

Just to show the great variability between disciplines, I list below the highest impact factor for a few different disciplines for comparison:

From just the above (random) listing, one can see that mathematics appears at the bottom of the list, so it is no wonder that mathematicians here are screaming when they are directly compared with other disciplines. The complaint has of course been voiced elsewhere too: a report was in fact written by Robert Adler, John Ewing (chair) and Peter Taylor for the International Mathematical Union warning of the misuse of citation statistics. One of the key points is that using a (single) number to replace the subjective peer review of research/researchers merely hides the subjectivity in its interpretation (more here for a summary). Why should one number (faithfully) represent the multidimensional character of research? At best, citation indices (which include impact factors) are one-dimensional projections onto properties that are not even clear in the first place. Take the simple example given on pages 10-12 of the report. Suppose journal $A$ has impact factor $\bar{a}$ greater than journal $B$'s impact factor $\bar{b}$, i.e. $\bar{a} > \bar{b}$. The statement that a paper in $A$ will be cited more than a paper in $B$ can be wrong more than 50% of the time (compute $\sum_{a<b} p_A(a)\, p_B(b)$, the probability that a randomly chosen paper from $A$ receives fewer citations than one from $B$, where $p_A, p_B$ are the two journals' citation distributions), which renders the impact factor useless for ranking individual papers. A more appropriate measure is to follow the actual citation count that each paper has.
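To see how this can happen, here is a small Python sketch with hypothetical citation counts for two journals. Journal $A$'s impact factor (its mean citations per paper) is higher only because of one highly cited outlier, yet a randomly chosen paper from $A$ is cited less than one from $B$ most of the time:

```python
from itertools import product

def prob_fewer_citations(cites_a, cites_b):
    """The sum over pairs with a < b: the probability that a randomly chosen
    paper from journal A has strictly fewer citations than one from B."""
    losses = sum(1 for a, b in product(cites_a, cites_b) if a < b)
    return losses / (len(cites_a) * len(cites_b))

# Hypothetical skewed counts: A's mean (~4.9) exceeds B's (~2.6) because
# of a single paper with 30 citations, yet a random A paper usually loses.
cites_a = [0, 0, 0, 1, 1, 2, 30]
cites_b = [1, 2, 2, 3, 3, 3, 4]
print(prob_fewer_citations(cites_a, cites_b))  # 37/49, about 0.755
```

So the claim "a paper in $A$ is cited more than a paper in $B$" fails roughly three times out of four here, despite $\bar{a} > \bar{b}$.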

Some would like to suggest rescaling the highest impact factor of mathematics journals to that of other disciplines, but this would not really be fair either; certainly other disciplines have properties that mathematics does not (and vice versa, of course). Even mathematically, one knows that a direct comparison cannot be made, since the distributions of citation data for the disciplines may be altogether different. One interesting result is due to Henk F. Moed in his paper "Citation Analysis of Scientific Journals and Journal Impact Measures", Current Science 89 (12) (2005) 1990-1996. Observing that review journals are cited more often than ordinary journals, Moed suggested incorporating this difference within a discipline and calculating what he calls a normalized impact measure:

$\cfrac{n_r c_r + n_a c_a}{n_r \bar{c}_r + n_a \bar{c}_a}$

where $n_r, n_a$ are respectively the number of review articles and the number of ordinary articles in the journal, $c_r, c_a$ are their corresponding actual citations per document, and $\bar{c}_r, \bar{c}_a$ are the citations per document for each article type across the whole discipline to which the journal belongs. Here is a plot from the paper of the normalized impact measure versus the JCR impact factor for the two differing disciplines of mathematics and biochemistry & microbiology.

Observe the similar ranges of values of the normalized impact measure for both disciplines. I have yet to understand how this actually works, but such comparability seems encouraging. Moed apparently has a book entitled "Citation Analysis in Research Evaluation" (Springer, 2010) which may be worth getting.
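As a small numerical sketch of Moed's normalized measure (all figures below are hypothetical, chosen only to illustrate the ratio):

```python
def normalized_impact(n_r, c_r, n_a, c_a, cbar_r, cbar_a):
    """Moed's normalized impact: the journal's actual citations (review plus
    ordinary articles) divided by the citations expected if each article type
    were cited at the discipline-wide average rate."""
    actual = n_r * c_r + n_a * c_a
    expected = n_r * cbar_r + n_a * cbar_a
    return actual / expected

# Hypothetical journal: 10 reviews at 12 citations/doc and 90 ordinary
# articles at 3, in a discipline averaging 8 per review and 2 per article.
print(normalized_impact(10, 12.0, 90, 3.0, 8.0, 2.0))  # 1.5
```

A value above 1 means the journal outperforms its discipline's averages once the review/article mix is accounted for, which is what makes the measure comparable across disciplines with very different citation cultures.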

Perhaps the other objection that is often raised, and worth going into, is the source of the citation data. It is well known that using SCI journals alone misses other forms of research output such as books, chapters in books, and policy reports; this point is often raised by social scientists. In a different twist, it has been pointed out that it is rather surprising how heavily the majority of researchers and research evaluators rely on the services of a commercial company from the United States and on the journals from there. The effect is glaring enough that Moed, in his report "Bibliometric Rankings of World Universities", showed that US universities are highly overrepresented in the top ranks of world universities based on citation impact. Many have criticised this bias towards US journals and US universities in the rankings (see e.g. Kaltenborn and Kuhn). Charlton & Andras, for example, in their article "Evaluating Universities Using Simple Scientometric Research Output Metrics: Total Citation Counts per University for a Retrospective Seven Year Rolling Sample", Science & Public Policy 34 (8) (2007) 555-563, opted to include both ISI (leaning towards the US) and Scopus (leaning towards Europe) among their database sources for better research evaluation. In this regard, it is perhaps worth noting that our local universities have decided to go mostly with ISI in research evaluation, very much in the drive for impact factor. While one can see the short-term benefits of this, one should be aware of the dangers lurking, and note that this runs counter to the multi-faceted assessment suggested by many in the literature.
