For those with journals indexed in Clarivate Analytics’ Web of Science, the June/July period sees the release of the Journal Citation Reports, usually sparking a flurry of debate on the validity and usage of citation metrics such as the Impact Factor. Indeed, the Impact Factor is often discussed to the exclusion of other metrics—yet it can be very useful to understand the alternative measures by which journals can be assessed, and to use these to inform a more rounded picture of journal progress.
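The Impact Factor at the centre of that debate is, mechanically, just a ratio: citations received in one year to a journal's content from the previous two years, divided by the number of ‘citable items’ (typically articles and reviews) published in those two years. A minimal sketch, using hypothetical figures:

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year Impact Factor: citations received in the measurement year
    to items the journal published in the previous two years, divided by
    the number of 'citable items' published in those same two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 400 citations received this year to its
# 2021-2022 content, with 160 citable items published across 2021-2022.
print(impact_factor(400, 160))  # 2.5
```

Note that everything contentious about the metric lives in the inputs: which citations are counted (the data source) and which items are deemed ‘citable’ (the document-type controls discussed below).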
The core principle of citation analysis is that when an academic cites another article, they are asserting that the cited article has had an impact on their research. The nature of that impact (positive, negative, or neutral) is unstated, which is a key reason why citation metrics should not be treated as synonymous with ‘quality metrics’. Nevertheless, the analysis of citations does provide a measure of how engaged the academic community is with a piece of research, once certain controls have been accounted for.
There are several variables that can influence the rate of citation for an article, each of which must be considered if citation metrics are influencing a decision-making process. These include:
- Discipline. Some disciplines, particularly in the social sciences and humanities, tend to cite older content and hence have a lower proportion of citations falling within the ‘citation window’ of a given metric. They may also cite or publish more books than journals, and hence have generally lower metrics than the harder scientific disciplines. Citation metrics should not be compared across disciplines unless this is accounted for (as, for example, the SNIP metric does).
- Document type. Review papers tend to attract the most citations; editorials and meeting abstracts, the fewest. In between fall items such as case studies, which are generally counted as original research but are of more use in practice than in ongoing research, and are therefore cited relatively little. Some metrics (such as the Impact Factor) control for document type, but this attracts controversy, as the nature of ‘original research’ varies widely across disciplines and titles. Where metrics do not control for document type, though, titles publishing a high proportion of editorial-style content are likely to suffer.
- Age of research cited. Older articles have had more time to accumulate citations. If using a metric that measures ‘total citation counts’, keep in mind that the metric will be skewed towards older papers, or towards academics who have been publishing for longer.
- The data source. There are many sources of citation information (e.g., Web of Science, Scopus, Google Scholar), and the citation counts for a single article are likely to be highest in the largest database (Google Scholar). Most citation metrics are tied to a single source of citation data, but some (such as the H-Index) can be calculated from any dataset and should therefore only be presented with the data source declared.
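The source-dependence of the H-Index is easy to see once the calculation is written out: it takes nothing but a list of per-paper citation counts, so the result is only as meaningful as the database those counts came from. A sketch of the standard definition (the largest h such that h papers each have at least h citations), with hypothetical counts:

```python
def h_index(citation_counts: list[int]) -> int:
    """Largest h such that h papers each have at least h citations.
    The input is just a list of per-paper citation counts, so the
    result depends entirely on the data source used to compile it."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # still have 'rank' papers with >= 'rank' citations
        else:
            break
    return h

# The same hypothetical author measured against two data sources:
print(h_index([10, 8, 5, 4, 3]))        # 4 (smaller database)
print(h_index([15, 12, 9, 7, 6, 5]))    # 5 (larger database)
```

This is why the same author can legitimately report different H-Indexes from Web of Science, Scopus, and Google Scholar, and why the metric should always be presented alongside its data source.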
These factors have led to the development of a wide range of journal-level citation metrics, each of which controls for different aspects of citation behavior. Most well-known citation metrics are calculated at the journal level—however, criticism of the way in which journal citation metrics are used to rank contributing articles, authors, or institutes has driven the development of an increasing number of article-level metrics, and the development of metrics based on factors other than citations. Indeed, the San Francisco Declaration on Research Assessment (DORA) of 2012 specifically challenged the use of the Impact Factor and other journal metrics in decision-making, focusing on three main themes:
- the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
- the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
- the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).
The resulting proliferation of alternative metrics (also known as altmetrics) gives editors an increased scope to understand the ways in which academics interact with their content, whether those interactions are through citation, usage, or social media. What is common to all these sources is that they are from outside of the traditional academic journal environment in which research may be mentioned. For further information on what altmetrics are, you may be interested in this beginner’s guide.
In a scholarly world that sometimes seems obsessed with rankings, journal citation metrics remain dominant, yet the proliferation of alternative metrics speaks to a changing environment, with more emphasis given to new measures of academic engagement and a focus on articles decoupled from their journal. Any one of these metrics can be a valuable, if imperfect, tool, but only if used to measure the right things in the right way. It is essential that anyone using metrics to make important decisions (whether as a publisher or a society, a funder or an editor) really understands their shortcomings as well as their strengths. By understanding all of the different ways in which ‘impact’ may be measured, we can gain a clearer understanding of how academics interact with academic research, and can therefore work to ensure that it serves the academic community rather than merely serving a calculation.
Take a look at our quick guide to article metrics for more detail on each metric.