Monitoring Journal Performance

When you analyze your journal's success, you're bound to look at metrics. But there are more metrics to choose from than ever before. Do you look at:

  • Citation metrics, the classic Impact Factor rating?
  • Usage statistics, how users interact with articles online?
  • Social networking statistics, engagement with articles on social media?
  • Altmetrics, a whole range of other ways to gauge the attention an article receives?

Here's the inconvenient truth about journal metrics: There just isn't one simple, go-to metric that gives an unambiguous rating of the quality of a journal. Some important factors you’ll want to consider when evaluating your journal’s performance are:

Measuring Impact: Read our Guide to Impact Factor and find out how to get indexed.
Alternative Metrics: Visit our overview of Alternative Metrics and find out how Wiley helps authors monitor the Altmetric score of their article.
Measuring journal usage: Find out how Wiley uses usage reporting standards to effectively report on journal readership.
Avoiding Scams: Find out how to identify Counterfeit Journal Metrics services.
Effectively Interpreting Data: Read our guide to making the most of your reports.

 

Guide to the Impact Factor

Citation metrics remain the most prevalent method of analyzing a journal's success. They're based on the assumption that when an article is cited by another academic, it has had an impact on that academic's research. Citation counts are often used to estimate a journal's influence on its subject area—and, by proxy, its perceived quality.

But there are many reasons why an academic might choose to cite another person's work. Those reasons don't always reflect the 'quality' of the cited work. Nevertheless, citations provide a way to measure the extent to which the published academic community has engaged with a given piece of research.

 

How to Get Citation Values

Citation values can be obtained from a number of services, including Google Scholar, Microsoft Academic Search, CrossRef, PubMed Central, and Altmetric. However, the majority of journal analysis is based on multidisciplinary indexing databases, such as Web of Science and Scopus. Unlike Google Scholar and other automatically compiled databases, Web of Science and Scopus only index content after it has been reviewed for academic quality. So, we know that a citation in one of these databases is derived from academic material.

Citations are counted in a database only if both the cited and the citing article are indexed. This means that citation scores are likely to be higher within larger databases. For this reason, the same article is likely to have a larger citation count in Google Scholar than in either Scopus or Web of Science—not because Scopus or Web of Science has 'missed' citations, but because those databases do not index all the citing content.
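If you want to spot-check a citation value yourself, one of the services mentioned above, CrossRef, exposes a free REST API. The sketch below is a minimal Python example, assuming the public api.crossref.org endpoint and its is-referenced-by-count field; it is illustrative rather than a recommended workflow.

```python
import json
import urllib.request

def crossref_citation_count(doi: str) -> int:
    """Return the number of citations Crossref has recorded for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    # Crossref can only count citations from content it indexes itself,
    # so this figure will differ from Web of Science or Scopus counts.
    return record["message"]["is-referenced-by-count"]

# Watson & Crick (1953) as a demonstration; substitute a DOI from your journal.
print(crossref_citation_count("10.1038/171737a0"))
```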

 

How to Boost Impact

Network wherever you can! Personal bonds can play a significant role in journal awareness as well as attracting submissions, readers, and citations.
Ensure that your review process works efficiently. The shorter your turnaround times, the more time an article will have to make an impact.
Get your mix of articles right. Well-targeted review papers can be very useful in ongoing research and tend to be heavily cited.
Keep the balance. Balance your need to drive metrics with the needs of the academic community. Case reports and short communications tend to attract few citations, but do they drive practical interest and readership in your community?
Collect papers into thematic sets. Well-targeted sets, either as special issues or symposia in standard journal issues, will create interest.
Publish meeting abstracts. If you publish conference supplements, consider whether they are full proceedings papers or abstracts: proceedings papers count as full articles in your metrics and often attract very few citations, whereas meeting abstracts are usually excluded from the denominator.
Embrace search. Encourage your authors to read Wiley's Writing for Search Engine Optimization. This provides tips on how to make their articles discoverable online.

 

The Citation Window: How it Affects Your Articles

You won't find many measures that simply compare papers by the total number of citations they have received. If you do find one, run away. For obvious reasons, any measure calculated from total citations will be heavily biased towards older papers: an older article gains an advantage simply because it has been available for longer.

To combat this, most metrics set a 'citation window': the period of time after an article's publication during which citations will be included in the calculation for that metric. For example, the Impact Factor has a 2-year citation window. For an article published in 2013, only citations received in the 2 years after publication (2014 and 2015) will count towards an Impact Factor. This means that the age of an article is to some extent controlled. But the metric works in calendar years (or cover years), so a paper published in January 2013 has an advantage over one published in December 2013.
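To make the windowing logic concrete, here is a minimal Python sketch (illustrative only, not any indexer's actual code) of which citations count under a 2-year window:

```python
def in_citation_window(pub_year: int, citing_year: int, window: int = 2) -> bool:
    """True if a citation made in citing_year counts towards a metric
    with the given window for an article published in pub_year."""
    # Windows are counted in whole calendar years, which is why a January
    # paper gets more effective exposure than a December paper.
    return pub_year < citing_year <= pub_year + window

# An article published in 2013, under the Impact Factor's 2-year window:
print(in_citation_window(2013, 2014))  # True
print(in_citation_window(2013, 2015))  # True
print(in_citation_window(2013, 2016))  # False: outside the window
```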

 

Evaluating Impact Factor

Most journal citation metrics are a measure of the average number of citations per paper in a given set of articles. The 2015 Journal Impact Factor, for example, measures the average number of citations received in 2015 to papers published in the previous two years (2013 and 2014).
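Written out with made-up numbers, the arithmetic behind a 2015 Impact Factor looks like this (illustrative figures only, not drawn from any real journal):

```python
# Citations received in 2015 to papers published in the two prior years:
citations_2015_to_2013_papers = 320
citations_2015_to_2014_papers = 280
# Only 'citable items' (e.g., articles and reviews) enter the denominator;
# 'non-substantive' items such as editorials are excluded.
citable_items_2013 = 150
citable_items_2014 = 170

impact_factor_2015 = (
    (citations_2015_to_2013_papers + citations_2015_to_2014_papers)
    / (citable_items_2013 + citable_items_2014)
)
print(round(impact_factor_2015, 3))  # 1.875
```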

This aggregation of data means it is not necessarily representative of individual articles within the journal—one may be very highly cited while others have not been cited at all. When using citation metrics to compare research, there are a number of factors that you need to consider, including:

Subject area: Different disciplines (and sub-disciplines) have different citation behaviors. The Social Sciences and Humanities tend to cite more slowly and cite a larger proportion of books (as opposed to journals) compared with scientific disciplines. Metrics should not be compared across subjects unless these factors are accounted for.
Type of research: Review papers tend to attract the most citations. Case studies are often invaluable for teaching or practical work but tend to be less well cited in academic research. This doesn't mean that they're poor quality. Usually, 'non-substantive' papers, such as meeting abstracts and editorials, are excluded from the denominator of citation metrics.
Age of research cited: Older articles will have more citations. If using a metric that measures 'total citation counts', keep in mind that the metric will be skewed towards older papers or towards more established academics.
Data source: There are many sources of citation information (e.g., Web of Science, Scopus, Google Scholar), and the citation scores for a single article are likely to be highest in the largest database (Google Scholar). Most citation metrics are tied to a single database, but not all are; where a metric is not, it is important to note the data source.

 

How to Get Your Journal Indexed

In most cases, the journal's publisher (where applicable) will arrange the application to any indexing service. They also ensure that the correct communications, permissions, and systems are in place in the event of acceptance. Your Wiley editorial representative will provide feedback on your journal, so you can estimate the likelihood of acceptance.

A range of factors is used when deciding whether to index a journal, and it is important that these criteria are met before submitting a journal for coverage. Examples of criteria used by Thomson Reuters include:

  • Timeliness of publication: Late or irregular publication can indicate poor academic reception and the possibility that the journal will falter in the near future.
  • Quality of peer review: A journal must have a robust peer review system in order to maintain research quality.
  • Distinctiveness of subject area: A journal must have distinctive aims and scope. Companies like Thomson Reuters do not want to index titles that merely duplicate existing coverage, so you have to show how your title will enrich the database.
  • Internationality: Unless a journal is regional, you should try to reflect geographical diversity of the subject area in your authors and editorial board.
  • Number of citations: Journals are often rejected because of low citation levels in their category. This can happen when a journal's main competitors are not indexed, leaving no record of the articles that cite it.

 

Overview of Alternative Metrics  

In recent years, metrics that capture different aspects of publication behavior have emerged. Examples include:

  • Mendeley readers
  • Social Media—Tweets, Facebook 'likes', blog mentions
  • Bookmarks—online bookmarks imply that people are going back to consult the article again and again
  • Usage metrics—how users interact with articles online

However, as with citation metrics, each unit of measurement has its own complexities. No one metric can be taken as a sign of 'quality'.

 

Evaluating the Usefulness of Alternative Metrics

Advantages:

  • They give us an insight into public impact, rather than just scholarly attention.
  • They're quicker to accumulate—sometimes they even predict future citations.
  • They can be used to track the attention for non-traditional research outputs.

Disadvantages:

  • They can't tell us anything about the quality of the research.
  • It's almost impossible to keep track of everything everyone is doing online, so the picture is almost always incomplete.

 

The truth is, you need both kinds of metrics to get the full picture of research impact. The majority of research attracts no attention online—and that's OK. Like more than 80% of the research published, it's simply not being discussed on the sites that altmetrics aggregators track.

More information on the theory behind alternative metrics is available at altmetrics.org.

 

Altmetric

The Altmetric service is available to all journals on Wiley Online Library and allows you to:

  • Freely track online activity and discussions about individual scholarly papers from social media sources (including Twitter, Facebook, and blogs), the mainstream media, and online reference managers such as Mendeley and CiteULike.
  • See the attention that articles receive in real time—the score is open for everyone to see, follow, and understand.

For more information, visit our Altmetric page on Wiley Online Library.
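If you want to retrieve these scores programmatically, Altmetric also offers a public API. The sketch below is illustrative only and assumes the free v1 DOI endpoint; check Altmetric's current documentation and terms before relying on it.

```python
import json
import urllib.error
import urllib.request

def altmetric_score(doi: str):
    """Return the Altmetric attention score for a DOI, or None if
    Altmetric has no record of any activity for that article."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("score")
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no online attention tracked for this DOI
            return None
        raise

# Substitute a DOI from your own journal.
print(altmetric_score("10.1038/171737a0"))
```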

PLEASE NOTE: "Altmetrics" are not the same thing as "Altmetric." 

Altmetrics (aka alternative metrics) is a type of data that helps people understand how scholarship is discussed online.
Altmetric (aka Altmetric.com) is one of several services that report on altmetrics.

 

Measuring Journal Usage  

Web usage metrics have become increasingly popular, and the potential for immediate data on an article has been invaluable to journal editors. The problem is ensuring everyone uses the same system: just as different citation databases report different citation counts, different usage systems report different usage figures.

For this reason, the international organization COUNTER (Counting Online Usage of Networked Electronic Resources) has established a range of standards and codes of practice for organizations reporting usage data. Like many publishers, Wiley regularly captures usage data for all traffic on Wiley Online Library, following the rules specified by COUNTER. Usage data are only delivered in aggregate form, respecting end-user privacy.

As with citation metrics, you need to be aware of the factors that can influence the rate of download:

Geographical Disparity
Online-only metrics can be affected by the variation in internet access across the world. Journals that target geographical locations with poor levels of internet access can therefore report "lower" usage.
Data Sources
Some journals are available on multiple platforms, not all of which are COUNTER-compliant.
Self-Usage and Promotional Usage
Self-usage and promotional usage can inflate usage metrics, much as self-citation can distort citation metrics.
Controlling for Robots and Web-Crawlers
This is a real challenge to usage systems. How do you differentiate between genuine academics accessing research, and usage by automated robots and data-miners? Robust systems must be in place to exclude them from usage reports.
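To make the robot problem concrete, here is a deliberately simplified sketch. It assumes a hypothetical in-house log of (user agent, DOI) pairs and a toy signature list; a genuinely COUNTER-compliant system applies COUNTER's full code of practice, including its maintained registry of robots and crawlers.

```python
# Toy robot signatures; COUNTER maintains a far more complete registry.
ROBOT_SIGNATURES = ("bot", "crawler", "spider")

def count_genuine_downloads(log_entries):
    """Tally downloads per article, skipping obvious automated traffic."""
    counts = {}
    for user_agent, doi in log_entries:
        if any(sig in user_agent.lower() for sig in ROBOT_SIGNATURES):
            continue  # exclude robots and web-crawlers from the report
        counts[doi] = counts.get(doi, 0) + 1
    return counts

# Hypothetical log entries (user agent, article DOI):
log = [
    ("Mozilla/5.0 (Windows NT 10.0)", "10.1002/example.1"),
    ("Googlebot/2.1 (+http://www.google.com/bot.html)", "10.1002/example.1"),
    ("Mozilla/5.0 (Macintosh; Intel Mac OS X)", "10.1002/example.2"),
]
print(count_genuine_downloads(log))
# {'10.1002/example.1': 1, '10.1002/example.2': 1}
```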

 

Find out more about usage statistics on the COUNTER website.

 

Counterfeit Journal Metric Services  

Unfortunately, there are fraudulent services that offer to index journals for a fee. If you receive an invitation from such a service, here are a few questions you should ask.

What is their reputation and history?
A web search will often pull up questions raised by other editors or analysts regarding a particular metric source. Failing that, explore their website. Do they give an office address or a contact for data corrections?
Are they asking for an upfront payment?
Very few reputable indexing services will ask you for a fee in return for indexing your journal and providing metrics. Indexing quality content enhances the value of their database, and publicizing the metric on your website will direct traffic back to them. So, asking for a fee is a gigantic red flag.
Do they outline the mathematical formula for their metric?
So an indexing service calls you up and tells you that your journal’s Impacting Factoid is 3.662. Thanks for that. But without knowing the calculation of the metric (or what that means in the context of other indexed journals), it is a useless number. If they do tell you the calculation, does it make sense?
Do they disclose their data source?
Accurate citation metrics rely on a robust citation database that is carefully curated. Cutting corners here results in inaccurate or duplicate indexing of articles. Never trust a metric if you can't identify (and validate) the underlying dataset.

If they cite their own data source, how has it been compiled, and what is its scope?
A citation database can only count citations to and from the papers indexed in it, so it is important to know:

  • How many journals are indexed?
  • What are their indexing criteria?
  • Can you search the database (at the article level)—i.e., can you validate their data?

For those journals affiliated with a publishing house, such as Wiley, your first step should be to check with your editorial team. Most publishers will have procedures in place to handle data feeds to the many abstracting and indexing websites and will have experience identifying fraudulent services.
