
Find Journal Information & Scholarly Metrics: Introduction to Impact Factors

This guide provides information on various tools that discuss the scholarly reputation of journals, specific journal articles, and in some instances, scholars themselves.

Journal Impact Factors Introduction

What are Impact Factors?

Definition

The impact factor is a citation measure produced by Thomson Scientific's ISI Web of Knowledge database. Impact factors are published annually in ISI's Journal Citation Reports database and are available only for journals indexed in ISI databases.

One journal's impact factor on its own doesn't mean much. Instead, it's important to look at impact factors of multiple journals in the same subject area. This way, one can determine if the impact factor of the journal of interest is high or low compared to other journals in a subject area.

Impact Factor Debate

Impact factors have been much debated in the literature in terms of their value for evaluating research quality. The general consensus is that impact factors have been misunderstood and abused by many institutions that place too much value on something that is not entirely scientific or reliable. Please refer to the 'Factors that Influence Impact Factors' and 'Additional Readings' sections to find out more.

How Impact Factors are Calculated

A journal's impact factor for 2007 would be calculated by taking the number of citations in 2007 to articles that were published in 2006 and 2005 and dividing that number by the total number of articles published in that same journal in 2006 and 2005. Please see the example below.

Example:

The specific calculations for Nursing Research's 2007 impact factor are displayed below.

Articles published in 2006 that were cited in 2007: 98
Articles published in 2005 that were cited in 2007: 103
98+103=201 

Total Number of articles published in 2006: 67
Total number of articles published in 2005: 48
67+48=115

201 (articles published in 2006 and 2005 that were cited in 2007) ÷ 115 (total number of articles published in 2006 and 2005) = 1.748

This 2007 impact factor of 1.748 for Nursing Research means that, on average, articles published in the journal during the previous two years (2005 and 2006) were each cited about 1.75 times in 2007.
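For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the same calculation using the Nursing Research figures from the example above. The function and variable names are illustrative only, not part of any official tool.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    # Citations this year to the previous two years' articles,
    # divided by the number of articles published in those two years.
    return citations_to_prior_two_years / articles_in_prior_two_years

citations_2007 = 98 + 103       # 2007 citations to articles from 2006 and 2005
articles_published = 67 + 48    # articles published in 2006 and 2005
print(round(impact_factor(citations_2007, articles_published), 3))  # prints 1.748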

Factors that Influence Impact Factors 

Date of Publication

The impact factor is based solely on citation data and only looks at the citation frequency of a journal's articles in their first couple of years after publication. Journals whose articles are cited steadily over a long period (say, 10 years) rather than only immediately lose out under this calculation.

Large vs. Small Journals

Large and small journals are compared on the same scale. Large journals tend to have higher impact factors, which has nothing to do with their quality.

Average Citation

It’s important to remember that the impact factor only looks at an average citation and that a journal may have a few highly cited papers that greatly increase its impact factor, while other papers in that same journal may not be cited at all. Therefore, there is no direct correlation between an individual article’s citation frequency or quality and the journal impact factor.

Review Articles 

Impact factors are calculated using citations not only from research articles but also from review articles (which tend to receive more citations), editorials, letters, meeting abstracts, and notes. Including these publication types gives editors and publishers an opportunity to manipulate the ratio used to calculate the impact factor and artificially inflate it.

Changing / Growing Fields 

Rapidly changing and growing fields (e.g., biochemistry and molecular biology) have much higher immediate citation rates, so journals in those fields will always have higher impact factors than, for instance, nursing journals.

ISI's Indexing / Citation Focus

There is unequal depth of coverage across disciplines. In the health sciences, the Institute for Scientific Information (ISI), the company that publishes impact factors, has focused much of its attention on indexing and citation data from journals in clinical medicine and biomedical research and has given less attention to nursing. Very few nursing journals (around 45) are included in its calculations. This does not mean that the nursing journals ISI excludes are of lesser quality; in fact, ISI gives no explanation for why some journals are included and others are not. In general, ISI focuses more heavily on journal-dependent disciplines in the sciences and provides less coverage of the social sciences and humanities, where books and other publishing formats are still common.

Research vs. Clinical Journals

In some disciplines, such as some areas of clinical medicine, where there is no distinct separation between clinical/practitioner journals and research journals, research journals tend to have higher citation rates. This may also apply to nursing.

 

Compiled by Heidi Schroeder, Michigan State University (used with permission)

Measure Research Impact

Researchers are often asked to demonstrate the impact of their research. Typically this means showing evidence that your work is being used, built on, or responded to by other researchers in your field. The simplest approach is to list each of your publications and count the number of times each work has been cited. Beyond that, many departments and disciplines rely on more complicated approaches to measuring impact.

A typical approach considers the quality of publication venues as well as some measure of how often your work is cited. There are also approaches that try to measure the impact of research beyond the sphere of journals; this alternative approach to metrics is sometimes called “altmetrics.”

The use of any particular measure and the importance of those measures depends on the standards within a given academic discipline and even more specifically within a particular department. You should consult your Departmental Collegial Review Document and other relevant documentation to see if and how your department measures impact.

Citation counts and metrics

The most rudimentary form of measuring impact is to count the citations for each of your publications. Since checking every subsequent article in your field to see if your publication is mentioned isn't a practical or sustainable approach, most scholars rely on an indexing database or service to find citation counts for a given article or author. Web of Science, Scopus, and Google Scholar are the three most widely used general databases. Western Carolina pays for access to Web of Science and Scopus, while Google Scholar is free to the public. There are also smaller databases that cater to specific disciplines; SciFinder, to which Western Carolina has access, is one such database, with a focus limited to journals in biology, chemistry, physics, and medicine.

Web of Science and Scopus have fairly straightforward search engines for tracking the citations of particular articles and authors. Google Scholar works similarly for articles, but for author citation counts it uses a more robust, but also more involved, Scholar Profile that requires a little bit of setup.


There are a few other metrics based on citation counts that are popularly used and are often also calculated by these services. The most popular of these is the h-index, a measure of an author's scholarly output and impact where h represents how many publications an author has that have each been cited at least h times. An author with an h-index of 5 has five publications that have each been cited at least five times, while an h-index of 12 signifies twelve publications each cited at least twelve times. This measure was proposed by J.E. Hirsch in 2005 and has enjoyed widespread popularity.
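As a quick illustration of the definition above, the short Python sketch below computes an h-index from a list of citation counts. The citation counts are invented for illustration; in practice they would come from Web of Science, Scopus, or Google Scholar.

def h_index(citation_counts):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical author with these seven papers has an h-index of 4:
# four papers have each been cited at least four times.
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # prints 4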

There have been other proposed summary measurements of varying complexity. The g-index is a proposed improvement on the h-index, and there have been many other attempts to build on or improve what the h-index does. Anne-Wil Harzing discusses some of these variants and their various strengths and weaknesses on her website. Ultimately, the point of these measures is to demonstrate the impact of the researcher's work, and their usefulness to you will depend largely on your specific department's requirements.
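For comparison with the h-index sketch above, here is a similarly minimal sketch of the g-index as it is usually defined: the largest number g such that the g most-cited papers have received at least g squared citations in total. The citation counts are again invented.

def g_index(citation_counts):
    # Largest g such that the top g papers together have at least g*g citations.
    ranked = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# The same hypothetical author as above: the h-index was 4,
# but the heavily cited first paper pulls the g-index up to 6.
print(g_index([25, 8, 5, 4, 3, 1, 0]))  # prints 6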

Publish or Perish is a software application developed by Anne-Wil Harzing specifically for researchers looking to demonstrate the research impact of their work. It is built on top of the Google Scholar database and provides a way to view and calculate some of these other metrics.


Measuring journal quality

There are several popular methods of measuring journal quality. Although many of these measures can be calculated independently, they are often associated with a particular commercial database or service that indexes journal articles.

CiteScore and Scimago Journal Rank

CiteScore is a number produced from the indexing done by Elsevier's Scopus database. It is calculated as the number of citations received over a four-year period by documents a journal published during those same four years, divided by the total number of citable documents the journal published over that period. This measure (and rankings derived from it) is freely available on the Scopus website.

The Scimago Journal Rank (SJR) indicator draws on the same set of Scopus-indexed articles but uses a process based on Google's PageRank algorithm. Essentially, the entire interconnected network of citations is analyzed so that journals that are cited often impart more prestige when they cite an article than the same citation from a less-cited journal would. Citations in Nature would count more than citations in Bob's Funtime Quarterly, an imaginary and presumably much less prestigious publication. SJR indicators for journals are freely available on the Scimago Journal Rank website.
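To make the prestige-weighting idea concrete, here is a minimal Python sketch of a PageRank-style calculation on a made-up citation network of three journals. The journal names and citation counts are invented, and the real SJR algorithm adds damping, normalization by article counts, and other refinements not shown here.

# citations[i][j] = number of times journal i cites journal j
journals = ["Journal A", "Journal B", "Journal C"]
citations = [
    [0, 10, 2],    # Journal A's outgoing citations
    [5, 0, 1],     # Journal B's outgoing citations
    [20, 30, 0],   # Journal C's outgoing citations
]

# Start every journal with equal prestige, then repeatedly let each
# journal pass its prestige to the journals it cites, in proportion
# to how often it cites them. The scores settle into a stable ranking
# in which citations from prestigious journals count for more.
prestige = [1.0 / len(journals)] * len(journals)
for _ in range(50):
    new_scores = [0.0] * len(journals)
    for i, outgoing in enumerate(citations):
        total_out = sum(outgoing)
        if total_out == 0:
            continue  # a journal that cites nothing passes on no prestige
        for j, count in enumerate(outgoing):
            new_scores[j] += prestige[i] * count / total_out
    total = sum(new_scores)
    prestige = [score / total for score in new_scores]

for name, score in zip(journals, prestige):
    print(f"{name}: {score:.3f}")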

Impact Factor and its variants

In its simplest formulation, the impact factor is calculated as the number of citations a journal receives in a given year to items it published in the previous two years, divided by the total number of citable items it published in those two years. While this formula is platform agnostic, the terms “impact factor” and “journal impact factor” are most closely associated with calculations based on the articles indexed on the Clarivate Web of Science platform and reported in the InCites and Journal Citation Reports products. Both are proprietary products to which Western Carolina does not currently subscribe.

Eigenfactor is a proprietary measure that attempts to account for the difference between a citation in a highly cited journal and one in a less-cited journal. If this sounds a lot like what the SJR indicator does, it's because the two measures share the same basic approach. For a further discussion of the differences between the SJR indicator and Eigenfactor, there is a solid explanation and breakdown on the Society for Scholarly Publishing's blog. The source data used to calculate Eigenfactors again come from the Clarivate platform, but for a time Eigenfactors were made publicly available on Eigenfactor.org. However, values from 2016 onward do not appear to be available there.

Google Scholar takes a different approach to measuring journal quality, adapting the author-focused h-index and applying it to journals. Using the h-index and a number of its variants, Google Scholar ranks journals overall and also offers rankings for particular fields and areas of expertise.


Altmetrics

“Altmetrics” is an expansive term that encompasses approaches to measuring impact that don't map onto the more traditional approach of counting citations in peer-reviewed journals. The approach acknowledges the basic truth that actual scholarly impact isn't limited to this fairly restricted dataset. Often relying on publicly available data from the internet, many altmetrics track mentions and discussion in publications beyond peer-reviewed academic journals, such as newspapers, blogs, and even social media. Some approaches try to quantify other measures of engagement beyond mentions (e.g., the number of times an article is viewed, downloaded, or shared).

Our Research runs two websites that can give you a taste of two approaches to altmetrics. Impactstory is the better known and is notable for its integration with ORCID, while Paperbuzz is a newer project powered by the Crossref API. Each website details the methodology behind the project, but the purpose is the same as that of more well-known and traditional methods: to demonstrate the impact of your research.
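If you want to see the kind of data these services draw on, the following Python sketch queries the public Crossref REST API (which Paperbuzz builds upon) for a single work. The DOI and email address are placeholders to replace with your own; the sketch assumes the requests library is installed, and Crossref asks users to identify themselves (for example via the mailto parameter) as a courtesy.

import requests

doi = "10.1000/example-doi"  # placeholder DOI; substitute one of your own publications
url = f"https://api.crossref.org/works/{doi}"
response = requests.get(url, params={"mailto": "you@example.edu"}, timeout=30)
response.raise_for_status()

work = response.json()["message"]
print(work.get("title", ["(no title)"])[0])
print("Crossref citation count:", work.get("is-referenced-by-count", 0))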

Altmetrics, by nature, is an ever-evolving and malleable approach. But like any of the more traditional measures, its purpose is to help demonstrate and quantify scholarly impact. If it’s useful for showing the reach of your work, it’s worth further investigation.