Impact Factor = Community Measure

We think it measures importance, but it was designed to measure centrality.

Editor’s Note: This is a revised version of a post I wrote in 2018, as I still find it relevant.

The vocabulary around the Impact Factor has been a little puzzling to me, ever since I first encountered it coming into scholarly publishing in the mid-1990s. Why was it called the Science Citation Index, the Impact Factor, the Web of Science? Like most of us, I’d never thought about it that hard. While sensing the import the terms carried in the field, I otherwise went with the flow.

Now, older and more curious about how we got here, I started wondering one day about the word choices. They certainly were meant somewhat literally at the time, conveying some functionality or vision of Eugene Garfield and crew. So, I started pondering three questions:

  1. Why is it a “factor”?
  2. Why is it an “index”?
  3. Why a “web”?

It turns out that exploring the reasons for these carefully selected terms yields some insights about why the Impact Factor was first created, its purpose, its misappropriation by academia, and more.

The Science Citation Index (SCI) was created for the same reason anyone creates an index — as a way of making shortcuts to things. Basically, the SCI was created to aid discovery, indexing citations in the sciences (and social sciences, among others) as a shortcut into the literature.

It started via some inspirational conversations, fueled by associates of Eugene Garfield’s who urged him to think about review journals as a source of insight into what was really driving the evolving literature — to think about a way to index the most important articles via citations from these synthesis journals.

What emerged was a vision of journals as useful hubs of discovery. After some years, the Journal Citation Reports (JCR) were introduced, with Garfield writing in a booklet introducing the JCR:

The more frequently a journal’s articles are cited, the more the world’s scientific community implies that it finds the journal to be a carrier of useful information.

Journals were already useful intellectual hubs. Counting citations was just a new way of providing a quantitative measure of the size and centrality of the hub. He described his vision of a citation index as “an association-of-ideas index.” Through these associations, journals that received more citations were likely to serve as meaningful connectors, making them more useful as a point of departure for someone seeking to discover more relevant information, learn new things, or understand a field and what it was collectively pursuing or discussing.

This all led to the Journal Impact Factor (JIF), called colloquially “the Impact Factor,” dropping a key distinction through colloquialism. The JIF is a score given to each journal: the citations its recent articles received, divided by the number of scholarly articles it published in that recent timeframe. The timeframe was set at the prior two years, to keep the index relatively current and manageable. The calculation generated a score, which could be used to evaluate the relative connectedness of a journal hub.
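The two-year calculation described above can be sketched in a few lines. This is an illustrative reconstruction, not Clarivate’s actual code; the function name and input dictionaries are my own invention.

```python
# Illustrative sketch of the two-year Journal Impact Factor calculation.
# (Hypothetical names; not an official implementation.)
def journal_impact_factor(citations_by_pub_year, citable_items_by_year, year):
    """Citations received in `year` by articles published in the prior
    two years, divided by the citable items published in those years."""
    window = (year - 1, year - 2)
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    items = sum(citable_items_by_year.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Example: 600 citations in 2024 to 2023 articles, 400 to 2022 articles;
# 120 citable items published in 2023, 130 in 2022.
jif = journal_impact_factor({2023: 600, 2022: 400}, {2023: 120, 2022: 130}, 2024)
# (600 + 400) / (120 + 130) = 4.0
```

Note how the denominator counts articles, not citations: a journal that publishes more is not automatically rewarded, which is part of what made the score feel objective.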

Of course, once academics saw a score next to a journal, their natural prestige-seeking senses tingled, and they realized that there was an objective-seeming measure they could use to gain a perceptual advantage. That is, the Journal of Applied McGuffins may be prestigious, we think, but now we know its prestige is 8.675 while the Journal of Practical McGuffins only has a prestige of 3.099. Therefore, my publication in Applied McGuffins is more than twice as important as your publication in Practical McGuffins. Bwa-ha-ha-ha!

This misappropriation of a factoring mechanism for an index, and its transmutation into a talisman with symbolism far beyond its original design, has led to all sorts of unintended consequences. Through this transmutation, the factor in the index has gained outsized status and baked itself into the academic rewards system. Such are the accidents of history. Antagonists of the Impact Factor often point to these accidents as reason to banish it. However, it serves a useful if circumscribed purpose. The challenge, it seems, is to return it to that specific role in a discovery system. The flaw is not with the factor, but with ourselves.

We accept the phrase “Impact Factor” without contemplating the implications of having a word like “factor” in the metric. Like a risk factor, it is one piece of information, as a person’s age or weight might be with regard to their health. And the “impact” relates, at least in concept, not to importance but to discoverability. The factor concerns inbound discovery, not outward prestige. That is, the higher the Impact Factor, the better the journal serves as a connector among relevant outside resources. It is important as a hub, as a hint within a discovery tool about where to start looking for the information many people with expertise are discussing.

Once you wrap your head around this, you may feel the dissonance of two equally valid but contradictory ideas when you hear the phrase “Impact Factor.” Suddenly, it’s not just a blunt tool of prestige, but a subtle tool in a larger discovery concept, with clear ideas of utility emerging from it. That is, you might see a journal with a high Impact Factor not as having more prestige, but as having more utility for researchers in the field: it has more inbound links, so it is more likely to be useful and discovered.

Updating this to the Age of Google, if the Impact Factor were actually deployed actively, it would be very much like surfacing the top search result, which is ranked largely on incoming links (citations, in effect), each weighted by the recursively computed authority of its own sources via the PageRank algorithm and other tricks of the trade. This vision of the Web of Science is precisely why the service received its name.
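The recursive “authority of the sources” idea can be made concrete with a minimal power-iteration sketch in the spirit of PageRank. This is a toy illustration, not Google’s implementation; the node names and parameters are hypothetical, with journals as nodes and citations as links.

```python
# Minimal power-iteration sketch in the spirit of PageRank (illustrative only).
# Nodes might be journals; an edge A -> C means A cites C.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it cites."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:
                # A node that cites nothing spreads its rank evenly.
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

# Journal C is cited by both A and B, so it accrues the most authority;
# A is cited only by C, and B by no one.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
```

The key point of the analogy is that authority flows recursively: a citation from a well-cited hub counts for more than one from the periphery, which is exactly the “hub” logic Garfield’s index gestured at.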

The Science Citation Index utilized a discovery metric based on inbound citations called the Impact Factor, which was part of a vision of interconnected resources called the Web of Science. Fast-forward 40+ years, and most have forgotten the Science Citation Index; the Web of Science is now on the web, so it sounds more like a brand that migrated online (like e-commerce); and the Impact Factor has taken on a life of its own, because prestige remains the coin of the realm and the metric has become its proxy currency.

When you next talk about the Impact Factor, it may help to remember that it was misappropriated by prestige-seeking academics almost from the get-go, that it was meant to be a factor in an indexing and discovery service, and that it was a utilitarian measure of what has recently interested a field. That history explains a lot: why Impact Factors vary by field, why journals rather than articles are what get measured, and why its use as a measure of prestige always seems slightly “off.”

What a tangled Web of Science we weave when citations are what we retrieve.