Impact factories.
Via Crooked Timber, a story in the Chronicle of Higher Education about how the impact factor may be creating problems rather than solving them.
The impact factor is supposed to be a way to measure the importance of a journal, and of the articles published in that journal, in the great body of scientific research. To compute the impact factor for Journal X for a given year, you count the number of citations in that year of articles published in Journal X in the two preceding years. Then, you divide that count by the total number of articles published in Journal X in those two preceding years. What does this give you? A measure of how important other researchers in the field think the articles in Journal X are (since, the thinking goes, you cite articles that are important, and articles that are important get cited). Because we're looking at a ratio, we get a sense of the proportion of important articles in a journal rather than just the raw number of cited articles. So, if in 2005 there are 100 citations of articles published in 2003 and 2004 in PZ's Journal of Squid Canoodling, and 100 citations of articles published in 2003 and 2004 in Deep Thoughts from the Discovery Institute, but PZ J. Sq. Can. published 1000 articles in 2003-2004 while DTDI published only 300 in that time, the impact factor of PZ J. Sq. Can. is 0.1, while the impact factor of DTDI is 0.33.
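To make the arithmetic concrete, here's a minimal sketch in Python of that calculation, using the made-up journals and numbers from the example above:

```python
# A minimal sketch of the impact-factor calculation described above,
# using the made-up journals and numbers from the example.

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Impact factor for a given year: citations (in that year) to articles
    the journal published in the two preceding years, divided by the number
    of articles the journal published in those two years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# 2005 impact factors for the two hypothetical journals:
print(impact_factor(100, 1000))  # PZ's Journal of Squid Canoodling -> 0.1
print(impact_factor(100, 300))   # Deep Thoughts from the Discovery Institute -> ~0.33
```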
As scientists at prestigious research universities know all too well, the impact factor is a way for tenure committees and granting agencies to judge how good your publication record really is. Instead of simply counting your publications, folks can also look at the impact factor of the journals in which you've published. This could help you if you have a small number of publications but they're in widely cited journals. Conversely, it could bite you in the butt if your articles happen to be in journals that hardly get cited at all.
But, as seems to be the case any time you reduce something like the importance of a journal (or of your scholarship) to a number, there are ways the impact factor doesn't tell you all you might want or need to know. For one thing, given that it's often useful to cite review articles, journals that publish lots of review articles get cited more, thus raising their impact factor. This doesn't tell you much about the impact of those journals' articles on original research. Also, remember that the impact factor is calculated from citations of articles published in the preceding two years. It sometimes happens that really important results, for whatever reason, are not recognized (and cited) that quickly. So a three-year-old article that is cited like crazy will do nothing for the impact factor of the journal it's in. And of course, "little" journals that focus on fairly specialized scientific subfields have a much harder time getting high impact factors simply because there are fewer scientists who work in these subfields to cite the articles. (If these journals keep a really tight rein on the number of articles they publish, they could offset their low citation counts. But this isn't necessarily what you want to do in a little subfield that is just taking off -- building the literature seems like a more natural impulse.)
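To see the two-year window at work, here's a toy sketch (with hypothetical numbers, not data from any real journal) of how heavy citation of a three-year-old article does nothing for the current impact factor:

```python
# A toy illustration (hypothetical numbers) of the two-year citation window:
# only citations to articles published in the two preceding years count, so
# a heavily cited three-year-old article contributes nothing.

citations_in_2005 = [
    {"year_published": 2002, "count": 250},  # cited like crazy, but too old
    {"year_published": 2003, "count": 40},
    {"year_published": 2004, "count": 60},
]
articles_published = {2003: 150, 2004: 170}  # hypothetical article counts

window = (2003, 2004)
counted_citations = sum(c["count"] for c in citations_in_2005
                        if c["year_published"] in window)
impact_factor_2005 = counted_citations / sum(articles_published[y] for y in window)

print(counted_citations)   # 100 -- the 250 citations to the 2002 article are ignored
print(impact_factor_2005)  # 100 / 320 = 0.3125
```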
Also (as the "journals" in my example above should suggest), sometimes articles are cited a lot to be made fun of.
The Crooked Timber discussion of the article talks about ways journal editors might try to game the system, and it raises some legitimate worries. Wherever the selection criterion is explicit, people can usually figure out multiple strategies for satisfying it. For example:
The gaming has grown so intense that some journal editors are violating ethical standards to draw more citations to their publications, say scientists. John M. Drake, a postdoctoral researcher at the National Center for Ecological Analysis and Synthesis, at the University of California at Santa Barbara, sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: "I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal's themes."
Because the manuscript had not yet been accepted, the request borders on extortion, Mr. Drake says, even if it weren't meant that way. Authors may feel that they have to comply in order to get their papers published. "That's an abuse of editorial power," he says, "because of the apparent potential for extortion."
Robert P. Freckleton, a research fellow at the University of Oxford who is the journal editor who sent the message to Mr. Drake, says he never intended the request to be read as a requirement. "I'd be upset if people read it that way," he says. "That's kind of a generic line we use. We understand most authors don't actually do that." He changed the wording in the form letter last week to clear up misunderstandings, he said.
The benign reading of the editorial suggestion to add citations is, "Hey, there's other work out there, which you might not have noticed, that bears on yours in an interesting way -- have a look!" And, sure, who can argue against keeping up with the literature that relates to your own work? It might have been more persuasively above-board had the literature-to-look-at list included articles from other journals (in a Macy's-sends-you-to-Gimbel's kind of move), but you can't blame an editor for knowing his own journal best. The less benign reading is that the editors are taking active steps to artificially boost their journals' impact factors.
What gets lost in all this gamesmanship is the idea that scientific work ought to be evaluated on its own merits. Peer reviewers are supposed to be assessing the soundness of the science, not the sexiness of the finding. Sure, the sheer number of scientific journals in most fields makes it hard for any mortal to "keep up with the literature", which means that scientists look for quick ways to locate the papers most likely to be important. But the quick ways may not be the most reliable ways. And, depending on how sensitive peer review decisions are to impact factor gamesmanship, it is conceivable that things could reach a point where being published in a high-impact journal has less to do with the soundness of your science than with the fashionability of your findings. At the extreme, scientists might have to spend a lot more time replicating published results, and might spend quite a bit of time ignorant of important findings that are published in out-of-the-way places, or are waiting in the queue at the journals with high impact factors. And that would suck.
Technorati tags: impact factor, scientific literature
Comments:
I agree completely. Impact-factor grubbing is a very problematic way of assessing the quality of work, and only serves to reinforce the hierarchy of journals. This puts more pressure on younger investigators to sit on their data until they have higher-impact stories (1 Cell paper is better than 2 JCS papers) and accumulate figures, until every journal wants ten figures to publish - and on and on.