Egregious ranking analysis?

[Cross-posted at Revise and Dissent.]

The Research Quality Framework (RQF) was a proposal by the previous Australian federal government to introduce a set of metrics by which the research output of university departments could be measured. Something like the Research Assessment Exercise in Britain, certainly in principle (I don't know enough about either to say how alike they were in practice). The new federal government scrapped the RQF earlier this year. It's gone, dead, buried. Instead we're getting something called the Excellence in Research for Australia (ERA) Initiative, which is completely different, and much better. Personally, I think I preferred the RQF -- more interesting possibilities for backronyms there.

I don't really have an objection to this type of thing, in principle. But as everyone knows (including, I'm sure, those who devise them), performance metrics can lead to perverse incentives. The average length of time people have to wait for elective surgery would seem to be a good one for hospitals, but not if, in order to reduce this metric, they start turning people away rather than hiring more doctors or expanding facilities. Or, even worse, start discharging patients before they are fully recovered.

So the precise metrics used matter. And one of the ERA metrics seems to be causing a lot of concern: the ranking of journals, both international and Australian, in terms of quality. Publishing in high-quality journals scores more highly than publishing in low-quality journals, and in the end this presumably translates into more dollars. Seems fair enough on the face of it: obviously most historians would prefer to publish in high-quality journals whenever possible anyway, with or without an ERA. But who decides which journal gets what rank?

The short answer is: not historians. The longer answer is the Australian Research Council (ARC), which is the peak body in this country for distributing research grants. In the first instance they are relying on journal impact factors (a measure of how often articles from a journal are cited by other journals), which at first glance would seem to discriminate against historians, for whom monographs are a hugely important means of publishing research. Maybe there's a way of correcting for that, I don't know. Anyway, there are four ranks, ranging from C at the bottom, through B and A, to A* at the top (insert Spinal Tap reference here). A* is defined as follows:

Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/subfield. Virtually all papers they publish will be of a very high quality. These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions.

This is supposed to represent the top 5% of all journals in a field or subfield. A is like A*, only less so, and represents the next 15%; B, the next 30%; C, the bottom 50%. I can see a danger of perverse incentives here, at least for Australian journals (international journals won't notice a couple of submissions more or less): C-ranked journals might get even fewer quality articles submitted to them, because these will be directed to the A*s, As and Bs first, so how can they then hope to climb up to B? So ranking journals in this way not only measures the quality of journals, it might actually fix them in place: a self-fulfilling metric.
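
To make the proposed bands concrete, here's a minimal sketch (in Python, purely illustrative; the ARC publishes no such code and the percentile input is my own assumption) of how a journal's position within its field would map onto the four ERA ranks:

```python
def era_rank(percentile_from_top):
    """Map a journal's position within its field to a draft ERA rank.

    `percentile_from_top` is an assumed input: 0 means the very best journal
    in the field, 100 the bottom. The bands follow the proportions described
    above: A* = top 5%, A = next 15%, B = next 30%, C = bottom 50%.
    """
    if percentile_from_top <= 5:
        return "A*"
    elif percentile_from_top <= 20:   # 5% + 15%
        return "A"
    elif percentile_from_top <= 50:   # 20% + 30%
        return "B"
    else:
        return "C"

# A journal sitting at, say, the 18th percentile of its field would be an A:
print(era_rank(18))  # -> A
```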

At least the ARC is seeking input from historians (and indeed from all researchers in Australia, in all fields) about the proposed ranks, but what effect this will have is anyone's guess. The ARC has already extended the deadline for submissions from next week to mid-August, so they're clearly aware of the 'large interest' the journal ranks have aroused.

So, to see how the draft ranks play out in my own field (exactly what that field is is debatable, but let's call it modern British military history), I trawled through my bibliography files to gather a list of journals which have published articles useful to me in the last 25 years, discarding any which appeared only once (on the basis that they're evidently only of marginal relevance). Here's how the rest come out in the draft rankings:

A*:

  • Historical Journal
  • Journal of Contemporary History
  • Journal of Modern History

A:

  • War in History

B:

  • Journal of Military History
  • War and Society

C:

  • International History Review
  • Journal of British Studies
  • Twentieth Century British History

One thing leaps out: no military history journal is ranked A*. There's an A (War in History) and two Bs. This is troubling for military historians: we will be disadvantaged relative to historians working in other subfields if even our best journals are ranked low. It could be argued that some journals are too specialised to be in the top rank, but then Ajalooline Ajakiri: Tartu Ülikooli Ajaloo Osakonna Ajakiri, which I think focuses on Baltic history, is given an A* rank, alongside American Historical Review and Past & Present.

Is it really the case that military history doesn't deserve a place in the top 5%? Are the other journal ranks relatively fair? What do people think?


6 thoughts on “Egregious ranking analysis?”

  1. It's hard to see how WiH could be any better than it is. If it isn't publishing enough ambitious field-defining articles then maybe no-one's writing them. But WiH is clearly the best in its field and it's an important field. (But obviously I have a vested interest in the one journal that I've appeared in being ranked really high!)

    I'm surprised that Journal of British Studies is ranked so low. I thought it was better than that.

  2. Post author

    I'd tend to agree with you on WiH, and if I only made one change it would be to make it A*. I think, at least, the positions of the military history journals relative to each other are about right. I've been advised that it would be better to publish the paper I'm working on in JMH than WiH, but my gut feeling is that while both have published important work in my area, WiH's are more interesting overall, which I guess equates to better! (The other issue is that JMH is more tolerant of long papers than WiH, so WiH might not be right for this paper anyway.) There's also JCH, which seems willing to publish military history, broadly defined.

    Maybe JBS has declined in recent years? I've got a lot of refs from it in my db, but only 1 after 1985.

  3. A fundamental problem for many humanities scholars is that publications within your own "national" context - on Australian or Irish history or literature, for example - may not be of great interest to international journals, whereas scientific articles are universal. It is quite possible for an academic in the humanities (or indeed the social sciences) to do a great deal of good work which will be "below the radar" of these criteria.

    There is also a problem with assessment based on citations - we have science academics in my uni with over 13,000 citations while the top-cited history staff have only a few hundred.

    The other problem with all these journal-based assessments is that papers in the sciences typically have multiple authors, including research supervisors, heads of department and other bodies, whereas a humanities paper is usually a solo effort. The head of a productive department in the sciences may accumulate many co-authored papers and many, many citations, whereas his or her (hir?) humanities colleagues will have fewer.

    Humanities academics really have to keep on making these points to the "bean counters".

  4. Post author

    Excellent points, Mike. I can demonstrate some of them from my own brief, unillustrious career in science, from which I garnered 4 publications: 1 in a refereed journal and 3 in conference proceedings (the exceedingly low publication rate for humanities conferences doesn't help either). I wrote only one of these; none has fewer than 3 authors (one has 6). The refereed paper has, so far, had a total of 46 citations. All this without really trying! In fact, the journal paper was published nearly 4 years after I left the field and 8 years after my major contribution to the project ended.

    An easy way to correct for this would be to weight a publication's value by the number of authors, i.e. a single-authored paper is worth 1, one with 2 authors is worth 1/2 to each author, one with 3 authors is worth 1/3, etc. But then people working in fields like particle physics, where there are hundreds or even thousands of co-authors, would complain ... Maybe science and humanities just work too differently to be measured by the same metrics, but we're measured using those drawn from the science model.
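
    A rough sketch of what that 1/n weighting might look like, purely for illustration (the function name and the publication counts below are my own inventions, not anything the ARC or ERA actually uses):

    ```python
    # Illustrative only: fractional authorship credit, weighting each paper
    # by 1 / (number of authors). The publication counts below are invented.
    def fractional_credit(author_counts):
        """Given the number of authors on each of a researcher's papers,
        return that researcher's total weighted publication credit."""
        return sum(1.0 / n for n in author_counts)

    # A humanities researcher with three single-authored papers:
    print(fractional_credit([1, 1, 1]))           # -> 3.0

    # A scientist with six papers, each with several co-authors:
    print(fractional_credit([3, 3, 4, 6, 6, 8]))  # -> roughly 1.4
    ```

    And, as noted above, a particle physics paper with hundreds of co-authors would then be worth almost nothing to each of them -- which is exactly why they'd complain.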

  5. Post author

    Antoine Capet sent the above post and comments to H-Albion. On 24 July, Peter Stansky replied as follows:

    I think this is a totally shocking listing, particularly from the point of view of historians of Britain. In my view, the Journal of British Studies and Twentieth Century British History are deeply distinguished journals and that they should be ranked C rather than A demonstrates the fundamentally flawed nature of this sort of enterprise.

  6. Post author

    Some more comments from H-Albion, from 27 July. The first is from Robert McNamara:

    List members might be interested in the initial rankings given to some of the journals mentioned below by the European Reference Index for the Humanities.

    The European Science Foundation, through the HERA network, has established the European Reference Index for the Humanities (ERIH). The ERIH is an index of journals in 15 discipline areas; these lists aim to help identify excellence in humanities scholarship and provide a tool for the aggregate benchmarking of national research systems, for example in determining the international standing of the research activity carried out in a given field in a particular country. However, as they stand, the lists are not a bibliometric tool.

    Eleven of these lists have been published and are available for review, with a further four to be published shortly. The page through which the lists, guidelines and feedback forms can be accessed is:

    http://www.esf.org/research-areas/humanities/research-infrastructures-including-erih/erih-initial-lists.html

    Scroll down and click on the History PDF.

    I compared the rankings given by the ERA for the journals mentioned with the initial findings of the ERIH.

    ERA rankings:

    A*: Historical Journal; Journal of Contemporary History; Journal of Modern History
    A: War in History
    B: Journal of Military History; War and Society
    C: International History Review; Journal of British Studies; Twentieth Century British History

    ERIH rankings (the ERIH uses an A, B, C scale, i.e. there is no A*):

    A: Historical Journal; Journal of Contemporary History; Journal of Modern History; International History Review; Twentieth Century British History; War in History
    B: Journal of Military History; Journal of British Studies
    Not ranked: War and Society

    The ERIH seems more generous, and more along the lines that I would have thought.

    The second is from Ian Welch:

    As a heavy user of journal articles on 19th-century China and missionary history, I somehow missed the earlier discussion on this issue. Would someone oblige me by forwarding the original list of journal rankings? (ian.welch@anu.edu.au). Who was it who used to say, "I thangyow"? A quick test for British social historians, just to lighten our load for a moment, and a reminder of the fleeting nature of fame and therefore research.

    [** The original message, as it came to H-Albion, is posted below. You will have to get in touch with Prof. Capet or track down the blog he cites. Remember, you can also search the logs at http://www.h-net.org/~albion/ for missed postings. -- rbg **]

    I suspect that those distressed by certain shortcomings in the list do not transfer to missionary history, the graveyard of ambitious academics, at least in Australia. Even religious historians, a fairly rare breed in OZ, although invariably highly competent researchers, rarely descend so far into the bog of career ignominity (a lovely word, wish I was sure it actually fits!).

    And that seems to be the underlying question, at least as I have followed the thread so far. It all seems to depend on the deeper issue of what constitutes importance in research. There seems to be a strong strain of managerial/economic utilitarianism underlying attempts to rank journals, especially by the seemingly reasonable method of citations. Utilitarianism is very prominent in all areas of academic enquiry and has, in the end, not the goal of satisfying mere intellectual curiosity but of serving the 'interests' (however defined) of the contemporary ruling social, cultural and economic elites. Nothing new in that, of course; it is, and has always been, inseparable from research when people are seeking funding.

    But obviously (or is it obvious only to me?) the number of citations should be reviewed against values other than sheer frequency. I'll bet Canberra to a brick that anything that now mentions 'climate change' gets lots of citations, such as that famous scientific collective report now cited endlessly.

    And the third is from William Anthony Hay:

    Having missed the original posting [** See the previous message in this thread. -- rbg **], my immediate reaction is to concur with Peter Stansky for the most part and question the whole idea of mathematically ranking journals. I think most of us would agree that journals vary, not least because they aim at different audiences. But quantifying differences produces the bizarre results that Stansky notes. The whole approach fits more with the hard sciences and social sciences than humane disciplines like history. Librarians and administrators who don't get what we do tend to push this approach as an easy measure of assessment, so we are stuck with the task of educating them to see why it doesn't work.
