A Position on Peer Reviewing in HCI, part 1

HCI is continuing a trend towards using conferences, rather than journals or books, as the premier venue for published work. The compressed submission-decision cycle means that decisions are fast and binary: one shot and you’re in or you’re out. And in this process, a very small number of people hold a lot of power over your paper’s fate. In this post, I offer my position on the role and responsibilities of being an associate chair (AC). The short version: peer reviewing is a critical, not a scientific, activity; it ought to be acknowledged as such; and people ought to be mentored in and held to the standards to which other academic critics hold themselves.

It is possible to argue that ACs and reviewers are simply there to verify that your science is good. Antti Oulasvirta has done precisely that, in a lengthy blog post about CHI rejections that lays out 8 reasons “why your paper was rejected,” as determined by an AC. His 8 reasons all deal with issues of validity, research design, replicability, etc. Implied but not stated explicitly in Oulasvirta’s post is that if your paper didn’t get in, it’s your fault for doing bad science or for reporting your work so poorly that it looks like bad science. Also implied but not stated is that if your paper didn’t get in, it’s not your AC’s fault. Oulasvirta’s post concludes with a cartoon that says, “Shouting at reviewers in your rebuttal is only going to make it worse.” I think this post reflects an assumption that the data (in this case, “your paper”) speaks for itself, and that the reviewer or AC simply sees what is there. If your work lacks construct validity, or your work is not replicable, or your findings are inapplicable, then your reviewer discovers that fact and rejects you. The agency for the outcome is all in your paper.

I disagree with this position. I don’t disagree that papers get rationally rejected because of scientific flaws. I disagree with the implied proposition that the data speaks for itself, and that ACs and reviewers therefore bear no responsibility for their decision because the paper itself is responsible for it.

ACs and reviewers are chosen not on the basis that they represent (as part of a sample) a community, but because they are exceptional: they are presumed to know the topic especially well and to be in a uniquely qualified position to judge a paper. Judging a human-made work to assess its contribution to a community and the world is criticism. It entails a complex and holistic judgment that weighs factors ranging from the timeliness of the topic and the rigor of the science to the framing of the problem space and the nature, relevance, and applicability of the contribution. Poor rigor is seldom a winning recipe for publication, of course! At the same time, some flaw or compromise in the rigor can be found in most papers. The issue is not the ability of a reviewer to note that there was such-and-such flaw in the methodology: the issue is how seriously to judge that flaw as a part of the whole contribution.

So in my view, ACs need to stop acting as if a paper’s acceptance or rejection is simply a causal consequence of whether that paper adheres to prevailing standards of scientific rigor. ACs make judgments based on their expert point of view: they should and they must. This judgment, because of its consequences, needs a rational justification. That is what the metareview is for, though it is not always effectively used that way. ACs and reviewers need to be accountable for their position of power and for how responsibly they wield it. And if they fail in that duty, then they deserve to be yelled at in the rebuttal. And if you are an AC or reviewer who gets yelled at in a rebuttal, before browbeating disempowered authors with your copy of The Basics of Social Research, how about a little introspection? Because at CHI today no one else can or will demand it of you, thanks to a combination of (a) reviewer anonymity and (b) structural power imbalance. Let’s explore this:

  • Reviewers and ACs decide what is accepted and rejected, shaping what constitutes mainstream discourse at CHI and influencing people’s careers (tenure, etc.). Authors have no such power over reviewers or ACs.
  • Reviewers and ACs can each write as much as they want about a paper. Authors can reply via a one-time rebuttal of up to 5,000 characters (not words!) addressed to all ACs and reviewers. Reviewers can write as much as they want in reply to rebuttals, but authors can’t see or reply to these responses. ACs and reviewers can hold private conversations that authors are not privy to. ACs then make final decisions at a closed meeting, where questions and comments are raised that authors will never hear or have any chance to respond to.
  • Authors do not even know the identities of the people wielding this power over them.

Now I am aware of all the practical reasons that have led to such structural outcomes. I am not declaring a need to replace them with a new process (though personally I support the CSCW’12 and the alt.chi’08 and ’12 models). But I want to make one thing very clear: ACs and reviewers have very little structural accountability in the current CHI Papers and Notes model. If ACs or reviewers make an honest mistake, or, worse, if they are negligent through laziness, intellectual narrow-mindedness, or personal conflicts, there is very little consequence for them, little opportunity to correct the error, and little recourse for authors.

Hence my main thesis: if we continue to use the existing reviewing structure, then to ensure the most rational reviewing process, we need to work proactively to see that ACs and reviewers are competent and act with integrity inasmuch as they are critics (a job separate from whatever their area of subject expertise is).

In the spirit of contributing to this work, I have taken the time to lay out some of the key expectations every AC should be held to. To see them, continue on to Part 2.

4 Comments

  1. jeffreybardzell

    For the record, Antti has objected to my characterization of his post. He reiterates what he said in that post: its purpose was to point out the sort of deal-breaker flaws that can kill papers outright, not to offer a theory of ACing.

    My purpose in mentioning his blog post was to try to articulate different epistemological positions people might adopt with regard to reviewing, specifically the extent to which we foreground or background the role of the reviewer in evaluating contributions. My hope was to thoughtfully engage with different formulations of this relationship, not to attack Antti personally.

    Regardless, I hope questions of how we might interpret Antti’s blog don’t distract from the main point of this piece. (Though, ironically, any debate on my reading of his blog is symptomatic of the problem I am articulating: do assessments of contributions come from “the text itself,” the intentions and creative situation of the person who wrote it, or from those who interpret it?)

  2. John C. Thomas

    Good point. Obviously (or seemingly obviously), reviewing is a process of judgement and therefore largely subjective, yet we often talk about it as though it were objective. If we wrote a paper describing the review process and submitted it to CHI, it would probably be rejected as being non-scientific.

    A number of papers accepted for CHI, BTW, treat ordinal data as ratio data; compare performance on specific tasks and then draw conclusions about the “universe” of tasks while treating those tasks as a fixed factor in the statistics; or treat successive trials as “independent” events when we know from the psychological literature that they are certainly not independent observations. Similarly, people often treat the reactions of members of a group using a system as independent observations when there is every reason to believe that they are not. So, I think we aspire to and pretend to much more “scientific rigor” (physics envy?) than is actually justified.
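
    To make the independence point concrete, here is a minimal sketch (simulated, hypothetical numbers; Python with numpy, not drawn from any CHI paper) of how treating correlated per-participant trials as independent observations understates the uncertainty of a mean:

    ```python
    # Minimal sketch with simulated (hypothetical) data: trials nested
    # within participants are correlated, so pretending all trials are
    # independent observations understates the standard error of the mean.
    import numpy as np

    rng = np.random.default_rng(0)
    n_participants, n_trials = 20, 10

    # Each participant has a stable individual offset, so that
    # participant's trials are correlated with one another.
    participant_effect = rng.normal(0.0, 1.0, size=n_participants)
    trials = participant_effect[:, None] + rng.normal(
        0.0, 0.5, size=(n_participants, n_trials)
    )

    # Naive SE: treat all 200 trials as independent observations.
    naive_se = trials.std(ddof=1) / np.sqrt(trials.size)

    # Cluster-aware SE: aggregate to one mean per participant first.
    per_participant_means = trials.mean(axis=1)
    clustered_se = per_participant_means.std(ddof=1) / np.sqrt(n_participants)

    print(f"naive SE over 200 'independent' trials: {naive_se:.3f}")
    print(f"participant-level SE (n = 20):          {clustered_se:.3f}")
    # The naive SE comes out several times smaller, i.e., overconfident.
    ```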

    • jeffreybardzell

      Physics envy is alive and well throughout the social sciences, where its entrenchment is at least comprehensible. Inasmuch as HCI is moving toward culture and “third wave HCI,” however, physics envy looks more and more like a persistent but irrational dogma.

  3. Ed H. Chi (@edchi)

    There is a commonality behind all scientific academic endeavors: the choice of problems is a subjective process, and deciding whether a problem is important is a critical judgement. Because of this, I very much agree with your point. It is also worth noting that this consensus, or lack of it, somewhat defines a field and is socially constructed.

    However, I think in some parts of HCI research, once there is some agreement on what the important problems are, definite, objective scientific processes take over and determine who executed the research better. That part of the process is less subjective.

    In either case, your overall point is good: yes, there are elements of critical activity in here. The interesting thing is that I think this is true of any scientific reviewing activity, even in physics.

