Textual analysis is fundamental to many kinds of research, from psychology to literature, philosophy to information science. Not surprisingly, different strategies have emerged from within the various disciplines that do textual analysis, and naturally these strategies reflect the epistemologies of the disciplines from which they emerge. And as long as one stays within one’s own discipline, there isn’t a problem.
But as soon as a field claims the mantle of “interdisciplinarity,” it faces a dilemma: to protect and preserve what is known to work, or to open itself out to alternative ways of knowing. Now, both of these impulses are legitimate in themselves, but as they enter everyday life (e.g., the writing, reviewing, and editing of papers), they sometimes appear in clumsy ways. Some of these clumsy ways are as follows:
- Epistemological bigotry: This happens when someone asserts (often without meaning to) that she or he knows the right way and everything else is “fluff” or wrongheaded. In HCI, scientism is often confused with science, to the detriment of both HCI and science.
- Piecemealism: This happens when someone injects a small piece from one tradition uncritically into another, without recognizing that a piece might not represent the whole from which it is drawn, nor recognizing that that piece might be at intellectual odds with the rest. In HCI, I see this with “critical” approaches to HCI where a single concept is ripped from a complex tradition, such as poststructuralism, and applied to traditional design approaches to, say, mobile phones or Web applications.
- Equivocation: This happens when two or more groups of people use the same word in completely different ways, without seeming to be aware that their use is not “natural” or universal. In HCI, “aesthetics” seems to be a word that has almost no relationship to the 2,500-year-old tradition of aesthetic theory, as I’ve ranted about before.
All of these involve a combination of dogmatism and muddled thinking. While scientism–by which I mean a fetish for scientific ways of knowing, one that places them above all other forms of intellectual inquiry–is dogmatic and often intellectually muddled, I would stress that neither dogmatism nor muddled thinking is scientific. Scientism, so defined, is bad science.
In this post, I will talk about discourse analysis versus close reading. Both are strategies of textual analysis. Both have disciplinary rigor. Both have legitimate benefits. And yet often when I do close reading, I am attacked on the grounds that I am not being “systematic,” not “coding,” and/or just putting forward my “opinion.” And I want to just scream out: I’ve read Virgil in Latin, Proust in French, Dante in medieval Italian, Joyce in whatever language he wrote in: I don’t need you to tell me how to read! But that is self-expression. It doesn’t solve the broader problem, which is that the rigor I bring to text analysis seems to be literally invisible to these reviewers. Instead, 12 years (!) in a Ph.D. program in Comparative Literature, doctorate and all, come off as me just putting forward my “opinion.” I need to address this.
Ironically, and no doubt to the detriment of my tenure case, I think a lot more people read my blog than any of my papers, so I want to use this platform to define both discourse analysis and close reading with the hope of making very clear the following:
- They take different epistemological positions. That means that their way of knowing–their assumptions about how to derive meaning from texts, what meanings are supposedly there in texts, the approaches you use to access them–differ.
- They embody different forms of rigor, and if it is your job to evaluate rigor, then it is your responsibility to know how to recognize different strategies of textual analysis and to be able to evaluate their rigor, or lack thereof.
- Their outputs are different. What you learn from discourse analysis is not the same thing as what you learn from a close reading, and each approach lends itself to certain kinds of claims–and also fails to lend itself to other sorts of claims.
My Thesis Statement
All of this leads to my thesis statement, and to make sure no one misses it, or if you just skim this post and only see one thing, then let it be this:
1. Discourse analysis and close reading are NOT interchangeable
And that implies this:
2. If someone does a close reading, it does not follow necessarily that they should have done a discourse analysis instead.
With my thesis out of the way, I’m ready to blog-defend it. (A “blog defense” means that this is probably a half-baked and half-assed defense; your recourse is to take me on in comments.) Before I start, I want to share one other value: I try to be generous with concepts, theories, and methodologies. That means that I will attempt a fair-minded summary of both approaches, even though everyone reading this knows which one I like better and am more likely to practice (that said, I have done discourse analysis). But my personal preferences are just that: personal preferences. They do not amount to universal claims or pretenses. Stated directly: I respect discourse analysis as much as I respect close reading, even though I personally practice one more than the other.
On Discourse Analysis
Discourse analysis is a scientific and empirical strategy of textual analysis. At its most basic level, it entails a methodology along these lines:
- Identify a phenomenon you are concerned with, whose significance is at least partially embodied in texts. Example: FOX, CNN, and MSNBC written news coverage of Obama; mommy blogs; letters to the editor published in your local paper on topic X; Amazon.com customer reviews of Y.
- Identify the totality of texts available, and identify a significant and representative sample of the whole.
- Develop a coding system that lets you tag instances of a significant textual feature (e.g., the presence or absence of a feature in a given unit of text).
- Preferably with multiple people, code the texts using the framework. (I’m hereby skipping a summary of establishing intra- and inter-coder reliability, but if you are curious, go read Krippendorff, who lays all this stuff out nicely.)
- When you are done with step 4, you now have a numeric representation of your sample. This can now be analyzed statistically.
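The coding and reliability steps above can be sketched in miniature. Everything here is invented for illustration: the two coders, their binary codes, and the feature being tagged are hypothetical, and a real project would likely compute Krippendorff’s alpha (or a similar measure) with a statistics package rather than this hand-rolled Cohen’s kappa.

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on binary codes."""
    n = len(coder_a)
    # Observed agreement: fraction of units both coders tagged the same way
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal rate of coding "1"
    p_a1 = sum(coder_a) / n
    p_b1 = sum(coder_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Hypothetical codes: 1 = feature present in a text unit, 0 = absent
coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohens_kappa(coder_a, coder_b)
# Values near 1 indicate strong agreement; conventions vary, but kappa
# above roughly 0.8 is usually treated as good reliability.
print(f"Cohen's kappa: {kappa:.2f}")
```

Once reliability is acceptable, the resulting rows of 0s and 1s are the “numeric representation of your sample” that step 5 hands off to statistical analysis.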
What this sort of analysis gets you: If you do it well, you have a bona fide empirical snapshot of your phenomenon. You are in a position to claim what has been said in those texts. You are in a position to observe patterns that are explicitly present, but which may have been hard to see just by reading all the texts. You are also in a position to discover relationships among those patterns: female writers were more likely to A, while male writers were more likely to B; MSNBC coverage was friendlier to liberals, FOX coverage to conservatives, CNN coverage to lipstick celebrities.
Limitations of this analysis: Strongly empirical approaches such as this are very good at exposing what is there. They are less successful at exposing what is “between the lines,” because in a literal way, what is between the lines is not “there” to be found or represented. Now, obvious stuff between the lines is easy enough to unearth–FOX is conservative, MSNBC is liberal, CNN is vapid–but the deeper, juicier stuff can’t be accessed this way. Discourse analysis alone also cannot get at context very well: who said it, and why? I’m sure discourse analysis practitioners will contest me on this, but I mean context in a much broader and more radical way than I typically see in these sorts of papers: psychoanalytic, ideological, and other complex cultural and/or subcultural contexts are extremely difficult to see using a positivist strategy like discourse analysis.
[Update]: See comments below for a discussion of whether this is a good or fair summary of discourse analysis.
On Close Reading
The term “close reading” is descriptive, not exactly technical. I might have said “humanistic reading” or “interpretive reading” or something like that. Examples of what I am talking about are acts of criticism. Here I don’t mean critical theory but rather close interpretations of single “texts” (“text” here understood as any cultural artifact): Sontag’s interpretation of photographic portraits of herself; Butler’s interpretation of the ethics of torture photographs in the Bush years; Bloom’s interpretation of Plato; Bazin’s interpretation of de Sica’s Bicycle Thieves; Barthes’ interpretation of a photo on the cover of Paris Match; and so on.
A close reading doesn’t involve a set methodology and as such it is very hard to describe. Foolishly perhaps, I nonetheless attempted it here. But the gist of this sort of approach is that an expert (which I will leave undefined here) engages with a text with great care. This engagement typically entails a number of activities: multiple readings/viewings of the text; situating the text in its social and historical contexts; deconstructing the text using a variety of critical strategies (e.g., from Marxism, feminism, poststructuralism, postmodernism, structuralism, reception theory, psychoanalysis); bringing to bear what, if anything, everyone else has said about that text, including interviews with the author/creator, its critical tradition, similar texts (e.g., by the same author/creator); and so forth. Note that this sort of approach is holistic and relies for its success on the expertise of the expert doing it; it is unique, individual, and subjective; it does not follow any disembodied abstract methodology but rather the logic of the scholar-expert in whose hands it is being executed.
What this sort of analysis gets you: A close reading of this sort explores and exposes far more sensitively the complex cultural embeddedness of the text. It gets at matters of aesthetics, craft, and ethics in profound ways. It is capable of revealing much about a text and a community that is neither explicit in the text nor even known to its community. A spectacular example of this is Dick Hebdige’s Subculture: The Meaning of Style, whose analysis of the punk subculture explores the specific historical and operational details of the xenophobic working class underpinnings of punk’s emergence and war on mainstream mass culture. At no point does Hebdige claim that his analysis represents the conscious expressed point of view of a subculture; rather, he explores and reasons about what the unconscious, unspoken point of view seems to be, where it comes from, and what evidence justifies this line of thinking.
Limitations of this analysis: Close readings are strongly inductive and speculative in nature, so what they won’t get you is confidence that you have an objective and correct representation of external reality as it is. Rather, a close reading situates the text against a network of complex ideas and reflections, with the hope of cultivating our capacity to appreciate and understand the source text. Close readings of aesthetic works often draw on theorizations of art to help expose (or even create) their cultural significance–in the most robust possible sense, and for better and/or worse–in the critic’s society. As I have said elsewhere, a critic often models the act of reading, not to reproduce a static understanding in the reader’s head of what is in the critic’s head, but rather to encourage the reader to use similar interpretive strategies both in the original text and in subsequent texts of interest to the reader.
If you interview 1,000 people coming out of a theatre and transcribe the interviews, you can use discourse analysis to get a real sense of how that film was liked, understood, perceived, etc.
If you read a critical essay about that same film (and here I don’t so much mean newspaper movie reviews but rather scholarly film critiques), you will understand that film’s participation in cinema, mass media, and everyday culture: its craft, its ideology, its construction of concepts that matter: love, social justice, freedom, sexual liberation, identity, politics, beauty, and so on.
It should be obvious at the very least that both of these kinds of knowledge are legitimate and important, if not always to the same people. If I am a film investor, I absolutely want to understand how moviegoers perceive, experience, and evaluate movies. That is fundamentally an empirical question, and empirical strategies are entirely appropriate. If I am a prospective director, a concerned citizen, a film student or teacher, a film buff, someone who makes decisions about which films should be shown as a part of a community film festival, and so on, then the film critic’s message is much more likely to resonate with me.
In HCI, we combine all of these audiences. We want to design stuff for commercial success. We want to design things that do what they are supposed to do. Our scientific and empirical approaches are already very good at helping us achieve these goals.
But now we also care about sustainability, felt experience, quality of life, social justice. We have Web 2.0 communities whose emergent behavior literally changes the “meaning” of a system over time. As battles between Web 2.0 communities and their software “owners” (e.g., Facebook, Second Life) have shown, it is not even clear who does or should be responsible for these systems. Thanks to APIs and SDKs, software developers from Adobe and Blizzard to Twitter and Yahoo allow users to redesign interfaces. The emergent UI results are sometimes cannibalized and implemented in future releases of the software. What, then, is an “interface” now, anyway?
These broader questions are much more complex than whether a system is usable or whether users prefer this color scheme to that one. Their complexity in large part lies in the un-articulated and often unseen relationships between and among vastly complex phenomena, from human identity practices to social behavior, from globalization to the history of art, from emergent user-created interfaces to the incomprehensibility of information produced by user-content creators. These issues cannot be adequately described by scientific reductionism, the way predicting task performance can be. This is not at all to say that scientific reductionism can’t contribute to our understandings in powerful ways–of course it can! But drop the scientism, HCI! It’s not going to meet our needs and it’s lousy science anyway (all dogmatism is). Good science and good critique should complement and reinforce each other. But as long as we categorically dismiss non-scientific strategies, we’re only fake-interdisciplinary and we’re going to botch our work.
And today, bad HCI is more than an unusable Web page–it is unsustainable, socially unjust, culturally irresponsible–and a significant majority of our thousand best users just might miss it.