This is an approximate version of the talk I gave at the 2013 Conference on College Composition and Communication
Listening as Literate Practice: Insights from Blind and Low Vision Adults
Today, I’m discussing an exploratory, collective case study focused on digital literacy and blindness. For the first part of the study, I am conducting a series of one-hour interviews with adults who are blind and low vision about their literacy histories and daily literacy practices; the framework for the interviews draws on the literacy life history research of Deborah Brandt’s Literacy in American Lives and Gail Hawisher & Cynthia Selfe’s Literate Lives in the Information Age. My research is still in its early stages. I’m currently immersed in these conversations, and I haven’t fully processed the insights provided by those who have graciously shared their experiences. Today I’ll be talking about some emerging themes from these interviews and what this research means for us as composition teachers. Before I begin, I want to thank the people who have taken time from their busy lives to speak with me and to share their insights and expertise.
The recently launched journal Literacy in Composition Studies defines literacy as “a fluid and contextual term”:
It can name a range of activities from fundamental knowledge about how to decode text to interpretive and communicative acts. Literacies are linked to know-how, to insider knowledge, and literacy is often a metaphor for the ability to navigate systems, cultures, and situations. At its heart, literacy is linked to interpretation—to reading the social environment and engaging and remaking that environment through communication.
This is a definition of literacy that I feel comfortable with; it feels nuanced and rich and emphasizes the contextual, social nature of literacy. But this definition, like many definitions of literacy, fails to address the “how” of literacy, specifically in relation to the acts of the body. Is literacy a matter of sight, touch, sound? For individuals who are blind and low vision, this question matters.
The “how” matters because people who are blind and low vision are excluded by the assumption underlying many definitions of literacy—that literacy is a matter of print/digital text (and thus, of sight). In my interviews thus far, when I asked participants “how do you define literacy?” most included some mention of the “how.” For many people who are blind, braille (reading through touch) is the very definition of literacy. Yet, many people who are blind or low vision don’t read braille. They read with synthesized speech or recorded voice. They write with a keyboard and audio playback. However, these practices are often denied the label of “literacy.” The National Federation of the Blind, an organization that claims to be the “voice of the nation’s blind,” argues that listening is not literacy, but rather a means to gather information. But what is literacy if not exactly that? In my interviews, participants have a range of opinions about braille or the validity of listening as literacy, yet their definitions are strikingly similar in their focus on literacy’s role in facilitating information, communication, and connection.
The debate surrounding braille and listening is significant within the blindness community, so much so that one of the participants in my study reported being called “illiterate” on multiple occasions because she does not know braille (even though she has written and published an academic book). This debate shouldn’t only be the purview of the blindness community, as its implications extend far beyond. This debate and the experiences of people who are blind and low vision speak directly to a variety of disciplinary conversations, two of which I will address today: 1) visual literacy and 2) multimodality.
What did Charlize Theron wear to the Oscars? Is that blue dress on the Lands’ End website ugly? What does a fingerprint look like? These are all sample problems that interview participants used to illustrate the challenge of blindness within the context of a visual culture, a visual culture that W.J.T. Mitchell argues “entails a meditation on blindness, the invisible, the unseen, the unseeable, and the overlooked” (p. 170). Within Mitchell’s stance on visual culture is an assumption that red carpet fashion, online shopping, or forensic science are all unseeable (and by extension, unknowable) for people who are blind. Yet Georgina Kleege challenges Mitchell’s stance, suggesting that “the average blind person knows more about what it means to be sighted than the average sighted person knows about what it means to be blind” (p. 179). Visual culture assumes a certain way of knowing, “overlooking” other possibilities for representation.
The appearance of a red carpet dress can be represented through print or digital image, but what other possibilities are available? How can we translate information from one mode into another, and how might we better ensure that this translation (or “transduction,” as Kress calls it) happens more regularly, that we don’t assume that the visual is how meaning is/can/should be conveyed? How might the visual be represented through word, through touch, through sound? A dress, for instance, might be represented in words with rich, evocative detail. This process of representing the visual through the verbal has a long history through the rhetorical concept of ekphrasis. Through further consideration of this concept, we might better understand how verbal information, and the process of listening to this verbal information, can expand our understanding of the visual aspects of literacy.
Within composition studies, we are comfortable with the idea of multimodality, and increasingly it is the way we do business. But often, when we talk about multimodality, it is a specific type of multimodality. Discussions of multimodality tend to operate from an assumption that modes are partial, that “different modes have potentials that make them better for certain tasks than others; and not every mode will be equally ‘useable’ for a particular task” (Jewitt and Kress, 2003, p. 3). In a multimodal document, “The meaning of the message is distributed across all of these, not necessarily evenly. In short, different aspects of meaning are carried in different ways by each mode. Any one mode in that ensemble is carrying a part of the message only: each mode is partial in relation to the whole of the meaning” (p. 3). Multimodality is presented as a layering of modes, each building towards a comprehensive whole.
With this sense of the partiality of modes also comes a privileging of modes. As Jay Dolmage critiques the New London Group’s view of multimodality, “Behind much of the New London Group’s work is the implicit argument that, in each individual learner, the more modes engaged, the better. I would argue that we do not all have the same proclivity, desire, or ability to develop all of our modal or literate engagements” (p. 187; see also Yergeau). While multimodality acknowledges the embodied nature of literacy, “the bodilyness of mode” (Kress, 2003, p. 45), much of our theory about multimodality fails to consider disability. While the multimodal composing opportunities brought to us by new media technologies remind us that literacy is embodied, they should also remind us that not every body is the same.
We need to consider how each mode can fulfill similar affordances, to ask how the same message can be conveyed through multiple modes, not altogether but each on its own terms. With this approach, should one mode be inaccessible because of disability (or circumstance), another mode can be utilized. As an example, consider efforts to create a multimodal web, where digital web content can be “accessible through the user’s preferred modes of interaction [e.g. “GUI, Speech, Vision, Pen, Gestures, Haptic interfaces”] with services that adapt to the device, user and environmental conditions” (W3C). The concept is also captured by the following description of a hypothetical multimodal book, specifically with a blind user in mind:
The system I feel we really need will have a choice of modalities—speech, Braille, large print and dynamic graphic displays. It will be configurable according to the user’s needs and abilities. It will scan pages into its memory, process them as best it can, and then allow us to read them in our choice of medium. . .with a keyboard, a tablet, a mouse or perhaps tools from Virtual Reality. It will offer us any combination of speech, refreshable braille or large print as well as a verbal description of the format or layout. (Harvey Lauer, qtd. in Mills, 2012)
Choice. This is what we need. A choice of modes, where one mode is not culturally favored over another, where representation is flexible and fluid. Many of the individuals I have interviewed thus far have emphasized the importance of choice, that literacy should be what each individual blind person chooses it to be. For many people who are blind or low vision, listening is literate practice. Not only is literacy a “fluid and contextual term,” but it is also a fluid and contextual reality. We need to attend to our assumptions about the how of literacy and multimodality, and we need to challenge those assumptions in order to speak to the reality of disability, to make space for its possibilities.
Brandt, Deborah. Literacy in American Lives. New York: Cambridge University Press, 2001.
Dolmage, Jay. “Disability, Usability, Universal Design.” Rhetorically Rethinking Usability: Theories, Practices, and Methodologies. Susan K. Miller-Cochran and Rochelle L. Rodrigo, Eds. New York: Hampton Press, 2009. 167-190.
Glascott, Brenda, et al. “Introduction.” Literacy in Composition Studies 1.1 (2013).
Jewitt, Carey and Gunther Kress, Eds. Multimodal Literacy. New York: Peter Lang, 2003.
Kleege, Georgina. “Blindness and Visual Culture: An Eyewitness Account.” Journal of Visual Culture 4.2 (Oct 2005): 179-190.
Mills, Mara. “Other Electronic Books: Print Disability and Reading Machines.” The Future of the Book—MIT. 30 April 2012.
Mitchell, W.J.T. “Showing Seeing: A Critique of Visual Culture.” Journal of Visual Culture 1.2 (2002): 165-181.
Selfe, Cynthia and Gail Hawisher. Literate Lives in the Information Age. New York: Routledge, 2004.
W3C. Multimodal Access. 2013.
Yergeau, Melanie. Disabling Composition: Toward a 21st Century, Synaesthetic Theory of Writing. Dissertation. The Ohio State University, 2011.