I encountered a graph recently that presented an interesting accessibility fail (and challenge). The graph was part of a blog post on NPR’s Planet Money titled, “The Scariest Jobs Chart Ever isn’t Scary Enough.” Jacob Goldstein, the author, also calls the graph, “one of the defining graphs of our time.” Sounds pretty exciting!
Alt txt: Take 1
Well, if the reader is blind or low vision and reading with a screen reader, this is the alternative text that describes the graph: “Jobs lost and gained in postwar recessions.” That’s not very exciting. The text of the blog post provides very little additional information about the graph, other than “It tracks the job market in every U.S. recession and recovery since WWII — and it shows just how brutal the past few years have been.”
Alt txt: Take 2
The graph is one of those that gets posted and reposted, so I followed its path to see if its previous iterations were any more successful in terms of accessibility. Business Insider posted the graph with the following alt text, “chart of the day, the scariest jobs chart ever, january 2013.” Again, not terribly informative. The brief article surrounding the graph provides a bit more context with a quote from Bill McBride from Calculated Risk, the blog which originated the graph:
“This shows the depth of the recent employment recession – worse than any other post-war recession – and the relatively slow recovery due to the lingering effects of the housing bust and financial crisis”
Alt txt: Take 3
The fourth graph shows the job losses from the start of the employment recession, in percentage terms, compared to previous post WWII recessions. The dotted line is ex-Census hiring. This shows the depth of the recent employment recession – worse than any other post-war recession – and the relatively slow recovery due to the lingering effects of the housing bust and financial crisis.
One More Try
The figure is entitled Percent Job Losses in Post WWII Recessions. There are eleven lines on the graph, representing the following employment recession years: 1948, 1953, 1957, 1960, 1969, 1974, 1980, 1981, 1990, 2001, 2007 (Current Employment Recession).
The vertical axis is labeled “Percent Job Losses Relative to Peak Employment Month,” ranging from -7.0% to 1.0% in increments of 1.0%.
The horizontal axis is labeled “Number of Months After Peak Employment,” ranging from 0 to 70 in increments of 2.
In the graph, the current employment recession line (2007) drops steadily from 0.0% to approximately -6.4% at 25 months. The line then begins a slow incline, climbing back to nearly -2.0% at 61 months. There is a small peak at 28 months, but it is corrected for with a dotted line indicating ex-Census hiring.
The other 10 recession lines are clustered together, with steep drops and inclines between 0.0% and -5.0% over 0-30 months. The line for 2001 extends to 48 months, but only falls to a -2.0% job loss.
[NOTE: NCAM also recommends presenting graphs as data tables, a simple deconstruction that allows the data to speak for itself. I attempted to produce a table from this data, but it got a little messy given the graph’s complexity.]
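To give a sense of what NCAM’s data-table suggestion might look like, here is a minimal HTML sketch built only from the approximate values in my description above. The table markup and column headings are my own, not NCAM’s, and a full deconstruction would need month-by-month rows for each of the eleven recessions, which is exactly where the messiness comes in.

```html
<table>
  <caption>Percent Job Losses in Post WWII Recessions
           (approximate values read from the graph)</caption>
  <thead>
    <tr>
      <th scope="col">Recession start</th>
      <th scope="col">Deepest job loss</th>
      <th scope="col">Month of deepest loss</th>
      <th scope="col">Last month shown</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>2007 (current)</td><td>-6.4%</td><td>25</td><td>61</td></tr>
    <tr><td>2001</td><td>-2.0%</td><td></td><td>48</td></tr>
    <!-- The nine remaining recessions cluster between 0.0% and -5.0%,
         recovering within roughly 30 months. -->
  </tbody>
</table>
```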
What do you think?
Now that I’ve attempted a fuller description, I’ll share the image. For alt txt, I’ve used the graph title: “Percent Job Losses in Post WWII Recessions.” In a document, I would use this in connection with the previous long description on a separate, linked page.
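In HTML, one conventional way to pair the short alt text with a linked long description looks like this; the file names here are hypothetical, and the link could just as easily point to a section within the same document.

```html
<!-- Short alt text carries the graph title; the full long description
     lives on a separate, linked page. File names are hypothetical. -->
<img src="job-losses-chart.png"
     alt="Percent Job Losses in Post WWII Recessions">
<p><a href="job-losses-description.html">Long description of this graph</a></p>
```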
This is my attempt at dealing with this complex image, and I’d love to hear your thoughts about what might make this description more effective. I think the simple alt text and the long description would work very well together to make the original documents more accessible. I do like the idea, however, of incorporating a long description into the main content of a document. The description I created could benefit from some adjustments for this purpose, specifically a little less formality. And even though my description is functional, it misses some of the drama and intensity of the graph.
This graph, as currently presented on the web, is a clear accessibility fail, but it also offers an interesting challenge/opportunity. How can we use long descriptions to convey not only the basic information in an image, but also its tone? What strategies would you use to convey the “scary” aspect of this graph?
This is an approximate version of the talk I gave at the 2013 Conference on College Composition and Communication.
Listening as Literate Practice: Insights from Blind and Low Vision Adults
Today, I’m discussing an exploratory, collective case study focused on digital literacy and blindness. For the first part of the study, I am conducting a series of one-hour interviews with adults who are blind and low vision about their literacy histories and daily literacy practices; the basic framework for the interviews is based on the literacy life history research of Deborah Brandt’s Literacy in American Lives and Gail Hawisher & Cynthia Selfe’s Literate Lives in the Information Age. My research is still in its early stages. I’m currently immersed in these conversations, and I haven’t fully processed the insights provided by those who have graciously shared their experiences. Today I’ll be talking about some emerging themes from these interviews and what this research means for us as composition teachers. Before I begin, I want to thank the people who have taken time from their busy lives to speak with me and to share their insights and expertise.
The recently launched journal Literacy in Composition Studies defines literacy as “a fluid and contextual term”:
It can name a range of activities from fundamental knowledge about how to decode text to interpretive and communicative acts. Literacies are linked to know-how, to insider knowledge, and literacy is often a metaphor for the ability to navigate systems, cultures, and situations. At its heart, literacy is linked to interpretation—to reading the social environment and engaging and remaking that environment through communication.
This is a definition of literacy that I feel comfortable with; it feels nuanced and rich and emphasizes the contextual, social nature of literacy. But this definition, like many definitions of literacy, fails to address the “how” of literacy, specifically in relation to the acts of the body. Is literacy a matter of sight, touch, sound? For individuals who are blind and low vision, this question matters.
The “how” matters because people who are blind and low vision are excluded by the assumption underlying many definitions of literacy—that literacy is a matter of print/digital text (and thus, of sight). In my interviews thus far, when I asked participants “how do you define literacy?” most included some mention of the “how.” For many people who are blind, braille (reading through touch) is the very definition of literacy. Yet, many people who are blind or low vision don’t read braille. They read with synthesized speech or recorded voice. They write with a keyboard and audio playback. However, these practices are often denied the label of “literacy.” The National Federation of the Blind, an organization that claims to be the “voice of the nation’s blind,” argues that listening is not literacy, but rather a means to gather information. But what is literacy if not exactly that? In my interviews, participants have a range of opinions about braille or the validity of listening as literacy, yet their definitions are strikingly similar in their focus on literacy’s role in facilitating information, communication, connection.
The debate surrounding braille and listening is significant within the blindness community, so much so that one of the participants in my study reported being called “illiterate” on multiple occasions because she does not know braille (even though she has written and published an academic book). This debate shouldn’t only be the purview of the blindness community, as its implications extend far beyond. This debate and the experiences of people who are blind and low vision speak directly to a variety of disciplinary conversations, two of which I will address today: 1) visual literacy and 2) multimodality.
What did Charlize Theron wear to the Oscars? Is that blue dress on the Land’s End website ugly? What does a fingerprint look like? These are all sample problems that interview participants used to illustrate the challenge of blindness within the context of a visual culture, a visual culture that WJT Mitchell argues, “entails a meditation on blindness, the invisible, the unseen, the unseeable, and the overlooked” (p. 170). Within Mitchell’s stance on visual culture is an assumption that red carpet fashion, online shopping, or forensic science are all unseeable (and by extension, unknowable) for people who are blind. Yet, Georgina Kleege challenges Mitchell’s stance, suggesting that “the average blind person knows more about what it means to be sighted than the average sighted person knows about what it means to be blind” (p. 179). Visual culture assumes a certain way of knowing, “overlooking” other possibilities for representation.
The appearance of a red carpet dress can be represented through print or digital image, but what other possibilities are available? How can we translate information from one mode into another, and how might we better ensure that this translation (or “transduction,” as Kress calls it) happens more regularly, that we don’t assume that the visual is how meaning is/can/should be conveyed? How might the visual be represented through word, through touch, through sound? A dress, for instance, might be represented in words with rich, evocative detail. This process of representing the visual through the verbal has a long history through the rhetorical concept of ekphrasis. Through further consideration of this concept, we might better understand how verbal information, and the process of listening to this verbal information, can expand our understanding of the visual aspects of literacy.
Within composition studies, we are comfortable with the idea of multimodality, and increasingly it is the way we do business. But often, when we talk about multimodality, it is a specific type of multimodality. Discussions of multimodality tend to operate from an assumption that modes are partial, that “different modes have potentials that make them better for certain tasks than others; and not every mode will be equally ‘useable’ for a particular task” (Jewitt and Kress, 2003, p. 3). In a multimodal document, “The meaning of the message is distributed across all of these, not necessarily evenly. In short, different aspects of meaning are carried in different ways by each mode. Any one mode in that ensemble is carrying a part of the message only: each mode is partial in relation to the whole of the meaning” (p. 3). Multimodality is presented as a layering of the modes, each building towards a comprehensive whole.
With this sense of the partiality of modes also comes a privileging of modes. As Jay Dolmage critiques the NLG’s view of multimodality, “Behind much of the New London Group’s work is the implicit argument that, in each individual learner, the more modes engaged, the better. I would argue that we do not all have the same proclivity, desire, or ability to develop all of our modal or literate engagements” (p. 187; see also Yergeau). While multimodality acknowledges the embodied nature of literacy, “the bodilyness of mode” (Kress, 2003, p. 45), much of our theory about multimodality fails to consider disability. While the multimodal composing opportunities brought to us by new media technologies remind us that literacy is embodied, they should also be reminding us that not every body is the same.
We need to consider how each mode can fulfill similar affordances, to ask how the same message can be conveyed through multiple modes, not altogether but each on its own terms. With this approach, should one mode be inaccessible because of disability (or circumstance), another mode can be utilized. As an example, consider efforts to create a multimodal web, where digital web content can be “accessible through the user’s preferred modes of interaction [e.g. “GUI, Speech, Vision, Pen, Gestures, Haptic interfaces”] with services that adapt to the device, user and environmental conditions” (W3C). The concept is also captured by the following description of a hypothetical multimodal book, specifically with a blind user in mind:
The system I feel we really need will have a choice of modalities—speech, Braille, large print and dynamic graphic displays. It will be configurable according to the user’s needs and abilities. It will scan pages into its memory, process them as best it can, and then allow us to read them in our choice of medium. . .with a keyboard, a tablet, a mouse or perhaps tools from Virtual Reality. It will offer us any combination of speech, refreshable braille or large print as well as a verbal description of the format or layout. (Harvey Lauer, qtd in Mills, 2012).
Choice. This is what we need. A choice of modes, where one mode is not culturally favored over another, where representation is flexible and fluid. Many of the individuals I have interviewed thus far have emphasized the importance of choice, that literacy should be what each individual blind person chooses it to be. For many people who are blind or low vision, listening is literate practice. Not only is literacy a “fluid and contextual term,” but it is also a fluid and contextual reality. We need to attend to our assumptions about the how of literacy and multimodality, we need to challenge those assumptions in order to speak to the reality of disability, to make space for its possibilities.
Brandt, Deborah. Literacy in American Lives. New York: Cambridge University Press, 2001.
Dolmage, Jay. “Disability, Usability, Universal Design.” Rhetorically Rethinking Usability: Theories, Practices, and Methodologies. Eds. Susan K. Miller-Cochran and Rochelle L. Rodrigo. New York: Hampton Press, 2009. 167-190.
Glascott, Brenda, et al. “Introduction.” Literacy in Composition Studies 1.1 (2013).
Jewitt, Carey, and Gunther Kress, eds. Multimodal Literacy. New York: Peter Lang, 2003.
Kleege, Georgina. “Blindness and Visual Culture: An Eyewitness Account.” Journal of Visual Culture 4.2 (2005): 179-190.
Mills, Mara. “Other Electronic Books: Print Disability and Reading Machines.” The Future of the Book—MIT. 30 April 2012.
Mitchell, W.J.T. “Showing Seeing: A Critique of Visual Culture.” Journal of Visual Culture 1.2 (2002): 165-181.
Selfe, Cynthia, and Gail Hawisher. Literate Lives in the Information Age. New York: Routledge, 2004.
W3C. Multimodal Access. 2013.
Yergeau, Melanie. Disabling Composition: Toward a 21st Century, Synaesthetic Theory of Writing. Diss. The Ohio State University, 2011.