Access Granted: Making Media Accessible, Part 5

In this fifth and final installment of my “Making Media Accessible” series, I want to discuss a type of disability that–though extremely prevalent–can often be overlooked when considering accessibility. Cognitive disabilities are in many ways invisible; there aren’t always physical indications that a person has a cognitive disorder, so it can be easy for others to ignore, downplay, or dismiss the needs of those who have one. In another way, the very prevalence of cognitive disorders can make them seem invisible. Like color-blindness, the more common a disability is, the less impression it seems to make on us, and the more we assume and expect those with the disability will find their own ways to adapt. Unfortunately, this can put those with an invisible disability in an especially difficult position: not only do they need to seek accommodation for their disability, they are also burdened with both explicitly disclosing their disorder and demonstrating or proving to others that they need accommodation–a potentially invasive and embarrassing experience. For these reasons, it is vital to understand and acknowledge the validity of cognitive disability and to consider ways to create greater accessibility for the cognitively disabled wherever we can.


So what is cognitive disability? And what kind of accommodations does it require? The term “cognitive disability” may be a problematic one, as many conditions that it might describe are less a disability than a difference. However, as discussed in Part 1, it is preferable to think of ALL disability as difference, but difference made difficult by its social and cultural marginalization. With that in mind, cognitive disability can encompass a wide range of neurodiverse ways of thinking and processing information, but can be loosely described as a condition in which one “has greater difficulty with one or more types of mental tasks than the average person” (WebAIM). This includes anything from dementia to autism to ADHD to brain injury to dyslexia to Down syndrome. With such a large variety of ways cognitive disability might manifest–each with its own unique qualities–it can be difficult to plan for all possible accessibility needs. Like so many other types of disability we have discussed before, the key is not to strive for universal accessibility, but to look at a few simple places where we can start.

Clear Headings and Navigation

Cognitive disorders that impact processes like problem-solving and memory can make navigation of web and other multimodal texts complex, confusing, and frustrating. Includification describes the need for explicit tutorials in video games for cognitively impaired players to learn how to play before diving into the game. Consider this in the context of composition: when we build multimodal texts with multiple points of entry, how can we establish a tutorial-like experience for our audience to learn to read that text? One way to accomplish that is to make structural and navigational information very clear; for example, headings and links should make explicit where they will go, and essential navigational points like menus and links “home” should be clear and accessible from any part of the text. These features mean that your audience doesn’t need to remember how they got to their place in the text, and they won’t get lost in navigation or confused because they ended up somewhere different than they expected.
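The “don’t make readers remember the path” principle can even be checked mechanically. Here’s a minimal sketch (using Python’s standard-library html.parser; the function names are my own, not any real tool’s API) that flags skipped heading levels, one common way a text’s structure becomes disorienting:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect heading levels and flag jumps (e.g. h1 straight to h3)."""
    def __init__(self):
        super().__init__()
        self.levels = []
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # html.parser lowercases tags, so h1..h6 match this pattern
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.levels and level > self.levels[-1] + 1:
                self.problems.append(
                    f"h{self.levels[-1]} jumps to h{level}: "
                    "readers relying on structure may lose their place"
                )
            self.levels.append(level)

def audit_headings(html):
    parser = HeadingAudit()
    parser.feed(html)
    return parser.problems

page = "<h1>Title</h1><h3>Oops, skipped h2</h3>"
print(audit_headings(page))
```

A clean outline (h1, then h2, then h3) produces an empty problem list; a skipped level produces a warning a composer can act on.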

Limited Distractions

Attentional disorders are in some ways well-served by multimodal texts: having several different ways of accessing the work may help hold interest, if not on one part of the text, then on another. If not well-planned, however, multimodal texts can be almost impossible for someone with an attentional disorder to process. Limit potential distractions by eliminating unnecessary stimuli like arbitrary or unrelated graphics and non-essential moving or flashing visuals. In gaming, attention can become an issue when a lot is happening on-screen and a player needs to identify important actions while disregarding other, less important visual information. One approach advocated by Includification is large clear markers to indicate characters that are hostile, when differentiating between friend and foe is vital. A lesson we may take here is to create greater visual weight for more essential information in multimodal compositions. Making important pieces stand out means less time and focused attention is required to assess what is important and what is not, while also potentially creating a more easily skimmable text whose general meaning can be understood by someone who lacks the attentional focus to read the full text.

Friendly Text

While unnecessary visuals can be distracting, relevant ones can be immensely helpful to support understanding of a text for people with reading and language processing disorders. People with conditions like dyslexia can have an especially difficult time with large chunks of text, so supplementing or breaking up that text with related images can add context and make it easier to process. WebAIM uses this neat example to demonstrate how images can aid understanding for reading and language disorders:

Read this phrase:

Tob eornot obe

Now go to this link and see if it makes more sense!


I’d mentioned previously that cognitive disability was an issue of particular importance and relevance to me. This is because very recently I was diagnosed with ADHD (attention deficit/hyperactivity disorder). It came as kind of a shock. I know several people who have received this same diagnosis, but I’d never considered it as a possibility for me. I probably should have; I have a lot of the tell-tale signs. I struggled in school, was often thought to be “lazy” or “unmotivated” by parents and teachers. I’m disorganized, forgetful. My handwriting is difficult to read because I’m constantly running words together or skipping words completely. Still, despite learning about ADHD through the experiences of close friends, I never thought it seemed like me.

One of the main reasons for this is that I have a milder and only more recently recognized form of ADHD, known as “inattentive type” or “ADHD without hyperactivity”. This form is not as frequently diagnosed in children because, without the presence of hyperactivity, kids with inattentive ADHD don’t call attention to themselves by acting out. Instead, they are more likely to daydream excessively, and are prone to being distracted by their own thoughts rather than external stimuli. By the time they are adults, many people with inattentive ADHD have figured out ways to hide or compensate for their difficulty controlling their attention (because that’s what ADHD is really about: not a deficit, but uncontrolled attention).

I think another major factor in my resistance to the idea that I might have ADHD was related to my sense of identity. I’ve loved reading and writing for as long as I’ve been able to do them (and I had an early start at that, too). My perception of ADHD included being a struggling and even resistant reader, and I’d never been that. Had I? When I finally began to think seriously about this diagnosis, it made sense. One feature of ADHD is hyperfocus, the ability to focus extremely well on one thing for a long time. The trick of it is, hyperfocus is not something you can control. It comes when it comes and it goes when it goes. It does, however, occur most frequently when engaging in activities of special interest. For me, that WAS reading and writing. I might have struggled with them more if I didn’t have such a particular love for them, if I’d had a passion for photography or woodworking or skateboarding or whatever else. As it was, I could blow through a complex novel in an afternoon (sometimes), but reading a chapter of a textbook could seem impossible. It was hard for me to accept that I was having difficulty reading; it ran so counter to my identity as someone who adored books, and later as an English major and lit theory nerd.

Since being diagnosed, so much has changed. Understanding the source of a problem is always the first step to dealing with it, and though I still have work to do, the strategies I’ve developed to accommodate my learning style have made my life–and especially my work–infinitely easier and less stressful. What’s more, I’m able to see now the strengths and advantages of ADHD, not just the struggles and limitations. The ADHD brain is creative, inventive, and entrepreneurial. It is an inherent part of the person I am, and that’s a person I happen to like.

When the obstacles start to fall away, it becomes easier to appreciate and value difference, and I think that’s essentially what accessibility is about. If disability doesn’t have to be a limitation, if those with disabilities have free and wide-ranging access to spaces, activities, and media of all kinds, then we can stop seeing them as disabilities and start seeing them as diverse ways of thinking, moving, and being in the world. I hope this series of posts has helped start readers on a path to thinking and learning more about accessibility. The tips and suggestions I’ve provided are just a beginning. The most important thing we can do is to always keep accessibility in our minds, to see beyond our own experience and create work that can be shared by a wide and diverse audience.


Access Granted: Making Media Accessible, Part 4

OK! Let’s charge right in to our next modality up for consideration: audio. Having recently worked on an audio project now being groomed for possible publication, I’ve had this topic at the forefront of my mind, so I’m excited to dig in. Let’s go!


Digital texts are often very visual, so accessibility for those with hearing disabilities may sometimes go unconsidered, or be dismissed with the simple addition of subtitles to videos. However, as those of us who have composed exclusively or primarily audio works can attest, an immense amount of information can be conveyed through subtleties of sound, far beyond the words being spoken. The key questions for us to ask ourselves here are 1) what information is being conveyed through audio? and 2) how can we bring that information into another mode?

Subtitles & Closed Captioning

Let’s start with the most obvious step in audio accessibility: subtitles. More specifically, the difference between subtitles and closed captioning. In many film and video productions, a subtitle track is added that presents all spoken dialogue as text–sometimes in multiple languages–at the bottom of the screen. While this is helpful for hearing viewers who don’t understand the language used in the film, hearing impaired viewers lose out on a lot more than just the dialogue. Closed captioning adds contextual information to the text track, allowing hearing impaired viewers to experience the fullness of a scene. Includification offers this example to make the difference clear and to demonstrate the importance of true closed captioning for the hearing impaired:

-Subtitles-
Sally: Good morning, John.

-Closed captioning-
[Sally knocks on the door.]
Sally: Good morning, John.
[The floorboard creaks underfoot.]

In addition to adding a closed captioning track to video, these considerations can be incorporated into text transcripts that accompany audio works. Podcasts have seen a rise in popularity in recent years, and many podcasts provide text transcripts of their content. However, these transcripts do not always contain contextual information. Consider the wildly popular and genre-shaking podcast Serial. While the official website for Serial does provide a lot of supporting media in text and image form (scans of documents, charts, timelines, maps), there actually are no official transcripts of the podcast itself. In a great display of the widespread desire for accessibility options, the Serial discussion group on Reddit has collectively produced a full set of text transcripts for the podcast, as well as a FAQ to help navigate the series. The website Genius, which allows users to share and collectively annotate texts, also features user-created text transcripts of Serial. While these transcripts are essentially the same as the Reddit transcripts–dialogue only–the annotations take a step in the right direction by allowing users to add more contextual information. Still, the official Serial site devotes a whole section to crediting those who have composed music specifically for this work; what might be lost to a reader who doesn’t hear that music? What is lost in the absence of human voices, their emotive qualities beyond the words they use? A dialogue-only subtitle track or text transcript may increase accessibility for some of the more explicit ideas in an audio work, but a closed-captioning-style approach would come much closer to providing a genuinely accessible experience.
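The subtitles-versus-captions distinction above can be sketched as a tiny caption renderer: the same timed list of events produces either a dialogue-only subtitle track or a full closed-caption track with bracketed sound cues. This is an illustrative sketch of the idea, not any real captioning format like SRT or WebVTT:

```python
# Each event is (start_time_seconds, kind, text); "context" events carry
# the non-dialogue sound information that subtitles drop but captions keep.
def render_track(events, include_context=True):
    lines = []
    for start, kind, text in sorted(events):
        if kind == "dialogue":
            lines.append(text)
        elif kind == "context" and include_context:
            lines.append(f"[{text}]")
    return "\n".join(lines)

scene = [
    (0.0, "context", "Sally knocks on the door"),
    (1.5, "dialogue", "Sally: Good morning, John."),
    (2.0, "context", "The floorboard creaks underfoot"),
]
print(render_track(scene, include_context=False))  # subtitles only
print(render_track(scene, include_context=True))   # closed captions
```

The point of the sketch is that the contextual information has to be authored as its own data; a transcript that records only the dialogue has already thrown it away.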

Visual Cues

But wait! Aren’t we multimodal composers? Can’t we do more than just turn everything into a traditional alphabetic text? Sure we can! Visual cues are a great way to reinforce some of the contextual information contained in your audio track. Includification considers this option with regard to video game design, describing games that use flashing on-screen warnings or an increasingly obscured field of vision to indicate a character’s decreasing health. These kinds of visual cues are often used alongside audio cues, which not only increases accessibility but also deepens the player’s immersion in the game experience. In dealing with an audio text like Serial, visual cues could take a variety of forms. Text transcripts could be accompanied by images of characters and settings (photographs or even artists’ interpretations) to fill in contextual information otherwise provided by voice and ambient sound. The layout, background, and font choices made for a digital text can all be made with considerations of mood and tone, elements often provided by music in an audio work. Even within the text itself, font size, style, and color might be adjusted to reflect shifts in voice. A particularly ambitious and interesting project might even be to attempt to translate an audio text into a silent video text, if only as an experiment in how audio information can be conveyed visually. This is an area where I think there are so many questions, but just as many possibilities, all worth exploring.


While it may not be a widespread option yet, there is another way in which video games are approaching audio accessibility that is just starting to reach other formats. Tactile modalities, engaging the sense of touch, are generally overlooked and underutilized in digital media. This is largely because not all media formats have the technical capabilities for tactile engagement, but video games have long used vibration–or haptic feedback–to communicate information and increase player immersion. From Sega’s Moto-Cross arcade game in 1976, which vibrated its handlebars on collision, to Sony’s DualShock 4 controller for their latest PlayStation console, which features a capacitive touch pad and motion detection as well as vibration, video games provide some of the most interesting and diverse applications for communicating a digital text experience through touch. Outside of video game technologies, however, options become much more limited, though hopefully that is beginning to change. Mobile devices (such as cell phones) use haptic feedback technologies as well, but primarily in fairly limited ways, such as a simple buzz for a notification or soft feedback to imitate a button press when using a touch screen. Apple’s new smartwatch seems to be trying to take haptic feedback a little further. With new “Force Touch” and “Taptic Engine” technology, Apple is playing with new ways for an electronic device to interact with a user. Apple Watch wearers can even send one another their own heartbeat, read from the sender’s pulse and communicated to the receiver through gentle haptic feedback. While the practical use of these technologies may still be undetermined, developing new ways to communicate through underutilized modes holds incredible possibilities for composition, as a way of increasing accessibility and as a whole new mode of expression.

It is worth noting too that, while digital technologies that engage tactile modes aren’t yet available to everyone, multimodality does NOT have to be digital! Tactile and kinesthetic engagement are great reasons to play with physical artifacts as a mode for composition. One particularly interesting use of physical artifacts as a supplement to a traditional text is artist Miranda July’s novel The First Bad Man, which was accompanied by a series of online auctions allowing readers to buy physical objects referenced in the book. The objects ranged from a brush with blonde hair in it (sold for $93) to a small gold crucifix (sold for $117.50) to a post-it note (sold for $355)–all proceeds, of course, went to charity. Like Apple’s smartwatch heartbeat, this is certainly more art than practical accommodation, but this kind of experimentation and play can be viewed as inspirational, as a jumping off point for thinking about how we might bring our texts into the physical world.

Accessibility is for Everyone

Audio accessibility and accommodation for the hearing impaired is one area where, to me, it becomes more clear than ever just how much accessibility is not about disability. Providing audio-free accessibility can benefit many people in many different ways. Includification uses the example of Baby-Friendly Settings, which they describe as “the idea that those who are parents trying to play video games should be able to do so at 3 in the morning with the sound disabled and the baby sleeping right beside them”. This is just one example of why audio might be inconvenient, undesirable, or inaccessible for a hearing person. Many offices and other professional environments may be inappropriate for listening to an audio track, which means including audio-free accessibility can increase your chance of being heard in these situations. In fact, I think this kind of accessibility stands out to me the most because I work in a library, where quiet is an absolute must! I often find myself simply skipping video and audio content I encounter online while I’m at work, no matter how interesting I find it. Even when I plan to come back to it later, I rarely do; the moment is lost. Those who can hear the audio may still need additional help understanding it. Someone unfamiliar with the language used in audio work can much more easily seek translation assistance with an alphabetic text, or gain understanding through recognizable symbols and images. The point is this: the more accessibility options you can provide for your text, the more people you will reach–any person, any time, any place. And who doesn’t want that?


My next and (probably?) final post in this series will be about a kind of disability that is far too often overlooked despite having a significant impact on a huge number of people–including myself. Cognitive disabilities: What are they? How do they cause problems with accessing texts? What can we do to increase cognitive accessibility? Stay tuned to help me figure it out.

Access Granted: Making Media Accessible, Part 3

[Surprise! I changed the look of the blog! I’ll explain why at the end of this post, if you haven’t already figured it out.]

As it turns out, the aural and visual modes–the two most likely to be engaged in digital and multimodal texts–are just way too much to address in one blog post. So today we start with visual, probably the single most-used mode in almost any kind of composition. A visual text can be anything from an alphabetic text to a video to an image to a diagram, and all of these media can present challenges to the visually impaired. Further, visual impairment is incredibly common; according to the American Foundation for the Blind, 20.6 million American adults reported vision loss (poor vision, even after the use of corrective lenses) in a 2012 survey.


Disabilities can vary immeasurably, but for the purposes of accessibility accommodation in digital and multimodal texts–and in trying to assist the most people in the fewest moves–it can be helpful to consider three different categories of vision disability: blindness, low vision, and color deficiency. While accommodations for one of these groups may be beneficial to the others as well, they each have unique enough needs that they should be considered individually when thinking about text accessibility.


Blindness

One of the most significant ways to increase access for blind users is to ensure that your digital text is compatible with text-to-speech programs (as well as speech-to-text, if user input and interaction is required). This can mean a few things. Of course, in the most basic sense, you should ensure that whatever software or platform you are publishing on can be accessed by a text-to-speech reader. One excellent resource for testing this is Natural Reader, a free text-to-speech program that can read a variety of document types as well as web pages. They even offer a button you can add to your web browser, allowing you to simply click it while viewing any webpage and get a text-only version of the site, which can then be read aloud by the program.

Another way you can optimize your site for these kinds of readers is to ensure that images are accompanied by text descriptions, so users who cannot view images don’t miss out on important aspects of the text. The Border House is a feminist gaming blog with a policy that all images must be accompanied by descriptive captions; just how detailed those descriptions are is up to the individual blogger, as some images have larger rhetorical significance to the piece they accompany than others do.

Finally, consider the way your text is arranged for optimal clarity when interpreted by a text-to-speech reader. This may seem strange at first, but consider the example of Audyssey, a web site, magazine, and mailing list about games for the visually impaired. On first glance, this site almost seems broken to a sighted user; in fact, it is optimized specifically for text-to-speech readers and the visually impaired. Take a look at the archives of Audyssey’s magazine and you’ll notice some interesting characteristics. The things that normally would improve clarity and comprehension for a sighted reader–spacing, varying font sizes and styles, color, etc–have all been omitted from these documents. Instead, they have implemented a text-to-speech-reader-friendly navigation system based on characters–plus signs, in this case–that allows a blind user to more easily navigate the sections of the magazine. This system works as follows:

“Note: This magazine uses plus-signs as navigation markers. Four plus signs are used to denote featured content such as Articles, and the Chatting with Creators sections. Three plus-signs are placed above any regular articles or sections like the News from Developers, or Reviews & Announcements. Within these sections, two plus-signs denote the start of a new sub-section like the next letter or game news. Smaller divisions are marked by a single plus-sign. This allows people to use their search capabilities to go quickly to the next division they are interested in. For instance, the “Letters” section is preceded by three plus-signs. Each letter within it has two plus-signs before it. Answers to letters if there is a response will have a single plus-sign before them.”

–Audyssey: Games Accessible to the Blind, Issue 53, 1st Quarter 2008

Audyssey is an amazing example of how we can rethink the way we create digital texts–even strictly alphabetic texts–to increase accessibility for people with visual disabilities.
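To see how simple and machine-friendly the plus-sign convention is, here is a rough sketch (my own illustrative code, not anything Audyssey publishes) that turns those markers into an outline a reader could jump through:

```python
# A line consisting only of plus signs marks the start of a division;
# more plus signs mean a higher-level division, so a blind reader can
# search for "++++" to jump between featured sections.
def outline(text):
    toc, pending = [], None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and set(stripped) == {"+"}:
            pending = len(stripped)       # remember the marker's depth
        elif pending and stripped:
            toc.append((pending, stripped))  # next line names the division
            pending = None
    return toc

issue = """++++
Articles
++
Letter from a reader
+
Answer
"""
print(outline(issue))  # -> [(4, 'Articles'), (2, 'Letter from a reader'), (1, 'Answer')]
```

The same plain characters that a screen reader speaks cleanly also double as a machine-parsable table of contents, which is exactly the kind of dual-purpose design the magazine is going for.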

Low Vision

Above I mentioned that certain things–spacing, font size and style, color–really enhance the clarity of a text for a sighted reader; for a low-sight reader, these are exactly the qualities we want to target for enhancement and customizability. While using a particular font style for text may serve a rhetorical purpose, Includification recommends game designers include features to allow users to change fonts to something simpler and easier to see (recommended fonts are Arial or Times New Roman–they are often used as defaults for a reason!). When composing a digital or multimodal text, you may not always have this option; we’re not often building a system from the code up. However, you can give a little extra thought to your font choices. What is the rhetorical purpose for the style you are using? Could that purpose be served with a similar but simpler font? At the same time, think about font size. If publishing to the web, most browsers allow you to zoom in and out by holding Ctrl (Command, on a Mac) and using the mouse scroll wheel or the + and - keys. Take a moment to try this out on your text and ensure that your content is still clear, readable, and well-proportioned. You may find that, when the screen is zoomed to make text a readable size for a low-vision user, other content that may be relevant is being cropped out. Playing with zoom is one of the easiest ways to consider accessibility concerns for users with low vision.
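One concrete way to make zooming and resizing work well is to declare font sizes in relative units rather than fixed pixels, so text scales with the reader’s base font-size preference. A minimal sketch of the arithmetic, assuming the common 16px default root size (the function names are my own):

```python
# Fonts declared in rem scale with a low-vision user's browser or OS
# base-font-size preference; fonts declared in px do not.
ROOT_PX = 16  # the default root font size in most browsers

def px_to_rem(px, root=ROOT_PX):
    return round(px / root, 4)

def effective_px(rem, user_root_px):
    """Size actually rendered once the user raises their base font size."""
    return rem * user_root_px

body_rem = px_to_rem(14)            # 14px body text becomes 0.875rem
print(effective_px(body_rem, 16))   # default preference: 14.0px
print(effective_px(body_rem, 24))   # user at 150% base size: 21.0px
```

A fixed 14px declaration would stay at 14px no matter what the reader asked for; the relative declaration grows with their settings.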

Color Deficiency

Color blindness is interesting because it is so often overlooked, yet it is incredibly common. One of the most striking stories I heard while attending an AbleGamers panel at PAX East was about a game designer who was told his game was not color-blind accessible… and who then replied, “I know, I’m color blind. I can’t play my own game!” I think this story makes a great point, not only about the prevalence of color blindness and other disabilities, but about how we so often design our work for an imagined idealized audience, some generic default viewer whose qualities are dictated by social and cultural values we may not even be consciously aware of. This just makes it all the more important to take a moment (or several, throughout the creation and composition process) to consider your work from a variety of perspectives–including your own!

The use of color is so significant to many digital and multimodal texts, and the rhetorical uses for and implications of color are just immense. It is for that very reason that considering color blindness is so important. There are many different types and gradations of color blindness; some people can’t distinguish between the colors red and green, some can’t distinguish different shades of the same color, others can’t see any color at all. One way to make your text more accessible for color blind users is to consider where it is important to discern contrasts in your work. If it is important to the meaning-making of your work to distinguish clearly between two different colored texts, images, symbols, etc, make sure those two items–whenever reasonably possible–are not red/green, not shades of the same color, and not too similarly toned (a muted slate blue and a soft maroon may look like the same shade of gray to a totally colorblind viewer). Even simple alphabetic digital texts should consider these factors in choosing text, background, link, and highlight colors. Includification offers great examples of how problematic colors can dramatically change a user’s experience: in one scenario, a player finds it nearly impossible to use a game’s targeting system because the green target crosshairs are indistinguishable from the different shades of green found in the game’s grassy environment; in another, team members are indicated with green markers for friends and red for enemies, making a red-green colorblind player feel completely lost.
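For text and background pairs, this kind of judgment doesn’t have to be eyeballed: WCAG 2.x defines a contrast ratio you can compute from the two colors’ relative luminance, with 4.5:1 as the usual minimum for body text. A sketch of that formula in Python:

```python
# WCAG 2.x contrast ratio: linearize each sRGB channel, compute relative
# luminance, then compare the lighter and darker colors.
def srgb_to_linear(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # -> 21.0
print(contrast_ratio((106, 90, 205), (128, 0, 0)))  # slate blue vs. maroon
```

Black on white scores the maximum 21:1; a muted slate blue against a soft maroon scores far lower, which is the numeric version of the “same shade of gray” problem described above.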


These are really only a few examples of ways a digital or multimodal text can consider accessibility concerns for those with vision disabilities. I’ve just barely scratched the surface! Because vision impairment is so common, and because visual modalities are so often essential to our compositions, this is one type of accessibility I think it is absolutely necessary to think about. Of course, as I’ve mentioned before, there is no perfectly accessible text, and you cannot be expected to tailor every aspect of your work exclusively to accessibility. But by keeping some of these tips in mind, and stopping to ask yourself some important questions about the choices you are making, I think a little consideration can go a long way to increasing text accessibility for a whole lot of people.

Next up: audio! Stay tuned!


[About the new theme: After writing this post, I could no longer ignore the accessibility issues with my previous theme. Links were bright blue against a pale blue background, the text was tiny (even I struggled to read it!), and as cute as the balloons were, they presented too much potential for distraction and obstruction. This new theme–which is awesome and free, called Hemingway Rewritten–features a simple and clear font in black on a white background, bold headings, and bright blue-green links that aren’t competing with any other colors. I’m especially fond of the header; the image (a public domain find) allows you to add visual interest, but by providing a solid black field for the title text rather than placing it directly over the image, it remains very high-contrast and easy to read. It may not be perfect, but I hope in making this change I’ve taken a bit of my own advice and made this blog a little more accessible for anyone who wants to read it.]

Access Granted: Making Media Accessible, Part 2

In my last post I talked a bit about why accessibility is so important, and hopefully you were totally convinced. But the question remains: how do we increase accessibility in multimodal/multimedia compositions? So here’s the one perfect way to make all your media completely accessible for everyone…

just kidding.

There is no magic trick to make your work 100% accessible for every person. Mark Barlet, founder and executive director of AbleGamers, suggested in a talk at PAX East that accessibility is like a spectrum. No one work is going to be universally accessible, covering the entire spectrum, but you should still strive to cover as large a chunk of that spectrum as you reasonably can. Additionally, we should think about the bigger picture and work towards having a wealth of accessible media available at every point on that spectrum. So rather than looking at something as either accessible or not, there are gradations of accessibility, as well as different types of accessibility to be considered. So let’s look at some of those, and some of the ways we can begin to address them.

In the panel I attended, as well as on their excellent website Includification, representatives from AbleGamers discussed four general categories of disability: mobility, hearing, vision, and cognitive. While specific individuals may have widely varying needs depending on their particular abilities, these categories provide a great framework for starting to think about ways you can increase accessibility.

Before I go on, I think there’s an elephant in this room we need to talk about. AbleGamers is an amazing organization… but what if you’re not making games? Isn’t this blog about writing? How is any of this relevant to composition?

Well, of course creating a game is an act of composition, but you don’t need to be a game developer to find these tips useful. Working with any kind of digital or multimodal composition will always come with issues of accessibility, as disabilities can prevent people from receiving information and actively engaging through all possible modalities. When you compose in one mode–say, plain text–it’s much easier to find an accommodation for that mode (text-to-speech programs, for example) and unlock access to the entire message. When you compose in multiple modes, the problem becomes more complex, and you risk losing part or all of the intended message and experience of the work if one or more modalities can’t be accessed. But all is not lost! Digital and multimodal media also come with a wealth of options for increasing accessibility, and for providing rich and engaging experiences, when the affordances of those modalities are carefully considered and employed wisely.

Now that we’ve got that out of the way, let’s talk a bit about these different categories of disability and how we can increase accessibility in those areas.


Disabilities that fall into the mobility category include any condition that can limit, impair, or alter physical mobility and motor functions. In the gaming world, this can cause a wide variety of problems, as games are designed to be deeply interactive and mobility disabilities can be incompatible with the game’s designated methods for interaction. For example, some people with muscular dystrophy have an extremely limited range of motion, such that they can only move a computer mouse about 1/16th of an inch in any direction. Meanwhile, people who experience tremors may have a great deal of difficulty making precision movements with a mouse and instead tend to make larger, more sudden movements. One way game designers are asked to address this problem is to allow users to set their own level of mouse sensitivity; a very high sensitivity setting will allow a user to move around the whole screen with only the slightest of movements, while a very low setting will require a significantly large movement to move the mouse a moderate amount, thereby filtering out a great deal of involuntary movement caused by tremors.
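To make the sensitivity idea concrete, here is a minimal sketch of how pointer-sensitivity scaling might work. This is purely illustrative–the function name, the deadzone parameter, and the numbers are my own invention, not code from any real game or input library:

```javascript
// Hypothetical sketch of pointer-sensitivity scaling.
// rawDelta: movement reported by the device (in pixels or counts).
// sensitivity: user-set multiplier; high values amplify tiny movements
//   (helpful for a very limited range of motion), low values damp
//   large movements.
// deadzone: ignore movements smaller than this threshold, filtering
//   out small involuntary jitters that tremors can produce.
function filterPointerDelta(rawDelta, sensitivity, deadzone = 0) {
  if (Math.abs(rawDelta) < deadzone) {
    return 0; // too small to be intentional; discard
  }
  return rawDelta * sensitivity;
}
```

With sensitivity set high, a one-pixel nudge can sweep the cursor across the screen; set low, a sudden jerk moves the cursor only a moderate amount.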

So again, what if you’re not making a game? When composing a digital work, you are likely using existing software and not designing your own system from the ground up, so what can you do to help an audience with mobility disabilities? Are they even relevant? To answer that, let’s look at a digital text with some significant interaction: Erin Anderson’s The Olive Project [disclaimer: this is ONLY being used as an example and is in no way meant to be critical of this particular project]. On the main page, the areas to be clicked on are relatively large, and I can also navigate to each of them with the keyboard by using the tab key and hitting enter to click on the link I want. A good start. However, once I enter the main interactive part of the work, my access becomes more limited. I can still select and click links by using the keyboard (especially helpful because clicking on a small word in a paragraph requires some precision of movement), but to play the accompanying audio clips I have to use the mouse to click on the very small play button of the audio player. There is no way (that I can find) to access the audio player with keyboard commands. This means I only have one way into that content, and it requires some degree of fine motor skill.
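For comparison, here is the kind of small change that opens a second route into content like that audio player. This is a generic sketch, not code from The Olive Project; in real HTML the simplest fix is usually a native button or audio element with built-in controls, which browsers make keyboard-focusable automatically, but the pattern for a custom control looks roughly like this:

```javascript
// Hypothetical sketch: letting Enter or Space toggle a custom play
// control, so the mouse is no longer the only way in.
// 'player' stands in for whatever object controls playback.
function handlePlayButtonKey(key, player) {
  if (key === 'Enter' || key === ' ') {
    player.playing = !player.playing; // toggle playback
    return true;  // we handled this keypress
  }
  return false;   // let other keys pass through untouched
}
```

Wired to a focusable element (one reachable with the tab key), this gives keyboard users the same toggle that mouse users get by clicking, with no fine motor skill required.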

Some helpful questions to ask yourself when creating a digital work: What interactive elements of this work are essential? How many ways are there to interact with those essential elements? Can I access them with a keyboard, with a mouse, with a voice control program? Can I increase the number of ways my audience can access and interact with this content? What am I requiring my audience to do?


In my next post, we’ll talk about two more categories of accessibility essential to digital and multimodal composition: hearing and vision.

Access Granted: Making Media Accessible, Part 1

So let’s talk about accessibility.

Last weekend, I attended a super fun gaming convention called PAX East. In addition to being super fun, PAX is a great opportunity to attend panels on really fascinating topics relating to the world of games and game-related media and culture. This year I attended several panels hosted by an organization called AbleGamers, an advocacy group for greater accessibility in games. These panels really had an impact on me and the way I think about media accessibility, so I’d like to share some of what I learned there.


Me & my glasses: always and forever

The first panel I attended was about why you should care about accessibility. Disability touches just about everyone in some way, even if it’s not immediately obvious. One example, casually mentioned by a panelist, is wearing glasses. I had to think on that one for a bit. My first reaction was to scoff–I’ve worn glasses for most of my life, and it seems presumptuous and offensive to suggest that you could categorize something so trivial with the experience of people with more serious disabilities. But then I had to stop and ask myself, is it so trivial? And why?

The fact is, if I didn’t have something to correct my vision, I would be quite seriously disabled. My vision is poor enough that it is impossible for me to read even a large print book without glasses; as someone currently pursuing a dual master’s degree in English, that kind of disability would have an incredible impact on my life. Still, I see it as so trivial I almost don’t acknowledge it at all. Because I have an easy and accessible accommodation, the disability disappears. What this all revealed to me is just how relative the term “disability” is. There is no objective measure of who is “able” and who is “disabled”; those terms only refer to how compatible a person’s physical abilities are with their environment. So if we can all work towards greater accessibility for all kinds of people, it is possible to effectively reduce or even eliminate disability! Amazing, right?

Beyond that, AbleGamers’ founders also repeatedly pointed out that accessible design isn’t just about creating special accommodations for certain people–it’s just plain good design. Media that is accessible on multiple levels in multiple modes, and interactive experiences that can be customized to suit the needs or preferences of the user, don’t just benefit the disabled, they benefit everyone. They make the media experience richer and more engaging for all audiences, regardless of ability level. Striving for greater accessibility–and doing it right from the start, building it in from the ground up–is just plain good design, for everyone.

Now that we’ve established just how important and awesome accessibility is, my next post (with lots more great info from AbleGamers) will address the obvious question: how the heck do we do it?

Fair Use Week!

Hey folks, apparently this week is Fair Use Week! Appropriate, right? In honor of this event, I want to share something I’ve had kicking around in my drafts folder waiting for context.

First, here are some scans from the book Stolen Sharpie Revolution. This book is mostly a how-to guide for the world of zines. It covers creating, designing, printing, and distributing zines. If you don’t know what a zine is, YIKES. You definitely should, because they are amazing.

Strongly (although not exclusively!) affiliated with punk scenes in the late 80s and early 90s, zines were* handmade and photocopied mini magazines usually distributed for free or at cost ($1-$3) at shows and parties, or collected and sold/distributed by zine distros. One of the most important features of zines was that pretty much anyone with access to a copy machine could make them; they were usually handwritten and drawn and/or involved LITERAL cutting and pasting from other sources (with actual scissors and glue). The egalitarian publishing ethic and the use of materials clipped from other sources make zines and zine culture very relevant to digital writing today, as zine makers were dealing with issues of fair use and independent multimedia publishing long before the internet brought those concerns to much wider publishing platforms. You can see on the first page of Stolen Sharpie Revolution that the copyright information is actually copyLEFT information.

[scan: Stolen Sharpie Revolution copyleft page]


Here’s the rest of that intro if you want to read more about what Stolen Sharpie Revolution is all about:

[scan: Stolen Sharpie Revolution introduction, continued]

Finally, here’s Stolen Sharpie Revolution‘s take on copyright and fair use as they apply to zine production.

[scan: Stolen Sharpie Revolution on copyright and fair use]

So do the fair use and copyright concerns of zine makers translate directly to digital media? No. As you can see above, zines usually had pretty small runs and circulated primarily within local music and art scenes. Even the most popular zines sold through distros never produced that many copies–nothing compared to more professional independent publishing options today, and certainly not comparable to the almost infinite and unpredictable audience possibilities of online publishing. But I think by asking the right questions, and by experimenting and occasionally ignoring the consequences, zine culture has done a lot to pave the way for creative digital publishing, and it is worth a good long look as we start to think about what we can really do with digital media and where our boundaries are (as well as where they aren’t).


*An important note: I am generally referring to zines in the past tense here because I am particularly referencing the fact that zine-making began before internet publishing was possible. But zines aren’t dead! People are absolutely still making and distributing amazing zines. In fact, zine-making could be a great way to address issues of multimodal composition, fair use, and real-world publishing in a classroom setting that doesn’t have the tech resources to do digital media composition. Multimodal doesn’t have to be digital!!! Here’s a list of resources for places you can find zines and zine info (in addition to the Stolen Sharpie Revolution page, which has a TON of great links and info):

Papercut Zine Library (located in Allston!)

Boston Zine Fest

A/K Press

Doris Distro

Microcosm Publishing


Stranger Danger Zine Distro

Gender and the Concept of Self

While reading the fantastic McCloud “Vocabulary of Comics” piece, I kept thinking about this TED talk and so I thought I would share it with you. McCloud talks about the concept of the self as something relatively abstract; more like a cartoon than a photograph, something that resides in the world of ideas rather than images. This is what leads me to think about Caroline Heldman’s comments on women and body monitoring. Her whole talk is not long and has a lot of interesting information about objectification, but if you just want a quick overview, the video embedded here is cued up to the part about body monitoring.


If you don’t feel like watching (or can’t watch) the video right now, basically body monitoring is a heightened awareness of your own body, and specifically its appearance. Anyone may engage in body monitoring now and then, but according to Heldman, men do it very little while women do it almost constantly–on average about every 30 seconds*. You may for a moment think about and envision the angle of your body in your chair, the way your hair is falling, your posture as you walk, your facial expression–always from an outside perspective. Heldman suggests that women conduct these small checks on their own appearance so frequently that it can consume a significant amount of their cognitive energy–not to suggest that women have impaired cognitive functioning, of course, but rather to point out the seriously damaging effects of self-objectification.

In light of McCloud’s statements about identity, I have to wonder: do women who engage in frequent body monitoring have a more image-based sense of identity than most men? And if that’s true, even a little, how might that affect some of McCloud’s other points about the way people can more easily connect and identify with cartoon characters? Is that process more difficult for women?

I can’t help but think of the frequent debates over representations of women in comics and video games. While there is no doubt that they are on the whole pretty problematic, for a lot of the reasons Heldman discusses in her talk, there is another argument I’ve heard often from well-meaning guys who just don’t understand why diverse images of women as playable characters in video games are so important to women gamers. They say it really doesn’t matter much to them what their characters look like, so why do women care so much? This may simply be because they are speaking from a position of privilege where they have always had diverse and plentiful representations of people like themselves (especially if they are able-bodied and white), so an occasional aberration is no big deal for them. But maybe there’s something else happening here too. Maybe women actually do have a harder time identifying with a character that doesn’t resemble themselves (or at least a slightly idealized version) because their sense of identity is just generally more strongly tied to image and appearance. McCloud talks about the ability of a person to more easily inhabit a simpler figure, because it is closer to our abstract conception of ourselves. But what if that isn’t true for everyone? What are the implications?

Obviously I’m shooting off a lot of questions here, so please let me know what you all think!


*DISCLAIMER: Of course none of this is meant to suggest that ALL women are constantly body monitoring or that men never do, but rather that they are general trends that occur as a response to socialization and cultural ideals.


**In my previous post, I shared a music video called ‘Jed’s Other Poem’, which got some really great responses in the comments. I decided to repost a part of my response from there as its own post to get more eyes on it, and because it relates to a conversation some of us were having in class last week. I also recommend checking out the comment discussion on that post though, so you can see others’ excellent contributions. This particular response followed Mike’s comment about whether computer code should be considered text, since its intention is not to be read, but to be run.**

That’s a really great point about the function of code. It makes me think of reading sheet music; the desired effect and meaning is the music itself, an audio performance, but at the same time a person who knows how to read it can get a general sense of the piece by just looking at the written notes. Someone who is familiar with the particular coding language may be able to get a sense of the program just from reading the code, though the intended/desired effect is likely the actual running program. This brings me back to the aesthetic aspect. Certainly sheet music and musical notes are used decoratively, for aesthetic and rhetorical purposes, often by those who can’t actually read them at all (and they may be appreciated by readers/audiences who can’t truly “read” them either). Here’s an example:

[image: dress printed with musical notes]
The notes on this dress are not meant to be read or played, but appreciated aesthetically, and for their rhetorical associations (culture, the arts, elegance perhaps?). I don’t read music myself, so I can’t say whether the notes on the dress are gibberish or not; it’s entirely possible they were taken from an actual piece of music, deliberately or randomly chosen. Still, their primary intended purpose here is NOT to be read as a text or a language, but to be viewed as an image. The same may be said for the code in Jed’s Other Poem; since you don’t see the complete code, and the program has already run so you know what it does, the code’s purpose seems more image-based and aesthetic.
This brings me back to another point some of us discussed (when the class split in two), the blurred line between what is text/word and what is image. I am a particular fan of visual art that incorporates text, a great example being Tracey Emin’s neon signs and quilts. Here are some more pictures!

[image: Tracey Emin neon sign]

[image: Tracey Emin quilt]

How do you evaluate things like this as written text? As a visual image? As a physical artifact? This is why so many of us seemed to agree with Prior’s response to Kress in indicating that his distinctions between modes were inaccurate and almost arbitrary, suggesting instead that multimodality “is better pursued through more complex and less certain classifications”. I certainly agree with that assessment; the delineations between modes and their individual affordances are blurry at best and non-existent at worst.

I feel the need to come to some conclusion here about how we SHOULD think about modes, multimodality, and affordances, but I definitely don’t have that conclusion. Just more food for thought, I guess.

Jed’s Other Poem; A Study in Multiple Modes


I’m not sure what I want to say about this yet, but I know I want to share it. It’s giving me a lot to think about regarding modes of composition. To create this video, a simple computer program was written to move the cursor and type the lyrics with appropriate timing for the song. The program was written and run on a vintage Apple II computer (one of many deliberate rhetorical choices). Near the end of the video, the computer actually displays all of the code for the program that has just run. This is the part that is fascinating me the most, the exposed medium. Here’s a rundown of some of the modes being used in this one short video:

audio: instruments, vocals, lyrics

visual: video recording of Apple II computer, computer program

artifact: 1979 Apple II computer, compatible program

textual: selected song lyrics (in running program), program code

How do we think about computer code as composition? Is it textual or a mode of its own? I’ll need to look into that more; someone must have written about it somewhere, right?

Revival: a proposal for the future of this blog

For this class, I am choosing to revive the blog I started for ENG817. I’ve renamed the blog “Exploring Composition: thoughts on writing and the teaching of writing.” While I intend for this to continue to be an academic-based blog in which I will reflect on material covered in class, it will also include thoughts (reflections, ideas, inspirations, questions, etc) about composition in general, based on my personal experiences and on other class work. My goal for this blog will be to collect ideas, resources, and questions about writing in one place, so I may revisit it often as I continue exploring various issues in composition in both my academic and personal lives. I also hope to use the blog to make connections, and to track the evolution of my perception of writing and teaching writing, possibly even the evolution of my own writing itself.

At the moment, I expect my primary audience to be this class and myself, which I am perfectly happy with. It absolutely supports my goals to use this blog as a tool for self-reflection and for gathering the thoughts and opinions of my classmates. However, I hope that in the future this blog may be of interest to others as well. That might be an anonymous browser who happens to have an interest in composition and wants to see someone else’s thoughts on the subject, or a teacher looking for a fresh perspective and ideas on how to innovate their approach to writing. Particularly, though, I think this blog could be useful to me in the future as an addition to a writing portfolio, teaching philosophy, or digital CV. Explaining your ideas and approaches to a broad topic like composition can be complex, and having this document to point to for future co-workers, collaborators, or potential employers could make that explanation simultaneously simpler and more nuanced.

When I first developed this blog in ENG 817, my classmates began similar blogs. Looking back at them, there are many similarities between their blogs and my own. They are academic in nature, considering issues relating to the teaching of composition in a reasoned and academically-minded way. They also have a slightly informal touch to the mostly formal content; serious academic issues are interspersed with relevant personal anecdotes and occasionally a light, joking tone. These are in many ways qualities I intend to maintain in my own blog–an academic approach in an informal voice–but there are also places where I plan to diverge. These blogs are very course-centered; each post is a response to a prompt, reading, or issue raised in class. I want to broaden the scope of my blog to explore ideas about writing (both the teaching AND the doing of writing) wherever I encounter them. This will, of course, include responding to issues raised in Digital Writing, but it will also include things I might want to address from another class I’m taking, Creative Writing Pedagogy & Theory. I edit a literary magazine, Mock Orange Magazine, which I expect to discuss in my blog. I’m a big fan of video games, and will likely talk about the development of digital narratives there. I like making and crafting objects, and would like to explore more deeply ideas about composition as making. There are a thousand ways that ideas relating to composition pop up in my life, and while I plan to have an essentially academic approach, I hope to address all of those many and varied issues in this blog.

I plan to hold myself to an absolute minimum of one post per week, though I strongly suspect I will exceed that. To elaborate on that goal, I’d like to shoot for at least one post of significant length per week, with “significant length” being at least three written paragraphs. Other posts can be shorter, perhaps even just links or video clips with minimal commentary, but I want to be sure I’m keeping some log of my progress by periodically doing some more in-depth reflection. I aim to complete those written reflection posts on Fridays or Saturdays, as that will allow me some time to reflect on issues raised in my courses on Wednesdays and Thursdays.