Taking Advantage of Reflexive Responses

While prospecting for information on a Web site that I visit fairly regularly, I recently fell victim to a fascinating Web page trick. The trick took advantage of the kind of reflexive (“programmed”) response we each develop once we grow so familiar with a task that we no longer consciously think about what we’re doing. While I waited for the results of a search to appear, a banner ad popped up–banner ads are those rectangular advertisements that infest the tops of browser windows in an attempt to coerce you into reading the ad while you wait for the rest of the window to appear. I guess I was tired, because I didn’t immediately recognize this particular box as an ad, even though it bore all the obvious hallmarks: It was the right size, occupied the right location, had animation, and even loaded before any useful information appeared. So why did I miss it? Because the ad resembled the dialog box you see when a site tells your Web browser to open a second window. The contents of this wolf in sheep’s clothing (the fake window) didn’t interest me, so without considering the matter any further, I reflexively clicked the Close box to banish the window from my screen.

To my horror, and subsequent delight, I ended up on an advertiser’s page–and since I was browsing online while at work, you can imagine how relieved I was that this wasn’t a porn site. Someone more alert would have immediately realized that the entire dialog box was actually a screenshot that had been turned into an image map; clicking the Close box activated the link attached to the image rather than closing the window. I admire the cleverness of the anonymous programmer who understood and manipulated my behavior so expertly, but I’m not going to repeat the URL because I loathe this kind of trick marketing.

That’s the horror part; the delight came when I recognized that this kind of predictable response has a variety of implications in technical communication, such as helping users to avoid common “reflexive” mistakes and guiding them in the direction they need to go to accomplish a task.

A Word of Caution

Before continuing, let me emphasize two points. First, the goal in technical communication must always be to take advantage of “reflexive,” “learned,” or “programmed” responses to better adapt documentation and user interfaces to user needs. You can certainly use the same principles to trick users, but that’s not my goal in this article. Second, taking advantage of these responses sometimes comes in the form of documentation (our main task as writers) but sometimes requires us to don our other hat: that of an advocate for the product’s users. To perform the latter role, we must develop and maintain sufficiently good relationships with our colleagues in the product development group that they’ll actually listen to our advice.

Jef Raskin (2000) eloquently describes what our goal should be: “Our mandate as designers is to create interfaces that do not allow habits to cause problems for the user. We must design interfaces that (1) deliberately take advantage of the human trait of habit development and (2) allow users to develop habits that smooth the flow of their work. The ideal humane interface would reduce the interface component of a user’s work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation.”

Isaac Asimov, one of the pioneering authors writing science fiction tales about robots, developed his famous “laws of robotics” some 40 years ago to account for the likelihood that an increasingly technological civilization would inevitably come up with technological means of doing ourselves in (such as potentially homicidal robots). Probably the most important of these laws from our perspective as communicators can be paraphrased as “allow no human to come to harm through action or inaction,” which parallels the doctor’s Hippocratic oath, commonly stated as “above all else, do no harm.” As Raskin noted, humans inevitably develop perfectly reasonable and predictable habits. Ethically speaking, our audience analysis should always consider where these very human responses could lead someone to grief.

Preventing Inappropriate Responses

Most users of Windows-based computers in 2000 encountered one of the more dramatic examples of taking advantage of reflexive responses. Viruses like Melissa, “the love bug,” and the “Shockwave movie” virus spread so swiftly because each of us trusts our close friends and colleagues not to send us something harmful. Many victims felt–quite reasonably–that anything they received from a friend must be safe.

Anecdotal evidence strongly suggests that trick banners such as the one that prompted this article initially have very high response rates; Jeffrey Veen, writing in January 2000, reported that responses to these types of ads have routinely averaged two or three times as high as responses to standard banner ads. But people resent being tricked, and trickery is hardly likely to establish your credibility with a potential customer. You don’t even want to think what will happen if that customer is a journalist who actively criticizes your product and encourages readers to stay away.

Many people call this problem “getting hijacked,” and it arises when the default response you accept by pressing the Return key produces a result you didn’t expect. To avoid such undesirable responses, you must first understand each task that you’re documenting well enough to know what your audience generally expects to happen at the end of that task. Once you understand this, you can encourage the product’s developer to set the default behavior so that users end up where they expected to go. If you’re unsure of that destination or result, you can take measures to slow down or even eliminate reflexive clicks on the “OK” button. Where you can’t make such changes, you can at least write your documentation to emphasize the non-default action, so that readers have a warning that their initial response may not be the appropriate one.

For example, software installations generally present a series of windows that require users to click the “Next” or “OK” button to proceed to the next step. Many users reflexively click these buttons without even reading the text in the dialog box in an effort to speed up the installation; this behavior arises from the reasonable assumption that software developers design installation scripts to meet the needs of most users, and that each of us represents a typical user. That’s often not the case, and when it’s not, you should find a way to make users pay attention to your message. Here are a few suggestions:

  • Keep messages brief, and eliminate any text that more properly belongs in a summary that reports the results of the installation. For example, rather than explaining the default directory in which software will be installed, simply display the default directory and provide two buttons: “Continue” and (for those who would prefer to change that directory) “Change directory.”
  • Pick logical names for buttons that more directly answer the question you’re asking, as illustrated in the previous suggestion; avoid the lazy approach of labeling a button “OK” (the default in many programming languages) if the answer you really want from the user is “yes,” “continue,” “accept,” or “change.” I would be unsurprised to discover (though I lack evidence to support this) that all buttons in dialog boxes should bear verbs that directly reflect the action: “Do something” would always be more effective than “OK,” since even someone who skips the text above the buttons will still have a direct indication of what will happen after clicking the button.
  • Require users to select one or more items from a list of options before proceeding, and preselect only the options that you strongly recommend. The goal is to make it easier for users to select the right components.
  • Don’t set any button as a default that activates as soon as the user presses the Return key. This approach forces users to at least read the button labels and decide which button to click; with a default button, it’s all too easy to simply hit the Return key and proceed without reading. Of course, if the default will be correct for the vast majority of users, highlighting the “OK” button is likely to save time and be appreciated by those users.
  • Provide a summary page at the end of the installation so users can review what they’ve chosen. Many installers and Web forms present this information in the last dialog box before beginning the installation or before submitting the form so that users can approve all the settings in a single click, or return to a previous screen to correct errors.
  • Install a timer that tells the software whether the reader actually had time to read the screen. One colleague reported a program that displayed a message similar to “That was awfully fast. Are you sure you read that last message thoroughly?” Another Web-based form that I’ve used didn’t allow me to submit the form until I’d read and approved a “license” page.
  • Place buttons or links to the next step beyond the bottom of a screen of text, rather than at the bottom of the first window; in this manner, users can’t click the buttons without at least scrolling. This would force readers to scroll through longer screens such as license agreements before they can move to the next screen. On the other hand, this differs from more familiar interfaces, and might not represent the best choice for routine use.
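Several of these safeguards are easy to sketch in code. The read-timer suggestion, for example, amounts to nothing more than recording when a screen appeared and comparing that time against a minimum plausible reading time. Here’s a minimal sketch in Python; the class name, the threshold, and the behavior are my own invention for illustration, not any real installer’s API:

```python
import time

class ReadGate:
    """Gate a "Next" button until the user has plausibly had time to read.

    min_seconds is a hypothetical threshold; a real value might come from
    usability testing (say, words on screen divided by average reading speed).
    """

    def __init__(self, min_seconds: float):
        self.min_seconds = min_seconds
        self.shown_at = None

    def show(self, now: float = None) -> None:
        # Record when the screen was displayed.
        self.shown_at = time.monotonic() if now is None else now

    def can_proceed(self, now: float = None) -> bool:
        # Refuse if the screen was never shown, or not shown long enough.
        if self.shown_at is None:
            return False
        now = time.monotonic() if now is None else now
        return (now - self.shown_at) >= self.min_seconds

gate = ReadGate(min_seconds=3.0)
gate.show(now=100.0)
print(gate.can_proceed(now=101.0))  # False: "That was awfully fast."
print(gate.can_proceed(now=104.0))  # True: enough time has elapsed
```

In a real installer, the check would run when the user clicks “Next,” and a False result would trigger a gentle “Are you sure you read that?” reminder rather than blocking the user outright.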

Promoting Appropriate Responses

In the previous section, I pointed out how taking inappropriate advantage of user reflexes creates a feeling of betrayal when you hijack the user into doing something they didn’t intend. But if the reflex leads to a result that achieves the user’s goal, the most likely response is either gratitude for a smoothly functioning product or (more likely) failure to notice that there was ever a chance to be led astray. Both are good outcomes.

The most obvious example of promoting an appropriate response requires an understanding of where failing to take the appropriate response would cause personal injury, loss of data, or other undesirable consequences. In these cases, the default behavior that results when the reader responds by reflex must prevent injury, data loss, or other problems. For example, it’s natural for users of a word processor who are finished working with a document to want to save the document, close the document’s window, and exit from the program. Given that this behavior is most likely to occur right before lunch or at the end of a long work day, when users are most in a hurry and paying the least attention, this combination of commands should be made as foolproof as possible. Yet in Word 97, you can often crash the software and lose data if you attempt to close a document window or exit the program while the software is still saving the file. Ideally, the keystrokes for closing a file or closing Word should be disabled while the save operation proceeds.

Since we can’t always influence such a major part of the design of a user interface, a more relevant example for technical writers would be to anticipate equally predictable responses and produce documentation that at least gives the reader a chance to avoid the problem. Since readers may not consult the documentation before proceeding with a task, confirmation messages and other dialog boxes should go beyond simply hinting at the consequences of an action: They should explicitly guide the user. For example, the standard confirmation message when deleting a file used to be “Are you sure you want to delete this file?” In recognition that it’s easy to inadvertently select the wrong file from a long list, modern operating systems now name the file in the confirmation message. But even so, Windows 98 makes “Yes” the default value, rather than the safer “No”; this lets hurried users simply hit the Return key. Making “No” the default would be more onerous for users, since it would require them to read the dialog box and consciously choose to delete the file, but where data integrity is crucial, this tradeoff is easy to justify. Moreover, those who don’t want to deal with such confirmation messages can often learn how to turn them off or modify their behavior through the operating system’s preferences settings.
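The safer confirmation described above is simple to express in code. In this Python sketch (the function names and prompt wording are hypothetical), the prompt names the specific file so a hurried user can catch a mis-selection, and a reflexive press of the Return key, which arrives as an empty response, is treated as “No”:

```python
def deletion_prompt(filename: str) -> str:
    # Name the specific file so a hurried user can catch a mis-selection;
    # the capital N marks the safe default, following command-line convention.
    return f'Are you sure you want to delete "{filename}"? [y/N]'

def confirm_delete(response: str) -> bool:
    # Only an explicit "y" or "yes" deletes the file; anything else,
    # including an empty (reflexive Return key) response, means "No".
    return response.strip().lower() in ("y", "yes")

print(deletion_prompt("report.doc"))
print(confirm_delete(""))   # False: the reflexive Return keeps the file
print(confirm_delete("y"))  # True: a deliberate choice
```

The tradeoff discussed above lives entirely in that one default: reversing the test so that an empty response means “yes” would save hurried users a keystroke at the cost of their data.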

Why not go one step further and think beyond our familiar habits? For example, asking users to confirm all file deletions can grow quite annoying, since the odds are excellent that if we’ve tried to delete a file, that’s really what we wanted to do. So rather than interrupting the user’s work to ask for confirmation, why not simply move deleted files into a directory named “deleted files”? After a user-specified amount of elapsed time or disk space usage (or both), the operating system could prompt the user either to let it delete the oldest files or to retrieve one or more of them. For additional security, why not add a second level of directory to give users a day or two to reconsider their decision to delete a file once and for all? The Windows “recycle bin” and the Macintosh “trash can” that inspired it adopt this approach to some extent, but not for all files and not for any data within active files, which makes them an incomplete attempt to protect users from their mistakes. That limitation is itself a reflexive response to the facts of life in the early days of computing, when hard drive space was far too expensive to justify such an approach. Now, most new computers come with such huge hard drives that few of us ever come close to filling them, so it no longer makes sense to delete files quite so irretrievably when we can devote a small fraction of the hard drive to saving them.
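A minimal sketch of this “deleted files” approach follows, in Python. Here soft_delete() moves a file aside instead of destroying it, and purge_old() permanently removes only files that have outlived a retention period. The directory name and the two-day retention period are arbitrary choices for illustration, not any operating system’s actual behavior:

```python
import shutil
import time
from pathlib import Path

TRASH = Path("deleted files")          # hypothetical per-user trash directory
RETENTION_SECONDS = 2 * 24 * 60 * 60   # give users two days to reconsider

def soft_delete(path: Path) -> Path:
    """Move a file into the trash directory instead of destroying it."""
    TRASH.mkdir(exist_ok=True)
    # A production version would also handle name collisions in the trash.
    target = TRASH / path.name
    shutil.move(str(path), str(target))
    return target

def purge_old(now: float = None) -> list:
    """Permanently remove trashed files older than the retention period."""
    now = time.time() if now is None else now
    removed = []
    if TRASH.exists():
        for item in TRASH.iterdir():
            if now - item.stat().st_mtime > RETENTION_SECONDS:
                item.unlink()
                removed.append(item.name)
    return removed
```

The point of the sketch is that “delete” becomes a two-stage decision: the reflexive click merely moves the file, and only the passage of time (or an explicit purge) makes the loss permanent.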

Technical communicators can also use the principles of information design to place information where readers expect to find it. For example, English readers scan a page or screen from the top left towards the bottom right. This is why dialog boxes place the title at the top of the window (to provide context for the screen’s contents), the explanatory content in the middle, and the buttons that proceed with or cancel the action at the bottom right. Documentation should follow the same path, with contextual information appearing at the top or left of a window or page, the relevant content towards the middle, and the call to action towards the bottom or right. Thus, a typical topic in a printed manual or online help begins with a title that lets readers determine whether they’ve found the correct topic, continues with navigational aids such as headings that let them skim quickly to the relevant part of the topic, and concludes with the required information. This is equally true for long chunks of text, numbered steps in a procedure, and online help screens.

A related “trick” involves using visual cues to attract a reader’s attention. For example, traditional warnings were often embedded directly within a large chunk of expository text, where they were easy to miss. One classic example of such writing is the phrase “of course, you shouldn’t try this [the preceding instructions] before you’ve made a backup copy of your hard disk”; by the time someone reads this warning, they’ve already completed the procedure and it’s too late to make a backup. Readers generally assume that we’ll warn them what precautions to take before they begin the actual task. That being the case, it’s a natural and logical reflex to simply dive into a procedure on the assumption that if there’s no cautionary information right up front, there are no precautions to take. Since graphics attract attention so strongly, particularly on text-heavy pages with very few images, labeling notes with a stop sign or a triangle containing a boldfaced exclamation point (!) draws the reader’s attention quite effectively. After the first experience in which they discover the value of this information, readers subsequently look for such icons, even if they don’t always heed the attached advice.

Conclusions

Don’t try to take advantage of reflexive responses unless the value of doing so, as seen from the user’s perspective, clearly outweighs the risk of annoying or harming the user. You don’t have to conduct a formal audience analysis to understand what responses users will value, but you do at least have to put yourself firmly in the user’s position so you can understand what they will be expecting and how they are likely to respond.

Knowing your audience also helps you learn their actual reflexive responses. One colleague reported the commonly held belief that the “tip of the day” boxes that appear when some software starts up are pointless: users either ignore them or turn them off after enduring the first few interruptions. Yet some writers have found that users actually do appreciate and use these tips, even if only because they don’t know how to turn off this feature at first. By being forced to read the tips, some readers actually begin to see their usefulness and start appreciating what would otherwise prompt the reflexive response of “how do I turn this off?” Given that users will eventually grow beyond the need for such tips, one potentially successful strategy would be to prioritize the tips: create an order of appearance that presents first the tips most likely to prevent serious problems with the software or to answer the most common questions received by your technical support group. By the time users decide to disable this feature, they’ll already have received the really important messages that you need to communicate.

None of us likes to admit that we have conditioned reflexes that override our higher cognitive abilities, yet such denials notwithstanding, each of us does occasionally respond without thinking something through clearly. As technical communicators, it’s important for us to accept this fact of human nature and plan for it in our documentation, and to work with the developers of the products that we document to both take advantage of the helpful reflexes and find ways to ward off the harmful ones.

Acknowledgments

Thanks to Roger Bell, Sharon Burton-Hardin, Bruce Byfield, John Cornellier, Elliott Evans, Joyce Fetterman, Connie Giordano, Amy Griffith, Guy K. Haas, Lisa Higgins, Michele Marques, Sarah Newman, John Posada, Annamaria Profit, and Al Rubottom for providing additional insight into this subject.


Literature cited

Raskin, J. 2000. The Humane Interface. Addison-Wesley, New York, N.Y. 233 p.

Geoff Hart

During a sometimes checkered career, Geoff has worked for IBM, the Canadian Forest Service, and the Forest Engineering Research Institute of Canada. In 2004, he threw away all that job security stuff for the carefree—not!—life of the freelancer. Geoff works primarily as a scientific editor, but also does technical writing and French translation, and occasionally falls into the trap of leading or managing groups.
