Getting Feedback on Our Work

In November 2009, I attended an STC Montreal presentation by John Druker of SAP Labs Canada. Like many of us, John struggles with the task of getting feedback from his readers about the documentation that he and his colleagues produce. Unlike most of us, John has made a considerable effort to find ways to get that feedback and incorporate it into his work. This article is based on John’s presentation, supplemented by my personal experience.

John works primarily with internal clients (SAP employees), but also with occasional external clients and paid consultants. These clients are a diverse group, ranging from neophytes to experts and including people from the many national cultures in which SAP works. But his general goals for feedback are relevant to technical communicators working in any situation: getting constructive criticism on the content he’s producing, its format, and the overall process used to produce it.

Although any feedback has some value, constructive feedback is the most valuable because it’s actionable: it gives you insights into how you can improve what you’re doing. Feedback must also become part of the information development process so it can be used to continuously improve that process and its outputs, rather than serving as a post facto way to fix problems after it’s too late to mitigate their impacts. Used this way, feedback becomes a means of ongoing improvement rather than a source of ad hoc fixes.

Tools for Getting Feedback

John’s group at SAP uses surveys, interviews, round tables, and Web 2.0 tools to gather information from their audience:

  • Surveys provide a simple way of getting standardized, easy-to-analyze responses from a large group of people. They let respondents reply at their own convenience and, particularly if they’re Web-based or integrated with a database, are easy to administer. Database integration also permits long-term tracking of your progress towards improvement (see the sketch after this list). Since no survey ever completely addresses all the audience’s concerns and opinions, you should include both fixed questions (e.g., multiple choice, Likert scales) and open-ended “essay” questions that let respondents identify concerns you didn’t anticipate. However, because surveys are impersonal and inflexible, it can be difficult to motivate people to participate.
  • Interviews are a more personalized form of survey: the questions may be the same, but if they’re skillfully presented as a dialogue, participants are more motivated to respond. Because the questions can be more open-ended, they offer a greater opportunity to elaborate on points the survey designer might have failed to consider, and unclear answers can be explored until they become clear. On the down side, interviews are more time-consuming and difficult to administer, and may therefore require more staff resources to implement. It can also be difficult to take notes while simultaneously listening to the participant. John didn’t mention a variety of useful tools from the social sciences, such as recording interviews to ensure you miss nothing and using “coding” forms that help evaluators interpret the interviews consistently.
  • Round tables (what I more often hear described as “focus groups”) are similar to interviews, but with a larger group of people participating. The advantages are that they can be easier to organize (because the discussion occurs once, in a single location, rather than once per participant in a range of locations), participants can build on each other’s comments if they’re inspired to do so, and disagreements can be explored to reveal key insights. However, as the group grows in size, it becomes proportionally more difficult to record all the responses and analyze the results. Moreover, groups require a skilled moderator to ensure that no one person dominates the discussion and that everyone has a chance to participate honestly. Multicultural groups pose additional challenges, since culture can strongly influence responses. John confirmed my personal experience, namely that Indian and Asian participants tend to be reluctant to express criticism openly, although they may be willing to do so in private. But he also noted that each culture is unique; his German colleagues, for instance, are sometimes hesitant to criticize because of fear their responses will be used against them.
  • Web 2.0 tools are online technologies such as blogs and wikis that let people collaboratively develop a body of information while simultaneously automating the process of recording responses. Combined with tools such as the polls provided by a blog site such as LiveJournal (http://www.livejournal.com/), they offer ease of management and delivery combined with ease of analyzing the responses. These tools offer the possibility of follow-up interviews, along with some of the advantages of round tables. The downside is that you must carefully balance the need for anonymity against the need to control who can participate; the latter is particularly important for tools such as wikis, in which the content could theoretically be modified by any participant.
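
To make the database integration mentioned in the survey bullet concrete, here’s a minimal sketch of how responses might be stored and tracked over time, using Python’s built-in sqlite3 module. The schema, question names, and data are my own illustrative assumptions, not anything John described:

    import sqlite3

    # Illustrative schema: one row per answer, so fixed (Likert) and
    # open-ended questions share one table and can be tracked over time.
    conn = sqlite3.connect("feedback.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS responses (
            respondent_id TEXT,     -- anonymized ID, never a real name
            survey_date   TEXT,     -- ISO date, enables trend analysis
            question_id   TEXT,     -- e.g., 'ease_of_use' or 'open_comments'
            likert_score  INTEGER,  -- 1-5 for fixed questions, else NULL
            free_text     TEXT      -- open-ended answers, else NULL
        )
    """)

    # Record one hypothetical response.
    conn.execute(
        "INSERT INTO responses VALUES (?, ?, ?, ?, ?)",
        ("r042", "2009-11-15", "ease_of_use", 4, None),
    )
    conn.commit()

    # Long-term tracking: average Likert score per month for one question.
    for month, avg in conn.execute("""
        SELECT substr(survey_date, 1, 7), AVG(likert_score)
        FROM responses
        WHERE question_id = 'ease_of_use'
        GROUP BY 1 ORDER BY 1
    """):
        print(month, round(avg, 2))

Because every answer is a dated row, the same table supports both a one-off report and the quarter-over-quarter trend lines that show whether your improvements are working.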

For all of these methods, it’s important to remember that different jurisdictions have different laws related to privacy and the preservation of a respondent’s anonymity. Anonymity can be important for practical rather than purely legal reasons: people are more likely to respond honestly if they feel that nobody will be able to harm them for expressing their opinions. It’s also important to understand that most people are not inherently experts in any of these types of information gathering, and that some training will be required before you begin. This is particularly true if many people will be involved as (say) interviewers, since training provides tools that both let people do the job right and help them produce more consistent and usable data.

Planning the Campaign

To gather feedback most effectively, you should plan a “campaign” (a planned and carefully organized activity, not something ad hoc). Planning should start with a consideration of which audience members to prioritize, how to motivate them (get “buy in”), and how to get the feedback early enough to make a difference. In forums such as techwr-l, I’ve dealt with the latter point from the perspective of asking for feedback early in the documentation development process: what you learn early makes your subsequent efforts more efficient, because any problems you identify and correct at the start are problems you won’t repeat (and won’t need to correct repeatedly) later.

John described a familiar three-step process for such campaigns:

  • Start with a small group to test what you’re doing. This is analogous to the pre-testing process survey designers use to ensure that they’re getting the responses they want and need. It may also provide insights into important questions that might not have been considered, which can then be added into subsequent steps. In particular, this is a good way to learn which questions and question terminology aren’t nearly as clear as you thought they were.
  • Expand the campaign to a much larger group. This becomes the primary data-collection phase, and if the pre-test helped you to refine your questions, the data will be more usable than it might otherwise be, not to mention easier to compile and analyze.
  • Return to a smaller group of key respondents (e.g., the ones who provided important insights or who were most helpful) for follow-up questions. This kind of post-mortem analysis and follow-up is common when the initial data gathering did not provide all the data you need on certain key points, or revealed key points that it wasn’t possible to explore as part of the primary data collection phase.

If you have a multilingual or multicultural audience, it’s important to keep differences among these groups in mind. Apart from the cultural issues I described earlier, it’s worthwhile finding ways to gather information in the native language of the participants. This will increase the accuracy of the feedback because participants make fewer interpretation errors while deciphering your questions, and fewer errors while choosing words with which to frame their response. Both factors will also help them feel more comfortable participating, and seeing that you’re interested in helping them to respond (i.e., by translating your questions into their linguistic and cultural terms) will motivate them more strongly to participate.

Before deciding what questions to ask, you must decide what you’re trying to improve. Typical questions focus on the quality of the documentation, including its ease of use; these are most relevant if you primarily want to do a better job of what you’re already doing. But if you’re more interested in whether you’re doing the right things, you might want to ask how frequently people use your documentation, what parts of it they use, and what specific problems they encounter while using it; this will tell you whether you’re wasting your time creating information that people never use, and whether you’re missing opportunities to provide information they’d like to use (i.e., to make the documentation more comprehensive). My article Accentuate the Negative provides some insights into this process.

One interesting point that John raised focuses on what’s been broadly described elsewhere as “corporate memory”. In his own feedback campaigns, he’s often found that experts in a technology tend to create their own documentation and their own work patterns, and stop using the standard documentation that comes with a product. That knowledge is important both for newcomers, who can benefit from the experts’ hard-won experience, and for you when you’re creating documentation aimed at experts, whose needs it reveals. When I proposed that this kind of knowledge is precisely what wikis are good at capturing, and that SAP should find ways to motivate these experts to record what they know, John agreed that this was something they’d been thinking about at SAP. The notion of creating tag clouds, as in Delicious, would be relevant here.
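
As a rough illustration of the tag-cloud idea, the following sketch (mine, not John’s) counts the tags that experts might attach to their wiki pages and scales a display size from the counts; the page names and tags are invented for the example:

    from collections import Counter

    # Hypothetical tags attached by experts to their own wiki pages.
    expert_pages = {
        "batch-import-tricks": ["import", "performance", "workaround"],
        "custom-report-setup": ["reports", "configuration"],
        "speeding-up-queries": ["performance", "queries", "workaround"],
    }

    # Tag frequency drives display size in a tag cloud, so the topics
    # experts write about most stand out at a glance.
    tag_counts = Counter(tag for tags in expert_pages.values() for tag in tags)

    for tag, count in tag_counts.most_common():
        size = 10 + 4 * count  # crude scaling; real clouds scale smoothly
        print(f"{tag}: {count} page(s), font size {size}px")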

A careful consideration of how you’ll compile and analyze the data you gather is an important part of your planning. An interview process may be well designed and may provide exactly the answers you need, but if it takes prohibitively long to extract those facts from the interview transcripts, you may have to choose a different approach. For example, holding one or two interviews may reveal which of your questions are most useful and which questions you forgot to ask; you can then ask the most effective questions via an automated survey that gathers responses into a database for ease of analysis. The pre-test phase of your feedback campaign will help you confirm that your process both obtains usable data and obtains it in a reasonable amount of time. Triage may be necessary; you can always return (as in the third step of the campaign) to gather more details if something important turns up during your analysis.
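
For instance, a quick automated triage pass can tell you which respondents to re-contact in the follow-up step. This minimal sketch assumes a responses.csv file with respondent_id and comment columns, and an invented list of trigger terms; both are illustrative assumptions rather than part of John’s process:

    import csv

    # Hypothetical terms that signal problems worth a follow-up interview.
    FOLLOW_UP_TERMS = {"missing", "confusing", "outdated", "wrong"}

    def needs_follow_up(comment):
        """Flag a free-text answer that mentions any trigger term."""
        return bool(set(comment.lower().split()) & FOLLOW_UP_TERMS)

    # responses.csv is assumed to have columns: respondent_id, comment.
    with open("responses.csv", newline="") as f:
        flagged = [row["respondent_id"]
                   for row in csv.DictReader(f)
                   if needs_follow_up(row["comment"])]

    print("Respondents to re-contact:", flagged)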

Summarizing and Presenting the Results

When it comes time to report your feedback, don’t just present reams of charts and tables: if you want your manager (or other stakeholders such as authors and editors) to read the information, give you permission to make changes, or change their own behavior, turn the data into a compelling story. Start with a concise, clear explanation of the context and the problem, followed by the most important data demonstrating that the problem is real. Then recommend potential solutions, supported by data from your results.

Because feedback should be an ongoing part of your process improvement, try to find ways to make the process “sticky”. Here, John used “sticky” as a marketing buzzword that can be understood most simply as finding ways to make people “stick to it” (i.e., continue being willing to participate). Making it easy to participate in your feedback process (e.g., by creating short surveys or interviews, delivered at times that are convenient to participants) and making it desirable (e.g., by providing rewards and personal interaction) are good starting points. But perhaps more important is giving participants the sense that you’re actually listening to them and doing something about what you hear. People will naturally stop participating if they see participation as a waste of their time, but if they see changes being made that make their lives easier or more enjoyable, they’ll have a strong incentive to keep participating. Even when you can’t make any changes, explaining why may encourage them to continue. So always provide follow-up after you’ve obtained feedback: thank the participants, tell them the results of your work (what you’re planning to do), and offer opportunities to continue providing feedback to improve the results.
