Editor’s Note: The following is the second of a three-part series by Tony Chung on crowdsourcing and its impact on technical communication and technical writers. And because we are talking crowdsourcing, we invite you to participate by commenting here or on the email discussion list.
In the first post in this series, I suggested that the term “crowdsourcing” might just be a marketing-oriented name for the age-old concept of collaborative authoring. Or is it that simple? In this post, we look at the difference between active and passive research techniques, and at the importance of knowing your audience before you start, so that you can reach “the crowd.”
Crowdsourcing as community building
My interest in crowdsourcing began honestly: It was forced upon me by the powers that be. In my work as a Content Strategist for a large government website, our communications department asked me how they could harness the power of the crowd to give community planners insight into the interests of the constituents within specific neighborhoods. Ordinarily, the project itself would be a boring mix of user polling and statistics, of interest only to a relatively small group who knew about urban planning.
Community planners would typically use active research techniques, such as surveys and questionnaires, to reach this select group of individuals. A researcher might cross-post a question to multiple mailing lists and poll several users at once, with the important caveat that active research is at the mercy of those within the group who choose to participate. If the query bombs, the researcher is left to wonder whether the problem was the question or the audience.
Crowdsourcing and the art of knowing your audience
As a real-world example, let’s consider the research for this topic. I posted a request for crowdsourcing opinions and examples to two mailing lists: TechWhirl proper, and the Content Strategy Google group. In the case of this extremely specific topic, the audience was a key factor in generating the volume and type of responses to the question.
The more traditional technical writers on TechWhirl completely ignored the discussion. The few responses I received on this list didn’t speak to crowdsourcing at all. Jen Jobart summarized the lack of feedback when she posted a link to Jakob Nielsen’s page on Participation Inequality.
On the other hand, the content strategists on the Google group provided varied and helpful feedback in only a few very detailed replies. Content strategists often deal with out-of-the-box projects, and many had experience with shared writing, having managed multiple authors through wikis or web forms.
If the feedback to my specific query was any indication, the community planners at my workplace would have a difficult time, as their questions would be more open-ended. I couldn’t imagine how planners could encourage large groups of users to participate in a community planning survey. And if, by chance, people did respond, would the feedback be useful and easy to process?
I was convinced that there had to be a better system for gauging the interests of large populations and eliciting responses, even from people who would never think to read a planning document.
Crowdsourcing puts computers to work for you
Rather than limit ourselves to active research, where we ask the questions, we now have at our disposal tools for passive research that listen for murmurs of a given type. These tools aggregate (collect and compile) articles from news and social media feeds in real time. Users may not even realize their comments are being monitored and folded into research. The technique is much like subscribing to RSS feeds, except that instead of subscribing to specific sources, you set up specific filters and apply them to anything that comes in, regardless of the source.
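To make the idea concrete, here is a minimal sketch of source-agnostic filtering. It uses plain Python and invented sample items, not any particular monitoring product’s API: every incoming item, whatever its origin, is checked against a set of watch keywords.

```python
# Minimal sketch of source-agnostic passive monitoring: instead of
# subscribing to particular feeds, every incoming item (whatever its
# source) is run through the same set of keyword filters.
# The item structure and keywords below are hypothetical examples.

incoming_items = [
    {"source": "news", "text": "Council reviews new bike lane proposal"},
    {"source": "twitter", "text": "Traffic on Main St is terrible again"},
    {"source": "sms", "text": "When does the community garden open?"},
]

filters = ["bike lane", "community garden", "transit"]

def matches(item, keywords):
    """Return True if the item's text mentions any watched keyword."""
    text = item["text"].lower()
    return any(keyword in text for keyword in keywords)

# Collect matching items no matter which source they came from.
research_pool = [item for item in incoming_items if matches(item, filters)]

for item in research_pool:
    print(f"[{item['source']}] {item['text']}")
```

The point of the sketch is simply that the filter, not a list of subscribed sources, decides what enters the research pool.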
The type of research you collect depends on the platform you use. For instance, the Ushahidi platform polls social media feeds and SMS messages sent to a specific number to collect information and plot points of interest on a world map in real time. Ushahidi is open source, and you can download the application to host on your own web server. You can also register a free account on Ushahidi’s cloud-based crowd mapping system, which launched with a few examples designed to track global emergencies.
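As a rough illustration of the crowd-mapping idea (hypothetical sample data again, not Ushahidi’s actual code or API), the following sketch turns a handful of geotagged reports into GeoJSON, a standard format that most mapping tools can plot directly as points on a map.

```python
import json

# Hypothetical geotagged reports, as a crowd-mapping platform might
# collect them from social media or SMS (coordinates are invented).
reports = [
    {"lat": 49.2827, "lon": -123.1207, "text": "Playground equipment damaged"},
    {"lat": 49.2488, "lon": -123.0016, "text": "Request for a crosswalk here"},
]

# Convert the reports to GeoJSON; note that GeoJSON points are
# expressed as [longitude, latitude].
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [r["lon"], r["lat"]]},
            "properties": {"description": r["text"]},
        }
        for r in reports
    ],
}

print(json.dumps(feature_collection, indent=2))
```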
But keep in mind that this kind of passive research carries its own risks. You are polling the general public at random and have no initial profile of your survey target, so you have to assume that some of the automatically collected responses may be suspect. On the other hand, raw, unfiltered research is usually less prone to bias. The only qualifier is that all of the unwitting contributors will be social media or cell phone users.
Take your own stab at crowdsourcing
Focusing on this topic has really sharpened my understanding of the differences between technical communication disciplines. Not everyone who holds the title of Technical Writer performs the same tasks; often our work is very different.
I invite you to share your differences in the comments below. Here are my initial jabs:
- What type of research appeals most to you: active, or passive?
- Can we trust passive information collection at the expense of tried-and-true structures?
- Are we confined to the tired structures of outline and narrative, even as the world distances itself from traditional publication methods?
- Jakob Nielsen’s article was posted in 2006. Do you see examples of the 90-9-1 rule in effect today?
- The only real example of crowdsourcing discussed in this post is Crowd Mapping. What other methods for crowdsourcing have you experienced?
- What crowdsourcing resources would you recommend for those just starting out?