Editor’s Note: The following is the third of a three-part series by Tony Chung on crowdsourcing and its impact on technical communication and technical writers. And because we are talking crowdsourcing, we invite you to participate by commenting here or on the email discussion list.
In the last two posts I suggested crowdsourcing as both a pre-existing concept and a means for enlisting the participation of others without their knowledge (and in some cases, without their consent). This post continues the discussion of active versus passive research techniques, and the power of group think.
Active and Passive Research Techniques
There is a significant difference between active research, which is highly controlled, and passive research, which is unpredictable. Surveys and questionnaires are active research techniques, as they are designed to gather responses from a select group of people. Passive research techniques, on the other hand, continually scan the public sphere for the desired information and collect it without notifying the contributor.
Where active research requires that the researcher know the specifics of the audience before asking the questions, passive research is a good way to discover audience members you haven’t yet considered. While passive research could be called a breach of privacy, most people are well aware that once information is on public display, it is fair game for anyone to harvest.
Could either research technique be considered crowdsourcing? Imagine a sliding scale labeled “Crowdsourcing”, where control and chaos sit at opposite extremes. For each research exercise, the slider would sit somewhere between traditional, controlled information gathering at one end and the radical chaos allowed in the creative process at the other.
While a single “Crowdsourcing” scale would be easy to picture, I don’t believe that the levels of control and chaos need to be mutually exclusive. In my experience, a typical research exercise contains healthy amounts of both. I believe a dual-scale model would better represent the effect of our research, as it would show that control and chaos can and do exist simultaneously.
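As a rough illustration of this dual-scale idea (my own sketch, not a formal model from the research literature), each exercise could be scored independently on control and chaos rather than forced onto a single slider; the class and scores below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class ResearchExercise:
    """Scores a research exercise on two independent axes (0.0 to 1.0)."""
    name: str
    control: float  # how structured the information gathering is
    chaos: float    # how much unpredictable, open-ended input is allowed

    def describe(self) -> str:
        # High control and high chaos can coexist; they are not
        # opposite ends of a single slider.
        ctrl = "high" if self.control >= 0.5 else "low"
        cha = "high" if self.chaos >= 0.5 else "low"
        return f"{self.name}: {ctrl} control, {cha} chaos"


# Hypothetical scores for the techniques discussed above
survey = ResearchExercise("Targeted survey", control=0.9, chaos=0.2)
wiki = ResearchExercise("Moderated wiki", control=0.7, chaos=0.8)

print(survey.describe())  # Targeted survey: high control, low chaos
print(wiki.describe())    # Moderated wiki: high control, high chaos
```

The moderated wiki scoring high on both axes is exactly the point: guidelines and moderation (control) do not have to shut out open, unpredictable contribution (chaos).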
The Many Are Smarter Than the Few
Wikipedia is the best-known example of the power of collective, but controlled, chaos. The concept is simple: every contributor edits every contribution; we are all experts. Since its formal birth in 2001, Wikipedia has built a community of several million named contributors (at last count, 16,720,806 Wikipedians).
Where once we thought we could never trust the value of user-contributed documentation, Wikipedia has shown us that the varied perspectives contributing to a single article add validity, especially when users follow the guidelines that limit editorializing and require citations from authoritative sources. These self-moderated controls have brought credibility to the crowd-based wiki. Wikipedia articles often rank near the top of search engine results, and are frequently an essential first step in any research project.
Lesson for Technical Communicators: There is Power in Group Think
When I posted my request to the TechWhirl mailing list and the Content Strategy Google group, I was prepared for insight from my colleagues who develop systems and processes for harnessing user-generated content. Anne Gentle has written what I consider to be the book on the topic of collaborative documentation, Conversation and Community: The Social Web for Documentation (note that Version 2 is almost ready for release).
However, I was not prepared for the discussion about what motivates users to contribute in the first place. On the Google group, Seth Grimes wrote about his ongoing study of analytics and user sentiment analysis. His company hosts a conference on the subject, the Sentiment Symposium.
Ed. Note: On the topic of user sentiment analysis, back in 2008, Sami Viitamäki published his master’s thesis (Helsinki School of Economics) on the FLIRT model of crowdsourcing: Focus, Language, Incentives, Rules and Tools. Viitamäki defined an interesting model that takes into account some of these ideas of goals, control, and motivation.
To gather additional feedback, I privately sought out two content strategists I knew, specifically for their insight into how crowdsourcing fit into their user-generated content models. I also contacted a TechWhirler who had mentioned she found contract editing jobs through a crowdsourcing service.
The exercise proved to me that audience matters. TechWhirlers were more skeptical about why random third parties would contribute information as part of a crowd. Everyone involved in content strategy, however, drew a line between collaborative authoring and crowdsourcing:
- Collaborative authoring: Where users within a specific community contribute information into a shared repository.
- Crowdsourcing: Where companies choose to expand the writer base beyond the specific community, as a form of outsourcing.
In the past, companies would outsource, or hire workers from outside the company, usually overseas. The private TechWhirler told me about the Cloud Crowd service, which acts as an intermediary between companies that pay for outsourced writers and editors who accept assignments at their leisure. Through crowdsourcing, companies can harness feedback from the more vocal users of their products for use in their own product documentation. In some cases, a company will provide incentives for those users to participate in improving its documentation and processes.
Janet Swisher responded in the Google group, “In general, if you need specific content by a specific date, your best bet is to pay someone to do it.” The outsourcing comparison makes crowdsourcing appear to be the blank check of information gathering, or a box of chocolates: “you never know what you’re gonna get.” But once you, as a technical writer, apply structure and consistency, the crowdsourcing effort morphs into collaborative authoring. The technical writers who are prepared to integrate crowdsourcing and collaborative authoring strategies into their workflow will add value to their organizations.
Crowdsourcing our thinking on crowdsourcing
This ends the current series, but the discussion is far from over. Once again, here are more starters to crowdsource our thinking on crowdsourcing:
- Is crowdsourced content in technical communications manageable? Does the community police itself, or is that the role of technical communicators?
- Who is better equipped to manage crowdsourcing for technical content: content strategists or technical communicators?
- Can we use models like FLIRT, generated outside of technical communications, to influence how we approach crowdsourcing as a channel for content generation?
- What crowdsourcing resources would you recommend for those just starting out?
I can’t wait to get your thoughts!
Resources
- Anne Gentle, Conversation and Community: The Social Web for Documentation
- Janet Swisher, presentations: http://www.slideshare.net/janetswisher/presentations
- Cloud Crowd: http://cloudcrowd.com
- Sami Viitamäki, FLIRT model of crowdsourcing: Focus, Language, Incentives, Rules and Tools
Special thanks to the following for their comments:
- Jessica Behles
- Anne Gentle
- Seth Grimes
- David Hendler
- Jen Jobart
- Tom Johnson
- Rick Sapir
- Bill Swallow
- Janet Swisher
- Andrew Warren