Developing and testing crowdsourcing projects

In this section I will discuss key points and general principles for implementing crowdsourcing projects, including task design, documentation and tutorials, quality control and ensuring that designs work as well as possible through usability testing. It is important to note that basic usability (minimising dissatisfaction) is rarely enough; websites should both minimise annoyances for users and offer pleasing features that encourage them to return. The details of effective task design will depend on your goals and source materials. ‘User experience’ design, also known as UX, includes the visible aspects of backend workflow, instructional and marketing text, and so on, in addition to interface and interaction design.

Critical points when a quality user experience matters include successfully ‘onboarding’ a participant so that they can complete their first task, and maintaining participation despite changes over time. As crowdsourcing is a voluntary activity, it is vital to minimise barriers to participation, points of friction and demotivators.

Barriers to participation include compulsory registration (Budiu, 2014), so some projects do not require registration, and some Zooniverse projects have successfully deployed a design pattern called ‘lazy registration’ (‘Lazy Registration design pattern’, n.d.). Being clear about how contributions will be used also helps: Rose Holley’s summary of research on participation in Distributed Proofreaders, FamilySearch Indexing, Wikimedia and Trove reported that volunteers ‘do not want to feel that their work can be commercially exploited’ (Holley, 2010). A study of Old Weather found that ceasing to participate is strongly associated with anxiety about the quality of one’s contributions (Eveleigh, Jennett, Blandford, Brohan, & Cox, 2014). Competitive models like gamification-style leaderboards are an easy way to recognise individuals who have completed more tasks, but they favour those with more free time, and there is some evidence that some participants are deterred by competition (Eveleigh, Jennett, Lynn, & Cox, 2013; Preist, Massung, & Coyle, 2014).
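To make the ‘lazy registration’ pattern concrete, the sketch below shows one way it might be implemented: contributions are accepted under an anonymous session and only linked to an account if the participant later chooses to register. The function names and data structures are illustrative assumptions, not drawn from Zooniverse or any other project.

```python
# Illustrative sketch of 'lazy registration': contributions are accepted
# immediately under an anonymous session token and only attached to an
# account if and when the participant chooses to register.
import uuid

contributions = {}   # session token -> list of contributed transcriptions
accounts = {}        # username -> list of session tokens credited to them

def start_session():
    """Issue an anonymous session token so the first task needs no sign-up."""
    token = str(uuid.uuid4())
    contributions[token] = []
    return token

def submit(token, transcription):
    """Record a contribution against the anonymous session."""
    contributions.setdefault(token, []).append(transcription)

def register(token, username):
    """If the participant later registers, credit their earlier work to the account."""
    accounts.setdefault(username, []).append(token)

# Usage: a visitor transcribes a record before ever being asked to register.
session = start_session()
submit(session, "Boiled potatoes ... 10 cents")
register(session, "new_volunteer")
```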

Usability tests can be conducted throughout the development process, as you can test existing projects, paper prototypes and work-in-progress. Tests can be informal (‘guerrilla’ usability tests are free apart from the time required to talk to participants) or formal, but the insights they provide are invaluable. Usability tests allow you to understand the problems uncovered and devise creative solutions to them. They will help you identify and remove barriers to participation, define rewards appropriate to your goals and community, and ensure that the project maximises the return on investment.

Designing the ‘onboarding’ experience

In user experience design, ‘onboarding’ refers to orienting people to the features of a site and helping them start to use it (Hess, 2010). Ideally, the first page that potential participants see shows (not tells) them what the project aims to do, how their help can make a difference, and where to start the task. For example, What’s on the Menu has manicules (pointing hands) pointing to a button labelled ‘Help transcribe’. As discussed earlier, a good communication strategy should include a strong strapline that gives a sense of the larger challenge that tasks will contribute to, and ideally connects to probable motivations for action.

The landing page should also include ‘social proof’ that others have already chosen to participate (Mitra & Gilbert, 2014). For example, the front page of What’s on the Menu prominently lists the number of dishes transcribed so far, while Trove lists the number of corrections made on a given day, the number of items tagged that week, and the number of comments added that month, showing how updates can be tailored to the frequency of different tasks (a method that also suits less active projects).
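As a rough illustration of tailoring counts to activity levels, the sketch below picks the shortest reporting window in which the figure still looks substantial. The threshold, function name and sample data are assumptions for illustration, not anything Trove documents.

```python
# Hypothetical sketch of tailoring 'social proof' counts to activity levels,
# in the spirit of Trove's corrections per day, tags per week and comments
# per month. The minimum of 100 is an arbitrary illustrative threshold.
def social_proof_message(activity_name, counts_by_day, minimum=100):
    """Pick the shortest time window in which the count still looks substantial."""
    windows = [("today", 1), ("this week", 7), ("this month", 30)]
    for label, days in windows:
        total = sum(counts_by_day[-days:])
        if total >= minimum:
            return f"{total} {activity_name} {label}"
    # Fall back to the all-time figure for very quiet projects.
    return f"{sum(counts_by_day)} {activity_name} so far"

# Usage: a busy task reports a daily figure, a quieter one a monthly figure.
print(social_proof_message("corrections made", [1500] * 30))
print(social_proof_message("comments added", [4] * 30))
```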

Some projects feed participants tasks from a queue of material, while others leave the choice of material up to the participant. Providing initial tasks from a queue minimises the number of decisions a participant has to make, which helps reduce cognitive load (the amount of mental effort required to operate a system or learn new information; Whitenton, 2013). This, in turn, leaves more mental resources for learning the task. Feeding the first tasks to participants also allows a project to begin with ‘golden tasks’, tasks to which the answer is known, so that it can assess the participant’s performance (De Benetti, 2011).
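As a minimal sketch of how such a queue might work, the example below places a few golden tasks at the front and scores a participant’s responses against the known answers. The data structures, parameter names and sample menu text are assumptions made for illustration, not the method of any particular project.

```python
# A minimal sketch of a task queue that begins with 'golden tasks' (tasks
# with known answers) so a participant's early accuracy can be assessed.
import random

def build_queue(golden_tasks, ordinary_tasks, golden_first=3):
    """Start with a few golden tasks, then mix the rest at random."""
    remaining = golden_tasks[golden_first:] + ordinary_tasks
    random.shuffle(remaining)
    return golden_tasks[:golden_first] + remaining

def assess(responses, answer_key):
    """Return the share of golden tasks the participant answered correctly."""
    graded = [(task_id, text) for task_id, text in responses if task_id in answer_key]
    if not graded:
        return None  # nothing to grade yet
    correct = sum(1 for task_id, text in graded if text == answer_key[task_id])
    return correct / len(graded)

# Usage: build a queue, then grade two golden tasks (one answered correctly).
queue = build_queue(["g1", "g2", "g3", "g4"], ["t7", "t8", "t9"])
key = {"g1": "Oyster soup", "g2": "Roast beef"}
score = assess([("g1", "Oyster soup"), ("g2", "Roast beer"), ("t9", "Apple pie")], key)
print(score)  # 0.5
```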

The Smithsonian Transcription Center provides many ways for a participant to find content that they might be interested in, including themes (such as ‘Civil War Era’ or ‘Field Book Project’), source organisations (specific museums or archives), featured projects and those with recent activity. The Notes from Nature collection pages list the average time per record (ranging from 3 minutes to 15 minutes) as well as the average ‘difficulty’ (ranging from ‘easy’ to ‘very hard’).

 