Turning crowdsourcing ideas into reality
My analysis of non-commercial crowdsourcing projects (2015) found that successful projects have several features in common, including good publicity (whether through luck or design), well-designed task interfaces and processes, and messaging that presents the impact of the project on a shared, significant goal that links to participant motivations. Key challenges include recruiting and maintaining volunteer participation over time and integrating the results of crowdsourced tasks back into core catalogues, repositories or IT systems within the institution.
This section discusses important milestones in the process of planning, implementing and running crowdsourcing projects. Defining ‘success’ for your project will influence design decisions, as will the choice of source material and your desired outcomes. The exact order of decisions will vary according to the specific project, but you should expect that some decisions will be revisited as more information is gathered, and allow for this when allocating resources. Designing iteratively also allows you to fine-tune the prioritisation of efficiency and engagement, adjust workflow and quality control measures as necessary, improve usability, and update text and tasks for specialist or generalist audiences as you learn from showing your project to potential participants.
Just as interfaces need to be carefully designed to maximise productivity and engagement, projects need to be carefully designed to ensure long-term success. Project design considerations include how the organisation sets up and resources a project, its coordination with other staff and work, and how it evaluates and responds to results. Decisions made in the planning phase will affect the implementation and running phases, so some points to consider for these later stages are discussed under the heading of planning.
Planning crowdsourcing projects
Key stages in the planning process include defining success for your project, managing any impact on the organisation, choosing source material and determining desired outputs, workflows and data re-use, communications and participant recruitment, and applying practical and ethical ‘reality checks’.
Understanding logistical issues such as workflow, quality control and the target systems for information collected through crowdsourcing by cultural heritage organisations should also help digital humanities researchers and practitioners interested in collaborating with GLAMs.
Defining ‘success’ for your project
Potential quantitative metrics for measuring the success of heritage crowdsourcing projects include: the number of hours participants have spent on a project; initial and sustained participation rates; participant retention; the extent and types of use of community discussion platforms; the number of tasks completed; and the percentage of tasks validated against required quality standards. Efficiency can be measured as the number of tasks accurately completed per volunteer minute. Valuable but less easily measured outcomes include the extent to which participants gain related skills and knowledge, or the number of new research questions or discoveries that emerge during a project. Qualitative measures include the extent to which participants express support or appreciation for the project, the number of participants who pursue activities related to their new interest, or some wider impact on participants’ behaviour or attitudes.
Three definitions of success seem to have the most utility for project stakeholders: productivity, reach and engagement. However, two of these metrics are inherently opposed: time spent posting on discussion platforms or learning about collection items means less time is available to spend on the core task. Yet there is also an argument that both engagement and contributions are needed for citizen science projects to count as a success (Simmons, 2015). Accordingly, measurements of success should be judged and weighted according to the overall goals of an individual project.
Productivity is the simplest to define and to measure, and the easiest metric to design for. How many tasks have been completed to the standards required? Figures for prominent projects can be impressive, with Trove and Zooniverse contributions numbering in the hundreds of millions.24
Reach measures the number or type of people contributing to projects. This might be the 1.7 million (at the time of writing) volunteers contributing to Zooniverse or a small group of volunteers drawn to a highly specialist project. Reach can extend beyond individual participants to include those who access research that results from projects, or who are more easily able to find cultural heritage collections online.
Finally, you can consider how many participants become more engaged with the subject of the collections or disciplines (such as history or science) related to them. Engagement might appear as learning, attitude change, or other changes in behaviour linked to feelings or knowledge gained (Bitgood, 2010; Museums, Libraries and Archives Council, 2008; The Culture and Sport Evidence (CASE) programme, 2011). Once you have determined the most appropriate mix of success metrics, you can decide how you will measure and evaluate progress against them.