

Many citizen science projects have a crowdsourcing component in which several citizen scientists are asked to complete the same micro-task (such as tagging an image as relevant or irrelevant for assessing damage after a natural disaster, or assigning a specimen to its place in a taxonomy). How do we build a consensus from the different opinions or votes? Currently, simple majority voting is used most of the time. We argue that alternative voting schemes, which take into account the errors made by each annotator, could substantially reduce the number of citizen scientists required, as sketched below. This is a clear example of continuous human-in-the-loop machine learning, with the machine building a model of the humans it has to interact with.
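
As an illustration, and not the project's actual method, the sketch below contrasts simple majority voting with an error-aware weighted vote in which each ballot is scaled by the log-odds of the annotator's estimated accuracy; the accuracy values are hypothetical, e.g. as might be estimated from gold-standard questions.

```python
import numpy as np

def majority_vote(votes):
    """Simple majority rule over binary labels in {0, 1}."""
    return int(np.sum(votes) > len(votes) / 2)

def weighted_vote(votes, accuracies):
    """Error-aware aggregation: each vote is weighted by the log-odds of the
    annotator's estimated accuracy, so reliable annotators count more and
    annotators at chance level (0.5) count nothing."""
    acc = np.asarray(accuracies, dtype=float)
    weights = np.log(acc / (1.0 - acc))
    score = np.sum(weights * (2 * np.asarray(votes) - 1))  # map {0,1} -> {-1,+1}
    return int(score > 0)

# Hypothetical example: three annotators, two of them close to chance level.
votes = [1, 1, 0]
accuracies = [0.60, 0.55, 0.95]
print(majority_vote(votes))              # 1 (two out of three say "relevant")
print(weighted_vote(votes, accuracies))  # 0 (the highly reliable annotator prevails)
```

With log-odds weights, an annotator voting at chance level contributes nothing, so a few reliable annotators can settle a task that simple majority voting over a larger, noisier pool would get wrong.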
We propose to study consensus building under two different hypotheses: truthful annotators (as a model for most voluntary citizen science projects) and self-interested annotators (as a model for paid crowdsourcing projects).
The results collected so far, which we expect to publish this year, suggest that majority rule is the best option as long as all agents are competent enough for the task. Otherwise, when the number of unqualified agents is no longer negligible, smarter aggregation procedures are needed, as the toy simulation below illustrates.
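
The following toy simulation uses assumed competence levels rather than the project's data; it only illustrates why the accuracy of majority rule drops once near-chance agents enter the pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def majority_accuracy(accuracies, n_tasks=10_000):
    """Fraction of binary tasks answered correctly by majority rule when each
    annotator votes correctly with their individual probability."""
    acc = np.asarray(accuracies)
    correct_votes = rng.random((n_tasks, len(acc))) < acc   # True = correct vote
    majority_correct = correct_votes.sum(axis=1) > len(acc) / 2
    return majority_correct.mean()

# All annotators competent: majority rule is hard to beat.
print(majority_accuracy([0.8, 0.8, 0.8, 0.8, 0.8]))    # ~0.94

# Several near-chance annotators dilute the competent ones.
print(majority_accuracy([0.8, 0.8, 0.55, 0.5, 0.5]))   # ~0.74
```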
This Humane-AI-Net micro-project was carried out by Consejo Superior de Investigaciones Científicas (CSIC) and Consiglio Nazionale delle Ricerche (CNR).