Quality-control mechanism utilizing workers' confidence for crowdsourced tasks

We propose a quality-control mechanism that utilizes workers' self-reported confidence in crowdsourced labeling tasks. In general, a worker has some confidence in the correctness of her answers, and eliciting it is useful for estimating the probability that each answer is correct. However, two main obstacles must be overcome before confidence can be used to infer correct answers. First, a worker is not always well-calibrated: because she is sometimes over- or underconfident, her reported confidence level does not necessarily reflect the true probability of correctness. Second, she does not always report her actual confidence truthfully. We therefore design an indirect mechanism in which a worker declares her confidence by choosing a reward plan from a menu of plans, each corresponding to a different confidence interval. Our mechanism guarantees that choosing the plan that matches her true confidence maximizes her expected utility.
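As one illustration of how such an incentive-compatible menu of reward plans might be constructed (the abstract does not specify the actual mechanism, so the formulas, interval widths, and names below are purely hypothetical), a quadratic proper scoring rule can be discretized into plans indexed by confidence intervals. Each plan pays a higher reward for a correct answer but a larger penalty for a wrong one, so a worker with true confidence q maximizes her expected utility by picking the plan whose interval contains q:

```python
# Hypothetical sketch: a menu of reward plans derived from a quadratic
# (Brier-style) proper scoring rule. Not the paper's actual mechanism.

def make_plans(n_intervals=5):
    """Build one reward plan per confidence interval.

    The plan for interval [lo, hi] uses the interval midpoint p as its
    representative confidence; rewards (2p - p^2, -p^2) come from the
    quadratic scoring rule, which makes the menu incentive compatible.
    """
    plans = []
    for i in range(n_intervals):
        lo, hi = i / n_intervals, (i + 1) / n_intervals
        p = (lo + hi) / 2
        plans.append({
            "interval": (lo, hi),
            "reward_correct": 2 * p - p ** 2,   # paid if the answer is right
            "reward_wrong": -(p ** 2),          # penalty if the answer is wrong
        })
    return plans

def expected_utility(plan, q):
    """Expected payoff for a worker whose true confidence is q."""
    return q * plan["reward_correct"] + (1 - q) * plan["reward_wrong"]

plans = make_plans()
q = 0.83  # worker's true (private) confidence
best = max(plans, key=lambda plan: expected_utility(plan, q))
# best["interval"] is (0.8, 1.0): the interval containing q.
```

The key algebraic fact is that the expected utility of the plan with midpoint p equals q² - (q - p)², so it is maximized by the plan whose midpoint is closest to q, i.e. by truthfully selecting the interval containing the worker's confidence.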