Title: Enabling Uneven Task Difficulty in Micro-Task Crowdsourcing
Authors: Jiang, Yu; Sun, Yuling; Yang, Jing; Lin, Xin; He, Liang
Date issued: 2018
Date available: 2023-03-17
URI: https://dl.eusset.eu/handle/20.500.12015/4506
DOI: 10.1145/3148330.3148342
Type: Text/Conference Paper
Language: en
Keywords: task difficulty; budget; micro tasks; context; quality; task feature; crowdsourcing; assignment

Abstract: In micro-task crowdsourcing markets such as Amazon's Mechanical Turk, obtaining high-quality results without exceeding a limited budget is a major challenge. Existing theory and practice of crowdsourcing suggest that uneven task difficulty plays a crucial role in result quality, yet there is no clear method for identifying task difficulty, which hinders the effective and efficient execution of micro-task crowdsourcing. This paper explores the notion of task difficulty and its influence on crowdsourcing, and presents a difficulty-based crowdsourcing method to optimize the crowdsourcing process. We first identify the task difficulty feature using a local estimation method in a real crowdsourcing context, and then propose an optimization method that improves the accuracy of results while reducing the overall cost. A series of experimental studies show that our difficulty-based crowdsourcing method accurately identifies the task difficulty feature, improves the quality of task performance, and significantly reduces cost, demonstrating the effectiveness of task difficulty as a task-modeling property.
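Note: the record above does not detail the paper's local estimation method. Purely as an illustration of the general idea of difficulty-aware budgeting, a common proxy for task difficulty in micro-task settings is the disagreement (e.g., answer entropy) among an initial small batch of worker responses, which can then decide how many additional assignments a task receives. The sketch below uses that proxy; the function names, cutoff, and scaling rule are assumptions for illustration, not the authors' algorithm.

```python
from collections import Counter
from math import log2

def answer_entropy(labels):
    """Shannon entropy of a task's worker answers; higher means more disagreement."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def extra_assignments(initial_labels, easy_cutoff=0.3, max_extra=3):
    """Illustrative difficulty-aware budget rule (not the paper's method):
    tasks whose initial answers largely agree get no further workers,
    while high-disagreement (presumably difficult) tasks get up to max_extra more."""
    h = answer_entropy(initial_labels)
    if h <= easy_cutoff:  # near-consensus: treat as easy and stop early, saving budget
        return 0
    return min(max_extra, round(h * max_extra))  # scale extra redundancy with disagreement

# Example: three initial answers per task
print(extra_assignments(["A", "A", "A"]))  # 0 -> easy task, no extra cost
print(extra_assignments(["A", "B", "A"]))  # 3 -> harder task, more redundancy
```

Under such a rule, the budget saved on easy tasks can be reallocated to difficult ones, which is the kind of accuracy-versus-cost trade-off the abstract describes.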