Conference Paper

Enabling Uneven Task Difficulty in Micro-Task Crowdsourcing

Document type

Text/Conference Paper

Publisher

Association for Computing Machinery

Abstract

In micro-task crowdsourcing markets such as Amazon's Mechanical Turk, obtaining high-quality results without exceeding a limited budget is a central challenge. Existing theory and practice of crowdsourcing suggest that uneven task difficulty plays a crucial role in task quality. Yet there is no clear method for identifying task difficulty, which hinders the effective and efficient execution of micro-task crowdsourcing. This paper explores the notion of task difficulty and its influence on crowdsourcing, and presents a difficulty-based crowdsourcing method that optimizes the crowdsourcing process. We first identify the task difficulty feature using a local estimation method in a real crowdsourcing context, and then propose an optimization method that improves the accuracy of results while reducing the overall cost. A series of experimental studies evaluating our method shows that the difficulty-based crowdsourcing method can accurately identify the task difficulty feature, improve the quality of task performance, and reduce cost significantly, demonstrating the effectiveness of task difficulty as a task-modeling property.
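The abstract does not spell out the algorithm, but the core idea lends itself to a short illustration. The Python sketch below is a hypothetical reading, not the authors' published method: task difficulty is estimated locally from disagreement among the answers collected so far, and the remaining budget is routed to the tasks that currently look hardest. All names and parameters here (ask_worker, base_k, extra_k, threshold, budget) are assumptions introduced for illustration only.

    import random
    from collections import Counter

    def disagreement(answers):
        """Local difficulty proxy: fraction of answers disagreeing with the majority."""
        if not answers:
            return 1.0  # unseen task: treat as maximally uncertain
        majority_count = Counter(answers).most_common(1)[0][1]
        return 1.0 - majority_count / len(answers)

    def aggregate(answers):
        """Majority vote per task; None if a task received no answers."""
        return {t: Counter(a).most_common(1)[0][0] if a else None
                for t, a in answers.items()}

    def difficulty_aware_labeling(tasks, ask_worker, base_k=3, extra_k=2,
                                  threshold=0.3, budget=100):
        """Collect base_k answers per task, then spend the remaining budget
        only on tasks whose estimated difficulty exceeds the threshold."""
        answers = {t: [] for t in tasks}
        spent = 0
        # Phase 1: uniform seeding, so every task gets a local difficulty estimate.
        for t in tasks:
            for _ in range(base_k):
                if spent >= budget:
                    break
                answers[t].append(ask_worker(t))
                spent += 1
        # Phase 2: route extra answers to the hardest-looking tasks first.
        hard = sorted(tasks, key=lambda t: disagreement(answers[t]), reverse=True)
        for t in hard:
            if disagreement(answers[t]) <= threshold:
                break  # all remaining tasks are easier than this one
            for _ in range(extra_k):
                if spent >= budget:
                    return aggregate(answers)
                answers[t].append(ask_worker(t))
                spent += 1
        return aggregate(answers)

    # Example: simulate workers that answer one "hard" task noisily.
    truth = {"t1": "A", "t2": "B", "t3": "A"}
    noise = {"t1": 0.1, "t2": 0.45, "t3": 0.1}  # t2 is the hard task
    def ask_worker(t):
        wrong = "B" if truth[t] == "A" else "A"
        return truth[t] if random.random() > noise[t] else wrong

    print(difficulty_aware_labeling(list(truth), ask_worker, budget=30))

Under these assumptions, easy tasks settle after the seeding phase, while the extra budget concentrates on the high-disagreement task, which is the cost-versus-accuracy trade-off the abstract describes.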

Description

Jiang, Yu; Sun, Yuling; Yang, Jing; Lin, Xin; He, Liang (2018): Enabling Uneven Task Difficulty in Micro-Task Crowdsourcing. Proceedings of the 2018 ACM International Conference on Supporting Group Work. DOI: 10.1145/3148330.3148342. Association for Computing Machinery. pp. 12–21. Sanibel Island, Florida, USA

Keywords

task difficulty, budget, micro tasks, context, quality, task feature, crowdsourcing, assignment


Number of citations to item: 4

  • Yuling Sun, Xiaojuan Ma, Kai Ye, Liang He (2022): Investigating Crowdworkers' Identity, Perception and Practices in Micro-Task Crowdsourcing, In: Proceedings of the ACM on Human-Computer Interaction GROUP(6), doi:10.1145/3492854
  • Ziyi Kou, Lanyu Shang, Yang Zhang, Dong Wang (2022): HC-COVID, In: Proceedings of the ACM on Human-Computer Interaction GROUP(6), doi:10.1145/3492855
  • Shakir Karim, Zaheeruddin Asif (2021): Investigating the Relationship between Capability and Motivation of Crowd Worker to Get Better Performance: A Mathematical Approach, In: Mathematical Problems in Engineering, doi:10.1155/2021/1548546
  • Lionel P. Robert, Andrea Forte, Claudia Müller, Michael Prilla, Adriana S. Vivacqua (2018): GROUP 2018 Special Issue Guest Editorial, In: ACM Transactions on Social Computing 3(1), doi:10.1145/3290870
Please note: Providing information about citations is only possible thanks to the open metadata APIs provided by crossref.org and opencitations.net. These lists may be incomplete due to unavailable citation data. Source: opencitations.net, crossref.org