Listwise Learning to Rank from Crowds
Wu, Ou1; You, Qiang1; Xia, Fen2; Ma, Lei1; Hu, Weiming3,4
Journal | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA
Date | 2016-08-01
Volume | 11
Issue | 1
Pages | 1-39
Keywords | Listwise Learning To Rank; Crowdsourcing; Multiple Annotators; Probabilistic Ranking Model; Side Information
DOI | 10.1145/2910586 |
Document Type | Article
Abstract | Learning to rank has received great attention in recent years, as it plays a crucial role in many applications such as information retrieval and data mining. The existing concept of learning to rank assumes that each training instance is associated with a reliable label. In practice, however, this assumption does not necessarily hold, as obtaining reliable labels may be infeasible or remarkably expensive in many learning-to-rank applications. A feasible alternative is to collect labels from crowds and then learn a ranking function from these crowdsourced labels. This study explores listwise learning to rank with crowdsourced labels obtained from multiple, possibly unreliable, annotators. A new probabilistic ranking model is first proposed by combining two existing models. A ranking function is then trained with a proposed maximum likelihood learning approach, which iteratively estimates ground-truth labels and annotator expertise while training the ranking function. In practical crowdsourcing machine learning, valuable side information (e.g., professional grades) about the involved annotators is often available. Therefore, this study also investigates learning to rank from crowd labels when side information on the expertise of the involved annotators is available. In particular, three basic types of side information are investigated, and corresponding learning algorithms are introduced. Furthermore, top-k learning to rank from crowdsourced labels is explored to deal with long training ranking lists. The proposed algorithms are tested on both synthetic and real-world data. The results reveal that the maximum likelihood estimation approach significantly outperforms the averaging approach and existing crowdsourcing regression methods, and that the performance of the proposed algorithms is comparable to that of a model learned from reliable labels. The results further indicate that side information is helpful in inferring both ranking functions and the expertise degrees of annotators.
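The abstract's listwise probabilistic ranking model is built by combining two existing models, which are not named here. A minimal sketch of the standard Plackett-Luce model, a common building block for listwise ranking likelihoods, illustrates the kind of listwise probability such approaches maximize; this is an assumption for illustration, not the paper's exact combined model:

```python
import math

def plackett_luce_log_likelihood(scores, ranking):
    """Log-probability of observing `ranking` (item indices, best first)
    under the Plackett-Luce model with real-valued item `scores`.
    At each position, the next item is chosen with probability
    proportional to exp(score), among the items not yet placed."""
    ll = 0.0
    remaining = list(ranking)
    for i in ranking:
        denom = sum(math.exp(scores[j]) for j in remaining)
        ll += scores[i] - math.log(denom)
        remaining.remove(i)
    return ll
```

In a maximum likelihood scheme like the one the abstract describes, per-annotator log-likelihoods of this form would be weighted by estimated annotator expertise and maximized with respect to the ranking function's scores.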
WOS Keywords | MODELS
WOS Research Area | Computer Science
Language | English
WOS Record No. | WOS:000382878300004
Funding | National Science Foundation of China (NSFC) (61379098)
Content Type | Journal Article
Source URL | http://ir.ia.ac.cn/handle/173211/12019
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
Corresponding Author | Wu, Ou
Author Affiliations |
1. Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, 95 Zhongguancun East, Beijing, Peoples R China
2. Baidu Inc, Big Data Lab, 10 Shangdi 10th St, Beijing, Peoples R China
3. CAS Ctr Excellence Brain Sci & Intelligence Techn, 95 Zhongguancun East, Beijing, Peoples R China
4. Chinese Acad Sci, Inst Automat, NLPR, 95 Zhongguancun East, Beijing, Peoples R China
Recommended Citation (GB/T 7714) | Wu, Ou, You, Qiang, Xia, Fen, et al. Listwise Learning to Rank from Crowds[J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2016, 11(1): 1-39.
APA | Wu, O., You, Q., Xia, F., Ma, L., & Hu, W. (2016). Listwise Learning to Rank from Crowds. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 11(1), 1-39.
MLA | Wu, Ou, et al. "Listwise Learning to Rank from Crowds." ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 11.1 (2016): 1-39.