Training sparse MS-SVR with an expectation-maximization algorithm
Zheng, D. N. ; Wang, J. X. ; Zhao, Y. N.
Date: 2010-05-06
Keywords: multi-scale support vector regression (MS-SVR); hierarchical-Bayes model; maximum a posteriori (MAP) estimation; expectation-maximization (EM) algorithm; relevance vector machine; regression; Computer Science, Artificial Intelligence
Abstract: The solution of multi-scale support vector regression (MS-SVR) with the quadratic loss function can be obtained by solving a time-consuming quadratic programming (QP) problem followed by a post-processing step. This paper adapts an expectation-maximization (EM) algorithm, based on two 2-level hierarchical-Bayes models that asymptotically implement the l(1)-norm and the l(0)-norm regularization terms, to speed up the training of MS-SVR. Experimental results show that the EM algorithm is faster than the QP algorithm on large data sets, that the l(0)-norm regularization term yields a far sparser solution than the l(1)-norm, and that the good performance of MS-SVR is attributable to both the multi-scale kernels and the regularization terms. (c) 2006 Elsevier B.V. All rights reserved.
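To make the mechanism concrete, below is a minimal illustrative sketch in Python of the l(1)-norm case only, not the authors' implementation: the Laplace prior is treated as a Gaussian scale mixture, the E-step estimates each weight's expected inverse prior variance, and the M-step solves the resulting weighted ridge system under the quadratic loss. The kernel widths (gammas), the penalty strength (lam), the initialization, and all function names are assumptions made for illustration; the paper's l(0)-norm variant would differ in the E-step weighting.

import numpy as np

def rbf_kernel(X, Z, gamma):
    # Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def em_sparse_msvr(X, y, gammas=(0.1, 1.0, 10.0), lam=1.0,
                   n_iter=50, tol=1e-8):
    # Illustrative EM-style training of a multi-scale kernel regressor
    # with an asymptotic l1-norm penalty (Laplace prior written as a
    # Gaussian scale mixture); one weight block per kernel scale.
    # Stack the kernel matrices for all scales side by side: N x (N*S).
    K = np.hstack([rbf_kernel(X, X, g) for g in gammas])
    w = np.linalg.lstsq(K, y, rcond=None)[0]  # dense initialization
    for _ in range(n_iter):
        # E-step: expected inverse prior variances; a small |w_i|
        # yields a large penalty, driving that weight toward zero.
        s = lam / np.maximum(np.abs(w), tol)
        # M-step: weighted ridge solve (quadratic loss plus Gaussian
        # prior with the variances just estimated).
        w = np.linalg.solve(K.T @ K + np.diag(s), K.T @ y)
    w[np.abs(w) < 1e-6] = 0.0  # prune effectively-zero weights
    return w, K

# Example usage on a toy 1-D regression problem:
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(80)
w, K = em_sparse_msvr(X, y)
print("nonzero weights:", np.count_nonzero(w), "of", w.size)

Each EM iteration costs one linear solve rather than a full QP, which is where the reported speedup on large data sets comes from.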
Language: English
Publisher: ELSEVIER SCIENCE BV, PO BOX 211, 1000 AE AMSTERDAM, NETHERLANDS
Document Type: Journal Article
Source URL: http://hdl.handle.net/123456789/9900
Collection: Tsinghua University
Recommended Citation:
GB/T 7714: Zheng, D. N., Wang, J. X., Zhao, Y. N. Training sparse MS-SVR with an expectation-maximization algorithm[J], 2010.
APA: Zheng, D. N., Wang, J. X., & Zhao, Y. N. (2010). Training sparse MS-SVR with an expectation-maximization algorithm.
MLA: Zheng, D. N., et al. "Training sparse MS-SVR with an expectation-maximization algorithm." 2010.