Title | Learning Deep Architectures via Generalized Whitened Neural Networks
Author | Ping Luo
Year | 2017
Venue | Australia
Abstract | Whitened Neural Network (WNN) is a recent advanced deep architecture that improves the convergence and generalization of canonical neural networks by whitening their internal hidden representations. However, the whitening transformation increases computation time. WNN reduces this runtime cost by performing whitening only once every thousand iterations, which degrades convergence due to ill conditioning. In contrast, we present generalized WNN (GWNN), which has three appealing properties. First, GWNN is able to learn compact representations, reducing computation. Second, it allows the whitening transformation to be performed at short intervals, preserving good conditioning. Third, we propose a data-independent estimate of the covariance matrix to further improve computational efficiency. Extensive experiments on various datasets demonstrate the benefits of GWNN.
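As an illustration of the whitening transformation the abstract refers to, the sketch below applies ZCA whitening to a batch of hidden activations so their covariance becomes (approximately) the identity. This is a generic NumPy sketch, not the paper's implementation; the function name, the `eps` regularizer, and the batch shapes are assumptions for the example.

```python
import numpy as np

def zca_whiten(H, eps=1e-5):
    """ZCA-whiten activations H of shape (n_samples, n_features) so that
    the whitened features have approximately identity covariance.
    Note: `eps` is a small regularizer for near-zero eigenvalues (an
    illustrative choice, not taken from the paper)."""
    Hc = H - H.mean(axis=0, keepdims=True)      # center each feature
    cov = Hc.T @ Hc / (H.shape[0] - 1)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric eigendecomposition
    # ZCA whitening matrix: E diag(1/sqrt(lambda + eps)) E^T
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Hc @ W

# Demo: correlated synthetic "hidden activations"
rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))
Hw = zca_whiten(H)
cov_w = np.cov(Hw, rowvar=False)  # should be close to the 8x8 identity
```

Recomputing `W` from a fresh eigendecomposition at every step is what makes whitening expensive; the abstract's point is that GWNN keeps the conditioning benefit while cutting this cost via compact representations and a data-independent covariance estimate.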
Language | English
Content Type | Conference Paper
Source URL | [http://ir.siat.ac.cn:8080/handle/172644/11771]
Collection | 深圳先进技术研究院_集成所 (Shenzhen Institute of Advanced Technology, Institute of Integration)
Recommended Citation (GB/T 7714) | Ping Luo. Learning Deep Architectures via Generalized Whitened Neural Networks[C]. Australia, 2017.