CORC
Research Institution
Institute of Automation [15]
Content Type
Journal article [11]
Conference paper [3]
Thesis [1]
Publication Date
2022 [3]
2021 [2]
2020 [4]
2017 [1]
2015 [2]
2014 [1]
Browse/Search Results: 15 records in total, showing 1-10
Filters — Topic: Institute of Automation
Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme
Journal article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Views/Downloads: 11/0 | Submitted: 2022/06/06
Keywords: Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; generalization performance; nontypicality sampling scheme; stochastic gradient descent (SGD)
A PID-incorporated Latent Factorization of Tensors Approach to Dynamically Weighted Directed Network Analysis
Journal article
IEEE/CAA Journal of Automatica Sinica, 2022, Volume: 9, Issue: 3, Pages: 533-546
Authors: Hao Wu; Xin Luo; MengChu Zhou; Muhyaddin J. Rawa; Khaled Sedraoui
Views/Downloads: 56/0 | Submitted: 2022/03/09
Keywords: Big data; high dimensional and incomplete (HDI) tensor; latent factorization-of-tensors (LFT); machine learning; missing data; optimization; proportional-integral-derivative (PID) controller
A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization
Journal article
IEEE/CAA Journal of Automatica Sinica, 2022, Volume: 9, Issue: 5, Pages: 812-833
Authors: Xinlei Yi; Shengjun Zhang; Tao Yang; Tianyou Chai; Karl Henrik Johansson
Views/Downloads: 40/0 | Submitted: 2022/04/24
Keywords: Distributed nonconvex optimization; linear speedup; Polyak-Łojasiewicz (P-Ł) condition; primal-dual algorithm; stochastic gradient descent
Drill the Cork of Information Bottleneck by Inputting the Most Important Data
Journal article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Views/Downloads: 16/0 | Submitted: 2022/01/27
Keywords: Training; Signal to noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; Information bottleneck (IB) theory; machine learning; minibatch stochastic gradient descent (SGD); typicality sampling
Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning
Journal article
IEEE/CAA Journal of Automatica Sinica, 2021, Volume: 8, Issue: 2, Pages: 402-411
Authors: Xin Luo; Wen Qin; Ani Dong; Khaled Sedraoui; MengChu Zhou
Views/Downloads: 27/0 | Submitted: 2021/04/09
Keywords: Big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD)
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling
Journal article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Views/Downloads: 14/0 | Submitted: 2021/01/06
Keywords: Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; Batch selection; machine learning; minibatch stochastic gradient descent (SGD); speed of convergence
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization
Journal article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 7, Pages: 2557-2568
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 17/0 | Submitted: 2020/08/03
Keywords: Convergence; Extrapolation; Optimization; Acceleration; Machine learning; Task analysis; Machine learning algorithms; Individual convergence; Nesterov's extrapolation; nonsmooth optimization; sparsity
Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence
Journal article
IEEE TRANSACTIONS ON CYBERNETICS, 2020, Volume: 50, Issue: 2, Pages: 835-845
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 14/0 | Submitted: 2020/03/30
Keywords: Convergence; Convex functions; Machine learning; Optimization methods; Linear programming; Cybernetics; Individual convergence; mirror descent (MD) methods; regularized learning problems; stochastic gradient descent (SGD); stochastic optimization
Rethinking the PID Optimizer for Stochastic Optimization of Deep Networks
Conference paper
London, United Kingdom, July 6, 2020 - July 10, 2020
Authors: Shi, Lei; Zhang, Yifan; Wang, Wanguo; Cheng, Jian; Lu, Hanqing
Views/Downloads: 12/0 | Submitted: 2021/01/27
A Semi-Supervised Predictive Sparse Decomposition Based on Task-Driven Dictionary Learning
Journal article
COGNITIVE COMPUTATION, 2017, Volume: 9, Issue: 1, Pages: 115-124
Authors: Lv Le; Zhao Dongbin; Deng QingQiong
Views/Downloads: 12/0 | Submitted: 2017/05/08
Keywords: Semi-supervised Learning; Predictive Sparse Decomposition; Neural Networks; Dictionary Learning
© Copyright 2017 CSpace - Powered by CSpace