Deep Deformation Detail Synthesis for Thin Shell Models
Chen, Lan1,2; Gao, Lin2,5; Yang, Jie2,5; Xu, Shibiao4; Ye, Juntao1; Zhang, Xiaopeng1; Lai, Yu-Kun3
Journal: COMPUTER GRAPHICS FORUM
2023-08-10
Pages: 13
Keywords: Computing methodologies; Physical simulation; Artificial intelligence
ISSN: 0167-7055
DOI: 10.1111/cgf.14903
Corresponding authors: Gao, Lin (gaolin@ict.ac.cn); Xu, Shibiao (shibiaoxu@bupt.edu.cn)
Abstract: In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computation and extensive manual tuning. Data-driven techniques reduce the computation significantly by utilizing a preprocessed database. One type of method relies on human poses to synthesize fitted garments, but such methods cannot be applied to general cloth animations. Another type adds details to coarse meshes obtained through simulation, which does not have such restrictions. However, existing works usually use coordinate-based representations, which cannot cope with large-scale deformation and require dense vertex correspondences between coarse and fine meshes. Moreover, as such methods only add details, they require coarse meshes to be sufficiently close to fine meshes, which can be either impossible, or require unrealistic constraints to be applied when generating the fine meshes. To address these challenges, we develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and design a DeformTransformer network to learn the mapping from low-resolution meshes to ones with fine details. The TS-ACAP representation is designed to ensure both spatial and temporal consistency for sequential large-scale deformations in cloth animations. With this representation, our DeformTransformer network first uses two mesh-based encoders with shared convolutional kernels to extract coarse and fine features, respectively. To transfer the coarse features to the fine ones, we leverage a spatial and temporal Transformer network consisting of vertex-level and frame-level attention mechanisms to ensure detail enhancement and temporal coherence of the prediction. Experimental results show that our method produces reliable and realistic animations on various datasets at high frame rates, with superior detail synthesis compared to existing methods.
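The abstract describes a two-level attention scheme: vertex-level attention within each frame for detail transfer, then frame-level attention across time for temporal coherence. The paper's implementation details are not part of this record; the following is only a minimal NumPy sketch of how such a scheme could be wired up (all function names, shapes, and the mean-pooling step are assumptions for illustration, not the authors' DeformTransformer).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over (..., n, d) arrays
    d = q.shape[-1]
    w = softmax(q @ k.swapaxes(-1, -2) / np.sqrt(d))
    return w @ v

def spatiotemporal_attention(coarse, fine):
    """coarse, fine: (frames, vertices, feat) feature arrays.

    Hypothetical sketch: vertex-level attention within each frame
    (fine features query coarse ones), then frame-level attention
    across time on per-frame pooled features.
    """
    vert_out = attention(fine, coarse, coarse)      # (F, V, D)
    pooled = vert_out.mean(axis=1)                  # (F, D) per-frame summary
    frame_ctx = attention(pooled, pooled, pooled)   # (F, D) temporal mixing
    # broadcast the temporal context back to every vertex
    return vert_out + frame_ctx[:, None, :]

rng = np.random.default_rng(0)
coarse = rng.standard_normal((4, 8, 16))  # 4 frames, 8 vertices, 16-dim features
fine = rng.standard_normal((4, 8, 16))
out = spatiotemporal_attention(coarse, fine)
print(out.shape)  # → (4, 8, 16)
```

The real method operates on TS-ACAP deformation features rather than raw coordinates, and uses learned projections and multi-head attention; the sketch only shows the spatial-then-temporal attention ordering the abstract describes.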
Funding projects: National Natural Science Foundation of China [U21A20515]; National Natural Science Foundation of China [62171321]; National Natural Science Foundation of China [62162044]; National Natural Science Foundation of China [62102414]; National Natural Science Foundation of China [62061136007]; Beijing Municipal Natural Science Foundation for Distinguished Young Scholars [JQ21013]; Royal Society Newton Advanced Fellowship [NAF\R2\192151]; Innovation Funding of ICT, CAS [E361090]
WOS research area: Computer Science
Language: English
Publisher: WILEY
WOS accession number: WOS:001046199300001
Funding agencies: National Natural Science Foundation of China; Beijing Municipal Natural Science Foundation for Distinguished Young Scholars; Royal Society Newton Advanced Fellowship; Innovation Funding of ICT, CAS
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/53963
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Beijing, Peoples R China
3. Cardiff Univ, Cardiff, Wales
4. Beijing Univ Posts & Telecommun, Beijing, Peoples R China
5. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
Recommended citation formats:
GB/T 7714: Chen, Lan, Gao, Lin, Yang, Jie, et al. Deep Deformation Detail Synthesis for Thin Shell Models[J]. COMPUTER GRAPHICS FORUM, 2023: 13.
APA: Chen, Lan, Gao, Lin, Yang, Jie, Xu, Shibiao, Ye, Juntao, Zhang, Xiaopeng, & Lai, Yu-Kun. (2023). Deep Deformation Detail Synthesis for Thin Shell Models. COMPUTER GRAPHICS FORUM, 13.
MLA: Chen, Lan, et al. "Deep Deformation Detail Synthesis for Thin Shell Models." COMPUTER GRAPHICS FORUM (2023): 13.