    Zhang Shixue and Zhao Jinyu. Multiresolution Animated Models Generation Based on Deformation Distance Analysis[J]. Journal of Computer Research and Development, 2012, 49(7): 1432-1437.


    Multiresolution Animated Models Generation Based on Deformation Distance Analysis

    • Abstract: In computer graphics, many mesh simplification methods exist, but the vast majority target static meshes; little work has addressed simplifying deforming animated meshes. This paper proposes a method for generating multiresolution models of dynamic surfaces based on deformation distance. The model is simplified by iterative edge-contraction operations: deformation distance measures how much each triangle deforms over the whole animation sequence, and this weight is folded into the accumulated edge-collapse cost, effectively preserving detail features in regions with large deformation. On this basis, a mesh optimization algorithm over the animated model sequence is proposed that adjusts triangle shapes while improving the temporal coherence of the output dynamic models and reducing visual popping between adjacent frames. Experimental results show that the method is efficient, easy to implement, and can output high-quality simplified model sequences at any resolution.
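The deformation-weighted edge-collapse cost described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact deformation-distance metric is not given here, so the accumulated change in a triangle's edge lengths across frames is used as a stand-in, and the function names (`deformation_distance`, `weighted_collapse_cost`), the weighting factor `lam`, and the data layout (frames as lists of vertex positions) are all assumptions.

```python
import math

def edge_lengths(tri, frame):
    """Lengths of a triangle's three edges at one frame of the animation."""
    a, b, c = frame[tri[0]], frame[tri[1]], frame[tri[2]]
    dist = lambda p, q: math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return (dist(a, b), dist(b, c), dist(c, a))

def deformation_distance(tri, frames):
    """Accumulated change of the triangle's edge lengths over the whole
    sequence -- a stand-in for the paper's deformation-distance measure."""
    total = 0.0
    for f0, f1 in zip(frames, frames[1:]):
        total += sum(abs(l1 - l0)
                     for l0, l1 in zip(edge_lengths(tri, f0),
                                       edge_lengths(tri, f1)))
    return total

def weighted_collapse_cost(base_cost, tris_touching_edge, frames, lam=1.0):
    """Edge-collapse cost with the deformation weight folded in: edges whose
    incident triangles deform a lot become expensive to collapse, so detail
    in strongly deforming regions survives simplification longest."""
    w = sum(deformation_distance(t, frames) for t in tris_touching_edge)
    return base_cost + lam * w
```

Edges bordering strongly deforming triangles receive a large penalty, so a greedy simplifier collapses them last, matching the stated goal of preserving features in high-deformation areas.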

       

      Abstract: In computer graphics, methods for mesh simplification abound. However, most of them focus on static meshes, and only a few address the simplification of dynamic, deforming models. In this paper, we propose an efficient method for the multiresolution representation of deforming surfaces based on deformation distance analysis. Our method generates models at different levels of detail by performing iterative edge-contraction operations. We use deformation distance to measure how much each triangle face deforms over the whole animation, and define a deformation weight that is added to the aggregated edge-contraction cost, so that features in areas with large deformation are well preserved in the output animation model. We also propose a mesh optimization method for dynamic models: we first compute the mean value coordinate weights on the first frame's model and use them to reposition the vertices of the second frame's model; likewise, the weights computed on the second frame move the vertices of the third frame, and so on. In this way the triangle shapes of the current frame are transferred to the next frame, which efficiently improves the temporal coherence of the output animation and reduces visual artifacts between adjacent frames. The results show that our approach is efficient, easy to implement, and can generate good-quality dynamic approximations with well-preserved fine details at any given resolution.
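The frame-to-frame shape transfer described above can be sketched with Floater-style mean value coordinates for a single vertex and its one-ring. This is a hedged illustration under assumed data structures (an ordered one-ring of neighbor positions per frame); `mean_value_weights` and `transfer_position` are hypothetical names, not the paper's API.

```python
import math

def _sub(p, q):
    return tuple(pi - qi for pi, qi in zip(p, q))

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def _angle(u, v):
    """Angle between two 3D vectors, clamped against rounding error."""
    c = sum(a * b for a, b in zip(u, v)) / (_norm(u) * _norm(v))
    return math.acos(max(-1.0, min(1.0, c)))

def mean_value_weights(center, ring):
    """Mean value coordinates of `center` w.r.t. its ordered one-ring:
    w_i = (tan(g_{i-1}/2) + tan(g_i/2)) / ||v_i - center||, normalized."""
    k = len(ring)
    w = []
    for i in range(k):
        d = _sub(ring[i], center)
        g_prev = _angle(_sub(ring[i - 1], center), d)
        g_next = _angle(d, _sub(ring[(i + 1) % k], center))
        w.append((math.tan(g_prev / 2) + math.tan(g_next / 2)) / _norm(d))
    s = sum(w)
    return [wi / s for wi in w]

def transfer_position(weights, ring_next):
    """Reposition the vertex on the next frame using the previous frame's
    weights, carrying the local triangle shape forward for temporal coherence."""
    return tuple(sum(wi * p[d] for wi, p in zip(weights, ring_next))
                 for d in range(3))
```

Applying the weights of frame *i* to the one-ring of frame *i+1*, and so on down the sequence, propagates the current frame's triangle shapes forward, which is the temporal-coherence step the abstract describes.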

       
