
    A Mixed-Model-Based Approach to Real-Time Virtual Human Garment Animation

    Real-Time Garment Animation Based on Mixed Model

    • Abstract: Real-time garment animation generates realistic cloth dynamics for 3D virtual characters on the fly, and has broad application prospects in games, entertainment, and virtual garment design and presentation. The difficulty lies in building a garment animation model that achieves the best possible animation quality under the constraint of real-time computation. Based on an analysis of the positional conflicts (collisions) that occur between the garment model and the body model during motion, this paper proposes a mixed-model computational scheme for real-time virtual human garment animation. First, the correlation between garment motion and body motion is analyzed from the collision information recorded in sample garment animation data. On this basis, a new mixing strategy is proposed and implemented that combines a physically based dynamics model, which captures cloth dynamics well, with a geometric deformation method, which is computationally efficient, yielding a garment animation model that supports real-time computation and whose efficiency can be controlled dynamically. Experimental results show that the model generates visually plausible garment animation in real time.
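The correlation analysis is described only at a high level in the abstract; as a rough, non-authoritative illustration of the idea, the Python sketch below (all names are hypothetical, not from the paper) derives a per-vertex cloth-body correlation score from the collision records of a set of sample frames, using plain collision frequency as the simplest probabilistic proxy.

```python
import numpy as np

def estimate_cloth_body_correlation(collision_samples: np.ndarray) -> np.ndarray:
    """Estimate how strongly each cloth vertex follows the body.

    collision_samples: boolean array of shape (num_frames, num_vertices);
    entry [f, v] is True when cloth vertex v collides with (or stays within
    a small tolerance of) the body surface in sample frame f.

    Returns one score in [0, 1] per vertex: the fraction of sample frames in
    which the vertex is in contact with the body. Frequent contact is read as
    strong correlation with body motion; rare contact as free-moving cloth.
    """
    return collision_samples.astype(float).mean(axis=0)
```

The paper's analysis is probabilistic over the sample animations; contact frequency is used here only as the simplest stand-in for that analysis.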

       

      Abstract: Real-time garment animation generates vivid cloth animation on 3D virtual avatars within a strict computational time budget. It has broad potential applications in games, entertainment, the garment industry, and related fields. The key problem is how to establish a computational model that produces visually plausible garment animation under real-time constraints. Existing cloth animation models fall into two categories according to how the fabric is modeled: physically based models and geometrically based models. The former offers visual realism; the latter offers high efficiency. Mixed models, which combine the two, are an effective route to real-time cloth animation. This paper presents a new sample-driven mixed model for garment animation. Collisions between the cloth and the body in the sample data are investigated with a probabilistic analysis to predict the correlation between cloth motion and body motion, so that the cloth can be partitioned reasonably according to this correlation and the two types of animation models can be mixed over the resulting regions. The new mixed model supports real-time cloth animation and provides a mechanism for dynamically controlling its efficiency. Its advantages are as follows: first, it mixes the two models automatically; second, it supports real-time cloth animation; third, its efficiency can be controlled dynamically; finally, the partitioning of the cloth is finer and more reasonable. Experiments show that, compared with a method based on static distance, the animation produced by our method is closer to fully physically based results at the same computational cost.
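To picture how the two models might be mixed over the partitioned cloth, and where the efficiency control comes from, here is a minimal per-frame sketch under the same assumptions; `skin_vertices` and `simulate_vertices` are hypothetical placeholders for a concrete geometric-deformation routine and a concrete cloth simulator, neither of which is specified by the abstract.

```python
def animate_garment_frame(positions, correlation, threshold,
                          skin_vertices, simulate_vertices, body_pose, dt):
    """Advance the garment by one frame with a hybrid update.

    positions:   (N, 3) array of current cloth vertex positions.
    correlation: (N,) per-vertex cloth-body correlation scores.
    Vertices scoring at or above `threshold` are treated as tightly following
    the body and are deformed geometrically (cheap); the rest are advanced by
    physical simulation (realistic but expensive). `skin_vertices` and
    `simulate_vertices` are placeholder callbacks for those two models.
    """
    skinned = correlation >= threshold   # tight region: geometric deformation
    simulated = ~skinned                 # loose region: dynamics

    new_positions = positions.copy()
    new_positions[skinned] = skin_vertices(skinned, body_pose)
    new_positions[simulated] = simulate_vertices(simulated, positions,
                                                 body_pose, dt)
    return new_positions
```

Under this reading, `threshold` acts as the efficiency knob mentioned in the abstract: raising it shrinks the simulated region and lowers the per-frame cost, while lowering it does the opposite; how the paper actually adjusts the partition at run time is not detailed here.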

       
