Real-time garment animation aims to generate vivid cloth motion on 3D virtual avatars within a strict computational budget, and it has broad potential applications in games, entertainment, the garment industry, and related fields. The central problem is to establish a computational model that produces visually plausible garment animation under the constraint of real-time computing. Existing cloth animation models fall into two categories according to how fabric is modeled: physics-based models, which offer visual realism, and geometry-based models, which offer high efficiency. Mixed models, which combine physics-based and geometry-based models, are an effective route to real-time cloth animation. This paper presents a new mixed model for garment animation driven by sample data. In the sample data, collisions between cloth and body are analyzed with a probabilistic method to predict the correlation between cloth motion and body motion, so that the cloth can be partitioned reasonably according to this correlation and animated with a mixture of the two model types. The new mixed model supports real-time cloth animation and provides a mechanism for dynamic efficiency control. It has the following advantages: first, it mixes the two models automatically; second, it supports real-time cloth animation; third, its efficiency can be controlled dynamically; finally, the cloth partition is finer and more reasonable. Experiments show that, compared with a method based on static distance, the animation results of our method are closer to physics-based results at the same efficiency.
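To make the partition-and-mix idea concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes each cloth vertex has already been assigned a correlation score with body motion (e.g. from the probabilistic collision analysis), partitions vertices by a threshold into a cheap geometry-driven set and a physics-simulated set, and adjusts that threshold per frame as a simple form of dynamic efficiency control. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def partition_cloth(correlations, threshold):
    """Split cloth vertices into two animation models by their
    correlation with body motion (illustrative, not the paper's method).
    Vertices whose motion closely follows the body (high correlation)
    can be driven by a cheap geometric model; the rest need physics."""
    correlations = np.asarray(correlations, dtype=float)
    geometric = np.flatnonzero(correlations >= threshold)
    physical = np.flatnonzero(correlations < threshold)
    return geometric, physical

def adjust_threshold(threshold, frame_ms, budget_ms, step=0.05):
    """Hypothetical dynamic efficiency control: if the last frame
    exceeded the time budget, raise the threshold so more vertices
    fall into the cheap geometric model; otherwise lower it to give
    more vertices to the physics simulator for realism."""
    if frame_ms > budget_ms:
        return min(1.0, threshold + step)
    return max(0.0, threshold - step)

# Example: four vertices with per-vertex correlation scores.
geo, phys = partition_cloth([0.9, 0.2, 0.7, 0.4], threshold=0.5)
```

A design choice worth noting: because the threshold is a single scalar, raising or lowering it moves whole bands of vertices between the two models at once, which is what allows efficiency to be traded against realism continuously at run time.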