
2012  Vol. 49  No. 7

Abstract:
To address the limitations of skeleton extraction on discrete point models and low-resolution models, this paper presents a novel skeleton extraction approach for point models based on surface and tangency attributes. First, the definition and calculation of the two attributes are given. Then, geometric contraction is performed through smooth surface contraction and skeleton attraction. Finally, the center points produced by the iterative contraction are connected to obtain the skeleton of the model. Experimental results show that the skeleton extracted by this approach expresses the geometry and topology of the original model well; even when the model lacks connectivity information or has low resolution, the approach still yields a good skeleton.
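As a minimal illustration of the iterative contraction idea (a rough stand-in, since the paper's surface and tangency attributes are not reproduced here), each point can be pulled repeatedly toward the centroid of its nearest neighbors so the cloud collapses toward a curve-like center, after which the contracted centers would be connected into a skeleton:

import numpy as np

def contract_points(points, k=8, step=0.5, iterations=20):
    # Pull each point toward the centroid of its k nearest neighbors;
    # a crude stand-in for attribute-driven geometry contraction.
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iterations):
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # brute-force kNN
        idx = np.argsort(d2, axis=1)[:, 1:k + 1]                 # skip self
        centroids = pts[idx].mean(axis=1)
        pts += step * (centroids - pts)                          # contract
    return pts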
Abstract:
Cloth animation has been one of the hottest topics in computer graphics for many years, and simulating inextensible cloth with wrinkles remains one of its challenges. To prevent over-stretching in cloth animation, effective constraints are usually introduced into the system. However, existing cloth solvers establish these constraints without regard to the relationship between stretching and bending deformation, so the final animation is not satisfactory. In this paper, a novel cloth animation method based on implicit constraint dynamics is proposed. First, the bending deformation of cloth is analyzed at the micro-mesh level, and based on this analysis an adaptive constraint on structural and shear stretching is established. Applying these constraints avoids over-elongation without suppressing wrinkles on the cloth. At the same time, global collision constraints are established to prevent penetration, which avoids the over-elongation easily caused by locally moving particles. Second, a constrained dynamic equation including the implicit constraints is developed and solved by a constraint-manifold iterative refinement method. Finally, experiments are conducted, and analysis and evaluation are given. The results show that our method is stable and effective and ensures realistic and efficient cloth simulation.
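As a hedged illustration of the kind of stretch limiting involved (a simple explicit position projection on structural and shear springs, not the paper's implicit constraint dynamics or its bending-aware adaptive constraints):

import numpy as np

def limit_stretch(positions, edges, rest_lengths, max_strain=0.1):
    # Project particle pairs so no spring exceeds (1 + max_strain) of its
    # rest length; equal masses and free particles are assumed.
    p = positions
    for (i, j), L0 in zip(edges, rest_lengths):
        d = p[j] - p[i]
        L = np.linalg.norm(d)
        Lmax = (1.0 + max_strain) * L0
        if L > Lmax and L > 1e-12:
            corr = 0.5 * (L - Lmax) * d / L   # split the correction evenly
            p[i] += corr
            p[j] -= corr
    return p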
Abstract:
3D hand tracking is one of the major research topics in human-computer interaction. We present a novel model-based hand tracking method that embeds a hierarchical optimization method into a particle-filter-based tracking framework to improve the efficiency of particle sampling in the hidden state space. First, a low-dimensional hidden state space is introduced to approximately describe the hand configuration in the original high-dimensional configuration space, and a dynamic hand model in the hidden state space is derived from the physiological constraints of hand motion. Second, to obtain more effective particles during tracking, a hierarchical genetic optimization method is used as the importance sampling function to modify the sampling step of the particle filter. Experiments demonstrate that our approach tracks quickly even against cluttered backgrounds and under partial self-occlusion of the hand.
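A schematic sketch of how an optimization step can be embedded into particle-filter sampling; the hierarchical genetic optimizer is abstracted as a refine() callback, since its details are assumptions beyond this abstract:

import numpy as np

def particle_filter_step(particles, weights, dynamics, likelihood, refine):
    # Resample, propagate through the dynamic model, locally refine each
    # particle (stand-in for the genetic optimizer used as the importance
    # sampling function), then re-weight by the observation likelihood.
    n = len(particles)
    idx = np.random.choice(n, size=n, p=weights)
    proposed = np.array([refine(dynamics(particles[i])) for i in idx])
    w = np.array([likelihood(p) for p in proposed])
    w = w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)
    return proposed, w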
Abstract:
Most existing methods for animating a human body require much manual work and only handle bodies in certain specific poses. We present a novel automated method for animating human bodies. First, five feature points at the end points of the head and limbs on the body surface are automatically extracted and recognized. Second, taking these feature points as source points, geodesic distance isolines on the surface are computed and the central lines are constructed from them. Third, along the central lines, joint positions are roughly determined based on anthropometric knowledge and then refined using the kinematic characteristics of the skeleton and motion data. Fourth, skin attachment is computed by solving for heat equilibrium. Finally, we match the extracted skeleton to motion data by aligning local coordinate frames and animate the human body with the motion data. Experimental results demonstrate that our method is pose-independent and fully automated, and can generate more realistic animation than previous methods.
Abstract:
Local thresholding segmentation methods suffer from two problems: they usually require many parameters to be selected, and their results are often discontinuous across neighboring regions. To overcome these problems, an interactive region-based thresholding segmentation algorithm for document images is proposed in this paper, exploiting prior knowledge or experience from the users. First, the method roughly divides the image into several regions. Second, it sorts all image blocks by their standard deviation, which is taken as a measure of how much background and object information each block contains. Third, the users provide interactive input to separate the blocks into three classes: blocks containing only background or only object, blocks containing a small amount of background or object, and blocks in which background and object are roughly balanced. Finally, each block is binarized according to a criterion appropriate to its class. Extensive experimental results show that, compared with global methods, simple region-based thresholding approaches and Chou's method, the proposed scheme yields more promising binarization results and performs better on document images under both normal and inadequate illumination. Moreover, the approach is also effective for some non-text images.
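A small sketch of the block-ranking step on which the interactive split is based (the block size is an illustrative assumption; the user-supplied cut points that separate the three block classes are omitted):

import numpy as np

def rank_blocks_by_std(image, block=32):
    # Split a grayscale image into blocks and sort them by standard
    # deviation, the measure of how much background/object information
    # each block contains; low-std blocks are nearly pure background or object.
    h, w = image.shape
    blocks = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            blocks.append(((y, x), float(patch.std())))
    return sorted(blocks, key=lambda b: b[1])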
Abstract:
In computer graphics, methods for mesh simplification are common. However, most of them focus on static meshes, and only a few works address simplifying dynamic models. In this paper, we propose an efficient method for the multiresolution representation of deforming surfaces based on deformation distance analysis. Our method obtains models at different levels of detail by performing iterative edge contraction. We use the deformation distance to analyze how strongly each triangle deforms over the whole animation, and define a deformation weight that is added to the aggregated edge contraction cost, so features in areas with large deformation are well preserved in the output animation model. We also propose a mesh optimization method for dynamic models: we first compute the mean value coordinate weights of the first frame and use them to move the vertices of the second frame; the weights computed on the second frame are then used to move the vertices of the third frame, and so on. This transfers the triangle shapes of the current frame to the next frame, which efficiently improves the temporal coherence of the output animation and reduces visual artifacts between adjacent frames. The results show that our approach is efficient and easy to implement, and that good-quality dynamic approximations with well-preserved fine details can be generated at any given frame.
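A hedged sketch of folding a deformation measure into the contraction cost; the paper's exact deformation-distance definition is not given in this abstract, so the variance of an edge's length over all frames is assumed here as a stand-in:

import numpy as np

def edge_contraction_cost(frames, edge, base_cost, alpha=1.0):
    # Augment a standard contraction cost (e.g. a quadric error) with a
    # weight reflecting how much the edge deforms across the animation.
    # frames: array of shape (num_frames, num_vertices, 3).
    i, j = edge
    lengths = np.linalg.norm(frames[:, i] - frames[:, j], axis=1)
    deformation = lengths.var()              # large in strongly deforming areas
    return base_cost + alpha * deformation   # such edges are contracted later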
Abstract:
The steady increase of Internet bandwidth makes remote visualization possible, with the advantages of remote sharing, mobility and convenience. A sort-last distributed rendering architecture composed of rendering nodes, blending nodes and task nodes, together with the corresponding algorithms including GPU-based blending and anti-aliasing, is proposed to address the problem of rendering large-scale data. The framework is highly parallel and extensible at every level, such as rendering, blending and task balancing: arbitrary numbers of nodes can be deployed in a stacked structure according to performance requirements. On this basis, a remote visualization system named Waterman is implemented to provide Internet services for terrain rendering and for visualizing sewage from sea outfalls. The design methods and techniques of Waterman are introduced in detail, including a GPU-based ray-casting algorithm for fast terrain rendering, a land-mask-based technique that removes water areas from aerial photo textures for sea surface rendering and visualization, and a hybrid rendering method combining polygon models with image-based rendering. Finally, extensive experiments are conducted to benchmark the distributed rendering framework and to analyze the time performance and data traffic throughput of the complete remote visualization system. The results indicate that the proposed method is practicable and robust, and can serve as a useful study case for similar systems.
Abstract:
Shock feature visualization plays an important role in flow visualization. To extract shocks, existing methods usually first detect them with the normal Mach number and then filter the noise. When computing the normal Mach number, they do not distinguish between the pressure gradient and the density gradient. Moreover, their noise filters have poor adaptability and accuracy, and their usefulness depends on the test data: weak shocks are often filtered out along with the noise, especially when the flow contains multiple shock features of different strengths, and the shock surface may become disconnected or split even for single-shock flows. It is therefore necessary to perform shock detection with the pressure gradient. Based on these shock attributes, a novel visualization method for multi-shock features is proposed that uses two-level sampling within a ray-casting framework. The work is performed on the GPU for 3D unstructured-grid data with complicated topology. The experimental results show that our method filters noise automatically and that its adaptability and accuracy are much better than those of existing methods, even for multi-shock flows, while achieving real-time performance on large unstructured-grid data.
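For reference, a commonly used form of the normal Mach number takes the pressure gradient as the shock-normal direction (an assumed formulation, with velocity u, sound speed a and pressure p); cells with M_n >= 1 are shock candidates before noise filtering:

M_n = \frac{\mathbf{u} \cdot \nabla p}{a \, \lVert \nabla p \rVert}, \qquad \text{shock candidate if } M_n \ge 1 .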
Abstract:
Uncertain data is now widespread in many practical applications, such as sensor networks, RFID networks, location-based services and moving object management. As an important aspect of uncertain data management, uncertain data query plays an important role in information retrieval, data mining, decision making, environment monitoring and many other applications, and is emerging as a hot topic in database research, network computing and related areas. Starting from an introduction of the various query types and an analysis of the query characteristics in current research, four typical types of uncertain data query are summarized: uncertain skyline query, uncertain top-k query, uncertain nearest neighbor (NN) query and uncertain aggregation query. The definitions and specific features of these four query types are discussed, and the main current studies of each type, as well as the advantages and disadvantages of the various methods, are analyzed. Finally, based on an analysis of the latest research on uncertain data queries, future work is discussed and outlined.
Abstract:
Skyline queries over uncertain streams have become challenging because of the uncertainty and dynamics of the data. Uncertain data is usually represented by a multivariate probability density function (PDF), and current skyline queries over uncertain data streams are modeled with discrete PDFs. However, a discrete PDF may not suit many streams, because such streams change continuously. Studying skyline queries on a continuous PDF model, this work proposes an efficient skyline query method over Gaussian-model uncertain streams (SGMU). First, a dynamic Gaussian modeling (DGM) algorithm is proposed to build the Gaussian model by sampling the sliding window of the stream. Second, a Gauss-tree based skyline query algorithm (GTS) is presented to build a spatial indexing structure for the uncertain data stream. Experimental results demonstrate that SGMU not only models the uncertain objects efficiently to support skyline queries, but also greatly accelerates the queries by pruning objects effectively.
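A minimal sketch of the dynamic Gaussian modeling idea under the assumption that a Gaussian is simply fitted to the samples currently in each object's sliding window (the Gauss-tree index and the pruning rules are omitted):

import numpy as np
from collections import deque

class GaussianStreamModel:
    # Maintain a sliding window of d-dimensional samples for one uncertain
    # object and expose its current Gaussian model (mean, covariance).
    def __init__(self, window=100):
        self.window = deque(maxlen=window)

    def update(self, sample):
        self.window.append(np.asarray(sample, dtype=float))

    def model(self):
        data = np.array(self.window)
        return data.mean(axis=0), np.cov(data, rowvar=False)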
Abstract:
Traditional agent cooperation models emphasize a high degree of agent autonomy, and their spontaneous collaborative processes are driven entirely by internal "selfish" motivation. Without macro-level guidance or external control, it is difficult to meet the reliability requirements of current software systems. The computational complexity of the formal theory and the lack of an effective mechanism for conflict resolution are also obstacles to applying agent technology in real software systems. In this paper, using the organization and policy metaphor, we build an organized and policy-oriented agent collaboration model. Through the organization and its policies, macro-level guidance and external control are imposed on the agents to enhance system dependability. Using an extended defeasible logic framework, we establish a formal theory for this collaborative process that has non-monotonic reasoning ability and linear-time computational complexity, and whose built-in priority mechanism provides an effective means of conflict resolution. Based on this model, a policy-oriented cooperation process in an agent organization is formally described with four cooperation stages: organizational state updating, organizational goal creation, agent obligation distribution and agent action execution. We then analyze the computational complexity and prove the consistency of the model, as well as some other simple but interesting properties. Finally, the main characteristics of the model are verified by an example.
Abstract:
Hinge loss is central to the success of support vector machines (SVM) in machine learning, and L1 regularization plays a crucial role in sparse learning, which is essential for large-scale classification problems. However, both hinge loss and L1 regularization are non-differentiable, and higher-order gradient information is unavailable. In this paper, the optimization problem formed by L1 regularization plus hinge loss is systematically investigated using sub-gradient methods. We first describe algorithms for the direct sub-gradient method and the projected sub-gradient method in a stochastic setting. To confirm the algorithms' correctness, we analyze the convergence as well as the convergence rate of the stochastic projected sub-gradient method. Experimental results on large-scale text classification data demonstrate that, on large sparse problems, the stochastic projected sub-gradient method has a better convergence rate and high sparsity, with many elements of the weight vector being zero. We also show how the projection threshold affects the algorithms' sparsity.
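A compact sketch of the stochastic sub-gradient update for L1-regularized hinge loss; the projection is approximated here by truncating small weights at a threshold, an assumption standing in for the paper's projected variant:

import numpy as np

def stochastic_projected_subgradient(X, y, lam=0.01, eta=0.1,
                                     epochs=10, threshold=1e-4):
    # Minimize  lam * ||w||_1 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)
    # with stochastic sub-gradient steps; labels y are in {-1, +1}.
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, epochs * n + 1):
        i = np.random.randint(n)
        g = lam * np.sign(w)                      # sub-gradient of the L1 term
        if y[i] * X[i].dot(w) < 1:
            g -= y[i] * X[i]                      # sub-gradient of the hinge loss
        w -= (eta / np.sqrt(t)) * g
        w[np.abs(w) < threshold] = 0.0            # projection/truncation -> sparsity
    return w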
Abstract:
The classification of remote sensing images is one of the most important issues in the remote sensing field. Due to the inherent variability and uncertainty of the data, training data is hard to obtain in most real-world applications, which greatly affects the classification accuracy of traditional classifiers. In this paper, a novel fuzzy associative classifier based on fuzzy association rules, named fuzzy associative remote sensing classification (FARSC), is developed for the classification of remote sensing images. The proposed algorithm employs fuzzy C-means to partition quantitative attributes according to their intrinsic characteristics, adopts new joining and pruning techniques that avoid generating useless candidate itemsets, and introduces a weighted parameter to score the fuzzy association rules, fusing multiple rules to avoid bias towards particular classes. To evaluate its performance, an experiment on a remote sensing image of the Zhalong Nature Reserve is performed, comparing it with two other image classification algorithms: support vector machine and extreme learning machine. The experimental results show that the proposed algorithm not only achieves higher classification accuracy but is also insensitive to the amount of training data. Hence FARSC can effectively overcome the lack of training data in practical remote sensing classification while maintaining high classification accuracy.
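A short sketch of the fuzzy C-means step used to partition a quantitative attribute into fuzzy sets (the rule mining and weighted scoring are omitted; the fuzzifier m = 2 is an assumption):

import numpy as np

def fuzzy_cmeans_1d(values, c=3, m=2.0, iters=50):
    # Partition a one-dimensional attribute into c fuzzy sets; returns the
    # membership matrix u (len(values) x c) and the c cluster centers.
    x = np.asarray(values, dtype=float)
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return u, centers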
Abstract:
Most traditional digital games pay little attention to describing high-level dynamic interactions and do not support describing the collaboration relations and planning of multi-level groups; furthermore, few of them are based on a complete formal theory. The description logic of tasks combines the knowledge representation structure of description logic with the task semantics of the logic of tasks. A formal definition of collaboration and collaborative planning in digital games is provided, which defines the completeness of a task. A decision theorem for the completeness of planning under joint strategies is also given by this method. Experimental results demonstrate that the task interaction semantics of high-level relationships in a virtual group are described accurately, and that a decidable deduction service for the completeness of collaborative planning is provided as well.
Abstract:
The dynamic credibility evaluation of software has become a hot issue in information security. To improve the accuracy of evaluation, a credibility evaluation model based on software behavior trace (CEMSBT) is presented in this paper. We introduce the software behavior trace (SBT) to describe software behavior. Since the operational process and the context of the running software are key factors in credibility evaluation, the SBT consists of an operation trace and a function trace. The operation trace is the operation sequence of the running software, denoted by ordered check-point vectors; the function trace is depicted by a series of scenes that characterize the software's functions. To reduce the time and space overhead of credibility evaluation, we give reduction rules for the SBT. Our model applies an identification evaluation rule and scene evaluation rules to check the actual behavior of the software. Branch points introduce randomness that intruders can exploit, so it is necessary to judge which branch will run next; we propose a scene similarity method to determine the direction of a branch, which reduces false negatives. The simulation results indicate the accuracy and efficiency of CEMSBT.
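A hedged toy sketch of checking an observed operation trace against the ordered check points prescribed by a trusted behavior trace (names and structure are illustrative assumptions, not the paper's evaluation rules):

def trace_passes_check_points(observed_ops, check_points):
    # True if the ordered check points occur as a subsequence of the
    # observed operation sequence, i.e. the run passes them in order.
    it = iter(observed_ops)
    return all(cp in it for cp in check_points)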
Abstract:
Vulnerabilities in ActiveX controls are numerous and tend to have a high level of severity. They are frequently exploited in Web-based attacks to compromise client computers, motivating research into techniques for discovering such flaws automatically. In this work, the authors propose and implement an ActiveX vulnerability detection tool named ActiveX-Fuzzer. It is a blackbox fuzzing tool that automatically feeds the interface exposed by an ActiveX control with crafted semi-valid data, attempting to identify potential vulnerabilities including buffer overflows, integer overflows and format string flaws. The tool was tested against a broad range of commonly used ActiveX controls and detected a number of highly severe, previously undiscovered vulnerabilities affecting Tencent QQ, WinZip, Microsoft Office and other software products, as well as online services of several major banks. The test results demonstrate the effectiveness of the approach.
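A toy sketch of the kind of semi-valid inputs such a fuzzer feeds to an exposed interface (purely illustrative data generation; the COM/ActiveX invocation layer is not shown):

def semi_valid_payloads(valid_sample="example"):
    # Yield inputs aimed at buffer overflows, integer overflows and format
    # string flaws, derived from one known-valid sample value.
    yield valid_sample                       # baseline valid input
    for n in (128, 1024, 65536):
        yield "A" * n                        # oversized strings -> buffer overflow
    for v in (0, -1, 2**31 - 1, 2**32, 2**63):
        yield v                              # boundary integers -> integer overflow
    yield "%s%s%s%n"                         # classic format-string probe
    yield valid_sample + "%x" * 16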
Abstract:
Our research on distributed query authentication aims at decreasing the authentication cost of existing schemes such as the authenticated skip list and signature chaining. This paper gives a definition of distributed query authentication and a formalized description of the authenticity that must be satisfied. A new authenticated data structure called the hierarchical Hash list (HHL) is designed to guarantee the integrity and authenticity of query answers while decreasing the computation and authentication cost as much as possible; its definition and the algorithms for its construction, authentication and updating are also given. Based on an analysis of the redundant Hash nodes in the HHL, the basic HHL is improved to be more cost-efficient: statistical methods and hierarchical data processing are used, and the cost decreases to O(log n). A security analysis is carried out by simulating adversaries' attacks against the authenticity of the data. The results show that the HHL can detect the various behaviors that could compromise the authenticity of query answers, which proves the scheme's security. Experiments show that, compared with the typical distributed query authentication scheme of signature chaining, our scheme is more efficient in authentication cost.
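A simplified two-level reduction of the hierarchical Hash list idea (an assumed sketch, not the paper's exact structure): leaf hashes of the answer records are grouped, each group is hashed again, and the client recomputes the top digest to check the answers' integrity and authenticity.

import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def two_level_digest(records, group_size=4):
    # Hash each record, hash each group of leaf hashes, then hash the
    # concatenated group digests into a single top-level digest.
    leaves = [_h(r) for r in records]
    groups = [b"".join(leaves[i:i + group_size])
              for i in range(0, len(leaves), group_size)]
    return _h(b"".join(_h(g) for g in groups))

def verify_answers(records, published_digest, group_size=4):
    # Client-side check: recompute the digest over the returned records.
    return two_level_digest(records, group_size) == published_digest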
Abstract:
Data anonymization is one of the important techniques for preserving privacy in data publishing. The basic concept of data anonymization and its application models are introduced, and the requirements an anonymized dataset should meet are discussed. To resist background knowledge attacks, a new data anonymization approach based on impurity gain and hierarchical clustering is proposed. The impurity of a cluster is used to measure the randomness of its sensitive attributes, and the cluster merging process is controlled by the requirements that the information loss caused by generalization be minimized and the impurity gain be maximized. With this method, the anonymization result of a dataset meets the requirements of both the k-anonymity and l-diversity models, while the information loss is minimized and the sensitive attribute values in each cluster are close to uniformly distributed. An evaluation method is provided in the experiment section, which compares the anonymized dataset with the original one and evaluates its quality by the average information loss and impurity. The experimental results validate the effectiveness of the method.
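A small sketch of the impurity measure and the merge criterion, with entropy assumed as the impurity and the generalization-based information loss abstracted as a callable:

import math
from collections import Counter

def impurity(sensitive_values):
    # Entropy of the sensitive attribute inside one cluster; higher means
    # the sensitive values are closer to a uniform distribution.
    counts = Counter(sensitive_values)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def merge_score(cluster_a, cluster_b, info_loss):
    # Score a candidate merge: prefer a large impurity gain and a small
    # information loss caused by generalization.
    gain = impurity(cluster_a + cluster_b) - max(impurity(cluster_a),
                                                 impurity(cluster_b))
    return gain - info_loss(cluster_a, cluster_b)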
Abstract:
ECC is one of the most important public key cryptosystems and offers higher security strength per bit than RSA and ElGamal, so breaking ECC requires a great amount of computational resources. With the help of SIMD instructions, a bitsliced version of the parallelized Pollard rho algorithm is proposed for attacking ECC over binary finite fields, and several key operations are optimized. Based on the idea of bit swapping, an efficient algorithm for converting data between bitsliced and non-bitsliced formats is proposed; it can be implemented with basic CPU logic instructions, avoids conditional branches, has computational complexity O(n log n), and in slightly modified form can be used for fast transposition of 0-1 matrices. The underlying elliptic curve arithmetic is improved using the Karatsuba-Ofman algorithm for multiplication and the Montgomery trick for parallel inversion. The improved SIMD-based Pollard rho algorithm is applied to the ECC2-109 and ECC2-131 instances of Certicom's ECC challenges. The experimental results show that the iteration speed on a single-core Pentium 4 3.0 GHz CPU reaches 1.33 million and 0.98 million iterations per second respectively, roughly twice as fast as the program of Chris Monico, the winner of the ECC2-109 challenge.
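One of the mentioned optimizations, the Montgomery trick for parallel inversion, can be sketched as follows (shown over a prime field for brevity; the paper applies it to binary-field elements inside the parallel rho iterations):

def batch_inverse(values, p):
    # Invert all field elements in `values` modulo prime p using a single
    # modular inversion plus O(n) multiplications (Montgomery trick).
    prefix = [1] * (len(values) + 1)
    for i, v in enumerate(values):            # prefix[i+1] = v_0 * ... * v_i
        prefix[i + 1] = prefix[i] * v % p
    inv_total = pow(prefix[-1], p - 2, p)     # one inversion via Fermat
    result = [0] * len(values)
    for i in range(len(values) - 1, -1, -1):
        result[i] = prefix[i] * inv_total % p # inverse of values[i]
        inv_total = inv_total * values[i] % p # peel values[i] off the product
    return result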
Abstract:
Parallel simulation of large-scale networks has become a main method of Internet research. To address the load imbalance of traditional network topology partitioning methods, a topology partition algorithm for parallel network simulation based on abstract subtraction and traffic estimation is put forward. Degree-one nodes are recursively abstracted into their adjacent routers by the abstract subtraction technique. The weights of nodes and links in the topology are initialized by an estimation algorithm: the estimated traffic between nodes is converted into weights, which are accumulated onto the corresponding nodes and links, and the weights are normalized to avoid large gaps between them. Experimental results show that this partition algorithm abstracts about 93.7% of the nodes and reduces subdomains by about 56.9%, rlinks (remote links) by about 22.9%, and simulation time by about 12.63%. Compared with traditional partition algorithms, it improves both the scale and the efficiency of simulation.
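A minimal sketch of the degree-one abstraction step, with the traffic-estimation weighting reduced to summing the removed node's weight onto its adjacent router (an assumption about the accumulation rule):

def abstract_degree_one(adj, weight):
    # Recursively fold degree-1 nodes into their single neighbor,
    # accumulating the removed node's weight onto that neighbor.
    # adj: {node: set(neighbors)}, weight: {node: float}; both are modified.
    changed = True
    while changed:
        changed = False
        for node in [n for n, nbrs in adj.items() if len(nbrs) == 1]:
            if node not in adj or len(adj[node]) != 1:
                continue                         # degree changed this pass
            (router,) = adj[node]
            weight[router] += weight.pop(node)   # fold the traffic weight upward
            adj[router].discard(node)
            del adj[node]
            changed = True
    return adj, weight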
Abstract:
Data aggregation is a fundamental yet time-consuming task in WSNs, especially in high-density WSNs, so much attention has been paid to the minimum-latency data aggregation problem, which has been proved to be NP-hard. This paper proposes a cluster-based data aggregation scheduling algorithm called MPMC for multi-channel, multi-power WSNs to minimize the data aggregation latency. The algorithm adopts the idea that low power is used for packet transmission inside a cluster and high power is used for packet transmission between clusters. The paper also analyzes the number of channels required under different topologies and shows that it approaches a constant. In simulation experiments, MPMC is compared with the best existing single-channel algorithm and the best existing multi-channel algorithm. Simulation results show that the proposed MPMC algorithm achieves the lowest average latency.
Abstract:
In distributed file systems, the management of small file storage and access suffers from problems such as poor access performance, low disk space utilization and high file transfer delay. To solve these problems, this paper proposes a performance-optimized small file storage access (SFSA) strategy. SFSA stores logically continuous data in contiguous space on the physical disks as far as possible, uses a cache acting as the metadata server, and improves cache utilization with simplified file information nodes, thereby improving the performance of small file storage and access. To address low disk space utilization, SFSA also applies a write optimization that merges dirty data with its related data in the same folder domain into a single I/O request, reducing the number of file fragments. In addition, exploiting the principle of data locality, we propose sending frequently accessed small files ahead of time, which reduces the overhead of network connections and improves file transfer performance. Theoretical analysis and experimental results show that the design ideas and methods of the SFSA strategy effectively improve the performance of small file storage and access.
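A toy sketch of the write-optimization idea, coalescing pending dirty small writes that belong to the same folder domain into one combined I/O request (the buffer layout and flush policy are illustrative assumptions):

import os
from collections import defaultdict

def coalesce_writes(pending_writes):
    # Group pending (path, data) small writes by their folder so each
    # folder domain is flushed with a single combined request.
    by_folder = defaultdict(list)
    for path, data in pending_writes:
        by_folder[os.path.dirname(path)].append((path, data))
    requests = []
    for folder, items in by_folder.items():
        payload = b"".join(data for _, data in items)
        requests.append((folder, [p for p, _ in items], payload))
    return requests   # one combined request per folder domain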
Abstract:
The need to store massive data in the information age has led to many new storage theories and technologies, and the intelligent network disk (IND) storage system is an important approach to the massive data storage problem. So far, however, there has been little research on data disaster tolerance for this kind of storage system. After introducing and analyzing the IND storage system, and studying the characteristics of data distribution in it as well as existing data disaster tolerance techniques, we propose a self-immunity algorithm and a backup-IND algorithm for data disaster tolerance in the IND storage system. Experiments on a high-speed local area network show that these algorithms are correct and perform well, and both the theory and the experiments show that the backup-IND algorithm is more reliable than the self-immunity algorithm. This study provides a reference and theoretical basis for future research on disaster tolerance in IND storage systems and helps perfect the theory and technology of the IND.
Abstract:
The time-cost trade-off problem in deadline-constrained workflow scheduling in utility grids is an NP-hard but attractive problem. In past studies, the priority rule best fit based on time-dependent coupling strength (BFTCS) was adopted to solve this problem, because BFTCS considers both the features of the resources and the workflow structure and uses them as two important factors in the iterative heuristic for cost optimization. However, BFTCS does not consider another important factor: the temporal feature of the workflow. In this paper, we define temporal mobility (TM) to describe the temporal feature of the workflow and incorporate TM into the iterative stage as an additional factor to further reduce the workflow execution cost. The resulting priority rule (best fit based on time-dependent coupling strength and temporal mobility, BFTCSTM) gives priority to tasks with larger TM and smaller TCS (time-dependent coupling strength) during the iterative stage. Compared with BFTCS, BFTCSTM therefore greatly slows the growth of the workflow execution time and gives other tasks more opportunities to optimize their cost, which further decreases the total cost. The experimental results demonstrate the advantages of the BFTCSTM-based iterative heuristic.