ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2005, Volume 42 Issue 2
Paper
3D Human Motion Simulation and a Video Analysis System for Sports Training
Wang Zhaoqi, Zhang Yongdong, and Xia Shihong
2005, 42(2).
Introducing digital graphics and image techniques into sports training, and developing a sports-oriented 3D human motion simulation and video analysis system, offer strong high-tech support for athletes' preparations for the 2008 Olympic Games. First, the background and significance of developing the system are introduced. Then the key functions of the system, covering video analysis and 3D motion simulation, are summarized. Finally, the problems to be solved and their solutions are discussed in detail, including extraction and tracking of the moving human body in video, 3D human motion simulation, and so on. These problems and solutions are crucial to the realization of these functions.
An MDP Public-Key Digital Signature Scheme
Zheng Jiande
2005, 42(2).
A new public-key digital signature scheme is developed in this paper by exploiting the hard matrix diagonalization problem (MDP) over Z_n, the ring of integers with modulo-n addition and multiplication, where n is an RSA modulus. In the proposed system, a digital signature is obtained by embedding simple functions of a message into two 2×2 matrices as their eigenvalues, while the eigenvectors of the matrices are computed from the signer's private key. Security of the new scheme depends on the problem of solving three simultaneous quadratic equations in five variables. It remains to be proved that the new crypto problem is as hard as solving Rabin equations, but it is apparently harder than solving pure bivariate quadratic equations, i.e., Ong-Schnorr-Shamir equations, and it is not vulnerable to the Pollard-Schnorr attack. The most important feature of the new scheme is its high efficiency. Since no high-order exponentiation is involved, it takes only about a dozen multiplications to sign a message, of which only 4 must be performed online while the rest can be pre-computed, whereas more than 1000 modulo-n multiplications are required to obtain an RSA signature. The new scheme can also be used for identity-based digital signatures.
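As a toy illustration of the eigenvalue-embedding idea described above (not the paper's actual signing equations; the modulus, matrices, and values below are made up for demonstration), one can build a 2×2 matrix over Z_n with prescribed eigenvalues and pre-chosen eigenvectors:

def ext_gcd(a, b):
    # extended Euclid: returns (g, x, y) with a*x + b*y = g
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a, n):
    g, x, _ = ext_gcd(a % n, n)
    if g != 1:
        raise ValueError("not invertible mod n")
    return x % n

def mat_mul(X, Y, n):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

def embed_eigenvalues(lam1, lam2, P, n):
    # A = P * diag(lam1, lam2) * P^{-1} (mod n); the columns of P are
    # the eigenvectors, standing in for the signer's private key
    (a, b), (c, d) = P
    di = inv_mod(a * d - b * c, n)
    Pi = [[d * di % n, -b * di % n], [-c * di % n, a * di % n]]
    return mat_mul(mat_mul(P, [[lam1, 0], [0, lam2]], n), Pi, n)

n = 187                      # toy stand-in for an RSA modulus
P = [[3, 5], [7, 2]]
A = embed_eigenvalues(42, 99, P, n)
v = [P[0][0], P[1][0]]       # first eigenvector of A
assert [(A[0][0]*v[0] + A[0][1]*v[1]) % n,
        (A[1][0]*v[0] + A[1][1]*v[1]) % n] == [42*v[0] % n, 42*v[1] % n]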
Structure Optimization of Radial Basis Probabilistic Neural Networks by the Maximum Absolute Error Combined with the Micro-Genetic Algorithm
Zhao Wenbo, Wang Liming, and Huang Deshuang
2005, 42(2):  179-187. 
The maximum absolute error algorithm (MAEA) is used to optimally select the hidden-center vectors of the radial basis probabilistic neural network (RBPNN). The MAEA is combined with the micro-genetic algorithm (μGA), which optimizes the controlling parameter of the kernel function of the RBPNN; the combined method, MAE-μGA, thus optimizes the overall structure of the RBPNN. The experiments demonstrate that the RBPNN optimized by the MAE-μGA has the simplest structure compared with the results of the other optimization methods introduced. Furthermore, regarding the generalization performance of the optimized networks, the RBPNN optimized by the MAE-μGA is slightly better than those produced by the other methods. In addition, the MAE-μGA can also be used to optimize radial basis function neural networks (RBFNN).
Self-Organizing Isometric Embedding
Hou Yuexian, Ding Zheng, and He Pilian
2005, 42(2):  188-195. 
Most nonlinear dimensionality reduction algorithms share some common flaws. First, they need to solve large-scale eigenvalue problems or some variation of eigenvalue problems, which usually takes quadratic time. The quadratic complexity limits their applicability to large sample sets, which are prevalent in real-world applications, e.g., text clustering, image recognition, bioinformatics, and so on. Second, current algorithms are global and analytic, and are often sensitive to noise and disturbed by ill-conditioned matrices. To overcome these problems, a novel self-organizing nonlinear dimensionality reduction algorithm, SIE (self-organizing isometric embedding), is proposed. The time complexity of SIE is O(NlogN), which brings an N/logN speedup over most current algorithms. Besides, the main computing procedure of SIE is based on a locally self-organizing scheme, which improves the robustness of the algorithm remarkably and circumvents the trouble introduced by ill-conditioned matrices. The simulations demonstrate that the reconstruction quality of SIE is as good as that of the best global algorithms on clean sample sets and prominently superior to global algorithms on noisy sample sets.
Support Vector Machines Based on Posteriori Probability
Wu Gaowei, Tao Qing, and Wang Jue
2005, 42(2):  196-202. 
To solve uncertain classification problems, an SVM (support vector machine) is trained to behave like a Bayesian optimal classifier based on the training data. The idea is to weigh each unbalanced training sample by a posteriori probability. A whole framework of the posteriori probability support vector machine (PPSVM) is presented, and SVM is reformulated into PPSVM. Linear separability, margin, optimal hyperplane, and soft margin algorithms are discussed. A new optimization problem is obtained and a new definition of support vector is given. In fact, PPSVM is motivated by statistical learning theory and is an extension of the regular SVM. An empirical method is also proposed for determining the posteriori probability. Two artificial examples show that the PPSVM formulation is reasonable if the class-conditional probability is known, and some real experiments demonstrate that data weighted by some empirical methods can produce better results than the regular SVM.
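A minimal sketch of the underlying weighting idea, using scikit-learn's generic per-sample weights rather than the authors' PPSVM optimization problem (the k-nearest-neighbor posterior estimate below is one crude empirical choice, not necessarily the paper's):

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# crude posterior estimate: fraction of same-class points among k neighbors
k = 10
_, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
posterior = (y[idx] == y[:, None]).mean(axis=1)

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=posterior)   # noisy samples get small weight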
A Multi-Agent Reinforcement Learning Method Based on Role Tracking
Zhang Shuangmin and Shi Chunyi
2005, 42(2):  203-209. 
In a multi-agent system, an agent can continuously augment and improve its knowledge and capability by learning, and then choose reasonable actions to maximize its benefit. However, current works are mostly restricted to the single-agent mode, or consider only confrontations between two single agents rather than group confrontations. They do not consider agent roles within an agent group, and they estimate opponents' policies only from observed benefits, which makes the algorithm converge slowly to some static policy. To address these limitations, the single-agent learning process is extended to a group learning process for group confrontation in an environment without communication. Considering the particularities of different learning problems, a reinforcement learning algorithm based on role tracking is given together with experimental analysis. The role concept is added to the learning model by dynamically tracking the opponents' roles during learning. The learning rate is determined by computing how well the opponents' roles match their actions, and finally the benefit value of every state is updated using the minimax-Q algorithm so as to speed up convergence, which improves the work of Bowling and Littman.
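For reference, the minimax-Q value computation the abstract builds on (Littman's algorithm) solves a small linear program per state; the sketch below shows only that standard step, not the paper's role tracking or learning-rate rule:

import numpy as np
from scipy.optimize import linprog

def minimax_value(Q):
    # Q[a, o]: payoff when the agent plays a and the opponent plays o.
    # maximize V s.t. for every o: sum_a pi_a * Q[a, o] >= V, pi a distribution
    n_a, n_o = Q.shape
    c = np.zeros(n_a + 1); c[-1] = -1.0            # minimize -V
    A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])    # V - pi . Q[:, o] <= 0
    A_eq = np.ones((1, n_a + 1)); A_eq[0, -1] = 0.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_o), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_a + [(None, None)])
    return res.x[-1], res.x[:n_a]

# matching pennies: value ~0, strategy ~(0.5, 0.5); the learning update is
# Q[s, a, o] <- (1 - alpha) * Q[s, a, o] + alpha * (r + gamma * V(s'))
V, pi = minimax_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))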
An Enhanced Approach for Mining Local Outlier
Jiang Shengyi, Li Qinghua, Wang Hui, and Meng Zhonglou
2005, 42(2):  210-216. 
In many cases, outliers are more important than the normal data, as they may demonstrate either deviant behavior or the beginning of a new pattern, and they may cause damage to the user. Outlier detection has become an important branch of data mining. In this paper, a new generalized method for measuring the difference between two objects with mixed attributes is presented, and the weighted power mean is introduced into data mining. Based on these, a new outlier detection approach based on the nearest neighborhood is proposed. The approach measures the outlier degree of an object by the generalized local outlier factor (GLOF) and detects outliers by the “Bσ” rule; it needs neither a threshold nor prior knowledge about the number of outliers in the dataset. GLOF generalizes LOF (local outlier factor) and COF (connectivity-based outlier factor). The theoretical analysis finds out some interesting properties of GLOF. The experimental results show that: (1) the definition of the difference between two objects can be applied to datasets with mixed attributes; (2) in some cases GLOF measures the local outlier degree more accurately than LOF, CBLOF, and RNN do; (3) the “Bσ” rule is simple and promising in practice.
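As a point of reference, the LOF that GLOF generalizes is available off the shelf; this sketch illustrates plain LOF only, not GLOF, the weighted power mean, or the “Bσ” rule:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)),    # one dense cluster
               [[4.0, 4.0], [5.0, -3.0]]])      # two planted outliers

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                     # -1 marks outliers
scores = -lof.negative_outlier_factor_          # larger = more outlying
print(np.argsort(scores)[-2:])                  # the planted points score highest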
SFP-Max—A Sorted FP-Tree Based Algorithm for Maximal Frequent Patterns Mining
Qin Liangxi and Shi Zhongzhi
2005, 42(2):  217-223. 
FP-growth is currently a high-performance algorithm for mining frequent patterns, but it is not efficient when applied to mining maximal frequent patterns (MFPs). The cause of the low efficiency is analyzed, and based on this analysis an algorithm, SFP-Max, is presented. The main ideas of this algorithm are: (1) it is a sorted-FP-tree-based algorithm for mining MFPs; (2) the properties of MFPs are applied to reduce the size of the MFI candidate set; (3) a temporary set is added to reduce the size of the initial test itemsets, so that the time consumed by candidate testing can be reduced. In the performance study, SFP-Max is compared with MAFIA, one of the most efficient algorithms for MFP mining. The empirical results show that SFP-Max is an efficient algorithm with performance comparable to MAFIA, and in most cases it outperforms MAFIA.
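The maximality property SFP-Max exploits is simple to state; the naive filter below only illustrates the definition (a frequent itemset is maximal iff no proper superset is frequent) and performs none of SFP-Max's sorted-FP-tree pruning:

def maximal_frequent(frequent):
    # frequent: collection of frozensets (all frequent itemsets)
    fs = set(frequent)
    return [s for s in fs if not any(s < t for t in fs)]  # '<' = proper subset

mfi = maximal_frequent({frozenset("a"), frozenset("ab"),
                        frozenset("abc"), frozenset("bd")})
# -> {'a','b','c'} and {'b','d'}, in some order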
Knowledge Increase Ability of Artificial Neural Network
Huang Hua, Luo Siwei, Liu Yunhui, and Li Aijun
2005, 42(2):  224-229. 
Knowledge increase ability is of great importance in the field of artificial neural networks (ANN); it is an open problem that attracts much research attention, and such research will promote the further development of ANNs in both theory and practice. The knowledge increase ability of ANNs is discussed in depth. Theoretical analysis is first made from the viewpoint of generalization ability, which points to a most promising solution: the multiple-network approach. Knowledge increase can be realized via knowledge accumulation and inheritance between a single network and the network system. The concept of autonomy is of great significance for the knowledge increase ability of ANNs. An autonomous artificial neural network (AANN) model is introduced to avoid centralized confidence assignment, which enables distributed confidence assignment and makes the system extensible. An experimental system is built on AANN units to verify its feasibility, and the results are encouraging.
Mining Frequent Patterns Based on Graph Theory
Wang Wei, Zhou Haofeng, Yuan Qingqing, Lou Yubo, and Sui Baile
2005, 42(2):  230-235. 
Mining frequent patterns from data sets is one of the key success stories of data mining research. Currently, most of the efforts focus on independent data such as the items in a market basket. However, objects in the real world often have close relationships with each other. How to extract frequent patterns from these relations is the objective of this paper. Graphs are used to model the relations, and a simple type is selected for analysis. Combining graph theory with algorithms for generating frequent patterns, two new algorithms are proposed. The first algorithm, named AMGM, is based on the Apriori idea and makes use of matrices. For the second, a new structure, the SFP-tree, and an accompanying algorithm are proposed that can mine these simple graphs more efficiently. The performance of the algorithms is evaluated by experiments with synthetic datasets. The empirical results show that both can do the job well, while the SFP-tree-based algorithm performs better than AMGM. The algorithms are also applied to mining authoritative pages and communities on the Web, which is useful for Web mining. At the end of the paper, potential improvements are mentioned.
Research on Visibility for Large-Scale and Complex Scenes
Pu Jiantao and Zha Hongbin
2005, 42(2):  236-246. 
The real-time rendering of large-scale and complex scenes is the basis of many important applications such as virtual reality, real-time simulation, and 3D interactive design. As one way to accelerate rendering, research on visibility has gained great attention in recent years. Its aim is to discard as many invisible geometric objects as possible based on given viewpoints, so that complex scenes or objects can be rendered or transferred through the Internet in real time. In this paper, a survey of this topic is presented. First, some typical visibility algorithms are analyzed in detail. Second, some topics related to visibility research are introduced, and the necessary components and steps of a visibility algorithm are described. Third, some criteria to evaluate visibility methods are proposed. Finally, some key techniques that need further research are pointed out.
Remote Sensing Image Processing Using Wavelet Fractal Interpolation
Zhang Can, Tu Guofang, and Liu Xiaozhou
2005, 42(2):  247-251. 
Considering that images of natural scenery have fractal character, a wavelet fractal interpolation algorithm is presented and applied to remote sensing image processing. The wavelet transform processes the high-frequency and low-frequency information simultaneously, much like the human visual system, while fractal interpolation can preserve the texture information of an image. The wavelet fractal interpolation algorithm therefore uses fractal interpolation to predict the high-frequency part at high resolution from the high-frequency part at low resolution. It has been found that this new wavelet fractal interpolation performs better than the existing bilinear interpolation algorithm.
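A rough sketch of the pipeline's shape (assuming PyWavelets): treat the low-resolution image as the approximation subband and synthesize the missing high-frequency subbands before the inverse transform. Here the subbands are simply zeroed, which is plain wavelet zero-padding; the paper instead predicts them by fractal interpolation from the coarser level:

import numpy as np
import pywt

low_res = np.random.rand(64, 64)      # stand-in for a remote sensing image
cA = 2.0 * low_res                    # Haar approximation-band scaling
zeros = np.zeros_like(low_res)        # the bands the paper would predict
high_res = pywt.idwt2((cA, (zeros, zeros, zeros)), "haar")   # 128 x 128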
Anycast Routing Algorithm with Special Composite Distance
Zhang Li, Jia Weijia, Yan Wei, and Li Xiaoming
2005, 42(2):  252-258. 
Anycast members are all equivalent servers. The quality of the service data is more important than that of the request data carried in an anycast datagram. It is also widely recognized that the communications are asymmetric: in many cases the downloading flows from the servers may require more network resources than the clients' upload request transmissions. Routing an anycast datagram is also a process of locating a server for the client, and its result affects the quality of the requested service. A novel anycast routing algorithm with a special composite distance (ASCD) is proposed. The route for an anycast destination is selected by the shortest composite distance, computed from metrics including hop count, reverse data transmission delay, reverse residual bandwidth, and server load. Differing from other approaches, ASCD uses bandwidth and delay values in the reverse direction, from the destination node (server) of an anycast datagram to its source node (client), rather than the normal direction from source to destination. The best server/path selected by ASCD increases the probability of providing more network resources for the service data. ASCD can also balance the server load to some extent.
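A hypothetical sketch of what a composite distance of this kind can look like; the weights and normalization below are invented for illustration and are not the paper's formula:

def composite_distance(hops, rev_delay_ms, rev_bw_mbps, load,
                       w=(1.0, 0.5, 200.0, 2.0)):
    # residual bandwidth enters inverted: more bandwidth, shorter distance
    return (w[0] * hops + w[1] * rev_delay_ms
            + w[2] / max(rev_bw_mbps, 1e-9) + w[3] * load)

servers = {"s1": dict(hops=3, rev_delay_ms=40, rev_bw_mbps=100, load=0.7),
           "s2": dict(hops=5, rev_delay_ms=25, rev_bw_mbps=400, load=0.2)}
best = min(servers, key=lambda s: composite_distance(**servers[s]))  # "s2"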
Performance Analysis of a Double-Layered Satellite Network
Wu Fengge, Sun Fuchun, Sun Zengqi, Yu Ke, and Li Lei
2005, 42(2):  259-265. 
The performance of a LEO/MEO satellite network is evaluated by model-based analysis and software simulation, respectively. First, the GSPN model of a double-layered satellite network is constructed and simplified, and then used to evaluate the network performance with the SPNP4.0 software package. Then, the effectiveness of the model-based results is verified by comparison with network simulation using OPNET. The results show that the performance of the double-layered constellation is better than that of a single-layered one, because processing and waiting delays become the dominant delays under high-traffic conditions.
Clustered-Loss Retransmission Protocol over Wireless TCP
Sun Fanglei and Zeng Ping
2005, 42(2):  266-272. 
In this paper, the clustered-loss retransmission protocol (CLRP), a novel localized link-layer retransmission protocol, is introduced to enhance TCP performance over wireless networks. Taking into account the characteristics of bursty losses and high packet loss rates on wireless links, and cooperating with the TCP deployed on mobile hosts, CLRP can provide explicit loss reasons and effective information about multiple wireless losses for retransmission, as well as better retransmission control of wireless losses. What's more, CLRP does not require any modification to the TCP deployed on fixed hosts.
ANLMCC—An Active Network-Based Layered Multicast Congestion Control Scheme
Ye Xiaoguo, Jiang Aiquan, and Wu Jiagao
2005, 42(2):  273-279. 
An effective multicast congestion control mechanism is urgently needed with the deployment of multimedia multicast applications on the Internet. Layered multicast is considered an efficient approach to cope with network heterogeneity. Receiver-driven layered multicast schemes cannot respond to congestion quickly because of both the delay in detecting congestion and the long IGMP leave delay, and they are significantly unstable and unfriendly to TCP. In this paper, a novel congestion control scheme for layered multicast, called the active network-based layered multicast congestion control scheme (ANLMCC), is proposed, taking advantage of the programmability and flexible service customization capability of active networks. ANLMCC adopts novel mechanisms such as active layering with marking, layer filtering with priority, and a hop-by-hop interactive signaling mechanism between the active nodes. Simulation results show that ANLMCC responds to congestion more quickly, is more stable and TCP-friendly, and can greatly improve the performance of layered multicast applications.
Achievements and Prospects of Smoothed Analysis of Algorithms
Yang Zhiying, Zhu Hong, and Lei Xiangxin
2005, 42(2):  286-293. 
There are many algorithms that work exceedingly well in practice but are known to perform poorly in the worst case or lack good worst-case analyses. One of the most typical examples is the simplex method for linear programming. Spielman and Teng introduced the smoothed analysis of algorithms to successfully explain this contradiction, and the algorithms community pays close attention to smoothed analysis. Some concepts and the main achievements related to smoothed analysis are presented. The random perturbation model TSSP is proposed, which overcomes some limitations of the partial permutation model. The TSSP model is suited to the smoothed analysis of algorithms like quicksort, whose performance is mainly determined by the initial order of the elements of an instance. A smoothed analysis of quicksort under the TSSP model is performed, and the smoothed time complexity of quicksort is proved to be O((2/λ)·n·log₂n), where λ is the random perturbation magnitude. Several prospects for the smoothed analysis of algorithms are presented.
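An illustrative experiment in the spirit of the result (the swap-based perturbation below is only a stand-in for the TSSP model defined in the paper): perturb a worst-case input with magnitude λ and count quicksort comparisons:

import random

def qs_comparisons(a):
    # first-element pivot: Theta(n^2) comparisons on sorted input
    if len(a) <= 1:
        return 0
    p = a[0]
    left = [x for x in a[1:] if x < p]
    right = [x for x in a[1:] if x >= p]
    return len(a) - 1 + qs_comparisons(left) + qs_comparisons(right)

def perturb(a, lam, rng):
    a = list(a)
    for i in range(len(a)):
        if rng.random() < lam:          # swap with a random partner
            j = rng.randrange(len(a))
            a[i], a[j] = a[j], a[i]
    return a

rng = random.Random(0)
worst = list(range(300))                # sorted input
print(qs_comparisons(worst))            # 44850
print(qs_comparisons(perturb(worst, 0.1, rng)))   # far fewer on average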
Research on Chinese Postman Problem Decision Process Algorithm
Fei Rong and Cui Duwu
2005, 42(2):  294-299. 
The Chinese postman problem was posed by Professor Guan Meigu in the 1960s and solved by J. Edmonds and E. L. Johnson in the 1970s. Dynamic programming, a mature theory, rests on two essential ideas: divide and conquer, and the elimination of redundant computation; it has been used in many domains. In this paper, a system of algorithms is proposed for solving the Chinese postman problem. In the system, a new algorithm, CPDPA (Chinese postman decision process algorithm), is presented for the first time, which makes it possible to solve the Chinese postman problem with decision-process thinking. First, the system gives the algorithm CEPA (convert edge to point algorithm) to make the Chinese postman problem model suitable for decision making; then it gives MDPMCA (multistep decision process model convert algorithm) to adapt this model to the requirements of the multistep decision process. In this way, CPDPA can be used to solve the Chinese postman problem. Additionally, the correctness of this system is verified by experimental results, the theories behind these algorithms are proved, and the principle of optimality is broadened for the Chinese postman problem.
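For context, the classical Edmonds-Johnson solution the abstract refers to (not the authors' CPDPA/CEPA/MDPMCA pipeline) can be sketched with networkx: match odd-degree vertices at minimum shortest-path cost, duplicate those paths, then walk an Euler circuit:

import itertools
import networkx as nx

def chinese_postman(G):
    odd = [v for v, d in G.degree() if d % 2 == 1]
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    K = nx.Graph()                        # odd vertices, shortest-path weights
    for u, v in itertools.combinations(odd, 2):
        K.add_edge(u, v, weight=dist[u][v])
    M = nx.MultiGraph(G)
    for u, v in nx.min_weight_matching(K):
        path = nx.shortest_path(G, u, v, weight="weight")
        for a, b in zip(path, path[1:]):  # duplicate the matched paths
            M.add_edge(a, b, weight=G[a][b]["weight"])
    return list(nx.eulerian_circuit(M))   # covers every edge at least once

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 1, 1), (1, 3, 2)])
tour = chinese_postman(G)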
An Execution Semantics of UML Activity View for Workflow Modeling
Zhao Zhikun, Sheng Qiujian, and Shi Zhongzhi
2005, 42(2):  300-307. 
UML is a widely used modeling language in software engineering, but its main problem is ambiguity, because UML has no formal semantics. A formal execution semantics of the UML activity diagram is presented for modeling workflow systems, according to the syntax of the UML activity diagram and the characteristics of workflow systems. The semantics is based on the clock transition system model. The execution of a workflow system is formalized as two processes, clock transition and data transition, executing by turns: time advances in the clock transition, and the state of the workflow case changes in the data transition. The semantics is more expressive for concurrency than semantics based on statecharts, and more adaptable to workflow systems than Petri nets or process algebra.
QC Model Based Materialized View Selection in Web Data Integration
Gao Jun, Tang Shiwei, Yang Dongqing, and Wang Tengjiao
2005, 42(2):  308-314. 
Materialized views can be used to reduce the expensive network transfer cost and significantly improve query efficiency in a Web data integration system. How to select queries to materialize under space constraints, while maximizing the benefit of the materialized views, becomes a fundamental problem. Traditional methods do not take the containment relationships among massive numbers of XML queries into account; hence the selected materialized views may contain redundant information. A new model and methods are proposed to overcome these problems. The contributions include: (1) a QC (query containment) model to describe the massive query set in a Web data integration system, which captures the most common relationship (containment) among the queries; (2) a method to select views to materialize from the query set based on the QC model. This method considers the key factors in view selection, including query frequency, query space cost, query rewriting capability, and query result completeness, and proposes query bitmaps to organize the materialized views, thus generating a more reasonable view selection plan. Experimental results demonstrate the validity of the method.
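A hedged sketch of the frequency- and space-aware selection step as a benefit-per-space greedy heuristic; the QC model's containment reasoning and query bitmaps, which are the paper's actual contribution, are ignored here:

def select_views(candidates, space_budget):
    # candidates: (name, frequency, space_cost, benefit_per_hit)
    ranked = sorted(candidates, key=lambda c: c[1] * c[3] / c[2], reverse=True)
    chosen, used = [], 0
    for name, freq, cost, benefit in ranked:
        if used + cost <= space_budget:
            chosen.append(name)
            used += cost
    return chosen

views = select_views([("q1", 120, 10, 1.0), ("q2", 10, 50, 3.0),
                      ("q3", 300, 30, 0.5)], space_budget=45)   # ['q1', 'q3']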
A Dynamic Real-Time Scheduling Algorithm with Software Fault-Tolerance
Han Jianjun, Li Qinghua, and Abbas A. Essa
2005, 42(2):  315-321. 
A hard real-time system is usually subject to stringent reliability and timing constraints, because failure to produce correct results in a timely manner may lead to disaster. Almost all current fault-tolerant scheduling algorithms are designed to deal with hardware faults, while few take possible software faults into account. Presented in this paper is a new software fault-tolerant real-time scheduling algorithm similar to EDF, called EBPA (expectation-based probing algorithm). The key contribution of the algorithm is probing a certain number of steps during the execution of primaries, which improves the predictive quality of canceling ineffective primaries under heavy workload and prevents early failures in execution from triggering failures in subsequent primary executions as early as possible. The algorithm therefore increases the percentage of tasks completed successfully while decreasing wasted CPU time slots. The simulation experiments show that the algorithm achieves a better trade-off between scheduling cost and scheduling performance than the well-known algorithms to date. Moreover, experimental parameters such as the number of probing steps and the failure probability are also taken into account.
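For orientation only, a bare non-preemptive EDF dispatch loop with a primary/alternate pair per task; EBPA's probing of partially executed primaries is reduced here to a precomputed success flag, which is a gross simplification:

import heapq

def edf_run(tasks, now=0):
    # tasks: (deadline, wcet, primary_ok); earliest deadline first
    heap = list(tasks)
    heapq.heapify(heap)
    done = []
    while heap:
        d, w, ok = heapq.heappop(heap)
        cost = w if ok else 2 * w    # failed primary: run the alternate too
        if now + cost <= d:
            now += cost
            done.append((d, ok))
        # else: cancel; EBPA's probing tries to detect such cases early
    return done

print(edf_run([(10, 3, True), (12, 4, False), (20, 5, True)]))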
Research on an Application Class Communication Security Model on Operating System Security Framework
Zheng Zhirong, Cai Yi, and Shen Changxiang
2005, 42(2):  322-328. 
The classical BLP model is recognized as the theoretical foundation for solving the confidentiality problem, and the Biba model for integrity is easily realized in secure computer systems. In order to resolve the contradiction between information sharing and security in application systems, a new application class communication security model is constructed theoretically, based on the abstraction of application classes. The new model introduces integrity rules to measure the trust level of information shared between different application classes, thus combining the BLP model and the Biba model without conflict. A formal description and verification of the model is detailed, which provides both confidentiality and integrity for the system. Through the development of a secure file transfer application system based on the browser/server pattern, the way to implement the new model in the Linux operating system is described and the performance of the system is discussed.
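A toy sketch of how BLP confidentiality rules and Biba integrity rules can sit in one access check (the levels below are illustrative; the paper's trust-measurement rules between application classes are not reproduced):

def can_read(subj, obj):
    # BLP "no read up" plus Biba "no read down"
    return subj["conf"] >= obj["conf"] and subj["integ"] <= obj["integ"]

def can_write(subj, obj):
    # BLP "no write down" plus Biba "no write up"
    return subj["conf"] <= obj["conf"] and subj["integ"] >= obj["integ"]

app = {"conf": 2, "integ": 1}
log = {"conf": 1, "integ": 1}
print(can_read(app, log), can_write(app, log))   # True False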
Two Condition Code Optimization Approaches in Binary Translation
Ma Xiangning, Wu Chenggang, Tang Feng, Feng Xiaobing, and Zhang Zhaoqing
2005, 42(2):  329-337. 
How to reduce the cost of emulating the source instruction set architecture's condition codes is an important performance issue. In this paper, two optimized emulation algorithms, for an emulator and a dynamic translator respectively, are presented, which can effectively improve the performance of the translated code. In the emulator, the ICDC (instant computing and delayed computing) algorithm reduces the memory access overhead of the object code compared with the plain delayed computing algorithm. In the dynamic translator, the DFADC (data flow analysis and delayed computing) algorithm is presented to reduce redundant flag-computing code: data flow analysis is used within basic blocks, and delayed computing is used across basic blocks. The two new algorithms can be applied to any translation from a source ISA with a flag mechanism to a target ISA without one. These algorithms are verified in the binary translation system Digital Bridge, which combines emulation and dynamic translation. With these algorithms, Digital Bridge produces translated code 120% the size of the original binary code, versus 250% for the non-optimized algorithms and 150% for the UQDBT system. These data indicate that the algorithms are effective in reducing redundant code and improving the performance of translated code.
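The delayed-computing idea is a common emulator idiom, sketched below in general form (this is background, not the ICDC/DFADC algorithms themselves): record the last flag-setting operation and evaluate individual flags only when some later instruction actually reads them:

class LazyFlags:
    def __init__(self):
        self.op = self.a = self.b = self.res = None

    def set_add(self, a, b):             # e.g. translating a 32-bit ADD
        self.op, self.a, self.b = "add", a, b
        self.res = (a + b) & 0xFFFFFFFF
        return self.res

    def zf(self):                        # computed on demand, often never
        return self.res == 0

    def cf(self):
        return self.op == "add" and self.a + self.b > 0xFFFFFFFF

f = LazyFlags()
f.set_add(0xFFFFFFFF, 1)
print(f.zf(), f.cf())                    # True True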
The Distributed Lock Scheme in SAN
Yao Nianmin, Shu Jiwu, and Zheng Weimin
2005, 42(2):  338-343. 
Current studies of lock schemes in SANs are summarized, and it is concluded that distributing mutual exclusion information across the shared disks and the related nodes is the promising direction for the future. A new data structure named the reticulation is presented to solve the low message-transfer efficiency of an existing distributed lock scheme that organizes all the nodes in the SAN with doubly linked lists. The reticulation structure's algorithms for adding and deleting nodes and its fault-tolerance design are given, and their performance is analyzed. A simulation model is also implemented to support the conclusions. Moreover, the reticulation structure can also be used in applications where messages need to be broadcast and node information is distributed.