ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2008, Volume 45 Issue 9
Paper
Online Hierarchical Reinforcement Learning Based on Path-matching
Shi Chuan, Shi Zhongzhi, and Wang Maoguang
2008, 45(9):  . 
Abstract ( 637 )   PDF (1197KB) ( 479 )
Although reinforcement learning (RL) is an effective approach for building autonomous agents that improve their performance with experience, a fundamental problem of standard RL algorithms is that in practice they often cannot solve large tasks in reasonable time. Hierarchical reinforcement learning (HRL) is a successful remedy that decomposes the learning task into simpler subtasks and learns each of them independently. Among HRL frameworks, options are a promising one: closed-loop policies over sequences of actions. A key problem for option-based HRL is discovering the correct subgoals online. By analyzing the actions of agents at subgoals, two useful properties are found: (1) subgoals are more likely to be passed through, and (2) the effective actions at subgoals are restricted. Consequently, subgoals can be regarded as the most matching action-restricted states on the paths. For grid environments, the concept of unique-direction value is proposed to capture the action-restricted property, and an option-discovery algorithm based on it is introduced. Experiments show that the options discovered by the unique-direction value method speed up primitive Q-learning significantly. Moreover, the experiments further analyze how the size and generation time of options affect the performance of Q-learning.
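The abstract does not give the exact definition of the unique-direction value, so the following sketch only illustrates the idea under assumed definitions: score grid states that are passed through often and whose effective actions concentrate in one direction. The visit threshold and the purity measure are hypothetical.

```python
from collections import defaultdict

def unique_direction_scores(paths, min_visits=10):
    """Score grid states as subgoal candidates (illustrative only).

    paths: list of trajectories, each a list of (state, action) pairs.
    A state scores high when it is visited often (frequently "passed
    through") and the actions taken there concentrate on one direction
    (its effective actions are restricted) -- the two properties the
    abstract attributes to subgoals.
    """
    visits = defaultdict(int)
    action_counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for state, action in path:
            visits[state] += 1
            action_counts[state][action] += 1
    scores = {}
    for state, n in visits.items():
        if n < min_visits:          # ignore rarely visited states
            continue
        purity = max(action_counts[state].values()) / n
        scores[state] = purity      # 1.0 = a single dominant direction
    return scores

# The highest-scoring states are taken as subgoals, and an option
# (a sub-policy that reaches such a state) is learned for each one.
```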
A Formal Model for Cryptographic Protocols Based on Planning Theory
Zhao Yu, Wang Yadi, Han Jihong, Fan Yudan, and Zhang Chao
2008, 45(9):  . 
Abstract ( 328 )   PDF (1553KB) ( 392 )
Planning theory is brought into the formal analysis of cryptographic protocols. Based on the running characteristics and laws of cryptographic protocols in real networks, an attack planning theory for cryptographic protocols is presented, and a formal model, the attack planning problem model of cryptographic protocols, is established for verifying their security properties, including its first-order syntax, formal definitions, and operational semantics. Meanwhile, the shortcomings of the classical Dolev-Yao model are analyzed, and the model is improved with a basic message element strategy, which avoids the state-space explosion problem and enhances the completeness of the attack planning problem model. The feasibility of the improved Dolev-Yao model is guaranteed by extending its application semantics. Finally, an analysis of the NS public-key protocol based on the attack planning problem model illustrates the model's utility. The formal model is intended to verify cryptographic protocols and expose their faults, i.e., to prove faultiness, and it lends itself both to manual deduction and to automatic implementation.
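The attack planner searches over attacker actions whose effects are bounded by Dolev-Yao capabilities. As a point of reference, here is a minimal sketch of the standard Dolev-Yao derivation closure (decomposition rules only), not the paper's improved basic-message-element variant; the tuple encoding of messages is an assumption.

```python
def dolev_yao_closure(knowledge):
    """Close a message set under standard Dolev-Yao decomposition.
    Messages: atoms, ('pair', a, b), ('enc', m, k). The attacker splits
    pairs and decrypts ciphertexts whose key it already derives;
    composition rules are applied on demand by the planner."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(known):
            if not isinstance(m, tuple):
                continue
            if m[0] == 'pair':
                new = {m[1], m[2]} - known
            elif m[0] == 'enc' and m[2] in known:   # key is derivable
                new = {m[1]} - known
            else:
                new = set()
            if new:
                known |= new
                changed = True
    return known

# Example: the attacker observed {Na}_K and later learns K.
print('Na' in dolev_yao_closure({('enc', 'Na', 'K'), 'K'}))  # True
```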
Dual Reinforcement Learning Based on Bias Learning
Lin Fen, Shi Chuan, Luo Jiewen, and Shi Zhongzhi
2008, 45(9):  1455-1462. 
Abstract ( 693 )   HTML ( 0 )   PDF (1435KB) ( 512 )
Reinforcement learning has received much attention in the past decade. Its incremental nature and adaptive capabilities make it suitable for use in various domains, such as automatic control, mobile robotics and multi-agent system. A critical problem in conventional reinforcement learning is the slow convergence of the learning process. To accelerate the learning speed, bias information is incorporated to boost learning process with priori knowledge. Current methods use bias information for the action selection strategies in reinforcement learning. They may suffer from the non-convergence problem when priori knowledge is incorrect. A dual reinforcement learning model based on bias learning is proposed, which integrates reinforcement learning process and bias learning process. Bias information is used for action selection strategies in reinforcement learning and reinforcement learning is used to guide bias learning process. Thus the dual reinforcement learning model could make effective use of priori knowledge, and eliminate the negative effects of incorrect priori knowledge. Finally, the proposed dual model is validated by experiment on maze problem including simple environment and complex environment. The experimental results demonstrate that the model could converge to the optimal strategy steadily. Moreover, the model could improve the learning performance and speed up the convergence of the learning process.
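A minimal sketch of how such a dual model could be wired, assuming an additive bias in action selection and a TD-driven bias update; the paper's actual update equations are not given in the abstract.

```python
import random
from collections import defaultdict

class DualQLearner:
    """Illustrative sketch of the dual model: Q-learning guided by a
    bias table, while TD feedback in turn adjusts the bias, so an
    incorrect prior fades instead of trapping the learner. The update
    rules below are assumptions, not the paper's equations."""

    def __init__(self, actions, alpha=0.1, beta=0.05, gamma=0.95, eps=0.1):
        self.Q = defaultdict(float)
        self.bias = defaultdict(float)   # seeded from prior knowledge
        self.actions, self.alpha, self.beta = actions, alpha, beta
        self.gamma, self.eps = gamma, eps

    def select(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)
        # the bias shifts the greedy choice toward prior knowledge
        return max(self.actions, key=lambda a: self.Q[s, a] + self.bias[s, a])

    def update(self, s, a, r, s2):
        target = r + self.gamma * max(self.Q[s2, b] for b in self.actions)
        td = target - self.Q[s, a]
        self.Q[s, a] += self.alpha * td
        # dual step: reinforcement feedback corrects the bias itself
        self.bias[s, a] += self.beta * td
```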
A Collaborative Filtering Recommendation Algorithm Combining Probabilistic Relational Models and User Grade
Gao Ying, Qi Hong, Liu Jie, and Liu Dayou
2008, 45(9):  1463-1469. 
Abstract ( 510 )   HTML ( 2 )   PDF (1103KB) ( 654 )
Collaborative filtering is one of the most successful technologies for building recommender systems. Unfortunately, it suffers from sparsity and scalability problems. To address them, a collaborative filtering recommendation algorithm combining probabilistic relational models and user grade (PRM-UG-CF) is presented. PRM-UG-CF has two primary parts. First, a user grade function is defined, and grade-based collaborative filtering finds neighbors for the target user only within nearby grades; the number of candidate neighbors is controlled by a parameter, which increases recommendation efficiency and solves the scalability problem. Second, in order to exploit various kinds of information, the grade-based method is combined with probabilistic relational models (PRM), so that user information, item information, and user-item rating data are integrated, with adaptive strategies for users of different grades; this improves recommendation quality and solves the sparsity problem. Experimental results on the MovieLens data set show that PRM-UG-CF yields higher recommendation quality than a pure PRM-based or a pure collaborative filtering approach, and much higher recommendation efficiency than a pure collaborative filtering approach.
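A sketch of the grade-restricted neighbor search; the grade function itself (here, bucketed rating counts) and the window parameter are assumptions for illustration only.

```python
def user_grade(n_ratings, bucket=50):
    """Assumed grade function: a coarse activity level per user."""
    return n_ratings // bucket

def candidate_neighbors(target, users, ratings, window=1):
    """Restrict the neighbor search to users whose grade is within
    `window` of the target's grade, bounding the number of similarity
    computations (the abstract's scalability fix).
    ratings: user -> {item: score}."""
    g = user_grade(len(ratings[target]))
    return [u for u in users
            if u != target
            and abs(user_grade(len(ratings[u])) - g) <= window]
```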
An Index of Cluster Validity Based on Modal Logic
Lü Zonglei, Wang Jiandong, Li Ying, and Zai Yunfeng
2008, 45(9):  1477-1485. 
Abstract ( 545 )   HTML ( 0 )   PDF (1437KB) ( 554 )
A clustering validity index indicates whether a clustering is good enough. Most current indexes are based on statistical theory or fuzzy set theory, and, limited by those foundations, they give incorrect indications in some special cases. In this paper, a new clustering validity index based on the theory of modal logic is presented. The clustering is described by Kripke structures, where similarity is defined as a binary relation on the data set. Each cluster is represented by a propositional sentence, so the clustering result can be represented by logical formulas. Following the minimum description length principle, the validity index is built from the veracity and the complexity of this representation. Since the new index imposes no additional restrictions on the similarity measurement, it is more universal than current indexes, which usually presuppose a default similarity measure. Experiments comparing the new index with common indexes show that it is consistent with them in normal cases and more effective in special cases such as the two-rings data set.
A Possibility Fuzzy Clustering Algorithm Based on the Uncertainty Membership
Chen Jianmei, Lu Hu, Song Yuqing, Song Shunlin, Xu Jing, Xie Conghua, and Ni Weiwei
2008, 45(9):  1486-1492. 
Abstract ( 510 )   HTML ( 1 )   PDF (1110KB) ( 529 )
Clustering, an unsupervised learning method, is a hot topic in data mining and is widely used. Fuzzy clustering is an important branch of clustering, and many representative fuzzy clustering algorithms have been proposed, such as fuzzy c-means. The fuzzy c-means algorithm and its improved versions are strictly probability-constrained clustering algorithms, whose membership values represent the absolute degree to which data belong to clusters. For some complex data distributions this absolute membership becomes invalid, because data objects cannot always be judged to belong to some cluster absolutely, and it has been demonstrated that this deteriorates clustering performance greatly. To solve these problems, an uncertainty membership relationship is proposed. On the basis of uncertainty theory, a new possibility fuzzy clustering algorithm based on uncertainty membership (UMPFCA) is developed, applying two judgment-criterion parameters based on relative degrees of membership. At each iteration, UMPFCA incorporates into the objective function both the possibility membership degree and the uncertainty membership degree of the data with respect to the corresponding clusters. The algorithm is implemented so that the clustering process runs efficiently. Theoretical analysis and experimental results show that UMPFCA achieves higher clustering accuracy than the K-means and fuzzy c-means algorithms.
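For concreteness, here is the standard fuzzy c-means iteration that UMPFCA departs from; UMPFCA's own objective adds the possibility and uncertainty membership terms, which the abstract does not specify, so only the baseline is shown.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means -- the probability-constrained baseline
    that UMPFCA extends with a second, uncertainty membership.
    X: (n, d) data matrix; c: number of clusters; m: fuzzifier."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((c, n)); U /= U.sum(axis=0)        # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # cluster centers
        D = np.linalg.norm(X[None] - V[:, None], axis=2) + 1e-12
        U_new = D ** (-2.0 / (m - 1))
        U_new /= U_new.sum(axis=0)                    # sum_i u_ik = 1
        if np.abs(U_new - U).max() < tol:
            return U_new, V
        U = U_new
    return U, V
```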
Multi-Winners SOMs and Their Applications to Stock Analysis
Wang Limin, Liang Yanchun, Han Xuming, Shi Xiaohu, and Li Ming
2008, 45(9):  1493-1500. 
Abstract ( 607 )   HTML ( 0 )   PDF (1455KB) ( 481 )
In order to enhance the dynamic competition and clustering capability of the self-organizing map (SOM) neural network and improve the precision of its solutions, multi-winner SOM models are proposed by extending the number of winners in an unsupervised SOM and by improving the neighbor function and weight function of the network. In addition, a tabu-mapping method is proposed to prevent the same output node from being mapped by more than one input. Clustering analysis of stocks is used to examine the effectiveness of the proposed models. Financial indexes reflecting the overall performance of companies are used in the simulated experiments, including earnings per share (EPS), net assets per share (NAPS), return on equity (ROE), cash flow per share (CAPS), and net profit. Simulation results show that the clustering effect of the SOM with two winners (SOM2W) is the best among the standard SOM and the other proposed multi-winner SOMs. Further analysis of the experimental results also shows that the clustering performance of SOM2W is superior to that of the standard SOM and the other models. The proposed model provides a feasible approach for analyzing and selecting stocks and has potential applications in the financial field.
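A minimal sketch of one multi-winner SOM training step, assuming a Gaussian neighbor function and rank-discounted learning rates; the paper's exact neighbor/weight functions and the tabu-mapping bookkeeping are not reproduced here.

```python
import numpy as np

def msom_step(W, grid, x, k=2, lr=0.5, sigma=1.0):
    """One step of a multi-winner SOM (illustrative sketch only).
    W: (nodes, d) weight vectors; grid: (nodes, 2) map coordinates.
    The k best-matching nodes all act as winners; each pulls its map
    neighborhood toward the input, later-ranked winners more weakly."""
    dists = np.linalg.norm(W - x, axis=1)
    winners = np.argsort(dists)[:k]          # k best-matching units
    for rank, w in enumerate(winners):
        g2 = ((grid - grid[w]) ** 2).sum(axis=1)
        h = np.exp(-g2 / (2 * sigma ** 2))   # Gaussian neighbor function
        W += (lr / (rank + 1)) * h[:, None] * (x - W)
    return W
```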
Quantum Annealing Algorithms: State of the Art
Du Weilin, Li Bin, and Tian Yu
2008, 45(9):  1501-1508. 
Abstract ( 1725 )   HTML ( 11 )   PDF (1382KB) ( 4512 )
In mathematics and applications, quantum annealing is a new method for finding solutions of combinatorial optimization problems and ground states of glassy systems using quantum fluctuations. Quantum fluctuations can be simulated on computers with various quantum Monte Carlo techniques, such as the path-integral Monte Carlo method, yielding a new kind of heuristic algorithm for global optimization. The idea of quantum annealing comes from the celebrated classical simulated annealing invented by Kirkpatrick. However, unlike simulated annealing, which utilizes thermal fluctuations to help the search jump from a local optimum toward the global optimum, quantum annealing utilizes quantum fluctuations to tunnel through barriers directly from a local optimum to the global optimum. According to previous studies, although quantum annealing is not, in general, capable of solving NP-complete problems in polynomial time, it is still a promising optimization technique, exhibiting good performance on some typical optimization problems, such as the transverse Ising model and the traveling salesman problem. This paper provides an overview of the principles of and recent research progress on quantum annealing algorithms; several kinds of quantum annealing algorithms are presented in detail; the advantages and disadvantages of each are analyzed; and prospects for future research directions are given.
Paper
An Exact Algorithm Based on Chain Indication for Min-CVCB Problem
Wang Jianxin, Xu Xiaoshuang, Feng Qilong, and Li Min
2008, 45(9):  1509-1516. 
Abstract ( 539 )   HTML ( 1 )   PDF (1230KB) ( 464 )
With the development of VLSI (very large scale integration) technology, the constrained minimum vertex cover problem in bipartite graphs (Min-CVCB) for reconfigurable arrays has attracted considerable attention in the literature. Min-CVCB is a classical sub-problem of the vertex cover problem and has been proved NP-complete. The best previous exact algorithm for the problem, based on kernelization and branching, runs in time O((k_u+k_l)|G| + 1.26^(k_u+k_l)), which still cannot meet the requirements of industrial practice. Building on that result, an improved parameterized algorithm for Min-CVCB is presented. A more detailed analysis of the structure of the bipartite graph is given, enumerating case by case all possible junctions of blocks whose weight is at least 3. Based on this analysis, full use is made of chain implication, and bounded search is used to construct new recurrence relations. The blocks remaining after branching are handled by dynamic programming in polynomial time. The total time complexity of the algorithm is bounded by O((k_u+k_l)|G| + 1.1892^(k_u+k_l)), which greatly improves the previous best result.
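The plain bounded-search-tree skeleton for the constrained cover, which the paper's chain implication and dynamic programming refine, looks as follows; the edge and budget encoding is illustrative.

```python
def cvcb(edges, ku, kl):
    """Naive O(2^(ku+kl)) search tree for constrained bipartite vertex
    cover: cover all edges with at most ku upper-side and kl lower-side
    vertices. The paper's chain implication and DP refine exactly this
    branching to reach the 1.1892^(ku+kl) bound.
    edges: set of (u, v) pairs, u from the upper side, v from the lower."""
    if not edges:
        return set()                       # nothing left to cover
    if ku == 0 and kl == 0:
        return None                        # budgets exhausted
    u, v = next(iter(edges))               # branch on an uncovered edge
    if ku > 0:                             # branch 1: take u into the cover
        sol = cvcb({e for e in edges if e[0] != u}, ku - 1, kl)
        if sol is not None:
            return sol | {('U', u)}
    if kl > 0:                             # branch 2: take v into the cover
        sol = cvcb({e for e in edges if e[1] != v}, ku, kl - 1)
        if sol is not None:
            return sol | {('L', v)}
    return None
```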
A Lightweight Directory Based Algorithm for STM
Zhang Xiaoqiang, Peng Lin, Peng Yuanxi, and Xie Lunguo
2008, 45(9):  1517-1523. 
Abstract ( 586 )   HTML ( 0 )   PDF (1280KB) ( 497 )
Software transactional memory (STM) systems use lightweight, in-memory software transactions to address concurrency in multi-threaded applications, ensuring safety at all times. An STM implementation must track concurrent accesses, buffer speculative updates, and manage conflicts. Its two principal sources of overhead are bookkeeping and validation. Bookkeeping serves largely to implement conflict detection, i.e., determining when two transactions cannot both safely commit; validation is the related problem of ensuring that a transaction never views inconsistent data. A model is given showing that all modifications to shared data by the STM are linearizable. Based on this model, a new lightweight algorithm named LDSTM is presented, in which version information for shared data is kept in a directory buffer, so that whether the view observed by a transaction is consistent can be verified quickly at each data access. The algorithm detects write-write conflicts early, since at most one of the conflicting transactions can ever commit; this greatly reduces the cost of conflict detection and validation and avoids extra read bookkeeping. In single-threaded mode, the overhead of LDSTM, apart from buffer-update overhead, is much lower than that of other STMs. Experimental results show that LDSTM is simple and substantially reduces the cost of conflict detection and validation.
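A toy sketch of directory-based validation with eager write-write conflict detection, under an assumed data layout; it is not the actual LDSTM structure.

```python
class ConflictAbort(Exception):
    pass

class Txn:
    def __init__(self):
        self.read_stamps = {}   # object id -> version seen at read time

class VersionDirectory:
    """Every shared object has a version stamp in a central directory.
    A transaction that re-checks the stamps of everything it read can
    detect a stale (inconsistent) view in O(|read set|) time without
    per-read bookkeeping logs."""

    def __init__(self):
        self.version = {}       # object id -> version stamp
        self.owner = {}         # object id -> writing transaction, if any

    def read(self, txn, obj):
        txn.read_stamps[obj] = self.version.get(obj, 0)

    def try_write(self, txn, obj):
        # eager write-write detection: at most one of two conflicting
        # writers can ever commit, so abort the second immediately
        holder = self.owner.get(obj)
        if holder is not None and holder is not txn:
            raise ConflictAbort(obj)
        self.owner[obj] = txn

    def validate(self, txn):
        return all(self.version.get(o, 0) == s
                   for o, s in txn.read_stamps.items())

    def commit(self, txn):
        if not self.validate(txn):
            raise ConflictAbort('stale read set')
        for obj, t in list(self.owner.items()):
            if t is txn:
                self.version[obj] = self.version.get(obj, 0) + 1
                del self.owner[obj]
```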
An Approach for Service Composition Process Searching in Large Scale Network
Hu Songlin, Liang Ying, Jiang Wei, and Li Wei
2008, 45(9):  1524-1531. 
Abstract ( 434 )   HTML ( 0 )   PDF (1434KB) ( 458 )
Centralized automatic service composition, also called non-trivial service discovery, retrieves compositions of services for a request specifying particular inputs and outputs, and is now a popular research topic in service computing. To tackle the bottleneck and single-point-of-failure problems of centralized architectures, a distributed approach based on content-based distributed publish/subscribe technology is proposed, implementing decentralized automatic service composition in large-scale networks; it is called process search in this paper. A content-based distributed publish/subscribe system delivers publication messages from publishers to interested subscribers hop by hop via a set of brokers, using content-matching relationships between publications and subscriptions. It offers a platform that supports interoperability matching among service interfaces and enables query routing in a distributed environment. In this paper, the service model is mapped onto the message model of the publish/subscribe system, a search algorithm based on content-based routing is put forward, and a prototype named PreSee is developed on top of the PADRES system. Simulation experiments show that, compared with the centralized approach, the decentralized approach reduces system latency by parallelizing searches among distributed resources, thus improving the efficiency of the whole system.
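A centralized reference version of the composition search that PreSee distributes across brokers, assuming a services map from names to (inputs, outputs); the greedy forward chaining is a simplification of routed query propagation.

```python
def compose(services, have, want):
    """Forward-chain services until the requested outputs are produced:
    repeatedly fire any service whose inputs are all available.
    services: {name: (inputs, outputs)}; returns a firing order or None."""
    have, want = set(have), set(want)
    plan, fired = [], set()
    progress = True
    while progress and not want <= have:
        progress = False
        for name, (ins, outs) in services.items():
            if name not in fired and set(ins) <= have and not set(outs) <= have:
                have |= set(outs)
                plan.append(name)
                fired.add(name)
                progress = True
    return plan if want <= have else None

# compose({'geo': (['address'], ['coords']),
#          'map': (['coords'], ['image'])},
#         have=['address'], want=['image'])   ->  ['geo', 'map']
```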
A Collaborative Filtering Recommendation Algorithm Based on Domain Nearest Neighbor
Li Cong, Liang Changyong, and Ma Li
2008, 45(9):  1532-1538. 
Abstract ( 970 )   HTML ( 16 )   PDF (1132KB) ( 1279 )
E-commerce recommender systems are used as an important business tool by an increasing number of e-commerce websites to help customers find products to purchase. Collaborative filtering is the most successful and widely used recommendation technology in such systems. However, the traditional collaborative filtering algorithm faces the severe challenges of sparse user ratings and real-time recommendation. To solve these problems, a collaborative filtering recommendation algorithm based on domain nearest neighbors is proposed. The union of users' rated items is used as the basis for computing similarity among users, and non-target users are divided into two types: those without recommending ability and those with it. For the former, user similarity is not computed, which improves real-time performance; for the latter, the proposed "domain nearest neighbor" method predicts missing values in the union of rated items when a user shares rating-item classes with the target user, so the item space needed for missing-value prediction is reduced to those few common classes. Sparsity is thereby decreased and the accuracy of nearest-neighbor search is improved. The experimental results show that the new algorithm efficiently improves recommendation quality.
Line Segment Nearest Neighbor Query of Spatial Database
Hao Zhongxiao, Wang Yudong, and He Yunbin
2008, 45(9):  1539-1545. 
Abstract ( 626 )   HTML ( 0 )   PDF (933KB) ( 582 )
Nearest neighbor query in spatial databases has received increasing attention in recent years. By level of abstraction, nearest neighbor queries fall into three main categories: point-to-point, point-to-line-segment, and line-segment-to-line-segment. The former two have been studied extensively, while the last still lacks literature. In this paper, the problem of nearest neighbor query between line segments is put forward, aimed at cases in which a spatial object cannot be abstracted as a point; such queries have wide application in practice. Depending on whether two line segments intersect, there are two categories, intersecting and non-intersecting, and the relative positions of non-intersecting segments fall into nine cases. Each case is studied, and the nearest neighbor query between line segments is implemented: filter rules, theorems, and query algorithms are proposed. Furthermore, an experimental analysis is given. The method implements the segment-to-segment nearest neighbor query well and achieves high query efficiency.
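The core geometric primitive is easy to state in 2D: two segments are at distance 0 if they intersect, and otherwise the minimum is attained at one of the four endpoints, which is what the nine positional cases refine. A sketch:

```python
def point_seg_dist(p, a, b):
    """Distance from point p to segment ab (2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0:                                   # degenerate segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy             # closest point on ab
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def segments_intersect(p1, p2, q1, q2):
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    # proper crossing; collinear touching falls through to distance 0 below
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def seg_seg_dist(p1, p2, q1, q2):
    """Minimum distance between segments p1p2 and q1q2 in 2D."""
    if segments_intersect(p1, p2, q1, q2):
        return 0.0
    return min(point_seg_dist(p1, q1, q2), point_seg_dist(p2, q1, q2),
               point_seg_dist(q1, p1, p2), point_seg_dist(q2, p1, p2))
```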
Study of Aggregation Process Model and Algorithms for Autonomous Heterogeneous Data Sources
Wang Bo and Guo Bo
2008, 45(9):  1546-1553. 
Abstract ( 461 )   HTML ( 0 )   PDF (1431KB) ( 577 )
Data sharing is a pervasive challenge in applications that must query across multiple autonomous data sources. Integration becomes more complicated when the data sources are distributed, heterogeneous, and numerous. One solution to the issues of distribution and scale is to perform data integration over P2P networks, but current P2P architectures are mostly flat, specifying mappings directly between peers and providing no schema abstraction. In this paper, a data sharing architecture similar to iXPeer is proposed to handle integration at several levels of schema abstraction. Peers are grouped into local clusters according to their similarities; peers with high similarity are clustered into one group, which improves query efficiency and reduces computation cost. An aggregation model based on element matching for autonomous data sources is proposed to construct the clusters. TA, originally proposed in the context of database middleware, is applied to generate a list of best-ranked data source nodes; since TA may require time exponential in the scale of the cluster organization, it is improved by adding labeling nodes, resulting in TAL, which generates the top-k cluster nodes. Experiments show that TA and TAL perform well on top-k search, TAL especially so when the number of clustered nodes is large.
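For reference, a minimal form of Fagin's Threshold Algorithm that TAL extends with labeling nodes; the per-attribute sorted lists and the random-access score table are the standard TA setup, not this paper's data structures.

```python
import heapq

def threshold_algorithm(lists, scores, k, agg=sum):
    """Fagin's Threshold Algorithm (TA).
    lists: per-attribute lists of (object, score), sorted by descending
    score; scores[obj]: sequence of that object's per-attribute scores
    (random access). Stops once k seen objects beat the threshold
    aggregated from the current sorted-access frontier."""
    seen, topk = set(), []                  # topk: min-heap of (score, obj)
    for depth in range(max(len(l) for l in lists)):
        frontier = []
        for lst in lists:
            if depth >= len(lst):
                frontier.append(0.0)
                continue
            obj, s = lst[depth]
            frontier.append(s)
            if obj not in seen:
                seen.add(obj)
                total = agg(scores[obj])    # random access to all attributes
                if len(topk) < k:
                    heapq.heappush(topk, (total, obj))
                elif total > topk[0][0]:
                    heapq.heapreplace(topk, (total, obj))
        if len(topk) == k and topk[0][0] >= agg(frontier):
            break                           # no unseen object can do better
    return sorted(topk, reverse=True)
```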
Checking Active XML Validation
Ma Haitao, Hao Zhongxiao, and Zhu Yan
2008, 45(9):  1554-1560. 
Abstract ( 493 )   HTML ( 0 )   PDF (989KB) ( 427 )
Validation is an essential problem for XML documents. An active XML (AXML) document is an XML document in which some data is given explicitly while other parts are defined intensionally by means of embedded calls to Web services; informally, an AXML document can be seen as a set of XML documents. AXML documents greatly enhance the dynamism and flexibility of XML, but at the same time they pose a new challenge to existing methods for checking document validity. This paper focuses on the validation of AXML documents. More precisely, a new tree automaton, the AXML schema tree automaton (ASTA), is defined, which efficiently represents the set of AXML documents conforming to a given schema. Based on the ASTA, an algorithm is proposed that checks AXML validity in polynomial time. The algorithm consists of two parts: one checks the validity of the current state of the AXML document, viewed as a regular XML document; the other checks the equivalence of the specification definitions of the Web services. The algorithm has been implemented, and the experimental results show that it yields an efficient validation method for AXML documents.
Fast Scalar Multiplication Algorithm Based on Frobenius Mapping
Yin Xinchun, Hou Hongxiang, and Xie Li
2008, 45(9):  1561-1566. 
Abstract ( 537 )   HTML ( 0 )   PDF (801KB) ( 465 )
The elliptic curve cryptosystem (ECC) is a novel public-key cryptosystem that is expected to become a primary standard in future applications. The performance of ECC depends on the efficiency of scalar multiplication, and fast scalar multiplication on Koblitz curves is the most demanding task in this line of research. After the reduction of TNAF(k), a fast Comb algorithm based on the Frobenius mapping is proposed. At the precomputation stage, to build the precomputation table, the algorithm uses the Frobenius mapping to compute the coordinates of the points on the elliptic curve corresponding to every sequence of a fixed length r. At the evaluation stage, the algorithm employs the TNAF(k) reduction together with the precomputation table to improve the efficiency of the whole Comb algorithm. Owing to the high performance of the Frobenius mapping, the Comb algorithm involves no point doublings, and after arranging the Comb matrix, the number of point additions it requires is 1/5 to 1/4 of that of traditional algorithms. In addition, the algorithm is at least about 67% faster than the traditional Comb algorithm with an arbitrary row length, in any coordinate system.
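A generic Lim-Lee comb sketch over a toy additive group (integers mod n standing in for the curve group); on a Koblitz curve the paper replaces the per-column doubling with the almost-free Frobenius map and feeds in the TNAF-reduced scalar, neither of which is reproduced here.

```python
def comb_precompute(G, w, d, add, dbl, zero):
    """Lim-Lee comb precomputation: T[b] = sum_i b_i * 2^(i*d) * G for
    every w-bit pattern b, where b_i is bit i of b."""
    base = [G]                          # base[i] = 2^(i*d) * G
    for _ in range(1, w):
        P = base[-1]
        for _ in range(d):
            P = dbl(P)
        base.append(P)
    T = [zero] * (1 << w)
    for b in range(1, 1 << w):
        i = b.bit_length() - 1          # reuse T[b without its top bit]
        T[b] = add(T[b ^ (1 << i)], base[i])
    return T

def comb_mul(k, T, w, d, add, dbl, zero):
    """Evaluate k*G column by column (k < 2^(w*d)). On a Koblitz curve
    the doubling step becomes a Frobenius application."""
    Q = zero
    for j in reversed(range(d)):
        Q = dbl(Q)
        b = 0
        for i in range(w):              # gather column j of the comb matrix
            b |= ((k >> (i * d + j)) & 1) << i
        Q = add(Q, T[b])
    return Q

# Toy check in the additive group Z_1009 (stands in for the curve group):
n, G, w, d = 1009, 1, 4, 8              # w*d = 32-bit scalars
add, dbl = (lambda a, b: (a + b) % n), (lambda a: (2 * a) % n)
T = comb_precompute(G, w, d, add, dbl, 0)
assert comb_mul(12345, T, w, d, add, dbl, 0) == (12345 * G) % n
```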
Bandwidth-Wasting Problem Caused by Congested Data Flow in Routers and Its Solution
Cao Jijun, Su Jinshu, Wu Chunqing, and Shi Xiangquan
2008, 45(9):  1578-1588. 
Abstract ( 552 )   HTML ( 0 )   PDF (1765KB) ( 395 )
Network congestion degrades the quality of service provisioning, and congestion control is an important part of IP QoS. Traditional congestion control algorithms in routers make drop decisions mainly according to the congestion status of local buffer resources, independently at each level. This multi-level independent congestion control causes a bandwidth-wasting problem when data flows are congested (BW-CDF). The BW-CDF problem is analyzed theoretically, and a new congestion control algorithm (CC-AMR) is proposed based on awareness of the congestion status of multi-level resources. CC-AMR synthesizes the congestion status of resources in remote forwarding engines and their ports to manage the buffers of network processors, so that more reasonable congestion control decisions can be made. The algorithm has been implemented successfully in a core router with a switch-fabric and network-processor based architecture. To evaluate the improvement, experiments compare the performance of CC-AMR against the SARED algorithm on this router. The results show that CC-AMR effectively increases the total throughput of the router during periods of congestion: under the same conditions, average throughput is about 40.60% higher than with SARED.
Measurement and Analysis of BitTorrent
Lei Yingchun, Yang Litang, Jiang Qi, Gong Yili, and Zhang Jun
2008, 45(9):  1589-1600. 
Abstract ( 442 )   HTML ( 2 )   PDF (1954KB) ( 469 )
P2P content distribution, represented by BitTorrent, has become a popular P2P application. In BitTorrent, peers employ a neighbor selection mechanism, a choking/unchoking mechanism, and a piece selection algorithm to fetch the required file rapidly. BitTorrent gives users a good downloading experience, yet it is strongly controversial in practice, and most complaints come from ISPs: BitTorrent is not highly efficient, its fast downloads come at the cost of network abuse, and this misuse badly affects other Internet services supported by ISPs. An in-depth technical analysis is needed to understand why BitTorrent is popular with users and content providers while ISPs complain about it; only when all its advantages and disadvantages are figured out is it possible to design a better P2P content distribution protocol. From the perspective of performance, the behavior of BitTorrent is measured, its key elements are explained, and its efficiency is analyzed. The results are as follows: 1) an effective way to measure the performance of P2P content distribution systems is proposed; 2) it is confirmed that BitTorrent's key mechanisms, such as neighbor selection, choking/unchoking, and piece selection, are not highly efficient; and 3) a ShareStorm protocol is designed that overcomes these flaws. Preliminary experiments show that, in terms of download time, ShareStorm outperforms BitTorrent by 50%.
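Of the mechanisms measured, piece selection is the simplest to state; a minimal rarest-first sketch, with peer bitfields modeled as Python sets:

```python
from collections import Counter

def rarest_first(my_pieces, neighbor_bitfields):
    """BitTorrent's piece selection in its simplest form: among pieces
    this peer still needs, request the one held by the fewest
    neighbors. neighbor_bitfields: iterable of sets of piece indices."""
    availability = Counter()
    for bf in neighbor_bitfields:
        availability.update(bf - my_pieces)   # count only needed pieces
    if not availability:
        return None                           # nothing useful available
    return min(availability, key=availability.get)
```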
Optimal Parameterizations of the Degree 2 Rational Bézier Curves
Chen Jun and Wang Guojin
2008, 45(9):  1601-1604. 
Abstract ( 433 )   HTML ( 0 )   PDF (656KB) ( 546 )
A technique for the optimal parameterization of Bézier curves is successfully extended to degree-2 rational Bézier curves, which are frequently used in shape design. Optimal parameterization provides an explicit a-priori parameterization instead of "on-the-fly" compensation for the non-uniformity of the parametric speed. After simplifying the formulae, a tractable closed-form solution rather than a numerical one is obtained, and an appropriate Möbius transformation for degree-2 rational Bézier curves is found by computing the integrals directly. Reparameterization by a Möbius transformation keeps both the shape and the control points of the rational Bézier curve, changing only the distribution of the parameter. The parametric speed after reparameterization is C^1 continuous, and its deviation from unit speed is minimal with respect to the L_2 norm, meaning the rational optimal parameterization is "closest" to the arc-length parameterization. The method is simple, convenient, and effective, and a numerical example illustrates the correctness and validity of the algorithm.
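The weight rescaling induced by a Möbius reparameterization is easy to verify numerically. The sketch below applies t = u / (u + rho*(1-u)), which multiplies weight w_i by rho^(n-i), and estimates the speed deviation by sampling; the paper instead minimizes this deviation in closed form.

```python
import numpy as np

def rational_bezier2(P, w, t):
    """Evaluate a degree-2 rational Bezier curve at parameter t.
    P: (3, dim) control points; w: (3,) weights."""
    B = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])
    return (w * B) @ P / (w * B).sum()

def mobius_weights(w, rho):
    """The Mobius map t = u / (u + rho*(1-u)) leaves shape and control
    points unchanged and rescales the weights: w_i -> rho^(n-i) * w_i."""
    n = len(w) - 1
    return np.array([w[i] * rho ** (n - i) for i in range(n + 1)])

def speed_deviation(P, w, rho, samples=2000):
    """Estimate the L2 deviation of the parametric speed from uniform
    speed after the Mobius change, by finite differences."""
    u = np.linspace(0, 1, samples)
    w2 = mobius_weights(np.asarray(w, float), rho)
    pts = np.array([rational_bezier2(np.asarray(P, float), w2, ui) for ui in u])
    speed = np.linalg.norm(np.diff(pts, axis=0), axis=1) * (samples - 1)
    return np.sqrt(np.mean((speed - speed.mean()) ** 2))

# Scanning rho over (0, inf) and keeping the minimizer approximates the
# optimal parameterization that the closed-form solution gives exactly.
```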
Level-Set Based Point Cloud Simplification Algorithm and Its Application to Human Body Model
Wei Weishu, Luo Xiaonan, and Li Zheng
2008, 45(9):  1605-1611. 
Abstract ( 551 )   HTML ( 0 )   PDF (1159KB) ( 520 )
Simplification is an important task in computer graphics (CG). In light of the rapid development of reality-modeling technology in CG and the wide application of 3D scanners in this field, a level-set based simplification algorithm is presented that can process highly dense scanned point clouds. The algorithm solves the multi-contour and hole problems that are ubiquitous in raw point clouds. It has the advantages of ordering points automatically and of preserving the layer-based structure of the original in the resulting point cloud. Level-of-detail (LOD) human body models can be generated automatically with this algorithm by setting different simplification radii. The experimental results indicate that the method holds model shape well under high simplification precision, and its root mean square (RMS) error between automatic and manual measurements is smaller than that of the angle-simplification method. The method also extends adaptively where high-curvature detail must be displayed. The final simplified point cloud can be reconstructed with polygons or curves (surfaces). The reconstruction of 3D digital human body models is the basis of virtual garment simulation and computer animation research, and the human body models of various precisions constructed with the proposed method can be applied in further research.
Transformation from Local Accuracy to Classification Confidence
Liu Ming, Yuan Baozong, Miao Zhenjiang, Tang Xiaofang, and Li Kunlun
2008, 45(9):  1612-1619. 
Abstract ( 770 )   HTML ( 0 )   PDF (1221KB) ( 497 )
Local accuracy, which represents the accuracy of a base classifier for an input pattern, is valuable information for classifier combination methods. However, the only existing combination method that uses local accuracy, dynamic classifier selection, cannot take full advantage of the information from the base classifiers: the final classification result is determined by the base classifier with the highest local accuracy, and the local accuracies of the other base classifiers are neglected. In this paper, a method of transforming local accuracy into classification confidence is proposed, in which the confidence value of the class output by a base classifier is proportional to that classifier's local accuracy, and the confidence values of the other classes are assumed equal. After the transformation, multiple classifier systems can be designed with measurement-level fusion methods such as linear combination. Compared with dynamic classifier selection, fusion methods that make decisions by integrating the classification confidences of all base classifiers use more information and achieve a higher classification rate. To evaluate the approach, extensive experiments are conducted on six large data sets selected from the ELENA, UCI, and DELVE databases. The experimental results show that the approach outperforms dynamic classifier selection by 0.2% to 13.6% in classification rate.
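The transformation and the linear-combination fusion described above reduce to a few lines; the even split of the residual confidence mass follows the abstract, while aggregation by summation is one standard measurement-level choice.

```python
import numpy as np

def local_accuracy_to_confidence(pred_class, local_acc, n_classes):
    """Turn a base classifier's crisp output into a confidence vector:
    the predicted class gets the classifier's local accuracy, and the
    remaining mass is split evenly over the other classes."""
    conf = np.full(n_classes, (1.0 - local_acc) / (n_classes - 1))
    conf[pred_class] = local_acc
    return conf

def fuse(preds, local_accs, n_classes):
    """Measurement-level fusion by linear combination: sum the
    confidence vectors of all base classifiers and take the argmax,
    instead of keeping only the single most locally accurate one."""
    total = sum(local_accuracy_to_confidence(p, a, n_classes)
                for p, a in zip(preds, local_accs))
    return int(np.argmax(total))

# Three base classifiers vote 0, 0, 1 with local accuracies 0.6, 0.5, 0.9:
print(fuse([0, 0, 1], [0.6, 0.5, 0.9], n_classes=2))
```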
A Survey of Task Scheduling Research Progress on Multiprocessor System-on-Chip
Li Renfa, Liu Yan, and Xu Cheng
2008, 45(9):  1620-1629. 
Abstract ( 531 )   HTML ( 1 )   PDF (1757KB) ( 1369 )
Multiprocessors are common in embedded computing systems because they can meet performance, cost, and energy/power consumption goals. A multiprocessor system-on-chip is often a heterogeneous multiprocessor that integrates multiple instruction-set processors on a single chip implementing most of the functionality of a complex electronic system. Current trends indicate that multiprocessor systems-on-chip are increasingly used in applications such as image processing, networked multimedia, and embedded systems. Scheduling and mapping of tasks are key problems in multiprocessor system-on-chip design and are substantially more difficult than scheduling on a uniprocessor. The basic architecture and the design challenges of task scheduling algorithms for multiprocessor systems-on-chip are introduced. In particular, current research progress is summarized along two axes: scheduling-algorithm analysis, classified into three categories, and scheduler implementation frameworks, classified into two categories by task model. Many open research problems are pointed out. Because of the large variety of timeliness requirements in real-time applications, an important goal is to find canonical task representations that take timing constraints into account; another is to implement efficient schedulers on multiprocessor system-on-chip platforms. By comparing and analyzing these projects and algorithms, researchers on related topics can gain useful information about the task scheduling problem.