ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 July 2008, Volume 45 Issue 7
Paper
XN-Store: A Storage Scheme for Native XML Databases
Wang Xin, Yuan Xiaojie, Wang Chenying, and Zhang Haiwei
2008, 45(7):  . 
With the growing popularity and application of XML-related standards, large repositories of XML documents have emerged on the Web. It is necessary to store these documents in a database to make them manageable, and storage schemes have become an important research topic in XML data management. Mapping XML documents to relational tables and storing them in a traditional RDBMS breaks down the tree structure of XML data and causes a decline in query efficiency. This paper presents a novel storage scheme, called XN-Store, for native XML databases. Based on the index structure, this scheme stores XML nodes directly as records in a paged file to build a persistent document object model, thus retaining the original tree structure of the XML data. XN-Store not only reduces the storage space overhead of XML documents, but also enables fast serialization and access of XML nodes. As a general-purpose native XML storage scheme, XN-Store supports the creation of various secondary indexes to improve the efficiency of XML query processing. Extensive experiments are conducted on XN-Store and several previous XML storage schemes using a range of real and synthetic datasets, comparing storage space, storage time, serialization time, and node access time. The experimental results show that XN-Store is a high-performance storage scheme for native XML databases.
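As a rough illustration of the record-per-node idea (not XN-Store's actual layout, which the paper builds on its index structure), a paged file of fixed-size node records might look like the following Python sketch; the field set (node id, parent id, node kind, name id) is a hypothetical simplification.

import struct

# Hypothetical fixed-size record: node_id, parent_id, node_kind, name_id.
PAGE_SIZE = 4096
RECORD_FMT = "<IIHH"
RECORD_SIZE = struct.calcsize(RECORD_FMT)          # 12 bytes
RECORDS_PER_PAGE = PAGE_SIZE // RECORD_SIZE

def write_node(f, slot, node_id, parent_id, kind, name_id):
    # Map a logical slot to (page, offset) so a node is fetched with one seek.
    page, offset = divmod(slot, RECORDS_PER_PAGE)
    f.seek(page * PAGE_SIZE + offset * RECORD_SIZE)
    f.write(struct.pack(RECORD_FMT, node_id, parent_id, kind, name_id))

def read_node(f, slot):
    page, offset = divmod(slot, RECORDS_PER_PAGE)
    f.seek(page * PAGE_SIZE + offset * RECORD_SIZE)
    return struct.unpack(RECORD_FMT, f.read(RECORD_SIZE))

Because records never straddle page boundaries, serializing a document becomes a sequential scan of pages, which is one way to read the paper's claim of fast serialization and node access.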
A Dynamic Adaptive Caching Location Strategy by Location Database Clustering
Hong Liang, Lu Yansheng, Chen Jinfu, and Ding Xiaofeng
2008, 45(7):  . 
In mobile environments, an important way to improve the performance of locating mobile users is to cache users' location information. In addition, location databases in a hierarchical structure can be clustered to reduce the total cost of location management. However, previous caching strategies aim at a single user, which makes caching inefficient, and existing location database clustering approaches do not consider caching users' location information. A dynamic adaptive caching location (DACaL) strategy based on location database clustering is proposed for large populations of mobile users. The DACaL strategy exploits the advantages of both caching and clustering to reduce the total cost of location management in mobile environments, and it proceeds in two steps. In LDB-clustering, location databases are clustered by mining mobile users' moving patterns to determine the caching level and reduce the location management cost; LDB clustering is a set covering problem. In dynamic-adaptive-caching, location databases are reorganized based on the clustering result, location information is cached, and bypass pointers are created between adjacent clusters to shorten the signal traveling path and reduce the number of location database queries; dynamic adaptive caching is a TSP problem. Experiments show that the DACaL strategy can reduce the total cost of location management and performs better than other strategies.
A Novel Clustering Algorithm in the Densely Deployed Sensor Networks
Li Jianbo, Huang Liusheng, Xu Hongli, Wang Jichun, and Xu Ben
2008, 45(7):  . 
To address the high overhead of the re-clustering process in clustering algorithms, an energy-efficient clustering algorithm based on complete graphs (CGCA) is proposed. CGCA divides the network into a number of complete graphs at system activation time, each complete graph independently forming a cluster. Exploiting the property that the nodes of a complete graph are equivalent to one another, CGCA is executed only at system activation time, and in subsequent re-clustering phases the cluster-head role merely rotates among the internal nodes of each cluster, whereas previous clustering algorithms need a global trigger to re-elect cluster heads; this greatly reduces communication and computation overheads. CGCA achieves a processing and message complexity of O(1) at each node. Moreover, preferentially selecting nodes close to cluster heads into the cluster not only reduces the intra-cluster communication energy of cluster heads and their members, but also leads to an even distribution of cluster heads over the deployment region. Simulation experiments demonstrate that the number of messages exchanged by CGCA is much smaller than that of the HEED clustering algorithm in the densely deployed case. Finally, CGCA significantly outperforms the LEACH algorithm in terms of evenly distributing cluster heads.
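The local head rotation that spares CGCA a global re-clustering round can be pictured with the toy sketch below; the residual-energy selection rule is an illustrative assumption, since any rotation policy over the mutually connected members of a complete graph preserves the O(1) behavior.

def rotate_cluster_head(cluster):
    """cluster: list of (node_id, residual_energy) pairs forming one
    complete graph; every member can reach every other directly, so the
    head role can change hands without touching the rest of the network."""
    return max(cluster, key=lambda node: node[1])[0]

cluster = [(1, 0.82), (2, 0.91), (3, 0.77)]
head = rotate_cluster_head(cluster)    # node 2 serves as head this round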
Distributed Credential Chain Discovery in SPKI/SDSI2.0
Geng Xiuhua, Han Zhen, Jin Li, and Wang Qinglong
2008, 45(7):  . 
Trust management is an approach to access control in distributed environments, and SPKI/SDSI2.0 is currently the most popular trust management system. However, the existing credential chain discovery algorithms for SPKI/SDSI2.0 are all centralized: the needed credentials are either provided by users or assumed to have been distributed to local machines before the search. Yet SPKI/SDSI2.0 is a distributed system, in which credentials are often issued and stored in a distributed manner. To address this problem, a reasonable distributed credential storage scheme is proposed in this paper, in which each credential is stored in one place and all credentials are subject-traces-all. Based on this scheme, DCCDS (distributed credential chain discovery in SPKI/SDSI2.0) is put forward. Unlike other algorithms, DCCDS need not reduce credentials or compute the name-reduction closure of a set of credentials. DCCDS searches all the name credentials for one principal and, at the same time, looks for the authorization credentials for all those name credentials. Finally, depth-first search is used to determine whether there exists a chain from self to the requestor. DCCDS is goal-directed and automatically gathers the relevant name and authorization credentials that are needed. Theoretical analysis shows that DCCDS has higher efficiency; moreover, it solves the problem of delegation depth elegantly.
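The final depth-first step can be pictured with a toy search over credentials abstracted to directed issuer-to-subject edges; the SPKI/SDSI name resolution that DCCDS performs alongside the search is omitted here.

def chain_exists(credentials, issuer, requestor, visited=None):
    """credentials: dict mapping an issuer to the subjects it has
    delegated to. Returns True if a delegation chain reaches requestor."""
    if visited is None:
        visited = set()
    if issuer == requestor:
        return True
    visited.add(issuer)
    return any(chain_exists(credentials, s, requestor, visited)
               for s in credentials.get(issuer, []) if s not in visited)

creds = {"self": ["A"], "A": ["B"], "B": ["requestor"]}
print(chain_exists(creds, "self", "requestor"))    # True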
Distributive Reduction of Attributes in Concept Lattice
Yang Bin and Xu Baowen
2008, 45(7):  . 
Attribute reduction is one of the key problems in formal concept analysis. A few approaches have been proposed, but they are only applicable to formal contexts in a non-distributed environment. With the wide application of distributed data storage and processing, it is necessary to develop a method adapted to this environment. To this end, characterizations of the different kinds of attributes are provided from the viewpoints of the global context and the local contexts. The notions of super set and consistent set are introduced to determine whether an attribute is reducible in the global context. The determinant theorem of attribute reduction is derived based on core attributes and dispensable attributes. Based on these results, two algorithms are designed to compute attribute reductions of a context in a distributed environment. The first algorithm, DRCL, determines attribute reductions of the global context; the local reductions can be computed using existing approaches. The second algorithm, ADSCL, determines the super sets and all minimal consistent sets for the attributes of a given context; this information is required by the first algorithm. Theoretical analysis and experimental results show the feasibility and effectiveness of the two algorithms.
An Iterative Gait Prototype Learning Algorithm Based on Tangent Distance
Chen Changyou and Zhang Junping
2008, 45(7):  . 
As the only biometric authentication technique suited to remote surveillance, gait recognition is regarded as having important potential value, and many algorithms have been proposed; at the same time, it faces many challenges. One of these challenges is how to extract features efficiently from a sequence of gait frames. To solve this problem, and based on the fact that the gait energy image (GEI) is effective for feature representation, an iterative prototype learning algorithm based on tangent distance is proposed. First, it is assumed that different gaits lie on different manifolds; accordingly, the proposed algorithm refines the definition of the gait energy image using tangent distance. Then an iterative algorithm is proposed to learn the prototypes by solving an optimization problem. Finally, principal component analysis (PCA) is performed on the prototypes to obtain gait features for classification. The proposed method is proved to converge, and experimental results show that it achieves promising accuracy compared with GEIs. The rationality of the assumption that gaits lie on specific manifolds is also validated through experiments.
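Only the final PCA stage lends itself to a compact sketch (the tangent-distance prototype learning is the paper's contribution and is not reproduced here); prototypes is assumed to be an (n_prototypes, n_pixels) array of GEI-like images.

import numpy as np

def pca_features(prototypes, n_components):
    mean = prototypes.mean(axis=0)
    centered = prototypes - mean
    # Principal directions via SVD of the centered prototype matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return centered @ basis.T            # low-dimensional gait features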
A Survey of Trust and Reputation Systems in Multi Agent Systems
He Lijian, Huang Houkuan, and Zhang Wei
2008, 45(7):  . 
It is of great importance to introduce trust when solving collaboration and interaction problems in multi-agent system (MAS) environments because, as in human society, the MAS environment contains many indeterminate factors. Generally speaking, trust derives from direct trust and reputation, and the reputation system is the mechanism supporting trust evaluation. Reputation systems can be studied at both the individual and the system level, with more attention deserved by the former. Whether a cognitive or a numerical view is used to represent trust, the trust value evaluated by the reputation system should be accurate, adaptive, and so on. A reputation system can adopt a centralized, distributed, or hybrid architecture, and all of these require ways to represent, propagate, and aggregate trust information. To date, ReGreT and FIRE are both successful systems among the distributed architectures. The problem of inaccurate information is a pressing, fundamental issue in reputation systems and requires further exploration. Another limitation is that no reputation model test platform has received public recognition, even though the Agent Reputation and Trust (ART) Testbed project has produced preliminary achievements. Based on this introduction and review, it is proposed that more work should be done to construct new models and mechanisms, improve the existing models, and develop a new test platform.
A Heuristic Cluster Control Algorithm of Wireless Sensor Networks Topology
Liu Linfeng and Liu Ye
2008, 45(7):  1099-1105. 
The main objective in designing wireless sensor networks is to prolong network lifetime. The network topology, the foundation of upper-layer protocols, serves as the supporting groundwork for achieving this goal. To design a topology control algorithm that meets the lifetime requirement of wireless sensor networks, the defects of previous algorithms, such as deployment restrictions, low reliability, and poor rationality, are first examined. A WSN cluster model is then constructed and analyzed theoretically according to the clustering requirements, which reduces the task to a clustering and cluster-head election problem with approximate optimization objectives. A heuristic topology control algorithm of clusters (HTCC) is proposed as a solution to this problem. HTCC consists of two methods: a cluster constructing (CC) method, by which the clusters are partitioned, and a cluster-head electing (CHE) method, by which the cluster heads are selected. The performance of the algorithm is analyzed and validated through experiments. The results indicate that a topology of properly sized clusters has low energy consumption and high robustness, effectively prolonging the lifetime of the whole network.
Two New Push-Pull Balanced Data Dissemination Algorithms for Large-Scale Wireless Sensor Networks
Tao Zijin, Gong Zhenghu, and Lu Zexin
2008, 45(7):  1115-1125. 
Achieving lower communication cost and a better balance between the push and pull of event data is a common goal of data dissemination algorithms in wireless sensor networks (WSNs). Beyond this, the load on system hotspots should be further decreased and the number of event data replicas reduced. In this paper, two well-known typical structured and unstructured data dissemination algorithms (DCS and CN) are analyzed and their advantages and shortcomings identified. By combining the push-pull strategies of the two algorithms, two new data dissemination algorithms, Hybrid-Dcs-Cn1 (HDC1) and Hybrid-Dcs-Cn2 (HDC2), are proposed based on hybrid structured and unstructured push-pull strategies for ALL-type queries in different application situations in WSNs. Theoretical analyses and simulations show that, while preserving the balance between push and pull, the new algorithms resolve the problems of high hotspot load, a large number of event data replicas, and poor query performance. Their total system communication cost is close to or better than that of existing algorithms, so the two algorithms are well suited to WSNs, especially when a large volume of data must be handled, and they are energy-efficient data dissemination algorithms.
Research on Multicast Routing Algorithm for Mobile IP Based on Bone Node Set
Zhou Ling and Sun Yamin
2008, 45(7):  1126-1132. 
To optimize the cost of the multicast tree, minimize the join latency, and reduce the transmission delay in mobile and wireless environments, the bone node set is introduced into group communication for mobile IP. Based on this idea, a multicast routing algorithm called BNSBMR (bone node set-based multicast routing algorithm) is designed in this paper. The bone node set is a dynamic set of mobile IP ARs (access routers) satisfying certain special conditions. BNSBMR efficiently constructs a series of multicast trees for mobile IP and is characterized by three aspects. First, it optimizes the cost of the multicast tree and reduces bandwidth consumption by using the bone node set. Second, it reduces the latency for a mobile node to join a multicast session, which helps achieve fast handover. Third, the transmission delay of multicast packets is lessened by sharing the bone nodes. The correctness of BNSBMR is proved and its time complexity is analyzed theoretically. Simulation experiments are designed on a 5×5 mesh topology; the results show that BNSBMR optimizes the cost of the multicast tree, reduces the join latency, and minimizes the transmission delay. The QoS (quality of service) of multicast routing for mobile IP is thus improved to some degree by using the bone node set.
Formal Treatment of an Anonymous On-Demand Routing Protocol in MANETs
Zhang Yang
2008, 45(7):  1142-1150. 
Existing anonymous routing protocols in MANETs have had only unsatisfactory anonymity analyses, because adversarial models have not been specified exactly, the security definitions of the cryptographic primitives have not been described, and rigorous proofs are lacking. To address this problem, a typical anonymous dynamic source routing protocol is improved, and a formal treatment of this protocol is proposed in this paper. A static attack power is defined for the adversarial models to clarify the capabilities of adversaries, and the anonymity of a routing protocol is achieved if the identities of end users are unlinkable to data packets. According to this definition, UC-style ideal functionalities for the route discovery process and for the data transmission process are defined respectively. The route discovery process is modified to obtain private paths by generating UC-secure session keys, which realizes the ideal functionality for route discovery. Then, verifiable lightweight route onions are constructed to realize the ideal functionality for data transmission; that is, the route onions can verify that upstream nodes shuffle data packets correctly and that downstream paths are intact. Finally, the anonymity of the improved protocol is proved in the universal composition framework. The methodology used is also suitable for designing and analyzing other anonymous routing protocols in wireless networks.
An e-Learning Service Discovery Algorithm Based on User Satisfaction
Zhu Zhengzhou, Wu Zhongfu, and Wu Kaigui
2008, 45(7):  1161-1168. 
More and more e-Learning services are used in computer-supported collaborative learning, so it is becoming important to locate the proper e-Learning services accurately and efficiently. In this paper, an annexed algorithm named eLSDAUS is proposed to improve the existing semantic-based e-Learning service matchmaking algorithm. The algorithm introduces a new factor, user satisfaction, which is the user's feeling about the result of service discovery. It allows users to take part in the process of e-Learning service discovery and to evaluate its results. Users' evaluations, in the form of satisfaction scores, are fed back to the system. By applying an amendatory function that takes the user satisfaction as input, the system modifies the weights of each property of the advertised services, driving the total match degree of service discovery toward the optimum. Two methods are adopted to encourage users to use the e-Learning service discovery system. Experiments indicate that, compared with traditional algorithms, the precision of service discovery improves by more than 3 percent when the number of advertised services reaches 10,000, and the effect grows with the number of advertised services. After 127 days of learning, over 93% of students were satisfied with the e-Learning service discovery results.
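The abstract specifies only that an amendatory function takes user satisfaction as input and adjusts the per-property weights; one possible form, an additive update with renormalization, is sketched below as an assumption rather than eLSDAUS's actual function.

def update_weights(weights, match_degrees, satisfaction, rate=0.1):
    """weights: dict property -> match weight; match_degrees: dict
    property -> how well that property matched in the returned result,
    in [0, 1]; satisfaction: user feedback in [0, 1], 0.5 being neutral."""
    signal = satisfaction - 0.5          # positive feedback reinforces
    updated = {p: max(w + rate * signal * match_degrees.get(p, 0.0), 1e-6)
               for p, w in weights.items()}
    total = sum(updated.values())
    return {p: w / total for p, w in updated.items()}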
Spatial Classification and Prediction Based on Fuzzy c-means
Hu Caiping and Qin Xiaolin
2008, 45(7):  1183-1188. 
Spatial classification and prediction is one of the most important spatial data mining techniques, but research on it is still at an initial stage. In this paper, a spatial classification and prediction algorithm based on fuzzy c-means (SFCM) is proposed by introducing the fuzzy membership degree of a spatial object to a fuzzy cluster. First, the algorithm clusters the dataset by fuzzy c-means; because of the spatial autocorrelation of spatial data, spatial information must be added into the fuzzy c-means algorithm for spatial clustering. Second, it computes the fuzzy membership degree of each spatial object to every fuzzy cluster and finds the cluster in which this degree is maximal. Finally, the dependent variable value of the spatial object is estimated from the dependent variable value of the mean object of that cluster. Theoretical analysis and experimental results show that SFCM is effective and efficient.
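The membership-and-prediction steps can be sketched under the standard fuzzy c-means membership formula with fuzzifier m; the spatially constrained clustering that produces the centers is assumed already done, and all names are ours.

import numpy as np

def fcm_memberships(x, centers, m=2.0):
    # u_k(x) = 1 / sum_j (d_k / d_j)^(2/(m-1)), the usual FCM membership.
    d = np.linalg.norm(centers - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def predict(x, centers, cluster_means):
    """cluster_means[k]: dependent variable value of cluster k's mean object."""
    k = int(np.argmax(fcm_memberships(x, centers)))
    return cluster_means[k]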
Local Entropy Based Weighted Subspace Outlier Mining Algorithm
Ni Weiwei, Chen Geng, Lu Jieping, Wu Yingjie, and Sun Zhihui
2008, 45(7):  1189-1194. 
Outlier mining, which finds exceptional objects that deviate from most of the rest of the data set, has become a hot issue in the field of data mining. However, as dimensionality increases, unusual characteristics appear in the spatial distribution of the data, and distance over the full attribute space is no longer meaningful; this phenomenon is called the "curse of dimensionality", and it degrades the validity of many existing outlier detection algorithms. To address this problem, a local entropy based weighted subspace outlier mining algorithm, SPOD, is proposed, which generates the outlier subspace and weighted attribute vector of each data object by analyzing the entropy of each attribute over the neighborhood of that object. For a given data object, the outlier attributes constituting its outlier subspace are assigned larger weights. Definitions such as the subspace weighted distance are then introduced to apply density-based outlier processing to the data set and obtain each data point's subspace outlier influence factor: the larger this factor, the more likely the data point is an outlier. Theoretical analysis and experimental results show that SPOD is suitable for high-dimensional datasets and is efficient and effective.
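The per-attribute local entropy at the heart of SPOD can be sketched as follows; the histogram discretization is our assumption, and the rule that maps entropies to the weighted attribute vector is left to the paper.

import numpy as np

def attribute_entropies(neighborhood, bins=8):
    """neighborhood: (n_neighbors, n_attributes) array holding the
    neighbors of one data object. Returns one entropy per attribute;
    attributes along which the neighborhood is widely dispersed score
    high and are candidates for the object's outlier subspace."""
    result = []
    for j in range(neighborhood.shape[1]):
        counts, _ = np.histogram(neighborhood[:, j], bins=bins)
        p = counts[counts > 0] / counts.sum()
        result.append(float(-(p * np.log2(p)).sum()))
    return np.array(result)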
Approximation Algorithms for Generalized Multicut in Trees
Zhang Peng
2008, 45(7):  1195-1202. 
Given a tree T with costs on edges and a collection of terminal sets X = {S_1, S_2, …, S_l}, the generalized multicut in trees problem asks to find a set of edges of T whose removal cuts every terminal set in X, such that the total cost of the picked edges is minimized. The problem is of interest in its own right, since it naturally generalizes both the classical multicut problem and the multiway cut problem, and it is also the dual of the generalized Steiner forest problem. It is shown that the full version of the problem can be approximated within a factor of 2 by reducing it to the classical multicut in trees problem. In the prize-collecting version of the problem, a terminal set must pay a penalty π_i if it is not cut by the picked edges. A primal-dual 3-approximation algorithm is given for the prize-collecting generalized multicut in trees problem via the primal-dual schema. The k-version of the problem has to cut at least k terminal sets at minimum cost, where k is a given parameter. It is shown that the k-version can be approximated within a factor of min{2(l-k+1), k} via a non-uniform approach. Furthermore, an interesting relation between the generalized k-multicut in trees problem and the dense k-subgraph problem is shown, implying that approximating the k-version of the problem within O(n^{1/6-ε}) for some small ε > 0 is difficult.
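In symbols, the full version of the problem reads as follows, where a set S_i is cut when its terminals no longer all lie in one component (a restatement of the abstract with the notation made explicit):

\min_{F \subseteq E(T)} \sum_{e \in F} c(e)
\quad \text{s.t.} \quad \forall\, S_i \in X \ \exists\, u, v \in S_i \ \text{separated in } T \setminus F .

The k-version keeps the same objective but requires only that at least k of the l sets be cut, which is where the factor \min\{2(l-k+1),\, k\} applies.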
Semantic Annotation of Chinese Web Pages: From Sentences to RDF Representations
Jing Tao, Zuo Wanli, Sun Jigui, and Che Haiyan
2008, 45(7):  1221-1231. 
The Semantic Web aims to evolve the World Wide Web into a Web of data, where machines are able to process annotations and relations between resources, and where implicit information can be derived using ontologies and shared vocabularies. To fulfill this vision, a method of automatic semantic annotation is needed. Proposed in this paper is a methodology for the semantic annotation of Chinese Web pages guided by domain ontology. Statistical methods and natural language processing technology are employed, and the mapping from sentences to RDF representations is realized through an identification phase and a grouping phase. The major technical contributions are: a domain lexicon constructed by statistical methods, rather than a linguistic ontology, is used as the external domain knowledge; an explicit property type tagging algorithm recognizes both the instances and the properties contained in sentences to facilitate relation extraction; and after building dependency trees or dependency forests of the sentences, the identified instances and properties are grouped into RDF statements according to the dependency relationships among the Chinese words. Experimental results show that, compared with semantic annotation based on the grammatical relationship of subject-verb-object, this method is significantly more effective.
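The grouping phase can be pictured with a toy heuristic: once instances and properties are identified, RDF statements are read off the dependency edges. The rule below (a property with two instance dependents yields one statement) is a simplification of the paper's grouping rules.

def group_triples(tokens, dep_edges, instances, properties):
    """tokens: words of one sentence; dep_edges: (head_index,
    dependent_index) pairs from its dependency tree."""
    triples = []
    for i, word in enumerate(tokens):
        if word not in properties:
            continue
        # Instances hanging off this property node in the dependency tree.
        args = [tokens[d] for h, d in dep_edges
                if h == i and tokens[d] in instances]
        if len(args) >= 2:
            triples.append((args[0], word, args[1]))  # (subject, property, object)
    return triples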
A Robust Diffusion Tensor Estimation Method for DTI
Bai Heng, Gao Yurui, Wang Shijie, and Luo Limin
2008, 45(7):  1232-1238. 
In diffusion tensor imaging (DTI), diffusion tensor maps are typically calculated from a sequence of diffusion weighted images. However, diffusion weighted imaging is often affected by both thermal noise and physiological noise, such as artifacts caused by physiological motion. A robust estimation method based on the constrained M-estimator, with a high breakdown point and high asymptotic efficiency, is proposed in this paper for acquiring a more accurate DTI diffusion tensor field. First, during the preprocessing phase, thermal noise in the diffusion weighted images is removed by applying the dual-tree complex wavelet transform. Then an appropriate regression starting point is found by random sampling while simultaneously enforcing the positivity constraint of the diffusion tensor via the Cholesky factorization. Finally, a local minimum of the objective function is obtained to achieve the constrained M-estimate of the DTI diffusion tensor. Experiments are performed on a synthetic second-order tensor field and a real DTI data set, both corrupted by various levels of outliers. The results show that the proposed method removes thermal noise and physiological outliers more effectively than the least squares regression model and the Geman-McClure M-estimator, which are themselves more robust than the standard least squares method. The proposed method may therefore be particularly useful for DTI diffusion tensor estimation.
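The positivity constraint via the Cholesky factorization can be sketched directly: writing the tensor as D = LL^T keeps every candidate positive semidefinite, so the regression may search the six entries of L freely; variable names are ours.

import numpy as np

def tensor_from_cholesky(params):
    """params: the 6 free entries of the lower-triangular factor L (3x3)."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = params
    return L @ L.T        # symmetric and positive semidefinite by construction

D = tensor_from_cholesky(np.array([1.0, 0.2, 0.8, 0.1, 0.3, 0.9]))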
Research on Human Hand Tracking Aiming at Improving Its Accuracy
Feng Zhiquan, Yang Bo, Li Yi, Wang Zhonghua, and Zheng Yanwei
2008, 45(7):  1239-1248. 
On the one hand, the unscented Kalman filter (UKF) is an algorithm for recursive state estimation in nonlinear systems that propagates approximations of the distributions through the nonlinear system and observation functions; this transformation is used to compute predictions of the state and observation variables in the standard Kalman filter. In this approach the distribution is represented by a set of deterministically chosen points, called sigma points, which capture the mean and covariance of the random variables and are propagated through the nonlinear system. On the other hand, the interactive multiple model (IMM) filter can deal with system parameter uncertainties and obtain more precise motion estimates. To combine UKF and IMM and absorb their strengths, starting from the inherent mechanism of UKF and the dynamic state models of the human hand, and aiming at improving the accuracy of human hand tracking, some theoretical problems unsolved in UKF are first discussed, and a novel improved UKF based on a double unscented transformation (UKFDUT) is put forward. Subsequently, IMM is modified into a multiple model (MM) scheme. The results show that sigma points possess many useful properties through which novel approaches can be explored to improve tracking precision, and that using MM for state prediction achieves higher precision than using IMM alone. The experimental results also demonstrate the effectiveness of the method and its satisfactory tracking results.
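For reference, the standard sigma-point construction underlying both UKF and the paper's double unscented transformation is sketched below; the (alpha, kappa) scaling follows the common convention, not necessarily the paper's exact choice.

import numpy as np

def sigma_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Deterministically chosen 2n+1 points capturing mean and covariance."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)   # matrix square root
    pts = [mean]
    for i in range(n):
        pts.append(mean + root[:, i])
        pts.append(mean - root[:, i])
    return np.array(pts)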
Pose-Independent Joint Extraction from Scanned Human Body
Yu Yong, Wang Zhaoqi, Xia Shihong, and Mao Tianlu
2008, 45(7):  1249-1258. 
With the development of 3D scanning techniques, joint extraction from scanned human body models has become one of the most active research areas in virtual human modeling. Many approaches have been proposed for extracting joints from scanned human bodies, but most of them cannot guarantee pose-independence, while those that can require manual intervention. To solve this problem, a new framework is presented for automatic, pose-independent joint extraction from scanned human bodies. First, a new Morse function is defined on the scanned body shape as the geodesic distance from each point to a source point, which is extracted by a heuristic method. Second, the Morse function is calculated, and the feature points and topological structure of the model shape are extracted automatically according to Morse theory. Finally, the shape is divided into segments based on the isolines of the Morse function, and joints are extracted from the corresponding segments of the body shape by analyzing the circularity of these isolines. Experiments have been conducted on 20 scanned human bodies, each with about 5000 faces and 2300 points. The results demonstrate that the method is pose-independent, automatic, and more accurate than previous methods.
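The Morse function used here, geodesic distance from a source point, can be approximated on a mesh by running Dijkstra's algorithm over the edge graph, as in the sketch below; mesh loading and the heuristic source-point selection are omitted.

import heapq

def geodesic_distances(adjacency, source):
    """adjacency: dict vertex -> list of (neighbor, edge_length) pairs;
    returns approximate geodesic distance from source to every vertex."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        for u, w in adjacency[v]:
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist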
Basic Boolean Operation Research in Catmull-Clark Subdivision Surface
Yuan Hong, Liu Hao, and Liao Wenhe
2008, 45(7):  1259-1268. 
The Boolean operation is one of the most complex and important problems in CAD/CAM, and quadrilateral meshes are widely applied in CAD/CAM engineering. Because subdivision surfaces lack a global analytic representation, research on Boolean operations for subdivision surfaces is more difficult than for parametric and implicit surfaces. This paper presents basic Boolean operations for plane quadrangle meshes on Catmull-Clark subdivision surfaces, including surface intersection, trimming, and gridding-level Boolean operations. First, the calculation of the subdivision surface intersection is converted into that of the control mesh intersection: the 1-neighborhood zone of the intersecting quadrangle mesh on the control mesh is constructed and then subdivided continuously to improve the precision of the intersection. The intersection points between the intersecting quadrangles are calculated and linked according to their topological relations, yielding an intersection line that satisfies the given precision. Once the subdivision surface intersection is computed, surface trimming is realized by modifying the topological structure and vertex positions of the control mesh at the intersection points. Finally, gridding-level Boolean operations on subdivision surfaces, including intersection, union, and difference, are proposed, and their basic principles and application examples are given.
ARP: An Adaptive Runtime Mechanism to Partition Shared Cache in SMT Architecture
Sui Xiufeng, Wu Junmin, and Chen Guoliang
2008, 45(7):  1269-1277. 
Simultaneous multithreading (SMT) is a latency-tolerant architecture that usually employs a shared L2 cache. It can execute multiple instructions from multiple threads each cycle, thus increasing the pressure on the memory hierarchy. This paper studies the problem of partitioning a shared cache between multiple concurrently executing threads in the SMT architecture, especially the issue of fairness in cache sharing and its relation to throughput. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application with the higher demand. LRU manages the cache unfairly and can lead to serious problems such as thread starvation and priority inversion. An adaptive runtime partition (ARP) mechanism is implemented to manage the shared cache. ARP takes fairness as the metric of cache partitioning and uses a dynamic partitioning algorithm to optimize it. The algorithm is easy to implement and requires little or no profiling; it uses a classical monitor circuit to collect the stack distance information of each thread, requiring less than 0.25% storage overhead. The evaluation shows that, on average, ARP improves the fairness of a 2-way SMT by a factor of 2.26 while increasing throughput by 14.75%, compared with LRU-based cache partitioning.
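The kind of decision the monitor data enables can be sketched as follows: per-thread stack-distance histograms tell how many extra hits each additional cache way would buy. The greedy marginal-gain allocation below is a common textbook scheme and stands in for, rather than reproduces, ARP's fairness-driven partitioning algorithm.

def partition_ways(histograms, total_ways):
    """histograms[t][d]: extra hits thread t gains from its (d+1)-th way,
    derived from the stack-distance profile the monitor circuit collects."""
    ways = [1] * len(histograms)          # every thread keeps at least one way
    for _ in range(total_ways - len(histograms)):
        gains = [h[w] if w < len(h) else 0
                 for h, w in zip(histograms, ways)]
        best = max(range(len(gains)), key=gains.__getitem__)
        ways[best] += 1                   # next way goes to the biggest gain
    return ways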