ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2007, Volume 44 Issue 2
A Cooperative-Game-Based Mobile Agent Task Collaboration Model in Network Management
Bian Zheng'ai, Liu Bo, and Luo Junzhou
2007, 44(2):  193-200. 
To improve the performance of network management tasks, the mobile-agent-based cooperation mechanism for complex network management tasks is analyzed. It is found that traditional agent collaboration models (such as the contract-net protocol) are not suitable for agent cooperation on large-scale complex tasks, and that the performance of an individual agent cannot be guaranteed. Cooperative game theory is introduced to solve these problems. In this distributed collaboration model, an individual agent is regarded as a self-interested agent with its own utility function for evaluating the performance of the collaboration model. A convex coalition game model is proposed, and the Shapley value of cooperative game theory is proved to be the criterion for fair task allocation in the collaboration model. Based on this theoretical model, a three-stage task negotiation algorithm is proposed.
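The Shapley value the abstract adopts as its fairness criterion can be computed exactly for small games by averaging each agent's marginal contribution over all orders in which the coalition forms. The sketch below is a generic illustration, not the paper's model; the characteristic function v(S) = |S|² is an invented convex game (marginal contributions grow with coalition size).

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Exact Shapley value: average each player's marginal
    contribution over every join order of the grand coalition."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# v(S) = |S|^2 is convex, so the grand coalition is stable and,
# by symmetry, each of the three agents receives v(N)/3 = 3.
value = shapley(["a1", "a2", "a3"], lambda s: len(s) ** 2)
```

For a symmetric convex game like this one, the fair allocation splits the grand-coalition value evenly; asymmetric utility functions would yield unequal but still envy-free payoffs.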
Design of an Adaptive PIP Algorithm
Liu Ming, Dou Wenhua, and Zhang Heying
2007, 44(2):  201-207. 
Active queue management (AQM) is an effective method for improving the performance of end-to-end congestion control. Several AQM schemes, such as RED, PI, REM, AVQ, PD, SMVS and PIP, have been proposed in recent studies to provide low-delay, low-loss service in best-effort networks. Among them, PIP, a fusion of a PI controller and position feedback compensation, shows better performance under most network conditions, but its parameters cannot adapt to the environment. Based on an adaptive single-neuron PID controller, an adaptive PIP AQM scheme is developed that uses the squared error of the queue length as the performance criterion, consolidating the advantages of the single neuron and the PIP controller. Verified by NS-2 simulations under a variety of network and traffic situations, the adaptive PIP achieves faster convergence and smaller queue oscillation than PIP, PI, ARED and SPI (self-configuring PI, an improved PI algorithm). In addition, the adaptive scheme can also be applied to the PI, REM, AVQ and PD schemes, offering the possibility of optimizing these AQM schemes.
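The core of PIP is a PI controller driving the queue length toward a reference value. The toy loop below illustrates only that PI part; the position-feedback compensation and the single-neuron adaptation are not shown, and the first-order plant and gains are invented for the demo.

```python
def simulate_pi(kp=0.5, ki=0.2, ref=50.0, steps=200):
    """Incremental PI control of a toy first-order 'queue' plant."""
    y = u = e_prev = 0.0
    for _ in range(steps):
        e = ref - y                        # queue-length error
        u += kp * (e - e_prev) + ki * e    # incremental PI update
        e_prev = e
        y = 0.8 * y + 0.2 * u              # stand-in plant dynamics
    return y

final_queue = simulate_pi()
```

With these gains the closed-loop poles lie inside the unit circle, so the queue settles at the reference; the adaptive scheme in the paper would instead tune the gains online as traffic conditions change.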
Automatic Discovery of Physical Topology in Switched Ethernets
Sun Yantao, Shi Zhiqiang, and Wu Zhimei
2007, 44(2):  208-215. 
Accurate and up-to-date knowledge of topology serves as the basis of a number of network management functions, such as performance monitoring and evaluation, fault detection and location, and resource allocation. In this paper, the topology of a switched Ethernet, the most prevalent connection layout of LANs, is abstracted as a tree, and the connection relationships between network nodes are classified as lineal and collateral connections. A set of theorems for determining the nodes' relationships is also presented. Based on these theorems, a new algorithm for physical topology discovery is proposed that can construct the whole topology from incomplete AFTs (address forwarding tables). As an important part of a community broadband integrated services network management system (CBISNMS), this mechanism is working successfully.
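With complete AFTs, the classic direct-connection test (which incomplete-AFT algorithms like the paper's relax) says two switch ports share a link exactly when their forwarding sets are disjoint yet together cover all addresses. A minimal sketch with invented MAC sets for two switches in a line:

```python
def directly_connected(f_xp, f_yq, all_macs):
    """Complete-AFT test: ports X.p and Y.q are on the same link iff
    the MAC sets they forward are disjoint and jointly cover everything."""
    return f_xp.isdisjoint(f_yq) and (f_xp | f_yq) == all_macs

# Hypothetical layout: host a1 hangs off port A.1, host b1 off port B.2,
# and ports A.2 and B.1 face each other across the inter-switch link.
macs = {"a1", "b1"}
aft = {("A", 1): {"a1"}, ("A", 2): {"b1"},
       ("B", 1): {"a1"}, ("B", 2): {"b1"}}
```

Port A.2 forwards only b1 and port B.1 only a1, so the test confirms their link; ports on the same side share addresses and fail the disjointness check.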
A Topology Control Algorithm Based on Link Reliability and Multi-Path for Sensor Networks
Wu Zhendong and Li Shanping
2007, 44(2):  216-222. 
Energy efficiency and load balance are two important design issues for wireless sensor networks (WSN). Topology control can improve a network's energy efficiency significantly. However, existing topology control methods usually do not consider link reliability and multi-path effects, which affect the network's energy efficiency and load balance. In this paper, based on an analytical link loss model, the relationship among energy efficiency, load balance and the number of neighbors is analyzed. It is found that there is a trade-off between improving energy efficiency and improving load balance at the same time. A layered topology control algorithm, LELB (layered energy-efficient and load balance algorithm), is proposed, which adjusts the network's topology to increase energy efficiency while achieving good load balance over unreliable links. Simulation shows that this algorithm can significantly improve the network's performance.
Research on Dynamic Trust-Level Evaluation Mechanism Based on Machine Learning
Chen Feifei and Gui Xiaolin
2007, 44(2):  223-229. 
For the purpose of developing a usable trust relationship between the resource consumers (users) and the resource providers (hosts) in an open computing environment, a dynamic trust evaluation framework based on machine learning is proposed. In this framework, the trust level of users changes dynamically according to their behaviors and other common inspection parameters, and the trust level of resources changes dynamically with the users' assessments and the quality of service the resources provide. A fuzzy trust-level evaluating algorithm (FTEA) based on fuzzy logic is put forward to generate the evaluation rules and an integrated overall trust level. FTEA utilizes a rule-based machine learning method and has the advantage of self-learning, deriving reasoning rules from large amounts of input data. The experiments show that 1000 groups of data are adequate to generate the reasoning rules, and that the execution time of FTEA increases exponentially with the number of determination factors; an open computing system therefore needs to select only 5 to 6 factors and about 1000 sample data for implementation.
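The fuzzy-logic evaluation the abstract describes can be illustrated with a toy Sugeno-style rule base: two inputs in [0, 1] are fuzzified into "low"/"high" memberships, four invented rules fire with min-strength, and the overall trust level is their weighted average. The membership functions and rule outputs here are assumptions, not the paper's learned rules.

```python
def fuzzy_trust(behavior, feedback):
    """Toy Sugeno-style fuzzy trust evaluation on two [0,1] inputs."""
    low = lambda x: 1.0 - x    # linear 'low' membership
    high = lambda x: x         # linear 'high' membership
    rules = [  # (firing strength, rule output trust level)
        (min(low(behavior), low(feedback)), 0.1),
        (min(low(behavior), high(feedback)), 0.4),
        (min(high(behavior), low(feedback)), 0.5),
        (min(high(behavior), high(feedback)), 0.9),
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```

A rule-learning system like FTEA would induce the rule outputs from sample data rather than hard-coding them as done here.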
A Distributed CA-Key Generation Scheme for Resilience CA
Zhuang Yong and Feng Dengguo
2007, 44(2):  230-235. 
Resilience CA is a CA system that implements an intrusion-tolerant algorithm to protect the private key of the CA. It enhances the security of the system by implementing a new method to split the private key, but it uses a key distribution center to generate the CA key, and this center can compromise the private key. Based on the traditional resilience CA scheme, the proposed scheme removes the key distribution center and implements a distributed algorithm to generate a shared CA key, so that no t-1 servers (where t is the threshold number) can compromise the private key of the CA during the initialization and running stages of the CA.
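The dealer-free idea behind such distributed key generation can be sketched with plain Shamir sharing: each server deals a threshold sharing of its own random contribution, the per-server sub-shares are summed, and the joint secret exists only implicitly as the sum of all contributions. This is a generic illustration without the verification steps a real DKG protocol requires; the field modulus is an arbitrary Mersenne prime chosen for the demo.

```python
import random

P = 2**61 - 1  # Mersenne prime; the arithmetic field is an assumption

def deal(secret, t, n):
    """Shamir-share `secret` with threshold t among servers 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in points.items():
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

def joint_keygen(t, n):
    """Every server deals a sharing of its own contribution; no single
    party ever holds the joint key, yet any t shares reconstruct it."""
    contributions = [random.randrange(P) for _ in range(n)]
    dealt = [deal(s, t, n) for s in contributions]
    shares = {i: sum(d[i] for d in dealt) % P for i in range(1, n + 1)}
    return sum(contributions) % P, shares
```

Because shares of degree-(t-1) polynomials add to shares of the summed polynomial, any t of the n combined shares interpolate the joint key, while fewer reveal nothing about it.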
Research and Implementation of the Non-Repudiation Protocol for Email Transmission Between UA and MTA
Xia Chunhe, Liu Cui, Li Xiaojian, and Tao Ran
2007, 44(2):  236-241. 
A non-repudiation service generates, collects and maintains evidence of an announced event or action, and allows this evidence to be verified by arbitration in order to confirm whether the event has occurred. Non-repudiation protocols based on cryptography can offer such a service. In this paper a new non-repudiation protocol, NRPUM, for the user agent (UA) and mail transport agent (MTA) is proposed. The NRPUM protocol data is encapsulated in the SMTP and POP3 protocols so as to make mail transport non-repudiable. Validation and analysis show that NRPUM can effectively provide non-repudiation between UA and MTA.
Text Segmentation Based on PLSA Model
Shi Jing and Dai Guozhong
2007, 44(2):  242-248. 
Text segmentation is important for many fields, including information retrieval, summarization, language modeling and anaphora resolution. Text segmentation based on PLSA associates different latent topics with observable pairs of words and sentences. In the experiments, whole Chinese sentences are taken as elementary blocks. A variety of similarity metrics and several approaches to discovering boundaries are tried, and the influence of repeated unknown words in adjacent sentences on similarity values is considered. The best results show an error rate of 6.06%, which is far lower than that of other text segmentation algorithms.
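PLSA itself requires EM training, but the boundary-placing step the abstract alludes to — score the similarity between adjacent blocks and cut at the weakest link — can be illustrated with plain bag-of-words cosine similarity on an invented two-topic text:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two word-count bags."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(c * c for c in a.values())) * sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0

def weakest_link(sentences):
    """Index i such that the topic boundary falls between
    sentence i and sentence i+1 (the least-similar adjacent pair)."""
    bags = [Counter(s.split()) for s in sentences]
    sims = [cosine(bags[i], bags[i + 1]) for i in range(len(bags) - 1)]
    return min(range(len(sims)), key=sims.__getitem__)

toy = ["the cat licks its fur",
       "my cat drinks milk",
       "a cat sleeps all day",
       "stock prices fell sharply today",
       "the stock market closed lower",
       "stock prices may recover"]
```

A PLSA-based segmenter would compare sentences in latent-topic space rather than raw word space, which is what lets it bridge synonymy and sparse vocabulary overlap.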
Research on the Interconnection of Heterogeneous RTIs and Multi-Federations Based on Bridge Federate
Cai Nan, Zhou Zhong, and Wu Wei
2007, 44(2):  249-257. 
High level architecture (HLA) is based on a single federation execution and does not address interoperability and reusability across heterogeneous RTIs and multiple federations. Presented in this paper is a software architecture based on a bridge federate for interconnecting heterogeneous RTIs and multi-federations. The design and implementation of RTIBridge, interconnection software for heterogeneous RTIs and multi-federations, are then described. The typical procedures the bridge federate takes when executing RTI services are analyzed, and a development and execution process model for heterogeneous RTIs and multi-federations is provided. Finally, experimental results are given.
An Edge-Based Defect Detection Algorithm for Paper Currency
Jin Ye, Liu Songbo, Liu Jiafeng, Song Ling, and Tang Xianglong
2007, 44(2):  258-264. 
Defect detection is an essential step in paper currency sorting. In this paper, an edge-based algorithm is proposed to detect the scratches and cracks that frequently appear on paper currency. An area-based image registration algorithm is used to overlay the sensed and reference paper currency images. To ensure accurate correlation with human subjective perception, an edge intensity differential of the two images is then constructed from the edge information extracted by the Kirsch operator. The defect feature extracted from the edge intensity differential is sensitive to anomalous edge information and robust to gray (or edge) intensity change. The paper currency image is divided into several overlapping subzones, and within each subzone the defect feature is calculated to estimate the level of contamination. The proposed algorithm has already been applied in a practical sorting system, and the experimental results show that it is robust when applied to low-quality paper currency.
Research on the Resolution of the Landmark in Chinese
Li Hanjing, Li Sheng, and Zhao Tiejun
2007, 44(2):  265-268. 
Resolving the ellipsis of references is an essential problem in natural language processing. Reference resolution finds the reference object, person or environment in the context in order to obtain a complete spatial expression and to quantify the position and orientation expressed by the reference. At present, sentences containing spatial expressions with elided references are produced by many natural language processing technologies that operate at the sentence level, which hinders subsequent understanding. The reference resolution based on limited knowledge presented in this paper addresses this problem. First, after the text is parsed, the spatial expressions are recognized based on HowNet and linguistic knowledge. Second, the candidate with the highest weight is proposed as the antecedent according to limited knowledge including part-of-speech, semantics and position. Finally, experiments show that this method is satisfactory.
A Survey of Context-Aware Computing and Its System Infrastructure
Li Rui and Li Renfa
2007, 44(2):  269-276. 
Contexts such as gestures, emotions and situations are very useful when a person interacts with other people or the surrounding environment. Computers, by contrast, cannot use contexts efficiently. Context-aware computing, however, has become more and more important since pervasive computing was put forward by Mark Weiser in 1991. The research on context-aware computing, especially its system infrastructure, is surveyed. The main contribution of this paper is an innovative conceptual model of system infrastructure, whose key problems are discussed in detail.
A Coalition Generation Algorithm Based on Local Optimum
Su Shexiong, Hu Shanli, Lin Chaofeng, and Zheng Shengfu
2007, 44(2):  277-281. 
Coalition formation is a key topic in multi-agent systems. To cope with the rapidly growing number of coalition structures, the OCS algorithm — formation of agent coalition structures based on local optima — is given. Based on local optima, the coalition structure graph can be cut short, and the graph is further pruned using the upper bound of the coalition structures associated with each partition, which decreases the search space. It is then proved that the time complexity of OCS is O(3^n), though experimentally it is already close to O(2^(3n/2)). Finally, the efficiency of OCS is demonstrated by comparing the experimental data. This work can be seen as an improvement on the related work of Rothkopf and Liu Jinglei.
An Improved Algorithm for Mining Frequent Closed Itemsets
Song Wei, Yang Bingru, Xu Zhangyan, and Gao Jing
2007, 44(2):  278-286. 
The set of frequent closed itemsets exactly determines the complete set of all frequent itemsets and is usually much smaller than the latter. Yet mining frequent closed itemsets remains a time-consuming task. Proposed in this paper is an improved algorithm, DCI-closed-index, for mining frequent closed itemsets. First, the "index array" is proposed; using the subsume index, itemsets that always appear together can be discovered. Then an algorithm for computing the index array using a bitmap is presented. Third, the items are sorted in descending order of their frequencies in the subsume index. Fourth, frequent items that appear together and share the same supports are merged into initial generators according to the heuristic information provided by the index array, so the search space is reduced greatly. Finally, based on the index array, a reduced pre-set and a reduced post-set are proposed and proved equivalent to the original pre-set and post-set, so redundant set-inclusion operations are largely avoided. The experimental results show that the proposed algorithm outperforms other state-of-the-art algorithms.
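The notion of a closed itemset can be made concrete with a brute-force miner: the closure of X is the intersection of all transactions containing X, and X is closed when it equals its own closure. The exponential enumeration below is for illustration only (the paper's contribution is precisely pruning this search); the transaction database is invented.

```python
from itertools import combinations

def closure(itemset, transactions):
    """Closure of X = intersection of all transactions containing X.
    Returns (closed itemset, support)."""
    covering = [t for t in transactions if itemset <= t]
    if not covering:
        return None, 0
    closed = set(covering[0])
    for t in covering[1:]:
        closed &= t
    return frozenset(closed), len(covering)

def frequent_closed(transactions, minsup):
    """Naive miner: close every frequent candidate itemset.
    Duplicate closures collapse into one dictionary entry."""
    items = sorted(set().union(*transactions))
    result = {}
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            c, sup = closure(set(combo), transactions)
            if c is not None and sup >= minsup:
                result[c] = sup
    return result

db = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b"}, {"c"}]
mined = frequent_closed(db, minsup=2)
```

Here sixteen candidate itemsets collapse into only three closed ones, which still determine the support of every frequent itemset — the compression that makes closed-itemset mining worthwhile.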
Application of an Ant Colony Algorithm in Migration of Mobile Agent
Du Ronghua, Yao Gang, and Wu Quanyuan
2007, 44(2):  282-287. 
Mobile agents provide a novel paradigm for distributed computing, with the potential to offer a single, general framework in which a wide range of distributed systems can be implemented efficiently, easily and robustly. The traveling agent problem is a complex combinatorial optimization problem: planning an optimal migration path according to the tasks and other constraints when agents migrate across several hosts. The ant colony algorithm is an evolutionary algorithm characterized by parallelism, positive feedback and heuristic search, and is extremely well suited to the traveling agent problem. To avoid limitations of the ant colony algorithm such as stagnation, which it shares with other evolutionary algorithms, an improved ant colony algorithm is introduced to solve the traveling agent problem by modifying the pheromone updating strategy, and a self-adaptive pheromone evaporation rate is proposed, which accelerates convergence and improves the ability to find an optimal solution, so that mobile agents can accomplish the migration task efficiently and quickly. Contrastive experiments show that the algorithm is superior to other related methods in both solution quality and convergence rate.
Sentence Retrieval with a Topic-Based Language Model
Wu Youzheng, Zhao Jun and Xu Bo
2007, 44(2):  288-295. 
A novel topic-based language model for sentence retrieval in Chinese question answering is presented in this paper. The main idea is to exploit a peculiar characteristic of the question answering scenario, namely the semantic category of the expected answer, to conduct topic segmentation, and then to incorporate the topic information of the sentence into the standard language model. For topic segmentation, two approaches are presented: one-sentence-one-topic and one-sentence-multi-topics. The experimental results show that the performance of sentence retrieval based on the proposed topic-based language model is improved significantly.
A Collaborative Filtering Recommendation Algorithm Incorporated with User Interest Change
Xing Chunxiao, Gao Fengrong, Zhan Sinan, and Zhou Lizhu
2007, 44(2):  296-301. 
Collaborative filtering is one of the most successful technologies for building recommender systems and is extensively used in many personalized systems. However, existing collaborative filtering algorithms do not consider changes in user interests, so the system may recommend unsatisfactory items when a user's interest has changed. To solve this problem, two new data weighting methods, time-based data weighting and item-similarity-based data weighting, are proposed to adaptively track changes in user interests. The advantages of both weighting methods are then combined efficiently and applied to the recommendation generation process. Experimental results show that the proposed algorithm outperforms the traditional item-based collaborative filtering algorithm.
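The time-based weighting idea can be sketched in a few lines: each rating is discounted by how long ago it was given, so recent behavior dominates the prediction. The half-life decay and the sample ratings below are invented for illustration, not the paper's weighting function.

```python
def predict(neighbor_ratings, now, half_life=10.0):
    """Weighted-average rating prediction where each (rating, time)
    pair is discounted by exponential decay with the given half-life."""
    decay = lambda t: 0.5 ** ((now - t) / half_life)
    num = sum(r * decay(t) for r, t in neighbor_ratings)
    den = sum(decay(t) for _, t in neighbor_ratings)
    return num / den

# An old rating of 5 (one half-life ago, weight 0.5) versus a fresh
# rating of 1 (weight 1.0): the recent interest pulls the prediction down.
p = predict([(5.0, 0.0), (1.0, 10.0)], now=10.0)
```

An unweighted average of the two ratings would be 3.0; the decayed prediction of about 2.33 reflects the user's drift toward the newer preference.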
A Multilevel Model of Task Assignment in Fuzzy Situations of Workflow
Xiao Zhengjin, He Qinming, and Chen Qi
2007, 44(2):  302-309. 
The automatic assignment of user tasks is a key technology for increasing the running efficiency of a workflow management system. Workflow engines must constantly assess complex and confusing situations, identify the complex relationships between candidates and tasks, assign a given task to the most appropriate candidates, and ensure that the workflow system executes efficiently. The effectiveness depends largely on the evaluation of the influencing factors. After systematically analyzing the influencing factors of user task assignment and their linguistic fuzziness, a sectional multilevel model of task assignment and a corresponding task assignment method are put forward. Furthermore, the design of the adaptive weighting factors in the task assignment model is also discussed briefly. Finally, the performance of the multilevel model is evaluated by comparing its simulation results with those of two other task assignment methods.
An Index Method for Digital Map Spatial Data in Mobile Navigation Systems
Fang Yu, Jiang Changjun, and Chen Lin
2007, 44(2):  310-316. 
The direct application of traditional index structures such as the R-tree or quad-tree to mobile navigation systems has several disadvantages: ① the R-tree and Hilbert R-tree do not take multiple scales into account, so data of the same scale, which are always accessed together, end up separated; ② other R-tree-based index structures, such as the reactive-tree, MS-R-tree and MOR-tree, support multi-level display but are unsuitable for embedded systems because of their high resource requirements; and ③ the quad-tree is insufficient for portraying the spatial neighborhood relationships between data objects. Presented in this paper is a linear index structure based on a hierarchical Hilbert grid, named the LHHG index. This index structure follows the quad-partition data organization mechanism used in the quad-tree and introduces an extended Hilbert grid to make the partition both sequential and hierarchical. The main advantages of this index, which speeds up data access for embedded systems with limited resources and NAND flash storage devices, lie in three aspects. First, sequential, same-level-clustered data access places neighboring data on the same level in nearby storage space. Second, clumped data access increases the I/O operation granularity. Third, the linear, optimized index data structure provides higher search efficiency. The test results show that the LHHG index exceeds traditional spatial indexes in space occupation and search performance.
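The locality property that makes a Hilbert grid attractive for flash storage — cells adjacent along the curve are adjacent in space — can be seen with the classic bitwise cell-to-index mapping. This is the standard Hilbert-curve algorithm, not the paper's LHHG structure itself.

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) of an n-by-n grid (n a power of two) to its
    position along the Hilbert curve (classic bitwise xy-to-d algorithm)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting grid cells by this index yields a storage order in which every consecutive pair of cells is a direct spatial neighbor, so range queries touch contiguous runs of the linear index.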
IOMan: An I/O Management Method Supporting Multi-OS Remote Boot and Running
Xia Nan, Zhang Yaoxue, Yang Shanlin, and Wang Xiaohui
2007, 44(2):  317-325. 
To reduce the cost of using and maintaining computer systems, an I/O management method called IOMan, based on transparence computing and working in a LAN environment, is presented. Using a software-only solution, IOMan sets up a disk access redirection mechanism that supports multi-OS remote boot and application running in a LAN without modifying the boot mechanism of commodity operating systems such as Windows and without affecting other I/O operations. IOMan has a client/server architecture with two parts: the I/O client and the I/O server. The I/O client runs on the client machine; it redefines the handler of the BIOS disk-access interrupt and creates a virtual local disk in order to send I/O requests to the I/O server. The I/O server responds to I/O requests from the clients, accesses the virtual disk files stored on the server, and sends the data back to the clients in the form of sectors.
Survey of Operational Transformation Algorithms in Real-Time Computer-Supported Cooperative Work
Liao Bin, He Fazhi, and Jing Shuxu
2007, 44(2):  326-333. 
The partial relations of distributed operations and the consistency model are introduced and analyzed. Based on the development of typical operational transformation algorithms, basic operational transformation functions are discussed and summarized. The intention problems are carefully analyzed and the algorithm guides for intention preservation are presented. Open issues and future research directions are also discussed.
Efficient Algorithms for Matrix Eigenproblem Solver on SMP Cluster
Zhao Yonghua, Chi Xuebin, and Cheng Qiang
2007, 44(2):  334-340. 
Tridiagonalization of symmetric matrices and computing the eigenvalues of a tridiagonal symmetric matrix are the keys to a parallel eigenproblem solver for dense symmetric matrices. Targeting the memory hierarchy of the SMP cluster, and based on matrix tridiagonalization using Householder transforms and the divide-and-conquer algorithm for the tridiagonal eigenproblem, their MPI+OpenMP hybrid parallel implementations are presented. These studies focus on load balance, communication overhead and performance evaluation on the SMP cluster. The hybrid parallel algorithm design combines a coarse-grain model with dynamic task sharing, thus resolving the load balance problem and decreasing the communication overhead of the pure MPI parallel algorithm. Tests on Deepcomp 6800 show that the parallel solver based on the hybrid implementation has better performance and scalability than that based on the pure MPI implementation.
Texture Synthesis with Self-Correlation Distinguishing and Its Applications
Mo Canlin, Chen Min, and Guo Shaoyi
2007, 44(2):  337-341. 
Texture synthesis from samples is an important part of computer graphics, and its key is local texture similarity matching. To improve the speed of texture similarity discrimination, a method of texture synthesis with self-correlation discrimination is presented in this paper. As the sizes of the neighborhood and the sample increase, the amount of calculation in the traditional texture synthesis approach grows quickly and its disadvantage in synthesis speed becomes progressively worse. Within the neighborhood L, the distribution of pixels is contiguous in geometry but discrete in color space. The algorithm therefore builds a simple lookup table of self-correlation distances for the sample texture. Using the self-correlation distance of pixels in the neighborhood L as the discrimination rule, the algorithm replaces the laborious calculation of the traditional approach with table lookup. It reflects the internal character and correlation of the texture in the discrimination rule, speeds up texture synthesis, and adjusts multi-precision texture synthesis to avoid texture seam problems. Experiments show that the presented algorithm has wide application in texture synthesis, image repair and texture search, and meets the real-time demands of applications.
The Array-Based Bucket Sort Algorithm
Yang Lei and Song Tao
2007, 44(2):  341-347. 
The classical bucket sort algorithm implements buckets as dynamic lists. It sorts uniform data efficiently in O(N) time but degrades to an inefficient O(N^2) insertion sort when handling extremely non-uniform data. By analyzing counting sort with extra data, a method is presented to implement the buckets as sequential arrays. Efficiency is improved directly by avoiding the complex operations of dynamic memory allocation. Furthermore, O(N log N) algorithms such as quicksort may be employed to manage each bucket. The composed algorithm still sorts uniform data in O(N) time but degrades gracefully to the original O(N log N) algorithm in the worst case. It is proved that in the general case the composed bucket sort algorithm always achieves better performance. Experiments on uniform data show the superiority of the bucket sort algorithms, with the array-based bucket sort beating the classical algorithm. In experiments with Gaussian data, non-uniformity influences the array-based algorithm much less than the classical algorithm, and both outperform quicksort.
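The array-based scheme can be sketched as a counting pass that sizes the buckets, prefix sums that turn the counts into offsets in one flat array, a scatter pass, and a per-bucket sort. The bucket count is an arbitrary demo parameter, and Python's built-in sort stands in for the per-bucket quicksort the abstract mentions.

```python
def array_bucket_sort(data, nbuckets=16):
    """Bucket sort with all buckets laid out in a single flat array,
    avoiding per-bucket dynamic allocation."""
    if len(data) < 2:
        return list(data)
    lo, hi = min(data), max(data)
    if lo == hi:
        return list(data)
    span = (hi - lo) / nbuckets
    bucket = lambda x: min(int((x - lo) / span), nbuckets - 1)
    counts = [0] * nbuckets
    for x in data:                        # counting pass: size the buckets
        counts[bucket(x)] += 1
    starts = [0] * (nbuckets + 1)
    for i in range(nbuckets):             # prefix sums -> bucket offsets
        starts[i + 1] = starts[i] + counts[i]
    out = [0] * len(data)
    cursor = starts[:nbuckets]
    for x in data:                        # scatter into the flat array
        out[cursor[bucket(x)]] = x
        cursor[bucket(x)] += 1
    for i in range(nbuckets):             # O(N log N) fallback per bucket
        out[starts[i]:starts[i + 1]] = sorted(out[starts[i]:starts[i + 1]])
    return out
```

On skewed data everything may land in one bucket, in which case the algorithm is simply the fallback sort plus two linear passes, matching the graceful O(N log N) degradation the abstract claims.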
A Cache Adaptive Write Allocate Policy
Huan Dandan, Li Zusong, Hu Weiwu, and Liu Zhiyong
2007, 44(2):  348-354. 
Bandwidth has become the major bottleneck in performance improvement for modern microprocessors. A cache adaptive write allocate policy that significantly improves the effective bandwidth of the microprocessor is proposed based on an investigation of cache store misses. The policy collects fully modified blocks in the miss queue; fully modified blocks are written to lower-level memory under a non-write-allocate policy, which can switch to a write allocate policy adaptively. Compared with other cache store miss policies, the cache adaptive write allocate policy avoids unnecessary memory traffic, reduces cache pollution, and decreases the load/store queue full rate without increasing hardware overhead. Experimental results indicate that the memory bandwidth of the STREAM benchmarks improves by 62.6% on average with the cache adaptive write allocate policy. The performance of the SPEC CPU 2000 benchmarks is also improved, with an average IPC speedup of 5.9%.
Data Prefetching Technique of Nonlinear Memory Access
Wu Jiajun, Feng Xiaobing, and Zhang Zhaoqing
2007, 44(2):  355-360. 
By static analysis alone, the compiler can hardly prefetch correctly for data that are accessed nonlinearly. Using profiling techniques, however, one can discover the patterns by which the program accesses memory, and this profiling information then guides the compiler to insert prefetch instructions accurately. Based on the stride profiling technique, a new information type named stride-iterative is put forward, which is more accurate than normal profiling. Together with alias information used to adjust data prefetches that target the same cache line, the compiler obtains better performance than with normal data prefetching. The 12 SPEC CPU2000 integer benchmarks gain 8.54% performance on average, and mcf gains 77.87%.
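The essence of stride profiling — watch the deltas between successive addresses and, if one stride dominates, prefetch ahead by it — can be sketched offline. The trace, the dominance threshold and the prefetch distance below are invented; a real profiler works on hardware-collected address streams and feeds the compiler, not a post-hoc list.

```python
from collections import Counter

def dominant_stride(addresses, min_ratio=0.5):
    """Most common delta between successive addresses, if it accounts
    for at least min_ratio of all deltas; otherwise None (irregular)."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    if not deltas:
        return None
    stride, hits = Counter(deltas).most_common(1)[0]
    return stride if hits / len(deltas) >= min_ratio else None

def prefetch_targets(addresses, distance=4):
    """Addresses to prefetch `distance` strides ahead of each access,
    when a dominant stride exists."""
    stride = dominant_stride(addresses)
    if stride is None:
        return []
    return [a + distance * stride for a in addresses]
```

A mostly-regular trace with an occasional irregular jump still yields its dominant stride, which is what lets profile-guided prefetching cover access patterns static analysis cannot prove.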
Safety Verification of Dynamic Storage Management in Coq
Xiang Sen, Chen Yiyun, Lin Chunxiao, and Li Long
2007, 44(2):  361-367. 
The increasing scale and complexity of software make software safety and security critical, so more and more research focuses on the development of high-assurance software. Since type systems are not expressive enough, existing research does not touch the verification of infrastructure code. In this paper, a certified dynamic storage management library is built using Hoare-logic-style reasoning at the assembly level, with the assistance of Coq, a theorem formalization and proof tool, since Hoare logic is more expressive. This work is a significant application of program verification techniques. The experiment shows that program verification can be applied in high-assurance software development.