ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 September 2012, Volume 49 Issue 9
Paper
False Positive Elimination in Static Defect Detection
Zhao Yunshan, Gong Yunzhan, Zhou Ao, Wang Qian, and Zhou Hongbo
2012, 49(9):  1822-1831. 
False positive ratio is a key factor in measuring the performance of static defect detection tools. Based on an analysis of existing false positive elimination techniques, we put forward a defect detection method that combines the strengths of forward dataflow analysis and backward constraint querying. The forward dataflow analysis generates a conservative dataflow solution, from which an initial defect report is produced. According to the dataflow features at each initial defect location, the potential constraints that might trigger the defect are queried, and the satisfiability of the collected constraints is determined with a general-purpose constraint solver, so the initial "coarse-granularity" detection result is refined. In addition, introducing symbolic execution into the dataflow analysis not only improves its precision, but also simplifies constraint representation and improves constraint-querying efficiency. Comparative experiments on 11 benchmarks from SPEC CPU2000 show that our method efficiently eliminates part of the false positives with an acceptable overhead increase, and several comparisons with similar tools demonstrate the scalability of our method.
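A minimal sketch of the backward constraint-query step, assuming Z3 as the general-purpose constraint solver; the variable, path condition, and defect condition below are invented for illustration, not taken from the paper:

    # Refine a "coarse" defect report with a constraint solver. Suppose forward
    # dataflow analysis flagged a possible division by zero in y = 100 / (x - 5)
    # on a path where x > 0.
    from z3 import Int, Solver, sat

    x = Int("x")
    s = Solver()
    s.add(x > 0)        # path condition gathered during dataflow analysis
    s.add(x - 5 == 0)   # queried constraint under which the defect manifests

    # Satisfiable: the initial report is a feasible defect;
    # unsatisfiable: it is a false positive and can be eliminated.
    print("real defect" if s.check() == sat else "false positive")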
Memory Leak Detection for Heap-Manipulating Programs Based on Local Heap Abstraction
Dong Longming, Wang Ji, Chen Liqian, and Dong Wei
2012, 49(9):  1832-1842. 
Heap-manipulating programs perform many operations on shared and mutable data structures, such as allocation, combination, separation, and deletion. Memory leak detection for these programs therefore requires precise field-sensitive pointer alias information, which is complex and hard to obtain. A novel field-sensitive heap abstraction based on extended pointer types is proposed for heap-manipulating programs in this paper. The approach computes the local heap layout around pointer variables and therefore supports local reasoning about the heap. Pointer alias sets are computed for the memory cells reachable from a pointer along various pointer fields within a given abstract distance domain. Operational semantics for all basic statements are defined over the extended pointer types, and a new algorithm performs a typical forward dataflow iteration to detect memory leaks. The algorithm supports both intra- and inter-procedural analysis. We have implemented a prototype tool (Heapcheck) for C programs in the Crystal open compiler framework that detects memory leaks through different pointer fields in complex data structures. Experimental evaluation on a set of C benchmark programs shows that the proposed approach has better scalability and precision than existing work.
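A toy leak check in the same spirit, far simpler than the paper's field-sensitive abstraction; the statement encoding and function name are invented:

    # Track which allocation sites are still referenced or freed at exit;
    # anything else is reported as a leak.
    def detect_leaks(stmts):
        points_to = {}                    # variable -> allocation site
        freed, sites = set(), set()
        for i, (op, dst, src) in enumerate(stmts):
            if op == "alloc":             # dst = malloc(...)
                sites.add(i)
                points_to[dst] = i
            elif op == "assign":          # dst = src
                points_to[dst] = points_to.get(src)
            elif op == "free":            # free(dst)
                if points_to.get(dst) is not None:
                    freed.add(points_to[dst])
                points_to[dst] = None
        live = {s for s in points_to.values() if s is not None}
        return sorted(sites - freed - live)

    # The first allocation is overwritten without being freed: site 0 leaks.
    print(detect_leaks([("alloc", "p", None),
                        ("alloc", "p", None),
                        ("free", "p", None)]))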
Automatically Generating Error-Traceable Test Cases Based on Compiler
He Yanxiang, Chen Yong, Wu Wei, Xu Chao, and Wu Libing
2012, 49(9):  1843-1851. 
Automatic test case generation is an important guarantee for automated testing and for verifying the trustworthiness of software. After analyzing existing methods of automatic test case generation, we propose a compiler-based, error-traceable method for generating test cases automatically. The method relies on the compiler and inserts the test requirements into the source code by extending its existing syntax and semantics. When the code generator emits the object code, it simultaneously generates test cases according to the results of the test-requirement analysis. The method thus unifies test case generation and object code generation inside the compiler. This avoids the overhead of the information-transfer interface that is usually needed when a stand-alone test case generator reuses another compiler's analysis results. At the same time, when a test case fails, the programmer can correct the code conveniently, because the error position can be located quickly from the error-tracking information in the source code. Finally, a sample program illustrates the concrete process of this method and shows that it is effective.
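A toy of the error-traceability idea using Python's own compiler machinery; the function, trace format, and failure case are invented, and the paper works inside a conventional compiler rather than like this:

    # Rewrite a function's AST so each statement records its source line before
    # executing; a failing run can then be traced back to source positions.
    import ast, textwrap

    src = textwrap.dedent("""
    def f(x):
        y = x + 1
        return 10 / y
    """)
    tree = ast.parse(src)

    class Tracer(ast.NodeTransformer):
        def visit_FunctionDef(self, node):
            new_body = []
            for stmt in node.body:
                marker = ast.parse(f"_trace.append({stmt.lineno})").body[0]
                new_body += [marker, stmt]
            node.body = new_body
            return node

    tree = Tracer().visit(tree)
    ast.fix_missing_locations(tree)
    env = {"_trace": []}
    exec(compile(tree, "<traced>", "exec"), env)
    try:
        env["f"](-1)                       # y becomes 0, so 10 / y raises
    except ZeroDivisionError:
        print("error after source lines", env["_trace"])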
Application of Interval Arithmetic in Software Testing Based on Field-Sensitive Point-to Analysis
Zhou Hongbo, Jin Dahai, and Gong Yunzhan
2012, 49(9):  1852-1862. 
Static analysis cannot obtain the values variables take during actual execution, because it does not execute the source code, so defects related to variable values are difficult to detect. Although symbolic execution and interval arithmetic can approximate the ranges of variable values in an executing program, the members of structures, arrays, etc. cannot be described independently, which makes the dataflow analysis field-insensitive and generates many false negatives. The interval arithmetic model based on field-sensitive point-to analysis improves the classical model by splitting complex data types into separate variables, and proposes a type system with a set of abstract values that describes the ranges of variable values conservatively. When computing the dataflow, combined with the abstract syntax of assignment statements, we also propose a type derivation algorithm and apply the type system in the defect testing systems DTSGCC and DTSCPP. We choose DTSCPP as the experimental platform and six C++ open-source projects as test objects. Experimental results show that our method can efficiently lower the false negative ratio while keeping the analysis time essentially constant.
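A minimal interval-arithmetic sketch of the field-splitting idea; the Interval class and the struct fields are invented for illustration:

    # Field-sensitive abstraction: each struct member gets its own interval,
    # so updating s.x does not widen the range tracked for s.y.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    env = {"s.x": Interval(0, 10), "s.y": Interval(-5, 5)}
    env["s.x"] = env["s.x"] + Interval(1, 1)    # models s.x = s.x + 1
    print(env)   # a later check on s.y keeps its precise range [-5, 5]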
Verifying the Correctness of Loop Optimization Based on Extended Logic Transformation System μTS
Wang Changjing
2012, 49(9):  1863-1873. 
Loop optimization plays an important role in improving cache performance, exploiting parallel processing capabilities, and reducing loop execution overhead. Verifying the correctness of modern compilers' loop optimizations is a challenge for trustworthy compilation. Formally verifying a full-fledged optimizing compiler is infeasible in practice. Rather than verifying the compiler itself, we formally verify, after every run of the loop transformation, that the produced target code is a correct translation of the source program. A novel approach is proposed to verify the correctness of loop optimization based on the extended logic transformation system μTS, which adds a number of derived rules to the logic transformation system TS. After translating the source program and target code into the formal language Radl by predicate abstraction, the derived rules of μTS can verify the correctness of common loop transformations such as loop fusion, loop distribution, loop interchange, loop reversal, loop splitting, loop peeling, loop alignment, loop unrolling, loop tiling, loop unswitching, and loop-invariant code motion. Loop optimization can be regarded as a composition of a series of loop transformations, so μTS can verify the correctness of loop optimization as a whole. Furthermore, an aided certification algorithm is put forward to verify the correctness of loop optimization automatically and produce its proof. Finally, the approach is elaborated on a typical example. Practical results demonstrate its effectiveness. The approach has instructive significance for designing highly trusted optimizing compilers.
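The property being verified can be made concrete with a differential check of one transformation; this sketch only tests equivalence on a sample input, whereas μTS establishes it by rule derivation:

    # Loop interchange: the target must compute the same result as the source.
    import numpy as np

    def source(a):                    # original loop nest
        n, m = a.shape
        for i in range(n):
            for j in range(m):
                a[i, j] += i * j
        return a

    def target(a):                    # after loop interchange
        n, m = a.shape
        for j in range(m):
            for i in range(n):
                a[i, j] += i * j
        return a

    a = np.arange(12.0).reshape(3, 4)
    assert np.array_equal(source(a.copy()), target(a.copy()))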
Dynamical Integrity of Codes: Model and Method
Wu Hao and Wu Guoqing
2012, 49(9):  1874-1882. 
In information security and trusted computing research, measuring the dynamic integrity of code under adversaries' attacks is an important problem, especially in contexts such as wireless sensor networks and cloud computing. In open and loosely coupled systems, this issue is even more pressing. Hardware-based trusted computing technologies and (software-based) code attestation methods can successfully measure some static, code-oriented properties of a program, but fail to measure the integrity of behaviors during code execution. This situation is partly due to the lack of a proper theory of dynamic integrity, which calls for sound and precise concepts and models for measuring it. In this paper, we develop the concept of dynamic integrity from an analysis of attacks and a model of dynamic integrity measurement, and use the provable-security approach from cryptography to give the concept a formal definition. We also discuss its relationship with several other cryptographic primitives, such as message authentication code schemes and code obfuscation. Based on the model, a compiler-aided flow embedding method is proposed to achieve a weak form of dynamic integrity, and its security and efficiency are analyzed under reasonable assumptions. Finally, we discuss how to compile any program enhanced by the dynamic integrity measurement scheme and issues concerning the compilation process.
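A toy conveying the flavor of flow embedding, assuming an HMAC as the underlying message authentication code; the key, block names, and update rule are invented, not the paper's scheme:

    # Each executed basic block folds its identifier into a running MAC, so a
    # verifier holding the key can check the reported execution path.
    import hmac, hashlib

    KEY = b"shared-secret"            # illustration only

    def update(tag, block_id):
        return hmac.new(KEY, tag + block_id.encode(), hashlib.sha256).digest()

    tag = b"\x00" * 32
    for block in ["entry", "loop", "loop", "exit"]:   # path actually executed
        tag = update(tag, block)

    expected = b"\x00" * 32
    for block in ["entry", "loop", "loop", "exit"]:   # path the verifier expects
        expected = update(expected, block)
    print(hmac.compare_digest(tag, expected))         # True: behavior matches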
A Multi-Subpopulation PSO Immune Algorithm and Its Application on Function Optimization
Wu Jianhui, Zhang Jing, Li Renfa, and Liu Zhaohua
2012, 49(9):  1883-1898. 
The basic particle swarm optimization (PSO) algorithm, a global parallel optimizer notable for high performance, simplicity, robustness, and independence from problem-specific information, has been widely used in computer science, scheduling optimization, function optimization, and other fields. However, for complex optimization problems the basic PSO algorithm suffers from premature convergence, stagnation, and slow convergence in the later stages of evolution. To overcome the premature convergence problem, drawing on the ideas of multiple subpopulations and self-adaptation, a novel multi-subpopulation adaptive polymorphic crossbreeding particle swarm optimization immune algorithm (MAPCPSOI) based on a two-layer model is proposed. First, through bottom-layer adaptive polymorphic crossbreeding PSO operations on several subpopulations, MAPCPSOI improves the diversity of subpopulation distributions and effectively suppresses premature convergence and stagnation. Second, through top-layer immune clonal selection over the subpopulations, it significantly improves global optimization performance and further enhances convergence precision. Compared with other improved PSO algorithms, simulation results on function optimization show that MAPCPSOI converges faster and achieves higher solution precision, and is especially suitable for high-dimensional, multimodal optimization problems.
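For orientation, a minimal basic-PSO sketch on a toy function (this is the baseline algorithm, not MAPCPSOI; parameter values are conventional defaults, not the paper's):

    import random

    def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
        X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        V = [[0.0] * dim for _ in range(n)]
        P = [x[:] for x in X]                 # personal bests
        g = min(P, key=f)[:]                  # global best
        for _ in range(iters):
            for k in range(n):
                for d in range(dim):
                    V[k][d] = (w * V[k][d]
                               + c1 * random.random() * (P[k][d] - X[k][d])
                               + c2 * random.random() * (g[d] - X[k][d]))
                    X[k][d] += V[k][d]
                if f(X[k]) < f(P[k]):
                    P[k] = X[k][:]
                    if f(P[k]) < f(g):
                        g = P[k][:]
        return g

    sphere = lambda x: sum(v * v for v in x)
    print(pso(sphere))                        # converges near the origin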
A Fast Heuristic Parallel Ant Colony Algorithm for Circles Packing Problem with the Equilibrium Constraints
Li Ziqiang, Tian Zhuojun, Wang Yishou, and Yue Benxian
2012, 49(9):  1899-1909. 
The circles packing problem with equilibrium constraints is difficult to solve due to its NP-hard nature. A fast heuristic parallel ant colony algorithm is proposed for this problem. Both circle radius and mass are taken as probability factors in roulette selection, and circles are placed around the existing circles along the periphery in counter-clockwise order. Diverse population individuals (each containing no more than three overlapping circles) are constructed through the proposed heuristic method. Ant colony optimization combined with a parallel search mechanism is adopted to obtain an optimal solution, or an approximate optimal solution with one to three overlapping circles. A steepest descent method based on a physical model then adjusts the approximate solution into an overlap-free optimal one. The combination of the heuristic strategy, the parallel ant colony search mechanism, and the fast adjustment strategy improves the computational precision and efficiency of the proposed algorithm. Experimental results show that the proposed algorithm outperforms existing algorithms.
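A sketch of the roulette step alone; the weight combining radius and mass is an invented stand-in for the paper's probability factors:

    import random

    def roulette_pick(circles):
        # circles: list of (radius, mass); larger/heavier circles are more
        # likely to be selected for placement
        weights = [r * m for r, m in circles]
        pick = random.uniform(0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if pick <= acc:
                return i
        return len(circles) - 1

    print(roulette_pick([(1.0, 2.0), (3.0, 1.0), (0.5, 0.5)]))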
Multiple Objects Event Detection over RFID Data Streams
Peng Shanglian, Li Zhanhuai, Li Qiang, Chen Qun, and Liu Hailong
2012, 49(9):  1910-1925. 
Complex event processing is a data analysis technique widely applied in time-critical applications such as RFID-enabled object tracking, stock trend prediction, and network intrusion detection. In an RFID-enabled monitoring system, RFID objects are typically tracked with complex event queries. Existing RFID complex event processing techniques mainly focus on event detection and optimization over a single RFID object. However, in many RFID scenarios (such as an RFID-enabled office or an auto assembly line), complex event sequences over multiple co-located and correlated objects are often subscribed to for consistency checking and regularity requirements. In this paper, event processing over multiple correlated RFID objects is investigated. To support query definitions over multiple correlated objects, the semantics of existing event operators are extended. With pattern transformation rules, non-linear multi-object patterns are transformed into linear ones that can be evaluated within a unified framework. A multiple correlated-objects complex event query evaluation model called NFA_b2 and the corresponding event detection algorithms are proposed. By pushing check-point constraint checks down and postponing co-location constraints in the detection process, unviable runtime instances and the pattern-matching search space are greatly reduced, saving substantial CPU time. Empirical comparisons between the proposed multi-object event detection algorithm and a slightly adapted version of the popular event detection algorithm SASE illustrate both the efficiency and the soundness of the proposed algorithm.
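A toy SEQ(A, B) detector in the NFA style of SASE-like systems; the event encoding is invented, and the paper's NFA_b2 model with check-point and co-location constraints is far richer:

    # A partial match opens on an A event and completes on a B event for the
    # same object, mimicking how runtime instances are correlated.
    def detect_seq(stream):
        partial, matches = [], []
        for event in stream:                  # event: (type, object_id, time)
            etype, obj, t = event
            if etype == "A":
                partial.append(event)
            elif etype == "B":
                matches += [(p, event) for p in partial if p[1] == obj]
        return matches

    print(detect_seq([("A", 1, 0), ("A", 2, 1), ("B", 1, 2)]))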
Mining Frequent Subtree on Paging XML Data Stream
Lei Xiangxin, Yang Zhiying, Huang Shaoyin, and Hu Yunfa
2012, 49(9):  1926-1936. 
With the widespread use of XML data streams, discovering knowledge from them becomes important. Compared with other frequent pattern mining tasks, mining frequent subtrees over large-scale XML documents and unboundedly growing XML data streams faces particular difficulties: the data stream cannot be held in memory as a whole, and mining a partitioned XML data stream must account for the semi-structured characteristics of XML data. Motivated by this, Tmlist is proposed for mining frequent subtrees over a paged XML data stream. Tmlist pages the XML data stream, manages cross-page nodes and frequent candidate subtrees that grow across pages, and mines frequent subtrees page by page. Frequent candidate subtrees grow by inserting frequent candidate nodes into their rightmost path according to the level of their roots, avoiding the repeated recursive growth of subtrees rooted at low-level nodes. A subtree is represented by the topologic sequence of its rightmost path, which avoids prefix matching when extending subtrees, cutting the cost of storing and matching prefix nodes. Frequent candidate subtrees are selected according to the page minimum support; the support of frequent subtrees is decayed and branches are pruned according to a decay factor. Accordingly, Tmlist reduces the memory cost of mining frequent subtrees within a bounded error and improves memory utilization and mining efficiency.
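A toy of rightmost-path extension under a depth-sequence tree encoding; the encoding and function are invented, and Tmlist's paging and decay logic are not shown:

    # A subtree encoded as (depth, label) pairs is only extended by attaching a
    # new node along its rightmost path, so each candidate arises exactly once.
    def rightmost_extensions(code, labels):
        max_depth = code[-1][0] + 1       # at most a child of the rightmost node
        return [code + [(d, lab)]
                for d in range(1, max_depth + 1)
                for lab in labels]

    for ext in rightmost_extensions([(0, "a"), (1, "b")], ["a", "b"]):
        print(ext)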
An LDA Based Approach to Detect the Low-Quality Reply Posts in Web Forums
Han Xiaohui, Ma Jun, Shao Haimin, and Xue Ran
2012, 49(9):  1937-1946. 
Web forums are one of the major types of social media in Web 2.0. However, user-generated content in Web forums varies widely in quality, ranging from excellent, detailed opinions to off-topic content or swearing. Therefore, a novel LDA (latent Dirichlet allocation) based approach is proposed in this paper to detect low-quality posts in Web forums. Compared with previous methods, the new one uses both semantic and statistical features of a post to evaluate its quality. The semantic features include Junk-Insignificant (JI) topic proportion, topic uncertainty, and topic relevance, which are computed in the LDA topic space to overcome the ineffectiveness of TF·IDF-based features on short texts. An LDA model is first built to predict the topic distribution of each post; the semantic features of a post are then computed from its topic distribution. The statistical features comprise surface, syntactic, and forum-specific features of posts, selected based on an analysis of post contents. Since detecting low-quality posts can be treated as a binary classification problem, an SVM is used to filter the low-quality posts. Experimental results on three different datasets show that the new approach outperforms previous ones in terms of precision, recall, and F1.
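A minimal sketch of the pipeline shape (topic-space features feeding an SVM) with scikit-learn stand-ins; the toy posts and labels are invented, and the paper's JI, uncertainty, and relevance features would be derived from the same per-post topic distributions:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.svm import SVC

    posts = ["great detailed answer about compilers",
             "buy cheap stuff here",
             "totally off topic rant",
             "useful explanation of dataflow analysis"]
    labels = [1, 0, 0, 1]                 # 1 = acceptable quality (toy labels)

    tf = CountVectorizer().fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    theta = lda.fit_transform(tf)         # per-post topic distributions
    clf = SVC().fit(theta, labels)        # classify posts in LDA topic space
    print(clf.predict(theta))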
A Secure Top-k Query Protocol in Two-Tiered Sensor Networks
Li Rui, Lin Yaping, Yi Yeqing, Xiong Shuai, and Ye Songtao
2012, 49(9):  1947-1958. 
In two-tiered sensor networks, a storage node not only collects data items from sensors but also handles queries from the sink. In a hostile environment, storage nodes may be compromised by attackers and then instructed to reveal sensitive data collected by sensors and to return incomplete or forged query results to the sink. A secure Top-k query protocol named SecTQ is proposed in this paper. SecTQ enables storage nodes to process queries correctly while preventing them from leaking the sensitive data collected by sensors. To preserve privacy, the direct comparison of data items collected by different sensors is transformed into a comparison between the collected data items and query-comparison values provided by the sink. Moreover, a polynomial-function-based preserving scheme is proposed to encode both the data items collected by sensors and the query-comparison values of the sink, which allows the storage nodes to perform queries without knowing the actual values of either. To preserve integrity, a linking-watermark technique is proposed to effectively verify the integrity of query results.
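A toy order-preserving polynomial encoding conveying the flavor of such schemes (the polynomial is invented, and SecTQ's actual construction is more involved): for nonnegative readings, a shared strictly increasing polynomial preserves comparisons, so Top-k can be ranked on encodings alone.

    def encode(v, coeffs=(3, 5, 7)):      # f(v) = 3v^2 + 5v + 7, shared secret
        return coeffs[0] * v * v + coeffs[1] * v + coeffs[2]

    readings = [17, 42, 8, 99]            # sensor values (never sent in clear)
    encoded = [encode(v) for v in readings]
    # The storage node ranks encodings without learning the plaintext values.
    top2 = sorted(encoded, reverse=True)[:2]
    print(top2)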
Passive RFID Tag Anti-Collision Binary Tree Slotted Protocol without Tags Quantity Estimation
Wu Haifeng, Zeng Yu, and Feng Jihua
2012, 49(9):  1959-1971. 
To enhance the tag identification throughput of radio frequency identification (RFID) systems and reduce computational complexity, this paper proposes three novel tag anti-collision protocols for passive RFID systems: a dynamic binary tree slotted protocol, an adaptive binary tree slotted protocol, and a splitting binary tree slotted protocol. All three protocols adopt the binary tree slotted algorithm: tags first select random slots; if tags collide in a slot, the colliding tags are resolved immediately by binary tree splitting while the other tags wait until the collision is resolved. The three protocols then use a dynamic, an adaptive, and a splitting method, respectively, to adjust the frame length to a value suited to the number of tags; when the frame length is close to the number of tags, system throughput approaches its maximum. The advantage of the proposed protocols is that they do not require tag quantity estimation, and their throughput is not affected by variance in tag quantity. Computer simulation results show that the proposed protocols' throughput reaches 0.425, which is higher than the conventional dynamic framed slotted ALOHA protocol and the tree slotted ALOHA protocol with tag estimation. The results also show that the throughput varies little as the number of tags increases from 5 to 1 000.
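A toy simulation of the binary tree slotted idea; the splitting model is simplified and the dynamic frame-length adjustment is omitted:

    import random

    def resolve(tags):
        # binary-tree resolution of one slot; returns slots consumed
        if len(tags) <= 1:
            return 1
        left = [t for t in tags if random.random() < 0.5]
        right = [t for t in tags if t not in left]
        return 1 + resolve(left) + resolve(right)

    def bts_frame(n_tags, frame_len):
        slots = [[] for _ in range(frame_len)]
        for t in range(n_tags):
            slots[random.randrange(frame_len)].append(t)
        used = sum(resolve(s) if s else 1 for s in slots)
        return n_tags / used              # throughput: identified tags per slot

    # Throughput peaks when the frame length is close to the tag count.
    print(bts_frame(100, 100))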
Traffic Anomaly Detection Using Multi-Dimensional Entropy Classification in Backbone Network
Zheng Liming, Zou Peng, Han Weihong, Li Aiping, and Jia Yan
2012, 49(9):  1972-1981. 
Traffic anomaly detection in high-speed backbone networks requires not only a high detection rate but also a low false alarm rate. A multi-dimensional entropy classification method is proposed to meet this demand, using entropy to measure the distribution of traffic along several dimensions. An efficient algorithm estimates the entropy with low computational and space complexity. The entropy values of all dimensions in each sliding window form a detection vector, and all detection vectors are classified into abnormal and normal groups by a one-class support vector machine. To improve accuracy and reduce the false positive rate, a multi-window correlation algorithm computes a comprehensive anomaly score over a sequence of windows. Real-world traces are used to validate and evaluate the effectiveness and accuracy of the detection system in two experiments. Results of the first experiment demonstrate the effectiveness of the detection system and show that time and space costs grow relatively flatly as traffic and attacks increase. In the second experiment, the accuracy of the system is compared with existing systems, and our system proves the most accurate.
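A sketch of the per-window feature step; the traffic dimensions shown are examples, and the one-class SVM and multi-window scoring stages are not included:

    import math
    from collections import Counter

    def entropy(values):
        counts, n = Counter(values), len(values)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    # One sliding window of (source IP, destination port) pairs.
    window = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.1", 443)]
    detection_vector = (entropy([s for s, _ in window]),   # source-IP entropy
                        entropy([p for _, p in window]))   # dest-port entropy
    print(detection_vector)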
Anomaly Detection of User Behavior Based on Shell Commands and Co-Occurrence Matrix
Li Chao, Tian Xinguang, Xiao Xi, and Duan Miyi
2012, 49(9):  1982-1990. 
Anomaly detection of user behavior is one of the major concerns of system security research. Anomaly detection systems establish a normal behavior profile of a subject (e.g., a user), compare the subject's observed behavior with the profile, and signal intrusions when the observed behavior differs significantly from the profile. One problem with anomaly detection is that it tends to raise many false alarms: unusual but legitimate use may be considered anomalous. This paper proposes a novel method for anomaly detection of user behavior, applicable to host-based intrusion detection systems that use shell commands as audit data. Considering the properties and uncertainty of user behavior, the method hierarchically merges shell command tokens into sets, obtaining an event sequence over a smaller event alphabet, and then profiles the user's normal behavior with a partly normalized co-occurrence matrix. In the detection stage, a normalized co-occurrence matrix is constructed for the current event sequence; the distances between these matrices and the profile matrix are calculated using the second matrix norm, filtered with sliding windows, and used to decide whether the monitored user's behavior is normal or anomalous. Experimental results on the Purdue University and SEA datasets show that the proposed method achieves higher detection accuracy, requires less memory, and runs faster than traditional methods.
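A sketch of the profiling and distance computation; the command alphabet, window width, and normalization are invented stand-ins for the paper's merged event sets:

    import numpy as np

    def cooccurrence(events, alphabet, window=2):
        idx = {e: i for i, e in enumerate(alphabet)}
        m = np.zeros((len(alphabet), len(alphabet)))
        for i, e in enumerate(events):
            for f in events[i + 1:i + 1 + window]:
                m[idx[e], idx[f]] += 1
        return m / max(m.sum(), 1)        # normalized matrix

    alphabet = ["ls", "cd", "vi", "rm"]
    profile = cooccurrence(["ls", "cd", "vi", "cd", "ls"], alphabet)
    observed = cooccurrence(["rm", "rm", "rm", "cd"], alphabet)
    # Second (spectral) matrix norm of the difference: large -> anomalous.
    print(np.linalg.norm(observed - profile, 2))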
A Detection Model Based on Petri Nets of SMER Constraints Violation in Dynamic Role Translation
Liu Meng, Wang Xuan, Huang Hejiao, Zhao Hainan, and Zhang Jiajia
2012, 49(9):  1991-1998. 
Kapadia et al. proposed the IRBAC (interoperable role-based access control) 2000 model, which accomplishes secure interoperation between two or more administrative domains via role association and dynamic role translation. Separation of duties (SoD) is one of the three basic security principles supported by the RBAC (role-based access control) model. However, SSoD (static separation of duties) is not considered in the IRBAC 2000 model, so violations of inter-domain static mutually exclusive role constraints can arise during cross-domain interoperation. This problem has been discussed in the literature, but previous studies approach it through mathematical logic and logical reasoning, which is abstract, complicated, and unintuitive. Building on these studies, this paper introduces a Petri-net-based method that makes the SMER (static mutually exclusive roles) constraint violation problem easy to analyze and visualize. A construction algorithm converts an IRBAC 2000 model into a corresponding Petri net model, and the necessary and sufficient condition for SMER constraint violations of the IRBAC 2000 model in the Petri net model is proposed and proved. A Petri-net-based detection model for SMER constraint violations is also presented, and a case study illustrates the efficiency of the proposed model. To avoid SMER constraint violations when adding new role associations or user-role assignments, the prerequisites that guarantee security are also discussed and analyzed in detail.
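A graph-reachability toy for the violation condition itself (the Petri net construction is not shown; the domains, roles, and associations are invented):

    def reachable(start, edges):
        seen, stack = set(), [start]
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack += edges.get(r, [])
        return seen

    assoc = {"A.r1": ["B.r3"], "B.r3": ["B.r4"]}    # role associations
    user_roles = ["A.r1"]                           # user-role assignment
    acquired = set().union(*(reachable(r, assoc) for r in user_roles))
    smer = {"B.r3", "B.r4"}                         # mutually exclusive pair
    print("violation" if smer <= acquired else "safe")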
Dynamic Multi-Secret Sharing Scheme Based on Cellular Automata
Zhou Yousheng, Wang Feng, Qing Sihan, Yang Yixian, and Niu Xinxin
2012, 49(9):  1999-2004. 
To address the insecurity and inflexibility of previous cellular automata based multi-secret sharing schemes, a verifiable dynamic multi-secret sharing scheme is presented in this paper. In the proposed scheme, participants' shares can be reused, reducing the dealer's computation cost. New participants or new secrets can be added to the system without updating the shares of the original participants. Cheating by the dealer or a participant can be detected and identified during share distribution and secret reconstruction. These features improve both the success probability of secret reconstruction and the security of the scheme.
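A toy reversible memory cellular automaton of the kind such schemes build on (the rule and sizes are invented): because the update s_{t+1} = F(s_t) XOR s_{t-1} is invertible, running the automaton backwards from two consecutive configurations recovers the initial one.

    def step(cur, prev):
        # each cell XORs its two cyclic neighbors, then XORs the memory state
        n = len(cur)
        return [(cur[(i - 1) % n] ^ cur[(i + 1) % n]) ^ prev[i] for i in range(n)]

    secret = [1, 0, 1, 1, 0, 0, 1, 0]
    seed   = [0, 1, 1, 0, 1, 0, 0, 1]     # random companion configuration
    a, b = secret, seed
    for _ in range(5):                    # iterate forward; the last two
        a, b = b, step(b, a)              # configurations act as shares
    share1, share2 = a, b

    x, y = share2, share1                 # reconstruction: run backwards
    for _ in range(5):
        x, y = y, step(y, x)
    print(y == secret)                    # True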
eTissue: An Adaptive Reconfigurable Bio-Inspired Hardware Architecture
Xu Jiaqing, Dou Yong, Lü Qi, and Feng Xue
2012, 49(9):  2005-2017. 
In the field of fault tolerance, adaptive bio-inspired hardware has sprung up in recent years. The robustness of the human blood system derives from three mechanisms: substitution among homogeneous cells, differentiation of adult stem cells, and conversion between heterogeneous cells. Inspired by these mechanisms, this paper presents a bio-inspired adaptive reconfigurable hardware architecture named electronic tissue (eTissue). Unlike existing multicellular arrays, eTissue recognizes and processes data based on data tags, which loosely couples operations to processing elements and thus gives eTissue a flexible cell replacement capability. On top of this capability we implement substitution among homogeneous cells, differentiation of adult stem cells, and conversion between heterogeneous cells. These mechanisms compose the hierarchical self-healing of eTissue, while its self-evolution derives from the differentiation and conversion mechanisms. We implement an eTissue prototype system on an FPGA and conduct fault-injection experiments to demonstrate its self-healing and self-evolution capabilities. Finally, we analyze and discuss the robustness of eTissue.
An Optimized Code-Generating Algorithm for Reconfigurable Instruction Set Processors
Zhang Huizhen, Wang Chao, Li Xi, and Zhou Xuehai
2012, 49(9):  2018-2026. 
Reconfigurable instruction set processors (RISP) can adapt to the performance and flexibility requirements of various applications. However, traditional compiling technology cannot generate efficient executable code for them, and a new code-generating method for compilers is needed. This paper presents a mixed code-generating algorithm that extends the traditional three-step method of compilation. The algorithm reuses the traditional method as far as possible and generates optimized binary code for dynamically extended reconfigurable instructions according to the reconfigurable resources and configuration, yielding executable code suited to the characteristics of the target architecture. Experiments demonstrate that the algorithm effectively couples hardware reconfiguration with the instruction code, and that it can enhance the performance of applications running on the new platform.
A Novel Flow for Asynchronous Circuit Design Using Synchronous EDA Tools
Wang Yourui, Shi Wei, Wang Zhiying, Lu Hongyi, and Su Bo
2012, 49(9):  2027-2035. 
With the rapid development of VLSI technology and the great increase in application requirements, synchronous circuits with a global clock have encountered several crucial problems, such as large clock skew and high power dissipation. Heightened interest in low power consumption and growing concern over clock skew have encouraged the use of asynchronous techniques as a viable approach to future circuit design. But the wide acceptance of asynchronous circuits has been hindered by the lack of commercial EDA tools. Since current design methodologies are ill-suited to VLSI design and implementation, a novel design flow for asynchronous circuits is presented. By fully reusing conventional EDA tools, this flow improves design efficiency and reduces design difficulty. To obtain an optimized circuit structure, a multi-virtual-clock method is used for logic synthesis, and the delay of the asynchronous control path is accurately matched through a quantitative analysis approach. Based on this flow, an asynchronous microprocessor core is designed and implemented in a UMC 0.18μm process. Experimental results show that the flow makes asynchronous circuit design easy and effective, and that the asynchronous circuit style can offer a low-power solution for future microprocessor design.
Applying Batch Processing to Improve Performance of iSCSI Storage System
Han Yong, Yao Nianmin, and Diao Ying
2012, 49(9):  2036-2043. 
iSCSI storage systems carry the SCSI protocol over an IP network, but transporting large numbers of small I/Os, such as SCSI commands, status messages, and tiny data payloads, over the network seriously degrades storage system performance. The conventional batching method combines several small I/Os into a large one, but it lacks quantitative analysis, and the batch-size parameter K is set by experience. In this paper, a mathematical model of SCSI command batching is first built using queuing theory. A software module called K-batch is then designed within the iSCSI storage protocol stack. Finally, the iSCSI storage system with K-batch is tested under network scenarios built on ns-2. In the experiments, as the command arrival rate changes, K-batch adjusts the value of K in real time. The results show that the mean response time of SCSI commands is significantly reduced, so the batching strategy can improve the performance of iSCSI storage systems.
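A toy model of the trade-off K-batch tunes (the cost constants and formula are assumptions, not the paper's queuing model): batching K commands amortizes per-message network overhead but adds waiting time while a batch fills.

    def mean_response(lam, K, per_msg_cost=1.0, per_cmd_cost=0.1):
        fill_wait = (K - 1) / (2 * lam)             # avg wait for batch to fill
        service = per_msg_cost / K + per_cmd_cost   # overhead amortized over K
        return fill_wait + service

    lam = 50.0                                      # command arrival rate (1/s)
    best_K = min(range(1, 65), key=lambda K: mean_response(lam, K))
    print(best_K, mean_response(lam, best_K))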