ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2009, Volume 46 Issue 2
Paper
FPGA-Based Parallel Real-Time System for 10Gbps Traffic Processing
Wang Jiandong, Zhu Chao, Xie Yingke, Han Chengde, and Zhao Zili
2009, 46(2):  177-185. 
As network link speeds increase, traditional application systems such as intrusion detection and traffic audit are unable to process high-bandwidth traffic. Proposed in this paper is a novel parallel architecture for processing 10Gbps traffic in real time. In this architecture, OC192 traffic is first classified and then filtered; the filtered traffic is load-balanced to backend processors for detailed processing, and various statistics are collected throughout these procedures. A universal range-supported split TCAM structure (RSTCAM) is designed for the classification unit. In RSTCAM, each classification rule is split into 5 separate sub-rules according to its fields, and these sub-rules are stored in 5 separate TCAMs. RSTCAM thereby achieves lower resource usage and lower power consumption, and it is shown that range matching can be implemented conveniently in RSTCAM. A novel feedback-based dynamic load balancing algorithm (FDLB) is also implemented in this architecture. FDLB dispatches traffic to backend processors according to their loads. As many applications require session integrity, FDLB guarantees it through table-based hashing: traffic is dispatched to a specific processor by hashing its source and destination IP addresses. A prototype of the architecture is implemented in an FPGA. Experimental results show that the whole system sustains OC192 traffic throughput with a processing delay of 4.2 microseconds.
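For illustration only, a minimal sketch of the table-based hash dispatch idea described above; the table size, hash function, and rebalancing policy below are assumptions for the sketch, not the paper's FDLB implementation:

    # Illustrative sketch of table-based hash dispatch (assumed details, not the
    # paper's FDLB): flows are mapped to a table slot by their source/destination
    # IP pair, and each slot points to a backend processor, so packets of one
    # session always reach the same processor; rebalancing only re-points slots.
    TABLE_SIZE = 256

    def slot_of(src_ip: int, dst_ip: int) -> int:
        """Hash the (source, destination) address pair to a dispatch-table slot."""
        return (src_ip ^ dst_ip) % TABLE_SIZE

    class HashDispatcher:
        def __init__(self, num_processors: int):
            # Initially spread slots evenly over the backend processors.
            self.table = [i % num_processors for i in range(TABLE_SIZE)]
            self.load = [0] * num_processors

        def dispatch(self, src_ip: int, dst_ip: int, pkt_len: int) -> int:
            proc = self.table[slot_of(src_ip, dst_ip)]
            self.load[proc] += pkt_len      # feedback: track per-processor load
            return proc

        def rebalance(self):
            # Toy feedback step: move one slot from the busiest to the idlest processor.
            busiest = max(range(len(self.load)), key=self.load.__getitem__)
            idlest = min(range(len(self.load)), key=self.load.__getitem__)
            for s, p in enumerate(self.table):
                if p == busiest:
                    self.table[s] = idlest
                    break
            self.load = [0] * len(self.load)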
Replication Strategies in Data Grid Systems with Clustered Demands
Jiang Jianjin and Yang Guangwen
2009, 46(2):  186-193. 
In data grid systems, the data usage pattern plays an important role in system performance. According to recent traces of real systems, data requests and replica distributions exhibit clustering properties. Considered in this paper is the relationship between the request distribution and the replica distribution in data grids where requests exhibit clustering properties. First, a formal model of replication strategies in a federated data grid system is given, with cumulative hit ratio and average access latency as the performance metrics. Then the optimal way to replicate data so as to minimize average access latency under clustered requests is investigated. In the sense of minimizing average access latency, it is found that the more popular a file is in a subgrid, the more replicas should be created in that subgrid; furthermore, when requests are distributed uniformly over the system, replicas should be distributed uniformly as well. The optimization model is solved by means of the Lagrange multiplier method and the bisection method, yielding an optimal download replication strategy for clustered demands. The performance of this strategy is compared with that of the uniform, proportional, and square-root replication strategies and the LRU caching strategy through simulation. Simulation results validate the effectiveness of the optimal strategy: compared with these popular strategies, it requires the least wide-area-network bandwidth and achieves the lowest average access latency.
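As an illustration of the Lagrange-multiplier treatment mentioned above (the symbols below are generic and do not reproduce the paper's exact model), a replica-placement problem of this kind can be written as

    \min_{r_1,\dots,r_m} \; \sum_{i=1}^{m} q_i\, T_i(r_i)
    \quad \text{subject to} \quad \sum_{i=1}^{m} r_i \le R,

    \mathcal{L} = \sum_{i=1}^{m} q_i\, T_i(r_i) + \lambda\Big(\sum_{i=1}^{m} r_i - R\Big),
    \qquad \frac{\partial \mathcal{L}}{\partial r_i} = q_i\, T_i'(r_i) + \lambda = 0,

where q_i is the request popularity of file i, T_i(r_i) is the access latency when file i has r_i replicas, and R is the total replica budget; the multiplier lambda that makes the resulting replica counts exhaust R is found numerically, e.g. by bisection.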
Time Optimization Heuristics for Scheduling Budget-Constrained Grid Workflows
Yuan Yingchun, Li Xiaoping, Wang Qian, and Wang Kejian
2009, 46(2):  194-201. 
Workflow scheduling that guarantees the anticipated QoS (quality of service) is a fundamental and complex problem in grids. In this paper, the budget-constrained scheduling of workflows represented by DAGs (directed acyclic graphs) with the objective of time optimization is considered. In general, this optimization problem is NP-hard. Two new iterative heuristics based on priority rules are proposed. According to the property of parallel activities in a DAG, two concepts called TL (top level) and BL (bottom level) are defined. By incorporating them into the priority rule MP (maximum profit), two priority rules, MPTL (maximum profit with top level) and MPBL (maximum profit with bottom level), are designed. They are implemented in iterative heuristics that generate feasible solutions iteratively from leveling-based initial solutions, which require higher total costs. In each step, MPTL and MPBL seek the maximum decrease in total cost with the minimum increase in workflow duration. Computational experiments show that MPTL and MPBL considerably improve the average performance of MP within a few iterations and little additional computation time, and that MPBL outperforms MPTL. The impact of the budget constraint and the number of Web services on the two heuristics is also analyzed.
A Survey of Web Prefetching
Ban Zhijie, Gu Zhimin, and Jin Yu
2009, 46(2):  202-210. 
Web prefetching is one of the key techniques for reducing user access latency and improving the quality of service of the network, and it has become a hot research topic that has gained increasing attention in recent years. The main advantage of prefetching is that it complements existing Web caching mechanisms and overcomes the inherent limitation of Web caching in exploiting the spatial locality of Web accesses. In this paper, Web prefetching techniques are classified from the perspective of where prefetching takes place, and the limitations and strengths of the different types of prefetching are compared and surveyed. Based on a large body of results in the open literature, a basic prefetching model is given and the function of each of its important components is described. The evaluation criteria for Web prefetching are discussed and analyzed in detail. As the key problem of the Web prefetching model, Web prefetching algorithms are roughly classified into several categories, based on dependency graphs, Markov models, data mining, cost functions, and semantic preferences. The advantages and disadvantages of these algorithms are systematically analyzed and compared. Finally, several major issues and research directions of Web prefetching for further exploration are pointed out.
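To illustrate the Markov-model family mentioned above (a generic first-order predictor, not any specific algorithm from the surveyed literature; thresholds and names are assumptions):

    # Minimal first-order Markov prefetching sketch: count page-to-page
    # transitions observed in the access log and prefetch the pages that most
    # often follow the current page, subject to a probability threshold.
    from collections import defaultdict

    class MarkovPrefetcher:
        def __init__(self, threshold: float = 0.2, top_k: int = 2):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.threshold = threshold
            self.top_k = top_k

        def observe(self, prev_page: str, next_page: str):
            self.counts[prev_page][next_page] += 1

        def predict(self, current_page: str):
            followers = self.counts[current_page]
            total = sum(followers.values())
            if total == 0:
                return []
            ranked = sorted(followers.items(), key=lambda kv: kv[1], reverse=True)
            return [p for p, c in ranked[: self.top_k] if c / total >= self.threshold]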
A Prefetching-Based Bandwidth Adaptation Transmission Scheme for the Video Streaming
Xie Jianguo
2009, 46(2):  211-216. 
In recent years, streaming video applications have grown into a huge market, but the dynamic behavior of the IP network's transmission bandwidth makes it difficult to provide perceptually good streaming video quality. The recent FGS and SVC video coding technologies address this problem by distributing the data in enhancement layers over a wide range of bit rates. However, the data rate of FGS-coded video exhibits significant variability, which is compounded by the variability of the available network bandwidth. The streaming system therefore has to accommodate the mismatch caused by both the available bandwidth variability and the encoded video variability. Using prefetching and buffering, this paper first addresses how many encoded bits of the FGS video should be truncated, and how to truncate them, when the available network bandwidth is insufficient. A bandwidth-adaptive video transmission scheme is then developed by minimizing the number of truncated encoded video bits, and the amount of truncated bits is analyzed and shown to be theoretically minimal. The scheme also provides a basis for computing the truncated bit amount in other, similar algorithms. Finally, experimental results show that the scheme is efficient and feasible.
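A toy sketch of bandwidth-driven truncation in the spirit of the idea above; the buffer model and policy are simplifying assumptions for illustration, whereas the paper derives the minimal truncation analytically:

    # Toy sketch: each frame has a base layer that must be sent and an FGS
    # enhancement layer that can be truncated at bit granularity; a prefetch
    # buffer absorbs short-term mismatch, and truncation removes only what the
    # buffer cannot absorb.
    def truncate_plan(base_bits, enh_bits, bandwidth_bits, buffer_bits):
        """Return enhancement bits to send per frame; the base layer is never cut."""
        plan = []
        backlog = 0  # bits queued beyond what the channel has drained, bounded by the buffer
        for b, e, bw in zip(base_bits, enh_bits, bandwidth_bits):
            budget = bw + buffer_bits - backlog     # bits we may still push this frame
            send_enh = max(0, min(e, budget - b))   # truncate the enhancement layer only
            backlog = max(0, backlog + b + send_enh - bw)
            plan.append(send_enh)
        return plan

For example, with a constant bandwidth smaller than the per-frame source rate, the plan truncates just enough enhancement bits each frame to keep the backlog within the buffer.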
Centroid-Based Focused Crawler with Incremental Ability
Wang Hui, Zuo Wanli, Wang Huiyu, Ning Aijun, Sun Zhiwei, and Man Chunlei
2009, 46(2):  217-224. 
How to crawl the Web selectively is studied in this paper. Document feature weights and centroid feature weights are calculated based on the proposed TFIDF-2 model and three heuristic rules, Max, Ave, and Sum. Once these two weights are obtained, a centroid vector corresponding to a root set can easily be constructed. The centroid vector is then used as a front-end classifier to guide a focused crawler. First, the front-end classifier and the back-end classifier are used separately to score the anchor texts of URLs. Then the two anchor-text scores of the same URL are summed. Finally, the URL with the highest anchor-text score is selected from the frontier and its corresponding Web page is downloaded. Four series of experiments are conducted. Experimental results show that, with the aid of the newly constructed centroid vector, the focused crawler can efficiently and accurately predict the relevance of a Web page simply from the anchor texts of its URLs. Furthermore, the two-classifier framework gives the focused crawler an incremental crawling ability, which is one of the most important and interesting features that must be addressed in the domain of focused crawling.
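A minimal sketch of the two-classifier URL selection step described above; the cosine scoring, vector representation, and function names are assumptions, not the paper's exact implementation:

    # Each candidate URL's anchor text is scored by a front-end (centroid)
    # classifier and a back-end classifier, the two scores are summed, and the
    # highest-scoring URL in the frontier is crawled next.
    import heapq

    def cosine(vec_a, vec_b):
        common = set(vec_a) & set(vec_b)
        num = sum(vec_a[t] * vec_b[t] for t in common)
        den = (sum(v * v for v in vec_a.values()) ** 0.5) * \
              (sum(v * v for v in vec_b.values()) ** 0.5)
        return num / den if den else 0.0

    def pick_next_url(frontier, centroid, backend_score):
        """frontier: list of (url, anchor_term_weights); backend_score: a callable."""
        best = []
        for url, anchor_vec in frontier:
            score = cosine(anchor_vec, centroid) + backend_score(anchor_vec)
            heapq.heappush(best, (-score, url))
        return heapq.heappop(best)[1] if best else None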
An Efficient Policy Analysis Algorithm for SPKI/SDSI2.0
Geng Xiuhua, Han Zhen, Jin Li, and Cao Xianggang
2009, 46(2):  225-234. 
Trust management is a mechanism for large-scale, open, distributed access control, and it is flexible and safe compared with traditional methods. SPKI/SDSI2.0 is a popular trust management system in which each principal can issue policy statements; a set of SPKI/SDSI2.0 certificates forms a system state. In a given state, many important properties need to be known and analyzed, such as: Is a principal authorized to access a protected resource? Which principals are members of a given local name? Who is granted a specific right? When the number of certificates is large, a special algorithm is needed to answer these questions. However, previous algorithms study only authorization problems, ignoring policy analysis of the names involved, and their efficiency is not high. In this paper, EPAAS (efficient policy analysis algorithm for SPKI/SDSI2.0) is presented. EPAAS substantially expands the scope of policy analysis, so it can analyze not only authorization and name properties but also integrated properties. The time complexity of a query is improved from the previous algorithms' O(n^3 l) to O(n). Logic programs are obtained by translating each policy statement into Datalog clauses. The minimal Herbrand model of the Datalog program is used as the program's semantics, and it can be evaluated in polynomial time. In addition, the soundness of the semantics is proved.
An Adaptive Forward Error Correction Method for Carrier Ethernet
Dai Jinyou and Yu Shaohua
2009, 46(2):  239-244. 
The physical channel of present-day Ethernet has changed greatly compared with that of traditional Ethernet, and carrier Ethernet places higher demands on the Ethernet physical channel. In the carrier Ethernet application environment, it is necessary to improve the transport performance of the Ethernet physical channel by special means. Based on the frame structure defined in IEEE Std 802.3, a frame structure that carries FEC (forward error correction) content is presented, together with an FEC method. The method includes an error correction function as well as a mechanism for detecting the performance of the physical channel and adaptively adjusting the FEC configuration, so the overhead introduced by the FEC method is kept to a minimum while the error correction performance is guaranteed. An Ethernet interface using the FEC method can communicate with an ordinary non-FEC Ethernet interface, so backward compatibility is achieved. A detailed analysis of the performance of the FEC method is given, and the analysis shows that the method is feasible. A network device that implements the method is designed, and a test topology is set up to verify it. The test data show that the method delivers the expected performance.
Term-Committee-Based Event Identification Within Topics
Zhang Kuo, Li Juanzi, Wu Gang, and Wang Kehong
2009, 46(2):  245-252. 
With the overwhelming volume of news stories created and stored electronically every day, there is an increasing need for techniques to analyze and present news stories to users in a more meaningful manner. Most previous research focuses on organizing news sets into flat collections (topics) of stories. However, a topic in news is more than a mere collection of stories: it is actually characterized by a definite structure of inter-related events. Unfortunately, it is very difficult to identify events within a topic because stories about the same topic are usually very similar to each other irrespective of the events they belong to. To deal with this problem, two methods based on event key terms are proposed to identify events and their relations. For event identification, tight term clusters are first captured as term committees of potential events, then used to find the core story sets of potential events, and finally each story is assigned to an event core story set. For event relation discovery, the term committees are also used to improve story and event similarity calculation. Experimental results on two Linguistic Data Consortium (LDC) datasets show that the proposed methods for event identification and relation discovery significantly outperform previous methods.
On a Preference System of Agent and Its Construction
Wang Zhenzhen, Xing Hancheng, and Chen Hanwu
2009, 46(2):  253-260. 
Inspired by interest in agents' preference models, a new preference system called the AF preference system is presented in this paper, which is able to describe the behavior of human beings. It combines the rational inference of preference offered by Michael Freund with the classical inductive logic and probability logic provided by Adams. That is, relying upon Adams' classical logical system, it combines logical reasoning, general (commonsense) reasoning based on the preference inclinations of human beings, and inclined reasoning based on the subjective hopes of human beings into an organic whole. Moreover, the system provides a method for constructing an agent's rational preference relations and a process of rational inference. Finally, an example is given to show that the system can guess an agent's undisclosed preferences from incomplete information about them. The system provides a mental model for any rational agent: once the inclination of the agent's preferences changes, the model can adjust itself accordingly without radical change. It is in some sense a hierarchy over Freund's preference system and is therefore particularly suited to the inductive manner of the human mind; it is robust and of practical value.
Hierarchical Bagging Ensemble Pruning Based on Deleting Worst Base Learner
Xie Yuancheng and Yang Jingyu
2009, 46(2):  261-267. 
The problem of selecting the best combination of classifiers from an ensemble has been shown to be NP-complete, and several strategies have been used to reduce the number of units in an ensemble classifier. The main objective of selective ensemble is to find a fast pruning method for bagging that reduces storage needs, speeds up classification, and potentially improves classification accuracy. Traditional selective-ensemble methods focus on the diversity of base learners. Diversity implies a many-to-many relationship whereas agreement implies a one-to-many relationship, so bagging pruning based on agreement may be a simpler route to selective ensembles. A new selective ensemble algorithm (HDW-bagging), based on studying the agreement of base learners, is proposed in this paper. The idea is to find the worst base learner, i.e. the one whose deletion reduces the ensemble generalization error of the remaining base learners. Hierarchical pruning is used to speed up the new algorithm. The algorithm's running time is close to that of bagging while its performance is superior to bagging; its training-time efficiency is superior to that of GASEN while its performance is close to GASEN's. The new algorithm also supports parallel computing.
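A minimal sketch of the "delete the worst base learner" idea, without the paper's hierarchical speed-up; the generalization error is approximated here by majority-vote error on a held-out validation set, and the scikit-learn-style predict interface is an assumption:

    from collections import Counter

    def vote_error(members, X_val, y_val):
        # Fraction of validation samples misclassified by the majority vote.
        errors = 0
        for x, y in zip(X_val, y_val):
            pred = Counter(m.predict([x])[0] for m in members).most_common(1)[0][0]
            errors += pred != y
        return errors / len(y_val)

    def prune_ensemble(members, X_val, y_val, min_size=3):
        members = list(members)
        while len(members) > min_size:
            base = vote_error(members, X_val, y_val)
            # Error of the ensemble with each single member removed.
            errs = [vote_error(members[:i] + members[i + 1:], X_val, y_val)
                    for i in range(len(members))]
            worst = min(range(len(members)), key=errs.__getitem__)
            if errs[worst] >= base:
                break                  # no deletion helps any more
            del members[worst]         # drop the worst base learner and repeat
        return members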
Wavelet-Based Amnesic Synopses for Data Streams
Chen Huahui and Shi Baile
2009, 46(2):  268-279. 
Maintaining a synopsis data structure dynamically over a data stream is vital for a variety of streaming applications, such as approximate query processing and data mining. In many cases, the significance of a data item in the stream decays with age: the item may convey critical information at first, but, as time goes by, it becomes less and less important until it is eventually useless. This feature is termed amnesic. The discrete wavelet transform is often used to construct synopses for streaming data. Proposed in this paper is a wavelet-based hierarchical amnesic synopsis (W-HAS), which incorporates the amnesic feature of the data stream into the generation of wavelet synopses. W-HAS provides a better approximate representation for amnesic data streams than conventional wavelet synopses. To maintain W-HAS online over evolving data streams, the authors first explore the process of merging two wavelet decompositions, and then use it to implement the addition of data nodes in the W-HAS structure, so that W-HAS grows dynamically and hierarchically. Construction methods for W-HAS under the sum-of-squared-error (sse) and maximum absolute error metrics are discussed, and W-HAS with error control is also explored. Finally, experiments on real and synthetic datasets validate the proposed methods.
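For background, a sketch of the conventional wavelet synopsis step that W-HAS builds on (not the amnesic or hierarchical structure itself): an orthonormal Haar decomposition followed by keeping the B largest-magnitude coefficients, which minimizes the sum of squared errors for a fixed synopsis size; the window length is assumed to be a power of two.

    import math

    def haar_decompose(values):
        """Orthonormal Haar decomposition of a length-2^k sequence."""
        coeffs = []
        data = list(values)
        while len(data) > 1:
            averages, details = [], []
            for i in range(0, len(data), 2):
                averages.append((data[i] + data[i + 1]) / math.sqrt(2))
                details.append((data[i] - data[i + 1]) / math.sqrt(2))
            coeffs = details + coeffs   # prepend, so finest details end up last
            data = averages
        return data + coeffs            # overall coefficient first, then details

    def top_b_synopsis(values, B):
        # Keep the B coefficients of largest magnitude (sse-optimal thresholding).
        coeffs = haar_decompose(values)
        keep = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)[:B]
        return {i: coeffs[i] for i in keep}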
Deep Web Data Extraction Based on Result Pattern
Ma Anxiang, Zhang Bin, Gao Kening, Qi Peng, and Zhang Yin
2009, 46(2):  280-288. 
With the rapid development of the World Wide Web, improving the efficiency and precision of Deep Web data extraction has become increasingly important for effective Deep Web data integration. The bottlenecks here are repeated semantic annotation and the presence of nested attributes. The definition of a result pattern is given, and a novel approach to Deep Web data extraction based on result patterns is proposed. The approach includes two stages: result pattern generation and data extraction based on the result pattern. According to the features of Deep Web result pages, a feature matrix of Web page data is defined; by constructing and analyzing this feature matrix, the result pattern can easily be obtained. Attribute semantic annotation is completed during result pattern generation, so the problem of repeated semantic annotation is resolved. At the same time, an effective method for splitting nested attributes is proposed. Experimental results show that Deep Web data extraction based on result patterns improves both efficiency and precision, and lays a solid foundation for Deep Web data integration.
Design of Optimized Surface Passing Through a Given Geodesic
Zhao Hongyan and Wang Guojin
2009, 46(2):  289-294. 
Given a space curve, designing a surface that contains it as a geodesic is in urgent demand in the garment manufacturing and shoemaking industries. Existing research constructs a surface pencil with the curve as a common geodesic, and then selects the final surface from the pencil by fitting scanned data. Such a method depends heavily on the surface parameterization and representation, and gives no consideration to surface fairness, which is important in practice. Guided by the material characteristics of garments and the design concept, this paper provides a new method that selects the final surface by energy optimization. It first changes the representation of the surface pencil so as to establish a relation between the free variables and the surface expression. It then introduces a proper energy function and determines the optimal surface by solving the minimum-energy equation, so that global smoothness is achieved. The authors also consider surface modification under different constraints, such as interpolating or fitting given points or curves. Experiments show that the new method can simulate garment modeling and processing efficiently and effectively, and the method is especially applicable and useful in CAD/CAM.
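A representative fairness energy for such an optimization, shown only as a common choice in surface design (the paper defines its own energy function over the surface pencil's free variables), is the thin-plate functional

    E(S) = \iint_{\Omega} \left( \|S_{uu}\|^2 + 2\,\|S_{uv}\|^2 + \|S_{vv}\|^2 \right) \mathrm{d}u\,\mathrm{d}v,

minimized over the free variables of the surface pencil while the given curve is kept as a geodesic of the surface.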
Gait Recognition Using Distributions of Silhouette Feature
Chen Shi, Ma Tianjun, and Gao Youxing
2009, 46(2):  295-301. 
Vision-based human identification at a distance in surveillance has recently gained increasing attention. Gait has the advantages of being non-invasive and difficult to conceal, and it is the only biometric perceivable at a distance. This paper introduces a novel feature representation method for gait analysis and recognition. The method consists of the following steps. First, silhouette extraction is performed for each image sequence. Second, the distributions of points sampled from the local human silhouette are analyzed and the gait cycle is detected by a histogram-based approach. Third, by tiling all image frames across one gait cycle in a two-dimensional plane along a ring, frame by frame at a fixed interval, a contextual stance appearance model is built. This gait appearance model consists of the structural information of the individual silhouette and the contextual silhouettes centered at the current frame in the polar plane. With a designed invariant histogram-based descriptor, the gait appearance characteristics are described as a sequence of shape distributions. These distributions are finally matched for gait recognition using the Jeffrey divergence criterion and dynamic time warping. Recognition capability is illustrated by an 87.59% correct classification rate (CCR) on the Soton database, and the result shows that the approach outperforms existing methods.
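For reference, the Jeffrey divergence used as the matching criterion above can be sketched as follows; the histogram construction follows the paper's descriptor and is not reproduced here, and the smoothing constant is an assumption:

    import math

    def jeffrey_divergence(p, q, eps=1e-12):
        """Jeffrey divergence between two normalized histograms p and q."""
        d = 0.0
        for pi, qi in zip(p, q):
            m = (pi + qi) / 2 + eps          # bin-wise mean distribution
            d += pi * math.log((pi + eps) / m) + qi * math.log((qi + eps) / m)
        return d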
FPTS: A Fixed-Priority Preemption Threshold Scheduling Algorithm in the Presence of Resource Sharing
He Xiaochuan and Jia Yan
2009, 46(2):  302-309. 
Fixed-priority scheduling with preemption threshold (FPPT) is an important form of real-time scheduling that fills the gap between fixed-priority preemptive (FPP) and fixed-priority non-preemptive (FPNP) scheduling. FPPT prevents unnecessary task preemptions, reduces additional memory usage, and improves the schedulability of task sets. In real-world real-time applications, access to exclusively shared resources is very common, so shared-resource management in FPPT is necessary. The correlations between tasks that result from shared resources strongly influence the priority and preemption-threshold assignment of a task set. However, current research on FPPT scheduling focuses on timing guarantees for independent real-time tasks rather than on the complexity arising from exclusively shared resources. The stack resource protocol (SRP) is a well-known resource access control protocol in real-time systems with many desirable features, such as deadlock avoidance, early blocking, and a shared run-time stack. Proposed in this paper is a new FPTS scheduling paradigm that integrates FPPT with SRP, including a new critical instant, preemption threshold assignment, and the corresponding schedulability analysis based on response time analysis. Furthermore, an algorithm for computing a feasible preemption threshold assignment is presented, together with proofs of the correctness of these algorithms.
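As background for the response-time analysis mentioned above, a sketch of the standard fixed-priority response-time recurrence with a blocking term; the paper's FPTS analysis additionally accounts for preemption thresholds, which this simple form does not:

    # R_i = C_i + B_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j
    import math

    def response_time(C_i, B_i, higher_prio, deadline):
        """higher_prio: list of (C_j, T_j) for tasks with higher priority."""
        R = C_i + B_i
        while True:
            nxt = C_i + B_i + sum(math.ceil(R / T) * C for C, T in higher_prio)
            if nxt == R:
                return R               # fixed point reached: worst-case response time
            if nxt > deadline:
                return None            # unschedulable under this assignment
            R = nxt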
Formal Semantics of Component-Based Architecture Model Mapping
Hou Jinkui, Wan Jiancheng, Yang Xiao, and Wang Haiyang
2009, 46(2):  310-320. 
Semantic consistency between source models and target models is an important criterion for the correctness of model transformations in model-driven software development. However, the definition, description, and proof of semantic-property preservation in model transformations remain unresolved problems. On the basis of software architecture, category theory and algebraic specification are combined in this paper to provide precise semantics for architecture models and their mapping relations. The formal semantics of component specifications and architecture models are described within typed category diagrams. Morphism composition is used to trace the interconnections and mapping relations between component models, and the mapping relations between different levels of architecture models are formally described by morphisms and functors. On this basis, the semantic properties that should be preserved through model transformations are then discussed. Category theory supports a diagrammatic representation of component models that visualizes the relationships among components and their structural features, which can strengthen the understandability and traceability of model transformations. The application study shows that the framework captures the essence, process, and requirements of model-driven development, and can thus serve as a new theoretical guide for the cognition, design, and semantic calculation of model transformations and model-driven software development.
Fault Localization for Broken Control in Quantum Circuits
Xiao Fangying and Chen Hanwu
2009, 46(2):  321-328. 
Reversible circuits are the basis of quantum computation, low-power CMOS, and nanotechnology. To ensure the validity and reliability of reversible circuits, fault detection and fault localization are necessary, and fault localization is more difficult than fault detection. Discussed in this paper is the localization of broken control faults (BCFs) in reversible circuits. By analyzing the influence of BCFs on the outputs of a reversible circuit, it is found that a broken control fault on a gate of size k changes the outputs of only 2^(n-k) inputs of the circuit. Based on this, the authors present a method for constructing a fault localization tree to locate BCFs, which divides the current fault set into several subsets and constructs a sub fault localization tree for each subset recursively. Unlike traditional fault localization methods, this method needs neither a truth table nor a fault table when constructing the fault localization tree. The fault localization tree (FLT) has a small height and can be used to locate BCFs efficiently. The method has lower time and space complexity than the BCF localization method called Rfault, so it can be applied to larger circuits. It can also be applied to other fault models, such as missing-gate faults.
Research and Implementation of an Optimizing Design Method of De-Synchronized Circuit
Jin Gang, Wang Lei, Wang Zhiying, and Dai Kui
2009, 46(2):  329-337. 
The de-synchronization design method, which is compatible with traditional EDA tools, can greatly improve design efficiency and reduce the difficulty of designing asynchronous circuits. An optimized design method for de-synchronized circuits, based on an abstract model called the control graph, is presented in this paper. The control graph is an abstract model of the de-synchronized circuit. The core step of the method is to merge local controllers in the control path, and a proof is given that this merging procedure preserves the functional equivalence of the circuit. With this method, the area of the control path can be markedly reduced. The method takes a performance evaluation function as its heuristic function, so there is no performance penalty. Because the optimization problem is NP-hard, an approximate algorithm is introduced. To demonstrate its results, the algorithm has been applied to a set of benchmark circuits; according to the results, the number of local controllers is decreased by 54%, and 76.3% of the C-elements required to construct the handshake circuit are removed. Finally, a 32-bit de-synchronized multiplier in a 0.35μm process is designed with this method. Compared with the traditional design method for de-synchronized circuits, the optimized method leads to a smaller circuit without any performance penalty.
VLSI Implementation of QC-LDPC Decoder Using Optimized TDMP Algorithm
Bao Dan, Xiang Bo, Shen Rui, Chen Yun, and Zeng Xiaoyang
2009, 46(2):  338-344. 
An area- and power-efficient VLSI architecture is presented for a QC-LDPC decoder based on an optimized turbo-decoding message-passing (TDMP) algorithm. The optimization is based mainly on check-node updating using the normalized Min-Sum (NMS) scheme, which was originally proposed for the two-phase message-passing (TPMP) algorithm, also known as the two-phase belief propagation algorithm. The primary advantages of the proposed architecture over recent work are: 1) reduced power dissipation, resulting from a convergence speed that is faster by a factor of more than 2 in terms of decoding iterations; 2) more than 50% savings in memory, leading to a large chip-area reduction, achieved by compressing posterior messages; 3) good performance and low complexity, resulting in small area and low power, due to the normalized Min-Sum check-node updating; and 4) reduced interconnection congestion, by making full use of the quasi-cyclic structure of the check matrix and the proposed logarithmic shifters. The architecture is implemented for LDPC decoding in the Chinese Digital Television Terrestrial Multimedia Broadcasting (DTMB) system. The decoder consumes 0.58 million gates and reaches a throughput of 107Mbps at a clock frequency of 100MHz. The architecture can be extended to other digital communication systems, such as wireless local area networks (WLAN), that adopt LDPC codes as the forward error correction (FEC) scheme.
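For reference, the standard normalized Min-Sum check-node update the abstract refers to can be sketched as follows; the scheduling, quantization, and TDMP layering of the decoder are not shown, and the normalization factor value is an assumption:

    def check_node_update(incoming, alpha=0.8):
        """incoming: variable-to-check LLR messages on one check node's edges
        (degree >= 2).  Returns the check-to-variable message for each edge."""
        out = []
        for i in range(len(incoming)):
            others = incoming[:i] + incoming[i + 1:]
            sign = 1
            for m in others:
                if m < 0:
                    sign = -sign                 # product of signs of the other edges
            mag = alpha * min(abs(m) for m in others)  # scaled minimum magnitude
            out.append(sign * mag)
        return out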
Control Flow Checking and Recovering by Compiler Signatures and Hardware Checking
Gong Rui, Chen Wei, Liu Fang, Dai Kui, and Wang Zhiying
2009, 46(2):  345-351. 
With the exponential increase in transistors per chip, microprocessors are becoming more susceptible to soft errors. Control flow checking has proved effective in improving the soft-error tolerance of microprocessors. The conventional control flow checking method inserts a large number of signature instructions into the program at compile time, so it imposes large overheads on both binary code size and execution performance; moreover, it does not consider recovery from control flow errors. A new method, control flow checking and recovering by compiler signatures and hardware checking (CFCCH), is proposed in this paper to solve these problems. CFCCH uses the compiler to insert signature data, rather than signature instructions, into the program, which reduces the binary code size. Hardware checking is triggered automatically after each branch/jump instruction, so the execution cycles spent on checking are reduced. Hardware-implemented context saving and restoring is also proposed to provide fast recovery from control flow errors. CFCCH is implemented on the 8051 architecture in this paper, and random faults are injected into the 8051 microcontroller with CFCCH to evaluate its soft-error tolerance. The experimental results demonstrate that, compared with the conventional control flow checking method, CFCCH efficiently reduces the binary code size and program execution time while keeping the same soft-error tolerance.