ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 April 2007, Volume 44 Issue 4
Survey of Security Technology for Large Scale MANET
Hu Huaping, Hu Guangming, Dong Pan, and Chen Xin
2007, 44(4):  545-552. 
Mobile ad hoc networks (MANETs) have great military value and broad commercial application prospects. Owing to new characteristics such as decentralized control and multi-hop communication, the security situation in a MANET is more severe than in traditional networks. In particular, as the number of nodes increases, network construction, network availability, and network security are all badly affected. In this paper the current state of MANET research is first surveyed, and then the key technologies involved in large-scale MANET security, such as security models and schemes, secure clustering, and group key management, are discussed in detail. Finally, the main research directions of large-scale MANET security technology are presented: secure networking technology, MANET protocol security proving technology, key management technology, and model simulation and security evaluation technology.
Papers
Research on Reliability Evaluation of Cache Based on Instruction Behavior
Zhou Xuehai, Yu Jie, Li Xi, and Wang Zhigang
2007, 44(4):  553-559. 
Soft errors arise from high-energy particle strikes and do great harm to processor reliability. Furthermore, as processor design targets shift to low power consumption and high performance, and as supply voltages are reduced, the probability of soft errors rises greatly. As a result, research on processor reliability receives much more attention than ever. Aiming at the low efficiency of traditional evaluation methods, which mostly apply fault injection, this paper presents a systematic evaluation method for the cache, an indispensable memory unit in the processor. It takes an evaluation attribute, the architectural vulnerability factor (AVF), as the research object. On the one hand, the method analyzes instructions that have no impact on the final execution result of the application program in order to identify the instructions that affect the AVF. On the other hand, according to the memory type, writing policy, and features of the data/instruction and address tag arrays of the cache, it analyzes the effects of various combinations of neighboring operations on the AVF, thus obtaining the information needed in the AVF evaluation process. In the experiments, AVF evaluation of the instruction array of the cache in the PISA architecture is performed. The experimental results demonstrate the validity of this method.
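As a rough illustration of the AVF metric the paper evaluates, here is a minimal Python sketch that computes the architectural vulnerability factor of a storage array from per-bit ACE (architecturally correct execution) lifetimes; the interval data and function names are illustrative assumptions, not the paper's method.

```python
# A minimal sketch of AVF computation for a cache array, assuming we
# already have, for each bit, the cycle intervals during which it holds
# ACE data (e.g. from the lifetime analysis the abstract describes).
# The interval data and names here are illustrative, not the paper's.

def avf(ace_intervals_per_bit, total_bits, total_cycles):
    """AVF = (sum of ACE bit-cycles) / (total bits * total cycles)."""
    ace_bit_cycles = sum(
        end - start
        for intervals in ace_intervals_per_bit
        for (start, end) in intervals
    )
    return ace_bit_cycles / (total_bits * total_cycles)

# Toy example: a 4-bit array observed for 100 cycles; bit 0 is ACE
# for cycles 10-60, bit 2 for cycles 0-100, bits 1 and 3 never.
intervals = [[(10, 60)], [], [(0, 100)], []]
print(avf(intervals, total_bits=4, total_cycles=100))  # 0.375
```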
A Dynamic Mix Anonymity Algorithm for Wireless Ad Hoc Networks
Wu Zhenqiang and Ma Jianfeng
2007, 44(4):  560-566. 
A wireless ad hoc network is subject to various security attacks. Encryption and authentication protect communication partners from disclosure of their secret messages but cannot prevent traffic analysis. After exploring approaches to preventing traffic analysis by hiding the source and destination of packets, it is argued that the mix method suits wireless ad hoc networks better; however, security and efficiency are mission-critical and challenging for existing mix algorithms in ad hoc networks. In this paper, a pseudo-random mix (RM)-based anonymity algorithm is presented for ad hoc networks by redesigning the buffer manager. The RM algorithm delays a message according to its timestamp when the mix buffer is not full; otherwise it forwards a message selected by a random number. The RM algorithm not only guarantees the anonymity of an anonymous communication system, but also solves the packet-discarding problem of the stop-and-go algorithm. The analysis shows that the RM algorithm has desirable security properties and can evidently improve the efficiency of an anonymous system. The simulation shows that the RM algorithm is better than known algorithms in adaptability and utility for wireless ad hoc networks.
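A minimal Python sketch of the buffer-manager behavior the abstract describes (delay by timestamp while the buffer has room, forward a randomly chosen message once it is full); the class name, deadline policy, and parameters are assumptions for illustration.

```python
# Sketch of the RM buffer manager: messages wait until their timestamp
# deadline while the buffer is not full; once it fills, one message is
# flushed at random, so nothing is discarded (unlike stop-and-go).
import random
import time

class RandomMixBuffer:
    def __init__(self, capacity, max_delay, forward):
        self.capacity = capacity      # buffer size that triggers flushing
        self.max_delay = max_delay    # per-message holding time (seconds)
        self.forward = forward        # callback that actually sends
        self.buffer = []              # list of (deadline, message)

    def enqueue(self, message):
        if len(self.buffer) >= self.capacity:
            # Buffer full: forward a randomly chosen message instead of
            # dropping one.
            victim = random.randrange(len(self.buffer))
            _, msg = self.buffer.pop(victim)
            self.forward(msg)
        self.buffer.append((time.monotonic() + self.max_delay, message))

    def tick(self):
        # Called periodically: flush messages whose timestamp expired.
        now = time.monotonic()
        due = [m for d, m in self.buffer if d <= now]
        self.buffer = [(d, m) for d, m in self.buffer if d > now]
        for msg in due:
            self.forward(msg)

mix = RandomMixBuffer(capacity=8, max_delay=0.5, forward=print)
for i in range(10):
    mix.enqueue(f"packet-{i}")   # two random packets flush on overflow
mix.tick()
```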
A Trust-Scheme-Based DRM Model for P2P System
Xiao Shangqin, Lu Zhengding, Ling Hefei, and Zou Fuhao
2007, 44(4):  567-573. 
With the maturation of P2P technology, the protection of digital rights faces more and more challenges. DRM designed for the traditional client/server model cannot satisfy the digital rights protection requirements of P2P systems. Combining practical P2P network technology with new-generation DRM technology, a DRM model for P2P systems based on trust degree is given. The proposed model shares the secret key among trusted peers and affords the security necessary for digital content delivery over P2P networks. Mathematical analyses and simulations show that, compared with current DRM models, the proposed model is more robust in transmission tolerance and security.
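The abstract says the model shares the secret key among trusted peers without naming the scheme; as one plausible stand-in, here is a minimal Shamir (t, n) secret sharing sketch in Python showing how a content key could be split among peers so that any t of them can reconstruct it.

```python
# Illustrative key sharing among trusted peers via Shamir (t, n) secret
# sharing; an assumed stand-in, not necessarily the paper's construction.
import random

PRIME = 2**127 - 1  # prime field large enough for a 16-byte key

def split(secret, n, t):
    """Return n shares; any t of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, t=3)   # 5 trusted peers, any 3 suffice
print("reconstructed:", reconstruct(shares[:3]))  # 123456789
```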
A Forward Secure Threshold Signature Scheme from Bilinear Pairing
Peng Huaxi and Feng Dengguo
2007, 44(4):  574-580. 
A forward secure threshold signature scheme from bilinear pairings is proposed by combining the concept of forward security with threshold signatures from bilinear pairings. In the proposed scheme, the signature key is distributed across the whole group and is updated by updating partial keys, so the security of the signature key is enhanced and the scheme has the property of forward security. Furthermore, because of the partial-key update, the scheme can resist mobile adversaries. The security of the scheme is analyzed, and it is shown that the proposed scheme is secure and effective.
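The paper's construction is pairing-based and threshold; the following sketch illustrates only the underlying forward-security principle, one-way key evolution, so that compromising the current key reveals nothing about earlier ones. It is not the paper's scheme.

```python
# Forward security in miniature (hash-chain key evolution, an assumed
# illustration, not the paper's pairing-based threshold scheme): each
# period's key is derived one-way from the previous one, so an adversary
# holding sk_t cannot recover sk_{t-1} or forge past-period signatures.
import hashlib

def next_key(sk: bytes) -> bytes:
    # One-way update: old keys cannot be recovered from new ones.
    return hashlib.sha256(b"key-update" + sk).digest()

sk = b"initial-secret"
for period in range(3):
    sk = next_key(sk)
    print(period, sk.hex()[:16])
```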
A Novel Algorithm of Soft Fast Correlation Attack and Applications
Zou Yan, Lu Peizhong, and Zhu Xueling
2007, 44(4):  581-588. 
Research on improving fast correlation attacks (FCA) has mainly focused on adapting the usual decoding algorithms and the best involved parameters to practical applications. In this paper a novel soft fast correlation attack (SFCA) is presented for sequences obtained from a highly noisy BPSK channel, and a feasible strategy is provided for adapting the involved parameters to concrete applications in different channel situations. The fast Walsh transform is used to realize the decoding procedure instead of the exhaustive search used by conventional attacks. A theorem is derived which shows that the log-likelihood ratio of a correct state estimate is exactly the value of the corresponding Walsh transform. The simulation results show that the proposed SFCA algorithm for sequences from a BPSK channel has a gain exceeding 2 dB over the FCA algorithm for sequences from a BSC channel. As a practical application, an efficient SFCA-based acquisition scheme for m-sequences in spread spectrum communication systems is given. Compared with the recent RSSE acquisition scheme proposed by Yang, this scheme achieves a significant improvement in acquisition performance as well as acquisition delay. Furthermore, the number of chips required by this scheme increases only linearly as the signal-to-noise ratio decreases, which results in much better real-time communication performance.
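A minimal sketch of the in-place fast Walsh-Hadamard transform that, per the abstract, replaces the exhaustive search in the decoding procedure: one transform produces the correlations of the soft inputs with all candidate states at once, and by the cited theorem the correct state's log-likelihood ratio appears as the corresponding transform value. The toy input values are illustrative.

```python
# In-place fast Walsh-Hadamard transform over a length-2^k array.
def fwht(a):
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a

# Toy example: soft inputs (e.g. LLRs from a BPSK channel), length 8 = 2^3;
# after the transform, entry u is the correlation with candidate state u.
print(fwht([1.0, -0.5, 0.25, 2.0, -1.0, 0.5, 0.0, 1.5]))
```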
BPCRISM: A New Intrusion Scenario Building Model
Liu Yuling, Du Ruizhong, Zhao Weidong, and Cai Hongyun
2007, 44(4):  589-597. 
Intrusion detection systems (IDSs) are a new generation of security-safeguard technology following firewalls and data encryption. For the same attack, traditional IDSs produce many repeated alerts that differ considerably in content, emphasis, and uncertainty, because of their heterogeneity and autonomy. Analyzing these alerts one by one reduces IDS performance, and the integrated intrusion course and scenario cannot be obtained. In order to analyze and handle alerts effectively and to rebuild the attack flow and attack scenario, a new intrusion scenario building model, BPCRISM (based probability and causal relation intrusion scenario model), which combines probabilistic correlation with causal correlation, is presented in this paper. Alert correlation methods are divided into two major categories, probabilistic alert correlation and causal-relation-based alert correlation, and algorithms for both are given. From the correlated alerts, the integrated intrusion course can be identified and the intrusion scenario built. The model is implemented tentatively, and experiments are performed using the DARPA Cyber Panel Program Grand Challenge Problem Release 3.2 (GCP), an attack scenario simulator; the effectiveness of the model is verified. This model solves problems that a single traditional intrusion detection system brings.
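A minimal sketch of the probabilistic alert correlation idea: two alerts are deemed correlated when a weighted attribute similarity crosses a threshold. The attributes, weights, and threshold here are illustrative assumptions, not the paper's algorithm.

```python
# Weighted attribute similarity between two IDS alerts; values assumed.
def correlate(a, b, weights, threshold=0.6):
    sim = 0.0
    for attr, w in weights.items():
        sim += w * (1.0 if a.get(attr) == b.get(attr) else 0.0)
    return sim >= threshold, sim

a1 = {"src": "10.0.0.5", "dst": "10.0.0.9", "type": "portscan"}
a2 = {"src": "10.0.0.5", "dst": "10.0.0.9", "type": "bufferoverflow"}
ok, p = correlate(a1, a2, {"src": 0.4, "dst": 0.4, "type": 0.2})
print(ok, p)  # True 0.8 -- likely steps of the same intrusion course
```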
The Trust Model and Its Analysis in TDDSS
Tian Junfeng, Xiao Bing, Ma Xiaoxue, and Wang Zixian
2007, 44(4):  598-605. 
A new model, the trusted distributed database server system (TDDSS), is presented in this paper. The model breaks the situation in which trusted computing is applied only to PCs: it introduces the trusted mechanism from the PC into the distributed database server system (DDSS), helping to open a new application area for trusted computing. A complete model of TDDSS is set up with trusted computing technology, together with the layers of the trusted chain in the system. The trusted chain assures the transfer of trust, which passes from the trusted root into the interior of the system. A role-based mechanism, recognized by more and more people, is adopted for management in TDDSS: a role is defined for every client server, and the role-based mechanism provides a more flexible and scalable permission management model. At the same time, the authentication and logging mechanisms are improved in this system. In particular, two levels of logs are used in TDDSS, which improves security and makes information seeking much easier. In conclusion, a complete model for applying trusted computing to computing systems is given. Furthermore, the whole system model is evaluated with mathematical methods, and its feasibility and efficiency are proved.
A Group Key Management Scheme Based on Distributed Rekeying Authority in Sensor Networks
Zeng Weini, Lin Yaping, Hu Yupeng, Yi Yeqing, and Li Xiaolong
2007, 44(4):  606-614. 
Most group key management schemes proposed for traditional networks rely on a reliable node. Unlike traditional networks, sensor networks have no such reliable nodes, and their resources are constrained, so none of these schemes can be used in sensor networks directly. Proposed in this paper is a group key management scheme based on distributed rekeying authority, with a broadcast mechanism introduced in the group rekeying process. Furthermore, three novel polynomials are presented to revoke compromised nodes and to verify the integrity of the rekeying information. Extensive analyses and simulations show that the proposed scheme provides a high level of security, reduces communication and storage overheads, and avoids the problem of isolated nodes.
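The abstract does not give the three polynomials, but a standard revocation-polynomial construction conveys the idea: broadcast w(x) = g(x)·K + f(x), where g's roots are the revoked node IDs and f is a predistributed secret polynomial; a revoked node r sees only f(r), while every other node recovers the new group key K. A minimal Python sketch with an illustrative field and values:

```python
# Revocation-polynomial rekeying sketch (assumed illustration).
P = 2**61 - 1  # prime field modulus (illustrative)

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs in ascending-degree order."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

f = [123, 456, 789]            # predistributed secret polynomial f(x)
revoked = [3, 7]               # IDs being expelled
K = 999999                     # new group key

g = [21 % P, (-10) % P, 1]     # g(x) = (x - 3)(x - 7) = x^2 - 10x + 21

def w(x):                      # the rekeying broadcast polynomial
    return (poly_eval(g, x) * K + poly_eval(f, x)) % P

i = 5                          # a legitimate node: g(i) != 0
gi = poly_eval(g, i)
recovered = (w(i) - poly_eval(f, i)) * pow(gi, P - 2, P) % P
assert recovered == K
print("node", i, "recovers K; revoked node learns nothing:",
      w(3) == poly_eval(f, 3))  # g(3) = 0, so w(3) == f(3)
```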
Implementing Chinese Wall Policies on RBAC
He Yongzhong, Li Xiaofeng, and Feng Dengguo
2007, 44(4):  615-622. 
The purpose of the Chinese Wall policy is to prevent conflicts of interest between competing companies. The BN model proposed by Brewer and Nash and the lattice-based interpretation of the Chinese Wall policy proposed by Sandhu are two examples of Chinese Wall policy models. However, these models are severely restricted and awkward to implement. RBAC is a prevalent, policy-neutral model. This paper investigates thoroughly how to configure RBAC to enforce the Chinese Wall policy and its variations. Based on role hierarchies and RBAC constraints, the detailed configurations are presented and the constraints in the configurations are formalized. Compared with the traditional models, the schemes are more flexible and can be enforced directly in systems supporting the RBAC model.
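A minimal sketch of the access check a Chinese Wall configuration must enforce, independent of the paper's RBAC formalization: a subject may not hold roles tied to two companies in the same conflict-of-interest class. The class contents are illustrative.

```python
# Chinese Wall activation check; the paper expresses this with RBAC role
# hierarchies and formal constraints rather than a runtime check.
CONFLICT_CLASSES = [
    {"BankA", "BankB"},          # competing banks
    {"OilX", "OilY", "OilZ"},    # competing oil companies
]

def may_activate(active_companies, requested):
    for coi in CONFLICT_CLASSES:
        if requested in coi and any(c in coi and c != requested
                                    for c in active_companies):
            return False         # would cross the Chinese Wall
    return True

print(may_activate({"BankA"}, "OilX"))   # True: different classes
print(may_activate({"BankA"}, "BankB"))  # False: conflict of interest
```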
A TCP Friendly Multicast Rate Control Mechanism for Internet DTV
Li Fei, Wang Xin, and Xue Xiangyang
2007, 44(4):  623-629. 
The current Internet does not offer enough QoS (quality of service) guarantees or support for Internet multimedia applications. Meanwhile, the multicast sending rate for Internet DTV at the server must meet two requirements: 1) it must adapt well to changes in network congestion; 2) it must meet the rate requirements of the programs. Multicast sending-rate control is therefore needed at the server. In this paper the difficulties of Internet DTV stream multicast are analyzed, and a multicast rate control approach for network DTV streams based on buffer management is given. By controlling the sending rate logically, the algorithm both adapts to changes in network traffic and satisfies the requirements of the DTV decoder. The server first calculates the initial sending rate from the timestamps included in the program streams and then detects the TCP-friendly available bandwidth. Finally, taking the initial rate, the available bandwidth, and the data occupancy ratio of the send buffer into consideration together, the actual sending rate is obtained. The simulation results show that, in comparison with the TFMCC method, the approach reduces the average packet loss ratio, increases the transmission quality of Internet DTV, and keeps the sending buffer from overflowing.
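A minimal sketch of the final rate decision the abstract outlines, blending the timestamp-derived initial rate, the TCP-friendly available bandwidth, and send-buffer occupancy; the blend rule and thresholds are invented for illustration and are not the paper's formula.

```python
# Illustrative sending-rate blend (assumed rule, not the paper's).
def send_rate(initial_rate, tcp_friendly_bw, buffer_occupancy):
    """buffer_occupancy in [0, 1]; rates in kb/s."""
    rate = min(initial_rate, tcp_friendly_bw)  # stay TCP-friendly
    if buffer_occupancy > 0.8:      # buffer near overflow: drain faster
        rate = initial_rate * 1.1
    elif buffer_occupancy < 0.2:    # buffer near empty: slow down
        rate = rate * 0.9
    return rate

print(send_rate(initial_rate=4000, tcp_friendly_bw=3500,
                buffer_occupancy=0.85))
```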
Probabilistic Character for Localization Problem in Sensor Networks
Cui Xunxue, Fang Hongyu, and Zhu Xulai
2007, 44(4):  630-635. 
Sensor networks hold the promise of many new applications in the area of monitoring and control. Sensor positioning is a fundamental and crucial issue for network operation and management. The motivation of this paper is to explore the probabilistic behavior of the localization problem in sensor networks. Owing to the peculiarities of sensor networks, their localization behavior needs to be investigated with probability theory to obtain some general principles. The position estimation error and node connectivity issues are analyzed. First, it is proved that the linear transform of the position estimate in node localization follows a χ² distribution. Second, if a sensor network is deployed uniformly over the whole field, the node connectivity and the number of nodes in the deployed area follow Poisson distributions. According to the simulation results, certain connectivity requirements must be met for positioning to succeed. The model of position errors can be used to evaluate the impact of a location discovery algorithm on subsequent tasks in a multi-hop ad hoc sensor network. Understanding these behavioral characteristics is important for implementing the localization process and evaluating localization methods.
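A short formal statement of the connectivity result the abstract reports, under the stated uniform-deployment assumption; the symbols ρ (node density) and r (radio range) are introduced here for illustration.

```latex
% With n nodes placed uniformly over area A, density \rho = n/A, the
% number of neighbors within radio range r is approximately Poisson:
\[
  P(\deg v = k) \;=\; \frac{(\rho \pi r^2)^k}{k!}\, e^{-\rho \pi r^2},
  \qquad k = 0, 1, 2, \ldots
\]
```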
Mobile Robot Hierarchical Simultaneous Localization and Mapping Based on Active Loop Closure Constraint
Huang Qingcheng, Hong Bingrong, Li Maohai, and Luo Ronghua
2007, 44(4):  636-642. 
A hierarchical map representation approach based on an active loop closure constraint is proposed to implement mobile robot simultaneous localization and mapping (SLAM) efficiently with Rao-Blackwellized particle filters (RBPF). The hierarchical map includes local metric maps and a global topological map, and at the global level an active loop-closing strategy based on information entropy is proposed to reduce both map uncertainty and robot trajectory uncertainty. The estimates of relative locations between local metric feature maps are maintained with a local map alignment algorithm, and a minimization procedure using the loop closure constraint with backward correction reduces the uncertainty between local maps. The robot is equipped only with monocular vision and odometry, and a robust observation model is constructed. The scale invariant feature transform (SIFT) is used to extract image features that serve as natural landmarks; SIFT features are invariant to image scaling, rotation, and change in 3D viewpoint, and are highly distinctive owing to a special description technique. A fast nearest neighbor search algorithm using a KD-tree is presented to implement SIFT feature matching at a time cost of O(log₂N). Experiments on a real robot show that the proposed method provides an efficient and robust way of implementing SLAM.
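A minimal sketch of SIFT-descriptor matching with a KD-tree; the paper's own fast search is not spelled out in the abstract, so SciPy's cKDTree and random 128-D vectors stand in, with Lowe's ratio test as a typical acceptance rule.

```python
# KD-tree nearest-neighbor matching of SIFT-like descriptors (illustrative).
import numpy as np
from scipy.spatial import cKDTree

map_desc = np.random.rand(1000, 128)   # landmark SIFT descriptors (128-D)
query    = np.random.rand(50, 128)     # descriptors from the current frame

tree = cKDTree(map_desc)
# Two nearest neighbors per query descriptor for Lowe's ratio test.
dist, idx = tree.query(query, k=2)
good = dist[:, 0] < 0.8 * dist[:, 1]   # accept distinctive matches only
print("matched landmarks:", idx[good, 0][:10])
```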
Agent-Based Automatic Composition of Semantic Web Services
Qiu Lirong, Shi Zhongzhi, Lin Fen, and Chang Liang
2007, 44(4):  643-650. 
Web service composition is the process of selecting, combining, and executing existing services to establish reusable and versatile interoperable applications. Currently, humans perform Web service composition manually by reading the information provided on services' Web pages. With the ever-increasing number of Web services available on the Web, analyzing them and generating composition processes manually is already beyond human ability, which has triggered an active area of research and development on automatic Web service composition; many researchers have proposed composition approaches based on AI planning techniques. The major technical contributions of this paper are: 1) The current problems and solutions of semantic Web services are surveyed, the relationships among the semantic Web, Web services, and agents are analyzed, and an agent-based semantic Web service composition architecture is proposed. 2) Description logic, treated as the formal tool of the semantic Web, is used to formalize the service composition problem and to represent five different relationships between services. 3) An autonomic semantic Web service composition method is proposed based on planning algorithms, by adding limiting conditions and adopting agent strategies. Testing the approach on a simple yet realistic example, the preliminary results demonstrate that the implementation provides a useful solution.
An Algorithm for Clustering of Outliers Based on Key Attribute Subspace
Jin Yifu, Zhu Qingsheng, and Xing Yongkang
2007, 44(4):  651-659. 
Discovering and analyzing outlying observations is an important part of data mining. Outliers may contain crucial information, so detecting them is much more significant than detecting general patterns in applications such as credit card fraud in finance, calling fraud in telecommunications, network intrusion, and disease diagnosis. Existing outlier mining algorithms focus on detecting and identifying outliers, but the study of outliers includes both mining them and analyzing why they are exceptional; research on explaining and analyzing outliers currently lags somewhat behind outlier mining technology. Fully analyzing outliers inevitably requires a great deal of knowledge from the task domain. However, further discoveries about outliers can be obtained by studying the distribution characteristics of the dataset in attribute space. By analyzing the origin and features of outliers and using rough set theory, a concept of outlying partition similarity is defined, and an algorithm for clustering outliers based on key attribute subspace (COKAS) is proposed. The approach provides extended knowledge about the identified outliers and improves understanding of the whole dataset. Experimental results on real multi-dimensional datasets show that the algorithm is scalable and efficient.
A Quick Emergency Response Plan Generation System Combining CBR and RBR
Luo Jiewen, Shi Zhiping, He Qing, and Shi Zhongzhi
2007, 44(4):  660-666. 
Generating response plans to deal with emergencies is important and can greatly decrease their cost. The traditional method is to use an expert system (a rule-based reasoning system) to generate the decision. However, this approach is often slow, and it is sometimes hard to formulate the reasoning rules. This paper combines two artificial intelligence techniques, case-based reasoning (CBR) and rule-based reasoning (RBR), to construct a quick emergency response plan generation system. It improves performance and relieves the knowledge acquisition bottleneck of traditional RBR systems. With the CBR tool CbrSys, decision support is generated from previous emergency cases and solutions in the database through similarity retrieval. When a new emergency event happens, the case base is first searched for similar solutions. Only when solution cases cannot be obtained from the case base, or the case solutions are unsatisfactory, is the RBR system used to reason out a solution; the reasoning result is then stored in the case base for future use. A series of experiments is conducted to test its efficiency, showing that it is superior to traditional RBR systems in response speed. The system is now applied in a flood decision support system and a city emergency interaction project.
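A minimal sketch of the CBR-first, RBR-fallback control flow the abstract describes; the similarity measure, threshold, and case format are illustrative assumptions rather than CbrSys internals.

```python
# Retrieve similar past cases first; fall back to rule-based reasoning
# only on a miss, then retain the new solution for future reuse.
def similarity(a, b):
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def respond(event, case_base, rbr_solve, threshold=0.7):
    best = max(case_base, key=lambda c: similarity(event, c["event"]),
               default=None)
    if best and similarity(event, best["event"]) >= threshold:
        return best["plan"]                 # CBR hit: fast path
    plan = rbr_solve(event)                 # CBR miss: slow rule reasoning
    case_base.append({"event": event, "plan": plan})  # retain for reuse
    return plan

cases = [{"event": {"type": "flood", "level": "high"}, "plan": "evacuate"}]
print(respond({"type": "flood", "level": "high"}, cases,
              rbr_solve=lambda e: "consult rules"))  # evacuate (CBR hit)
```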
Optimal Decomposition of Decision Table Systems Based on Bayesian Networks
Hu Xiaojian, Yang Shanlin, Hu Xiaoxuan, and Fang Fang
2007, 44(4):  667-673. 
It is shown that the decomposition method based on the GDF (generalized decision function) is equivalent to that based on Bayesian networks in decision table systems. It is pointed out that the problem of information system decomposition boils down to solving the multiply sectioned Bayesian network (MSBN) corresponding to the decision table system and its d-separator set (d-sepset); for the same Bayesian network (BN), various d-sepsets give various decomposition models. The relation between d-sepsets of an MSBN and separator sets (sepsets) of the linked junction forest (LJF) is put forward and proven, and it is shown that the sepsets of the LJF determine the optimal d-sepsets of the MSBN. Therefore, decomposing decision table systems also amounts to solving for the sepsets of the LJF. Finally, the feasibility of the proposed method is verified through an example.
A Direct Clustering Algorithm Based on Generalized Information Distance
Ding Shifei, Shi Zhongzhi, Jin Fengxiang, and Xia Shixiong
2007, 44(4):  674-679. 
In this paper a novel direct clustering algorithm based on generalized information distance (GID) is put forward. First, based on information theory, the basic concept of the measure of diversity is given and an inequality about it is proved; based on this inequality, the concept of increment of diversity is defined and discussed. Second, by analyzing distance measures, two new concepts, generalized information distance (GID) and improved generalized information distance (IGID), are proposed, and a new direct clustering algorithm based on GID and IGID is designed. Finally, the algorithm is applied to soil fertility data processing and compared with a hierarchical clustering algorithm (HCA). The results of this application show that the algorithm presented here is feasible and effective. Because of its simplicity and robustness, it provides a new approach for research in pattern recognition theory.
A Secure Multi-Attribute Auction Model
Chen Xiang, Hu Shanli, and Shi Manyin
2007, 44(4):  680-685. 
Internet auctions are not only an integral part of electronic commerce but have also become a promising field for applying autonomous agents and multi-agent system (MAS) technologies. Meanwhile, the auction, as an efficient resource allocation method, has important applications in decentralized scheduling and in MAS problems such as coalition formation, and has received more and more attention among scholars. Considering multiple attributes of a resource should be a common phenomenon in an auction; however, it has received relatively little attention. The security mechanisms and privacy of an auction are also important aspects of implementing one. A general auction model is given and a Vickrey-type multi-attribute auction model is studied. To address trust problems among participants, a mobile agent is introduced in this model for a decentralized implementation of the auction. Furthermore, a secure two-party computation protocol is introduced to meet the privacy requirements of bidding. Finally, a secure protocol named SVAMA is derived for the model. SVAMA has useful properties such as being strategy-proof, false-name-proof, and Pareto efficient. The model presented in this paper is more general than existing multi-attribute auction models, has a dominant strategy for sellers, and is false-name-proof; these characteristics improve on existing multi-attribute auction methods, including the work of Esther David, Felix Brandt, et al.
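A minimal sketch of a Vickrey-type (second-score) multi-attribute auction, the mechanism family the paper builds on: bids are scored across attributes, the highest-scoring bid wins, and the winner need only meet the second-best score, which is what makes truthful bidding dominant. The scoring weights are illustrative, and SVAMA's secure two-party computation layer is omitted.

```python
# Second-score multi-attribute auction (illustrative weights and bids).
def score(bid, weights):
    return sum(weights[attr] * value for attr, value in bid.items())

def second_score_auction(bids, weights):
    ranked = sorted(bids, key=lambda b: score(b["offer"], weights),
                    reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Winner must fulfil a contract worth the runner-up's score.
    return winner["bidder"], score(runner_up["offer"], weights)

bids = [
    {"bidder": "A", "offer": {"price": -100, "quality": 9, "delivery": 7}},
    {"bidder": "B", "offer": {"price": -90,  "quality": 6, "delivery": 8}},
]
weights = {"price": 1.0, "quality": 10.0, "delivery": 5.0}
print(second_score_auction(bids, weights))  # ('A', 10.0)
```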
PKUMoDEL: A Model-Driven Development Environment for Languages Family
Ma Haohai, Xie Bing, Ma Zhiyi, Zhang Nengbin, and Shao Weizhong
2007, 44(4):  686-692. 
The UML family of languages and model driven architecture (MDA), stemming from the Object Management Group, turn the role of model-driven software development from contemplative to productive. As a result, model-driven development environments should not only put the concept of the model on the critical path of the software development life cycle, but also support the definition and manipulation of UML-based meta-models, catering for the upgrading of UML and the emergence of new members of the UML family. A model-driven development environment for a language family (PKUMoDEL) incorporates MOF-based meta-modeling tools and UML 2.0-based modeling tools. The environment addresses issues such as the definition, extension, and evaluation of meta-models; the adaptation and evolution of modeling tools; the integration of various modeling tools; the reusability of models; and the mapping and deployment of models onto implementation platforms.
Dynamic Role Assignment for Multi-Agent System with Parallel Constraints Among Goals
Wang Hongbing, Fan Zhihua, and She Chundong
2007, 44(4):  693-700. 
Multi-agent systems are increasingly becoming a powerful paradigm for modeling and developing large, complex, and distributed information systems. A mechanism for dynamic role assignment is often required in systems developed with multi-agent technology, yet the known dynamic role assignment algorithms in the current literature do not consider the influence of constraints among goals on role assignment. This paper first proposes a model of dynamic role assignment for multi-agent systems with parallel constraints among goals. Multiple role-assignment manager agents are introduced to share the computational task of role assignment, in order to avoid the computational bottleneck caused by a single role-assignment manager agent. Then, an algorithm for goal partition is presented based on the goal structure diagram with parallel constraints. Finally, the role assignment algorithm is given and its time complexity is analyzed. The running time of the role assignment algorithm is investigated experimentally, and the theoretical result is shown to be consistent with the experimental result. The goal partition in this model ensures that the computational results produced by multiple role-assignment manager agents can be combined directly without checking parallel constraints.
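A minimal sketch of goal partition under parallel constraints: goals linked by a parallel constraint are grouped together, so the partition is the set of connected components of the constraint graph, and the groups can then be handled by separate manager agents without cross-checking. The paper's algorithm works on the goal structure diagram and may differ in detail.

```python
# Partition goals into connected components of the parallel-constraint
# graph; one manager agent can then handle each group independently.
from collections import defaultdict

def partition_goals(goals, parallel_pairs):
    adj = defaultdict(set)
    for a, b in parallel_pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for g in goals:
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:                      # depth-first traversal
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        groups.append(comp)
    return groups

print(partition_goals(["g1", "g2", "g3", "g4"],
                      [("g1", "g2"), ("g2", "g3")]))
# [{'g1', 'g2', 'g3'}, {'g4'}]  (set ordering may vary)
```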
Irregular Patch for Texture Synthesis
Xiong Changzhen, Huang Jing, and Qi Dongxu
2007, 44(4):  701-706. 
Patch-based texture synthesis methods have achieved significant progress. Nevertheless, a serious problem remains at the seam of two adjacent patches. In order to preserve the border structures of adjacent patches in the sample texture, a new texture synthesis method is developed using irregular patches, a random overlapping method, and an optimal matching algorithm. The method consists of two stages. In the first stage, an irregularly shaped feature patch is extracted by identifying a set of border structure features in the sample texture, based on the human visual system (HVS), using intelligent selection tools; different irregular patches are then obtained by introducing small deformations. In the second stage, guided by the irregular patch image, the random overlapping method and the optimal matching method are applied to texel-overlapped texture and texel-nonoverlapped texture respectively, to decide how to paste the irregular patches into the output texture correctly. A variety of synthesis examples are shown, including structured textures, near-regular structured images, and irregular textures, and a comparison between the new method and Graphcut is given. Compared with existing methods, the new method preserves border structures more effectively, and the experimental results also demonstrate that it can produce “texel-style texture” of high quality.
A Performance Model of I/O-Intensive Parallel Applications
Chen Yongran, Qi Xingyun, and Dou Wenhua
2007, 44(4):  707-713. 
High performance computing (HPC) is widely used in science and engineering to solve large computational problems. The peak performance of computers increases continuously and rapidly, but the sustained performance achieved by real applications does not increase at the same rate, and the gap between them is widening. Performance modeling of parallel systems, one of the effective ways to address this problem, draws the attention of both the research and industry communities. In this paper, an open performance model infrastructure, PMPS(n), and a realization of this infrastructure, PMPS(3), a performance model of I/O-intensive parallel applications, are given and used to perform NPB benchmarking on Pentium IV cluster systems. The experimental results indicate that PMPS(3) forecasts better than PERC for I/O-intensive applications and does as well as PERC for storage-intensive applications. Further analysis indicates that the results of the performance model can be influenced by data dependences, control dependences, and operation overlaps, so such factors must be considered in performance models to improve forecast precision. The experimental results also show that PMPS(n) has very good scalability.
Design of Application Specific Instruction-Set Processors Directed by Configuration Stream Driven Computing Architecture
Li Yong, Wang Zhiying, Zhao Xuemi, and Yue Hong
2007, 44(4):  714-721. 
Efficiency and flexibility are crucial features of processors in embedded systems: embedded processors need to be efficient in order to meet real-time requirements with low power consumption for specific algorithms, while flexibility allows design modifications in response to different applications. In this paper, the configuration stream driven computing architecture (CSDCA) is proposed as a flexible yet application-specific hardware solution for implementing embedded processors. Different from the traditional very long instruction word (VLIW) architecture and the transport triggered architecture (TTA), in the CSDCA not only is the responsibility for controlling data transports moved from the hardware to the compiler, but the interconnect network between function units is also visible to the compiler. Routing can therefore be performed by the compiler, and the architecture can support efficient but complex interconnections, achieving low area overhead with low power dissipation. Directed by the CSDCA, an efficient design method for the hardware implementation of application-specific instruction-set processors (ASIPs) is presented, which supports reconfigurable segmented-bus networks. Experimental results with several practical applications show that the segmented-bus network saves 53% in power consumption and 38.7% in bus count while maintaining the same speed as the simple-bus network.
Research on Reliability of a Reconfigurable Data Processing System Based on JBits
Ren Xiaoxi, Li Renfa, Jin Shengzhen, Zhang Kehuan, and Wu Qiang
2007, 44(4):  722-728. 
The Space Solar Telescope (SST) is a scientific satellite that employs FPGAs (field programmable gate arrays) to preprocess the huge volume of data gathered by its sensors. Given its high construction and maintenance costs and harsh working environment, ensuring reliability is a great challenge. An improved TMR (triple module redundancy) architecture is presented, in which the data arbiter can detect differences among its three inputs and send an error message to the main controller to launch a fault scanning operation. A new method is proposed for the main controller to detect and remove hardware faults based on the configuration data of the reconfigurable system: test circuits and test stimuli are constructed by generating different patterns of configuration data. Since the reconfiguration process and the structure of the configuration data are very complex, JBits, originally written to facilitate reconfigurable system development, is used to simplify the processing of the configuration data. These measures can detect faults when they appear and remove them by hardware reconfiguration, thus taking full advantage of TMR and reconfigurability to improve reliability. Experimental results show that minor routing resource faults can be repaired using JBits and JRoute. The availability of the reconfigurable system with the proposed architecture and fault processing method is modeled and analyzed using Markov process theory; the analysis results show that reliability is improved greatly.
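A minimal sketch of the improved TMR arbiter's logic as the abstract describes it, majority-voting three module outputs and flagging any disagreement to the main controller; the interface names are assumptions, and the real design is FPGA hardware rather than software.

```python
# Software model of a TMR data arbiter with an error flag.
def tmr_arbiter(a, b, c):
    """Return (voted_output, error_detected) for three module outputs."""
    voted = (a & b) | (b & c) | (a & c)   # bitwise majority of three words
    error = not (a == b == c)             # any mismatch -> tell controller
    return voted, error

out, err = tmr_arbiter(0b1011, 0b1011, 0b1111)
print(bin(out), err)  # 0b1011 True: fault masked and flagged for scanning
```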