ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 April 2006, Volume 43 Issue 4
Paper
Semi-Online Scheduling Algorithm of Multi-Agent in Network Management
Liu Bo, Li Wei, Luo Junzhou, and Bian Zheng'ai
2006, 43(4):  571-578. 
The agent scheduling algorithm has a great influence on the efficiency of task execution in agent-based network management. Existing algorithms do not take the relationships among tasks into account, so considerable network load and waiting time are incurred when they are confronted with complicated task systems. To solve the scheduling problem in network management, a scheduling framework tailored to the characteristics of network management is presented, and a semi-online multi-agent scheduling algorithm (SONL) based on the dependencies among sub-tasks is proposed. Competitive analysis and proof show that the semi-online scheduling algorithm outperforms the existing fully online scheduling algorithm. Tests of the performance and scheduling time of SONL are consistent with the theoretical results. The algorithm offers a new approach to dynamic agent scheduling in network management.
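The abstract gives no pseudocode for SONL; as a rough illustration of scheduling dependent sub-tasks onto agents, the sketch below performs greedy list scheduling over a task DAG, assigning each ready sub-task to the earliest-available agent. The task set, dependency structure, and greedy placement rule are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: greedy list scheduling of dependent sub-tasks onto agents.
# The task graph, durations, and greedy placement rule are illustrative only.
from collections import deque

def schedule(durations, deps, num_agents):
    """durations: {task: time}; deps: {task: set of prerequisite tasks}."""
    indeg = {t: len(deps.get(t, set())) for t in durations}
    children = {t: [] for t in durations}
    for t, pres in deps.items():
        for p in pres:
            children[p].append(t)
    finish = {}                       # task -> finish time
    agent_free = [0.0] * num_agents   # next free time per agent
    ready = deque(t for t, d in indeg.items() if d == 0)
    while ready:
        t = ready.popleft()
        est = max((finish[p] for p in deps.get(t, set())), default=0.0)
        a = min(range(num_agents), key=lambda i: max(agent_free[i], est))
        start = max(agent_free[a], est)
        finish[t] = start + durations[t]
        agent_free[a] = finish[t]
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return finish

print(schedule({"a": 2, "b": 3, "c": 1}, {"c": {"a", "b"}}, 2))
```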
Stochastic Sleeping for Energy-Conserving in Large Wireless Sensor Networks
Shi Gaotao and Liao Minghong
2006, 43(4):  579-585. 
Scheduling nodes to work alternately can prolong the network lifetime efficiently. Existing solutions usually depend on geographic information, which may compromise overall effectiveness. In this paper, a stochastic sleeping scheduling mechanism is studied, and four stochastic scheduling schemes based on different kinds of information are introduced. Detailed analysis and simulation are provided. The results show that the stochastic sleeping mechanism can reduce the number of working nodes while guaranteeing a high coverage rate at different levels, provided the sleeping probability is set properly according to neighbor information.
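The four schemes themselves are not spelled out in the abstract; the following minimal sketch shows one plausible neighbor-based sleeping rule, keeping roughly a fixed number of nodes awake per neighborhood. The specific rule and parameter are assumptions.

```python
# Hypothetical sketch: a node sleeps with a probability that grows with its
# neighbor count, so densely covered regions keep fewer nodes awake.
# The rule below (keep ~target_active nodes per neighborhood) is an assumption,
# not one of the four schemes from the paper.
import random

def sleep_probability(num_neighbors, target_active=3):
    """Keep roughly `target_active` nodes awake per neighborhood."""
    if num_neighbors + 1 <= target_active:
        return 0.0
    return 1.0 - target_active / (num_neighbors + 1)

def decide_state(num_neighbors, target_active=3):
    p = sleep_probability(num_neighbors, target_active)
    return "sleep" if random.random() < p else "active"

for n in (1, 5, 20):
    print(n, round(sleep_probability(n), 2), decide_state(n))
```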
Study of Mechanism of Trust Management to P2P Networks Based on the Repeated Game Theory
Liu Ye and Yang Peng
2006, 43(4):  586-593. 
Self-organization is a fundamental property of P2P networks. The local views of nodes managed in this self-organized mode make it hard to establish a corresponding trust model. Current trust models based on global reputation computed by iterative methods have high time complexity and incur excessive packets, which limits their scalability when applied to P2P networks. A novel trust model based on repeated game theory, named RGTrust, is presented in this paper. Under the assumption that individual peers are rational and selfish, the mechanism of RGTrust is described in detail. P2P networks using the RGTrust scheme show good performance and stability. Furthermore, compared with other types of trust models, RGTrust yields both lower time complexity and fewer incurred packets. Simulations also verify these conclusions.
Proxy Caching for Interactive Streaming Media
Liu Wei, ChunTung Chou, Cheng Wenqing, and Du Xu
2006, 43(4):  594-600. 
A typical assumption in proxy-based streaming applications is that users favor the beginning part of a media object. However, in interactive scenarios any part of the media content can become the focus of users. A new segment-based caching algorithm, named popularity-wise caching, is proposed for interactive streaming. It can cache the hot parts of the media content under an arbitrary distribution of content popularity. Simulation results show that the performance of current segment-based caching algorithms degrades as user interactivity increases, while popularity-wise caching provides lower user startup latency and bandwidth consumption under different user request modes and degrees of interactivity.
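A minimal sketch of the popularity-driven idea is given below: segments are ranked by observed request popularity per unit size and admitted greedily until the proxy cache is full. The ranking rule and data layout are assumptions, not the paper's exact policy.

```python
# Hypothetical sketch of "popularity-wise" segment caching: rank media segments
# by (observed request popularity / segment size) and fill the proxy cache
# greedily.  The ranking rule and data layout are illustrative assumptions.
def select_segments(segments, cache_capacity):
    """segments: list of (segment_id, size, popularity). Returns cached ids."""
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    cached, used = [], 0
    for seg_id, size, _pop in ranked:
        if used + size <= cache_capacity:
            cached.append(seg_id)
            used += size
    return cached

segments = [("v1-s0", 10, 50), ("v1-s7", 10, 120), ("v2-s3", 20, 90)]
print(select_segments(segments, 30))   # hottest segments per unit size first
```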
A Distributed Network Monitoring Model with Link Constraint
Cai Zhiping, Yin Jianping, Liu Xianghui, Liu Fang, and Lü Shaohe
2006, 43(4):  601-606. 
A distributed network monitoring system can obtain up-to-date performance information about the network effectively. The monitoring and aggregating procedure requires the establishment of reliable, low-delay and low-cost aggregating routes; hence the aggregating procedure is constrained by link delay and hop count. Addressed in this paper are the problem of optimizing a distributed monitoring system and the problem of optimally upgrading the existing monitoring infrastructure as the network evolves. It is shown that both problems are NP-hard and can be mapped to the well-known weighted set cover problem by assigning a suitable weight function. A greedy algorithm can then solve these problems with an approximation ratio of ln n + 1, where n is the number of monitored nodes. Furthermore, how to choose an appropriate link constraint value is discussed by simulation.
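The ln n + 1 bound cited here is the classical guarantee of greedy weighted set cover; a generic greedy sketch (not the paper's particular construction of sets and weights from links) is shown below.

```python
# Generic greedy weighted set cover, the classical ln(n)+1-approximation the
# abstract refers to.  Sets, weights, and the universe are illustrative.
def greedy_set_cover(universe, sets, weights):
    """sets: {name: frozenset of covered nodes}; weights: {name: cost}."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set with the lowest cost per newly covered element.
        best = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: weights[s] / len(sets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = {1, 2, 3, 4, 5}
sets = {"A": frozenset({1, 2, 3}), "B": frozenset({3, 4}), "C": frozenset({4, 5})}
weights = {"A": 2.0, "B": 1.0, "C": 1.5}
print(greedy_set_cover(universe, sets, weights))
```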
ERSN: An Efficient and Robust Super-Peer P2P Network
Zheng Qianbing, Peng Wei, and Lu Xicheng
2006, 43(4):  607-612. 
A super-peer P2P network takes advantage of the heterogeneity of capabilities across peers to overcome the bandwidth barrier. However, its construction protocols are inefficient and its topology is fragile. An efficient and robust super-peer P2P network, ERSN, is presented. ERSN uses a peer sampling protocol based on random walks to estimate the network requirements for constructing an efficient super-peer overlay, and establishes emergency links between leaf peers to obtain a robust network. Experimental results show that, compared with the Gnutella 0.6 network, the number of peers that process a locating request is reduced by up to 76%, and the hit rate for locating files increases by up to 36.4% when many super-peers and leaf peers leave the network simultaneously.
A New Method of Audio-Digital Watermarking Based on Trap Strategy
Wang Rangding, Jiang Gangyi, Chen Jin'er, and Zhu Bin
2006, 43(4):  613-620. 
The recent growth of networked multimedia systems has increased the need for the protection of digital media, which is particularly important for the protection and enhancement of intellectual property rights. The ubiquity of digital media in Internet and digital library applications has called for new methods of digital copyright protection and new measures of data security. Digital watermarking techniques have been developed to meet these growing concerns and have become an active area of research. A new scheme for robust, high-quality audio watermark embedding is presented in this paper, in which the effects of different attacks are taken into account and a trap strategy is designed. The trap strategy ensures that if a strong attack removes the watermark embedded in the audio, it destroys the watermarked audio as well; otherwise the watermark can be detected with a very high detector response. Watermarks are embedded by quantizing the audio's DCT coefficients using the trap strategy and are extracted in different ways depending on the attack. A binary image used as the watermark is embedded into the audio segments. The algorithm resists de-synchronization attacks such as random cropping of audio samples, A/D and D/A conversion, low-pass filtering, and so on, and the extracted watermark can be identified directly. Compared with a previous algorithm (cocktail watermarking), the watermark is blindly detected with a very high detector response. Experimental results demonstrate that this audio watermarking scheme is remarkably effective.
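Embedding by quantizing DCT coefficients is, at its core, quantization index modulation; a minimal QIM sketch is shown below. The block size, coefficient index, and quantization step are assumptions, and the paper's trap strategy is not reproduced.

```python
# Minimal quantization-index-modulation (QIM) sketch: embed one bit per block
# by quantizing a mid-frequency DCT coefficient to an even or odd multiple of
# the step delta.  Block size, coefficient index, and delta are assumptions;
# the paper's trap strategy is not modeled here.
import numpy as np
from scipy.fft import dct, idct

BLOCK, COEF, DELTA = 1024, 10, 0.05

def embed(audio, bits):
    audio = audio.copy()
    for i, bit in enumerate(bits):
        block = audio[i * BLOCK:(i + 1) * BLOCK]
        c = dct(block, norm="ortho")
        q = np.round(c[COEF] / DELTA)
        if int(q) % 2 != bit:          # force the parity to carry the bit
            q += 1
        c[COEF] = q * DELTA
        audio[i * BLOCK:(i + 1) * BLOCK] = idct(c, norm="ortho")
    return audio

def extract(audio, num_bits):
    bits = []
    for i in range(num_bits):
        c = dct(audio[i * BLOCK:(i + 1) * BLOCK], norm="ortho")
        bits.append(int(np.round(c[COEF] / DELTA)) % 2)
    return bits

audio = np.random.randn(8 * BLOCK).astype(np.float64)
marked = embed(audio, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract(marked, 8))
```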
Intrusion Detection for Ad Hoc Routing Based on Fuzzy Behavior Analysis
Zhang Xiaoning and Feng Dengguo
2006, 43(4):  621-626. 
A mobile ad hoc network is a kind of network that does not need any infrastructure; its mobile nodes are self-organized and provide routing for one another. Ad hoc networks are extremely vulnerable to attacks, especially internal attacks. In this paper, an intrusion detection scheme is proposed to detect internal routing attacks. In the scheme, every node monitors its adjacent nodes and tries to detect their misbehavior by analyzing the difference between their routing behavior and the routing specification. An FBA (fuzzy behavior analysis) method is introduced into the data analysis procedure, which greatly decreases the false alarm rate. Simulation results show that the scheme can effectively detect intrusions while keeping the false alarm rate comparatively low.
The Implementation of Alert Aggregation and Dataset Testing
Qian Jun, Xu Chao, and Shi Meilin
2006, 43(4):  627-632. 
Intrusion detection systems are receiving considerable attention and serve as an indispensable fortification for shielding networks against attackers. To improve their effectiveness, distributed schemes are developed and deployed in real networks. These distributed schemes fall into two major categories according to their data collection and detection engines. Both generate a mass of alerts and false positives that flood administrators and thus impair the effectiveness of the IDS. A two-stage real-time solution based on the DBTCAN (density-based time clustering of applications with noise) algorithm is presented for alert aggregation and correlation in distributed contexts. The effectiveness of the approach and prototype is demonstrated on an intrusion detection evaluation dataset: attacks are detected more accurately with a low rate of false alarms, and more succinct and informative alerts are provided to administrators, with redundant alarms greatly reduced. Comparative experiments and analysis show that the approach is effective in distributed probing detection and that the system gives better results in real-time detection.
An Intrusion Detection Ensemble System Based on the Features Extracted by PCA and ICA
Gu Yu, Xu Zongben, Sun Jian, and Zheng Jinhui
2006, 43(4):  633-638. 
An intrusion detection system should be able to detect intrusion behaviors and learn novel intrusion types. In this paper, an intrusion detection ensemble system is proposed, which integrates two incremental SVM (support vector machine) subsystems. The two subsystems process the features extracted by PCA and ICA respectively. The intrusion information is represented by the support vector set, and the weights of the integration are adjusted by a genetic algorithm. Experiments show that the ensemble system combines the advantages of the two subsystems and outperforms each subsystem as well as the standard SVM system.
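A hedged sketch of the two-branch structure follows, using scikit-learn's batch PCA, FastICA, and SVC in place of the paper's incremental SVMs, and fixed vote weights instead of genetic-algorithm tuning; the data and weights are toy assumptions.

```python
# Hypothetical sketch of the two-branch idea: one classifier on PCA features,
# one on ICA features, combined by a weighted vote.  scikit-learn's batch PCA,
# FastICA, and SVC stand in for the paper's incremental SVMs, and the vote
# weights are fixed rather than tuned by a genetic algorithm.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

def train_ensemble(X, y, n_components=5, weights=(0.5, 0.5)):
    pca = PCA(n_components=n_components).fit(X)
    ica = FastICA(n_components=n_components, random_state=0).fit(X)
    clf_pca = SVC(probability=True).fit(pca.transform(X), y)
    clf_ica = SVC(probability=True).fit(ica.transform(X), y)
    def predict(Xt):
        p = (weights[0] * clf_pca.predict_proba(pca.transform(Xt))
             + weights[1] * clf_ica.predict_proba(ica.transform(Xt)))
        return clf_pca.classes_[np.argmax(p, axis=1)]
    return predict

# Toy data standing in for network-connection features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = train_ensemble(X, y)
print(predict(X[:5]), y[:5])
```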
A Low Cost RSA Chip Design Based on CRT
Wu Min, Zeng Xiaoyang, Han Jun, Ma Yongxin, Wu Yongyi, and Zhang Guoquan
2006, 43(4):  639-645. 
In this paper, a VLSI design and ASIC implementation of a low-cost RSA cryptosystem is presented, which is based on a modified Montgomery algorithm and the Chinese remainder theorem (CRT). By adopting a novel scheduling method, 1152-bit modular exponentiation is realized with a 576-bit modular multiplier unit, which greatly reduces the hardware complexity. By using the CRT technique, a throughput comparable to that of a general 1024-bit RSA cryptosystem is achieved. The experimental results show that a 1024-bit modular exponentiation can be performed in about 1.2 million cycles with fewer than 54K gates. With a 40 MHz system clock, a signature rate of over 30 Kbps can be achieved.
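The arithmetic idea the chip exploits can be shown in a few lines of software: CRT splits one full-size private exponentiation into two half-size ones that are then recombined. The sketch below is a toy software illustration with tiny textbook parameters, not a model of the Montgomery multiplier hardware.

```python
# Toy illustration of the CRT speed-up the chip exploits: an n-bit private
# exponentiation is split into two n/2-bit exponentiations mod p and mod q
# and then recombined.  Key sizes here are tiny and for demonstration only.
def rsa_crt_decrypt(c, d, p, q):
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)                  # q^{-1} mod p
    m_p = pow(c, dp, p)                    # half-size exponentiation mod p
    m_q = pow(c, dq, q)                    # half-size exponentiation mod q
    h = (q_inv * (m_p - m_q)) % p          # Garner recombination
    return m_q + h * q

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
m = 65
c = pow(m, e, n)
assert rsa_crt_decrypt(c, d, p, q) == pow(c, d, n) == m
print(rsa_crt_decrypt(c, d, p, q))
```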
Modified Particle Swarm Optimization Based on Differential Model
Cui Zhihua and Zeng Jianchao
2006, 43(4):  646-653. 
Through an analysis of the differential model of particle swarm optimization, the effect of the maximum velocity constant is examined; the results show that it guarantees the existence of a solution but decreases the global search capability. A new broadened differential model is proposed, which treats the velocity and position vectors equally and searches the space simultaneously, and its stability condition is discussed. A modified particle swarm optimization algorithm is then derived. Optimization experiments on several examples show that the new algorithm has better global search capability and a rapid convergence rate.
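For reference, the baseline whose differential model the paper analyzes is standard PSO with a maximum-velocity clamp; a compact sketch of that baseline (not the authors' broadened model) is given below with conventional parameter values.

```python
# Standard particle swarm optimization with a maximum-velocity clamp, i.e. the
# baseline whose differential model the paper analyzes; the authors' broadened
# model itself is not reproduced here.  Parameters are conventional defaults.
import numpy as np

def pso(f, dim, n_particles=30, iters=200, v_max=0.5, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)          # the maximum-velocity constant
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))
print(pso(sphere, dim=3))
```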
An Annealing Expectation Maximization Algorithm
Qi Yingjian, Luo Siwei, Huang Yaping, Li Aijun, and Liu Yunhui
2006, 43(4):  654-660. 
Training a stochastic feedforward neural network with the expectation maximization (EM) algorithm has many merits, such as reliable global convergence, low cost per iteration, and easy programming. A new algorithm named A-EM (annealing expectation maximization), based on the EM algorithm, is proposed for training stochastic feedforward neural networks. The A-EM algorithm computes the conditional probability of the hidden variables in the network through the maximum entropy principle of thermodynamics. By simulating the annealing process and introducing a temperature parameter, it reduces the influence of the initial values on the final solution. The algorithm not only keeps the merits of the original EM but also helps the results converge to the global optimum. The convergence of the algorithm is proved, and its correctness and validity are verified by experiments.
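To make the annealing idea concrete, the sketch below applies temperature-scaled responsibilities to a 1-D Gaussian mixture rather than the paper's stochastic feedforward network: the E-step is computed at temperature T and T is cooled toward 1, which flattens early assignments and reduces sensitivity to initialization. The model, cooling schedule, and parameters are assumptions for illustration only.

```python
# Illustration of annealed EM on a 1-D Gaussian mixture (a stand-in for the
# paper's network setting): responsibilities are computed at temperature T
# (exponent 1/T) and T is lowered toward 1 over the iterations.
import numpy as np

def annealed_em_gmm(x, k=2, iters=50, t0=4.0):
    rng = np.random.default_rng(1)
    mu = rng.choice(x, k)
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for it in range(iters):
        T = max(1.0, t0 * (0.85 ** it))              # cooling schedule
        # E-step at temperature T (log domain for numerical stability)
        log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma ** 2)
                 - (x[:, None] - mu) ** 2 / (2 * sigma ** 2))
        log_r = log_p / T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, pi

x = np.concatenate([np.random.normal(-2, 0.5, 300), np.random.normal(3, 1.0, 300)])
print(annealed_em_gmm(x))
```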
A Reinforcement Learning Method Based on Node-Growing k-Means Clustering Algorithm
Chen Zonghai, Wen Feng, Nie Jianbin, and Wu Xiaoshu
2006, 43(4):  661-666. 
State variables of real-world problems are usually continuous real-valued variables, whereas standard reinforcement learning methods are only suitable for problems with finite discrete states. To apply them to real-world problems, the representation of continuous states must be handled properly. There are mainly two kinds of methods: parameterized function approximation and discretization. After analyzing the advantages and disadvantages of current adaptive partition methods, a partition method based on node-growing k-means clustering is proposed. Reinforcement learning methods based on the proposed clustering algorithm are presented for both discrete-action and continuous-action problems. Simulations are conducted on the mountain-car problem with discrete actions and on the double integrator problem with continuous actions. The results show that the proposed method can adaptively adjust the partition resolution and achieve an adaptive partition of the continuous state space, while the optimal policy is learned at the same time.
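A hedged sketch of one plausible node-growing rule follows: a new cluster centre is created whenever a visited state lies farther than a growth threshold from every existing centre, otherwise the nearest centre is nudged toward it. The growth criterion, threshold, and learning rate are assumptions, not necessarily the paper's.

```python
# Hypothetical sketch of node-growing clustering for state-space partitioning.
import numpy as np

class NodeGrowingKMeans:
    def __init__(self, grow_dist=0.5, lr=0.05):
        self.centers = []
        self.grow_dist = grow_dist
        self.lr = lr

    def update(self, state):
        state = np.asarray(state, dtype=float)
        if not self.centers:
            self.centers.append(state.copy())
            return 0
        d = [np.linalg.norm(state - c) for c in self.centers]
        i = int(np.argmin(d))
        if d[i] > self.grow_dist:                 # grow a new node
            self.centers.append(state.copy())
            return len(self.centers) - 1
        self.centers[i] += self.lr * (state - self.centers[i])
        return i                                  # index = discrete state id

part = NodeGrowingKMeans(grow_dist=0.4)
for s in np.random.default_rng(0).uniform(-1, 1, size=(500, 2)):
    part.update(s)
print("number of discrete states:", len(part.centers))
```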
Employing BP Neural Networks to Alleviate the Sparsity Issue in Collaborative Filtering Recommendation Algorithms
Zhang Feng and Chang Huiyou
2006, 43(4):  667-672. 
Poor recommendation quality is one major challenge in collaborative filtering recommender systems, and sparsity of the source data sets is one major cause of it. Popular singular value decomposition techniques and agent-based methods are able to alleviate this issue to a certain extent, but they also introduce new problems. To reduce sparsity, a novel collaborative filtering algorithm is designed, which first selects the users whose non-null ratings intersect the most as candidate nearest neighbors, and then builds backpropagation neural networks to predict the values of the null ratings in these candidates. Experiments are conducted on a standard dataset. The results show that this methodology increases the accuracy of the predicted values and thus improves the recommendation quality of the collaborative filtering algorithm.
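A rough sketch of the pre-filling idea is given below: candidate neighbours are picked by the size of the overlap of non-null ratings, and a small backpropagation network predicts the user's missing ratings from the candidates' ratings. The overlap rule, network size, 0-as-missing encoding, and use of scikit-learn's MLPRegressor are assumptions standing in for the paper's setup.

```python
# Hypothetical sketch: overlap-based neighbour selection plus a small BP
# network that fills a user's null ratings.  Data and parameters are toy values.
import numpy as np
from sklearn.neural_network import MLPRegressor

R = np.array([[5, 3, 0, 4],            # 0 marks a missing rating
              [4, 0, 0, 5],
              [5, 4, 2, 4],
              [1, 5, 3, 0],
              [4, 3, 2, 4]], dtype=float)

def fill_user(R, u, n_neighbors=2):
    overlap = [(np.sum((R[u] > 0) & (R[v] > 0)), v) for v in range(len(R)) if v != u]
    cand = [v for _, v in sorted(overlap, reverse=True)[:n_neighbors]]
    known = np.where(R[u] > 0)[0]
    missing = np.where(R[u] == 0)[0]
    if len(missing) == 0 or len(known) == 0:
        return R[u]
    X_train = R[cand][:, known].T            # items rated by u are the samples
    y_train = R[u, known]
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    filled = R[u].copy()
    filled[missing] = net.predict(R[cand][:, missing].T)
    return filled

print(fill_user(R, u=1))
```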
Research on Learning Weights of Fuzzy Production Rules Based on Maximum Fuzzy Entropy
Wang Xizhao and An Sufang
2006, 43(4):  673-678. 
Fuzzy production rules (FPRs) are a fundamental and important way of representing imprecise knowledge. To enhance the generalization capability of FPRs for the given examples, the concept of weight is introduced into FPRs, so it is necessary to explore a specific criterion for determining these weight values. Generally speaking, the usual criterion for adjusting the weight values, which is based only on improving training accuracy, often results in over-fitting. This paper aims to accomplish the task with a new method based on the well-known maximum fuzzy entropy principle. Provided that the training accuracy does not decrease, the testing accuracy increases with the fuzzy entropy of the training set; at the same time, adjusting the weight values changes the fuzzy entropy of the training set. Therefore, the new criterion can avoid the drawback of over-fitting and improve the testing accuracy.
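The abstract does not reproduce the entropy measure being maximized; for reference, a standard form of fuzzy entropy (De Luca and Termini) over a training set is recalled below. Whether the paper uses exactly this form is not stated in the abstract.

```latex
% De Luca--Termini fuzzy entropy of a fuzzy set A with membership degrees
% \mu_A(x_i) over n training examples (a standard form; the paper's exact
% definition is not given in the abstract).
H(A) \;=\; -\frac{1}{n}\sum_{i=1}^{n}
  \Bigl[\,\mu_A(x_i)\ln \mu_A(x_i) + \bigl(1-\mu_A(x_i)\bigr)\ln\bigl(1-\mu_A(x_i)\bigr)\Bigr]
```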
Algebraic-Trigonometric Splines
Chen Wenyu and Wang Guozhao
2006, 43(4):  679-687. 
The B-spline provides free control of parametric polynomials, but it cannot represent some transcendental curves. Many research works have therefore presented new models; however, these models can encompass neither high-order curves nor conical spirals and involutes of the circle. Thus a new kind of spline generated over the space spanned by {cos t, sin t, t·cos t, t·sin t, 1, t, t², …, t^(k-5)} (k ≥ 5) is presented, called the non-uniform algebraic-trigonometric spline of order k with respect to the given knot sequence T. The algebraic-trigonometric splines have most of their properties similar to those of B-splines in the polynomial space. After new knots are inserted into the knot sequence, the sequence of control polygons converges to the spline. Algebraic-trigonometric splines can thus encompass conical spirals, involutes of circles and some other transcendental curves.
A Constrained Curve Surface Deformation Model Based on Metaball
Li Lingfeng, Tan Jianrong, and Chen Yuanpeng
2006, 43(4):  688-694. 
Combining curved surface deformation techniques with the metaball method, a metaball-based constrained deformation model for curved surfaces is presented. The field function of the metaball expresses the constraint applied to the surface during deformation, and adjusting the parameters of the field function (its type, center, radius of influence and displacement) controls the desired deformation result. Convolution over the skeleton yields a smooth surface. Several issues concerning surface deformation are discussed, such as the relationship between constraints, the effect of a single constraint, and how to control the influence of other constraints. Several examples illustrate the mechanism of the model and its application to curved surface modeling and soft surface simulation.
A Shape Adaptive Integer Wavelet Coding Algorithm Based on New Quantization Scheme
Song Chuanming and Wang Xianghai
2006, 43(4):  695-701. 
A shape-adaptive integer wavelet transform (IWT) algorithm based on the lifting scheme is proposed. Careful analysis of the differences in coefficient distribution between the integer wavelet and the first-generation wavelet shows that the narrow dynamic range of IWT coefficients under a relatively wide threshold interval results in far fewer zerotrees. A quantization threshold scheme based on squares of odd numbers and a quantization strategy based on binary search are then proposed; the scheme uses fewer bits than a bit-plane based scheme to exactly reconstruct a coefficient. On this basis, a shape-adaptive EZW based on IWT and the new binary quantization scheme is proposed. Simulation results confirm the rationality of the threshold and the effectiveness of the binary quantization scheme. At the same decoding bitrate, the proposed scheme achieves a PSNR 0.5-2 dB higher than the traditional threshold and bit-plane based quantization scheme.
Statistical Landscape Features for Texture Retrieval
Xu Cunlu, Chen Yanqiu, and Lu Hanqing
2006, 43(4):  702-707. 
A method that uses information derived from the graph of an image function for texture description is proposed in this paper. The graph of an image function is a surface in three-dimensional space that resembles a landscape. Four texture feature curves, based on statistics of the geometrical and topological properties of the solids induced by the graph and a variable horizontal plane, are used to characterize the texture. The method is named statistical landscape features (SLF). Systematic experimental comparison on the Brodatz and VisTex texture sets shows that the proposed statistical landscape features outperform multi-resolution simultaneous auto-regressive models, statistical geometrical features and the discrete wavelet transform.
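A minimal sketch of the landscape idea follows: the grey-level image is treated as a surface, cut by a horizontal plane at each threshold, and simple statistics of the solids above the plane are recorded as curves over the threshold. The two statistics chosen here (component count and total area) are illustrative; the paper defines four specific feature curves.

```python
# Hypothetical sketch of "landscape" feature curves via thresholding.
import numpy as np
from scipy.ndimage import label

def landscape_curves(image, thresholds):
    counts, areas = [], []
    for t in thresholds:
        above = image >= t                  # solid above the horizontal plane
        _, num = label(above)
        counts.append(num)
        areas.append(int(above.sum()))
    return np.array(counts), np.array(areas)

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(64, 64))
counts, areas = landscape_curves(texture, thresholds=range(0, 256, 32))
print(counts, areas)
```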
Remote Sensing Image Classification Based on a Loose Modified FastICA Algorithm
Wang Xiaomin, Zeng Shenggen, and Xia Deshen
2006, 43(4):  708-715. 
Multi-band remote sensing images reflect the spectral characteristics of diverse ground objects, and classification is the basis of remote sensing applications. The independent component analysis (ICA) algorithm uses the high-order statistical information of multi-band remote sensing images; it not only removes the correlation between images but also obtains new band images that are mutually independent. However, the computational complexity of FastICA is too high, which limits the application of ICA in the remote sensing field. The M-FastICA algorithm improves the performance of FastICA by reducing the amount of computation, but like FastICA its convergence depends on the initial weights. By introducing a relaxation (loose) factor into M-FastICA, the new LM-FastICA algorithm achieves convergence over a large range of initial weights. A BP neural network is then used to classify the remote sensing images pre-processed by ICA. The classification accuracy of the pre-processed images is higher than that of the source images, and the classification performance of the three ICA algorithms is similar.
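As a point of reference, the sketch below shows a one-unit FastICA iteration with a relaxation factor alpha that blends the old and new weight vectors, one common way to enlarge the region of convergence; whether this matches the paper's LM-FastICA modification exactly is an assumption.

```python
# Sketch of a one-unit FastICA iteration with a relaxation ("loose") factor.
import numpy as np

def fastica_one_unit(X, alpha=0.7, iters=200, tol=1e-6):
    """X: whitened data, shape (dims, samples)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wx = w @ X
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w   # FastICA update
        w_new = (1 - alpha) * w + alpha * w_new              # relaxation step
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:
            return w_new
        w = w_new
    return w

# Toy whitened mixture of two sources.
rng = np.random.default_rng(1)
S = np.vstack([np.sign(rng.normal(size=2000)), rng.uniform(-1, 1, 2000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
X = (E @ np.diag(d ** -0.5) @ E.T) @ X                       # whitening
print(fastica_one_unit(X))
```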
Design of Mongolian Operating System Within the Framework of Internationalization
Rui Jianwu, Wu Jian, and Sun Yufang
2006, 43(4):  716-721. 
Implementing a traditional Mongolian operating system is difficult and costly, mainly because of two issues: (1) Mongolian characters change their shapes depending on the surrounding text; (2) Mongolian text is written from top to bottom and columns are arranged from left to right. The first issue involves many characteristics of the script that make the display of Mongolian text rather complex. The second issue results in special requirements for human-computer interaction that are not supported by current operating systems. First, some characteristics of the Mongolian script are analyzed. Second, technical details of designing a traditional Mongolian operating system are discussed, including the character set and its encoding scheme, the transformation of traditional Mongolian characters, text display in a vertical style, and a graphical user interface tailored to Mongolian users; some challenges are discussed and related solutions are presented. Third, an implementation of an operating system supporting traditional Mongolian based on Qt/KDE is briefly described. Finally, future work is proposed.
The Priority Mapping Problem in Static Real-Time Middleware
Wang Baojin, Li Mingshu, and Wang Zhigang
2006, 43(4):  722-728. 
The deadline monotonic (DM) priority assignment scheme and the distributed priority ceiling resource access protocol (DPCP) work well with real-time CORBA. In practice, a potentially large number of globally unique priorities must be mapped to the restricted number of local priorities provided by the operating system. Most operating systems use first-in-first-out (FIFO) scheduling within the same priority, so a task with a high global priority can be blocked by tasks with lower global priorities ahead of it in the local-priority FIFO queue. This causes priority inversion and affects the schedulability of tasks with higher global priority. In addition, the optimal priority assignment requires a search of exponential complexity. This is the priority mapping problem. To solve it, necessary and sufficient conditions are presented for analyzing the schedulability of a task whose global priority has been mapped to a local priority, and the decreasing global priority mapping (DGPM) algorithm is provided. DGPM can schedule any task and global critical section (GCS) set that is schedulable under any other direct priority mapping algorithm. It can overlap tasks (map two or more tasks to the same local priority) without making the system non-schedulable, or prove that the system is non-schedulable after overlapping. The conditions and the algorithm have been applied in real projects.
A Buffer Management Policy in IA-64 Large-Scale Video Streaming Servers
Yu Hongliang, Chen Jing, Li Yi, and Zheng Weimin
2006, 43(4):  729-737. 
Buffer management is a critical problem in large-scale video streaming servers. Especially with the appearance of IA-64, the addressable physical memory has increased to as much as 16 exabytes, so buffer management becomes increasingly important. Many caching policies already exist, among which interval caching has proved effective. However, most previous interval-based policies consider neither the popularity of video objects nor the huge memories provided by IA-64 systems, which hurts memory utilization. A popularity-based interval caching policy (PIC) is presented to solve this problem: it makes use of the huge memory of IA-64 systems and takes the popularity of video objects into account. To study the performance of the policy, a static analytical model is given and a large number of simulations are conducted. The results show that the PIC policy outperforms the traditional interval caching policy.
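A small sketch of interval caching with a popularity twist follows: consecutive streams of the same video form an interval (the later stream can be served from memory), and intervals are admitted smallest-first with the size divided by the video's popularity. The exact weighting used by PIC is an assumption here.

```python
# Sketch of popularity-weighted interval caching (the weighting is assumed).
def pick_intervals(streams, popularity, memory_budget):
    """streams: list of (video_id, playback_position); returns cached intervals."""
    by_video = {}
    for vid, pos in streams:
        by_video.setdefault(vid, []).append(pos)
    intervals = []
    for vid, positions in by_video.items():
        positions.sort()
        for a, b in zip(positions, positions[1:]):
            size = b - a                       # memory needed to bridge the pair
            intervals.append((size / popularity.get(vid, 1), size, vid, (a, b)))
    intervals.sort()
    chosen, used = [], 0
    for _rank, size, vid, span in intervals:
        if used + size <= memory_budget:
            chosen.append((vid, span))
            used += size
    return chosen

streams = [("hot", 10), ("hot", 40), ("hot", 70), ("cold", 0), ("cold", 25)]
print(pick_intervals(streams, popularity={"hot": 10, "cold": 1}, memory_budget=60))
```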
A New Java Memory Model L-JMM
Wu Junmin and Chen Guoliang
2006, 43(4):  738-743. 
The Java memory model (JMM) is an important topic in Java language and Java virtual machine (JVM) design. However, the memory model in the Java specification cannot guarantee the safe execution of multithreaded Java code: it requires memory coherence, which imposes constraints that prohibit JVM implementation optimizations. To fix this problem, a new Java memory model, L-JMM, based on location consistency is proposed. The model extends location consistency to suit the features of Java technology and defines the rules of Java multithreaded memory operations, including ordinary variable rules, volatile variable rules, final variable rules, and synchronization rules. It is proved that the memory model has the same properties as location consistency, guarantees the correctness of multithreaded Java code, and can improve the performance of the Java virtual machine. Finally, simulation of the memory model in the MMS simulator verifies that the new model achieves better performance than the memory model of the Java specification.
A Low-Power Instruction Cache Design Based on Record Buffer
Ma Zhiqiang, Ji Zhenzhou, and Hu Mingzeng
2006, 43(4):  744-751. 
Most modern microprocessors employ on-chip caches to bridge the enormous speed disparity between main memory and the central processing unit (CPU), but these caches consume a significant fraction of the total energy dissipation; in particular, the power dissipated by the instruction cache itself is often a significant part of the power consumed by the on-chip caches. Using a buffer can filter most instruction cache accesses and reduce power consumption, but many unnecessary data-array accesses still remain. Based on this idea, a low-power instruction cache called RBC is proposed in this paper. With a record buffer and a modification to the data array, RBC filters most of the unnecessary cache activity and thus reduces energy consumption significantly. Experiments on 10 SPEC2000 benchmarks show that, compared with a conventional block-buffering cache, RBC achieves 24.33% energy savings for the instruction cache at the cost of only a 6.01% slowdown and 3.75% area overhead.
A TTA-Based ASIP Design Methodology for Embedded Systems
Yue Hong, Shen Li, Dai Kui, and Wang Zhiying
2006, 43(4):  752-758. 
Applying the ASIP (application specific instruction processor) design methodology to embedded microprocessors can not only satisfy the functionality and performance requirements of embedded systems but also shorten the lead time of embedded microprocessors. Current ASIP design methods, however, confront many problems, such as architecture optimization and retargetable compilation. A TTA (transport triggered architecture) based embedded ASIP design methodology is therefore proposed, and the key techniques in the design are discussed in detail. Two ASIP design instances for target applications are presented to illustrate that this method can effectively solve these problems and rapidly develop embedded microprocessors.