ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 August 2008, Volume 45 Issue 8
Paper
A Scenario of Trust Negotiation Based on TPM Anonymous Credentials
Shi Zhiguo, He Yeping, and Zhang Hong
2008, 45(8):  1279-1289. 
An effective sensitive-information protection mechanism in trust negotiation is needed to promote sharing and collaboration between security domains in distributed network computing. TCG is an industry standardization body that aims to develop and promote an open industry standard for trusted computing hardware and software building blocks to enable more secure data storage, online business practices, and online commerce transactions while protecting privacy and individual rights. The novel anonymous-credential-based trust negotiation system (ACTN) is designed and implemented on top of the TPM anonymous credentials of trusted computing, and it effectively addresses the difficulty of protecting sensitive resources between strangers. The scenario resists replay attacks, tampering attacks, and masquerading, and the mechanism is grounded in a hardware module, the trusted platform module (TPM). The model of ACTN and its anonymous credentials are defined in detail; the parameters and the construction method of anonymous credentials are explained; the security of policies, the delegation mechanism, and credential chain discovery are discussed; and the framework of negotiation nodes and the negotiation process are designed. The experimental results are compared with the TrustBuilder and COTN negotiation systems, proving sound performance and good security guarantees. Finally, some directions for future research are pointed out.
A Survey of Intrusion Response Decision-Making Techniques of Automated Intrusion Response Systems
Mu Chengpo, Huang Houkuan, Tian Shengfeng, and Li Xiangjun
2008, 45(8):  1290-1298. 
Automated intrusion response systems and their significance are briefly introduced in this paper. Intrusion response decision-making is one of the critical techniques of automated intrusion response systems. A hierarchical architecture for intrusion response decision-making problems is presented. The roles of response goals and response strategies in an intrusion response decision-making process are discussed, and their related work is introduced. Intrusion response decision-making factors are used in decision-making models and directly influence their results. The decision-making factors in the latest existing intrusion decision-making mechanisms are reviewed, and it is pointed out that some of these factors are not properly used in a few existing decision-making models. In order to choose proper factors for an intrusion response decision-making model, a taxonomy of response decision-making factors is given. The existing models of intrusion response measure decision-making are presented, and their features and problems are discussed in detail. The concept and idea of intrusion response time decision-making are proposed, and a few intrusion response time decision-making models are introduced. The architecture, response time decision-making model, response measure decision-making model, and experiments of the intrusion detection alert management & intrusion response system (IDAM&IRS) developed by the authors are shown, and its features are described. Finally, the development trends of response decision-making are summarized.
An FSM State Table Compressing Method Based on Deep Packet Inspection
Chen Shuhui, Su Jinshu, Fan Huiping, and Hou Jie
2008, 45(8):  1299-1306. 
Recent advances in network packet processing focus on payload inspection for applications that include content-based billing, layer-7 switching, and Internet security. Most applications in this family compile the fingerprints into a finite state machine (FSM) and then process packets based on a state transition table. There are two kinds of FSM: the deterministic finite automaton (DFA) and the nondeterministic finite automaton (NFA). A DFA is fast but needs more memory, while an NFA needs little memory but is slow. Since payload inspection leaves little time to process each packet, the DFA is usually preferred. To solve the state transition table explosion problem of regular expression pattern matching using DFAs, a set-intersected precode (SI-precode) method is introduced. SI-precode codes all input symbols before the conversion from regular expressions to a DFA, and the space of the FSM state transition table is then reduced by compressing the input symbols. Experiments based on L7-filter show that the memory space is reduced by 87% to 97% for a single-pattern FSM, and by 59% for a multiple-pattern FSM with 50 patterns. SI-precode does not affect performance in an architecture combining software and hardware; for a pure software implementation, performance decreases by 2% to 4%.
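As a rough illustration of the general idea behind symbol pre-coding (a simplified stand-in, not the paper's SI-precode algorithm; the function and variable names are hypothetical), the following Python sketch merges input bytes whose transition columns are identical in every state, so the table keeps one column per equivalence class:

```python
def compress_dfa(table):
    """table[state][symbol] -> next state; returns a symbol-code map
    and a compressed table indexed by (state, code)."""
    n_states, n_symbols = len(table), len(table[0])
    code_of, columns = {}, []
    sym_code = [0] * n_symbols
    for s in range(n_symbols):
        col = tuple(table[st][s] for st in range(n_states))
        if col not in code_of:            # first time this column is seen
            code_of[col] = len(columns)
            columns.append(col)
        sym_code[s] = code_of[col]
    compressed = [[columns[c][st] for c in range(len(columns))]
                  for st in range(n_states)]
    return sym_code, compressed
```

Matching then steps through `state = compressed[state][sym_code[byte]]`; when many of the 256 byte values behave identically under the patterns, they collapse into a handful of classes, which is the kind of memory saving the abstract reports.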
A Query Rewriting Algorithm Supporting Attribute Grain Database Encryption
Xian Hequn and Feng Dengguo
2008, 45(8):  1307-1314. 
Query processing over encrypted databases is one of the key issues in the DAS (database as a service) encryption model. Due to the unique trust model and system structure of the DAS model, data encryption and decryption can only be carried out at the client side. The server is not trusted, and sensitive data should be protected from potentially malicious database administrators. Current studies employ query rewriting techniques based on tuple-level encryption, which are inefficient in encryption operations: they may waste a great deal of computational power encrypting data that are not secret, especially when only one or a few attributes in a relation need to be protected. In this paper, a query rewriting algorithm is proposed that supports attribute-grain database encryption in the DAS model. The algorithm rewrites user queries according to relational algebra formulas and discriminates the predicates involving encrypted attributes from those that do not use them. New queries are reconstructed and executed respectively on the client and the server so that optimization can be achieved. The algorithm is capable of processing correlated subqueries of any depth in a recursive manner. Experiments show that the algorithm can reduce the network traffic caused by transferring temporary query results and effectively shorten the query execution time.
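A minimal sketch of the discrimination step (the data model and names are hypothetical, not the paper's algorithm): predicates touching only plaintext attributes stay in the server-side query, while predicates over encrypted attributes are deferred to the client, which decrypts the intermediate result first:

```python
def split_predicates(predicates, encrypted_attrs):
    """predicates: list of (sql_text, attrs_used) pairs.
    Returns (server_preds, client_preds)."""
    server, client = [], []
    for sql_text, attrs_used in predicates:
        if attrs_used & encrypted_attrs:   # touches an encrypted column
            client.append(sql_text)
        else:                              # safe to evaluate on the server
            server.append(sql_text)
    return server, client

# Example: only `salary` is encrypted, so its predicate runs client-side.
preds = [("dept = 'R&D'", {"dept"}), ("salary > 5000", {"salary"})]
print(split_predicates(preds, {"salary"}))
```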
A New Verifier-Local Revocation Group Signature with Backward Unlinkability
Wei Lingbo, Wu Chuankun, and Zhou Sujing
2008, 45(8):  1315-1321. 
Membership revocation is an important issue in group signatures. Verifier-local revocation (VLR for short) and witness-based methods are the two current main nontrivial approaches. The latter is more suitable in some environments, especially mobile environments. In CCS '04, an efficient VLR group signature scheme was proposed by Boneh and Shacham, which is very short but lacks backward unlinkability (BU), a property that keeps the anonymity of a member even after he/she has been revoked. Recently, BU-VLR group signature schemes have been proposed; however, these schemes are not comparable with Boneh-Shacham's in performance. Based on the weak DTDH and q-SDH assumptions, a new BU-VLR group signature scheme is proposed. The advantages of the proposal are a shorter signature length and a lower computation cost than the previous BU-VLR group signature schemes. To reduce the data published in the revocation list, an extended version is also provided with lower computation overhead, following the method put forward by Nakanishi and Funabiki in ASIACRYPT '05. Finally, an improved version of the scheme proposed by Nakanishi and Funabiki in IEICE 2007 is given using the construction trick in the signature, whose signature length is only 77.8% of that of the original scheme under the same computation.
Network Pricing Mechanism with Congestion Charge and Compensation
Dong Yongqiang, Yang Lu, and Dai Jiangpeng
2008, 45(8):  1322-1329. 
Soft real-time applications may be rate-adaptive or delay-adaptive, showing much flexibility in bandwidth requirements. For such traffic it is acceptable to allow a user's instantaneous rate to be more or less than what he/she requires. Taking this utility characteristic into account, a network pricing mechanism with congestion charge and compensation is proposed. It differs from traditional congestion pricing in that users' traffic is distinguished, so as to punish misbehaving users with an extra congestion charge and compensate innocent users with a price discount when network congestion sets in. Following this idea, the rules for setting the congestion charge and compensation are examined in detail. By comparing the acquired transmission rate T* with the pre-agreed rate T0 negotiated during admission control and the actual arrival rate T, network users are classified into three kinds, so that the basic connection price, congestion charge, and compensation can be applied respectively. A dynamic pricing algorithm is then presented, along with an analysis of the incentive compatibility of the charging policy. Simulation results show that the pricing algorithm with congestion charge and compensation is reasonable in that users are charged appropriately in accordance with the pre-agreed service level and the actually allocated network resources. With this pricing mechanism, users are willing to announce their true requirements through service negotiation, while network nodes are willing to participate in the pricing game as well to make more profit.
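The three-way classification can be pictured with a toy settlement rule (the linear charge and discount formulas below are assumptions for illustration; the paper derives its own rules): comparing the acquired rate T*, the pre-agreed rate T0, and the actual arrival rate T decides whether a user pays only the basic price, pays an extra congestion charge, or receives a discount:

```python
def settle(t_star, t0, t, base_price, congested):
    """t_star: acquired rate, t0: pre-agreed rate, t: actual arrival rate.
    The charge/discount forms here are illustrative assumptions."""
    bill = base_price * t_star                  # basic connection price
    if congested:
        if t > t0:                              # misbehaving: exceeds agreement
            bill += base_price * (t - t0)       # extra congestion charge
        elif t_star < min(t, t0):               # innocent user squeezed by congestion
            bill -= base_price * (min(t, t0) - t_star)  # compensation discount
    return bill
```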
A Dynamic Sleep Scheduling Mechanism for Localization in Mobile Sensor Networks
Liu Yuheng, Wu Jing, Chen Zhenyong, and Xiong Zhang
2008, 45(8):  1330-1337. 
Due to the limited power supply, sensors have to switch between sleeping mode and awake mode at a low duty cycle to extend the network lifetime. Therefore, when localizing a mobile node, it is possible that a seed node is passed by the mobile node during its sleeping mode, so that the mobile node cannot collect enough seed information and consequently fails to be localized. In this paper, a dynamic sleep scheduling mechanism, P-SWIM, is proposed. In P-SWIM, each seed is proactively notified if a mobile node is moving toward it, so that only those seeds remain active at full duty while the mobile node passes by them, while the other seeds stay in the low-duty-cycle mode. P-SWIM can ensure enough seed connectivity around the mobile node, which is necessary for successful mobile localization. Simulation results indicate that localization algorithms based on P-SWIM achieve better localization performance than those based on two other sleep scheduling mechanisms, RIS and GAF. Moreover, P-SWIM adds the least running overhead to the network's overall power consumption among the three mechanisms, at most 47.6% and 60.2% less than RIS and GAF, respectively.
A New Approach to Heterogeneous Semantic Search on the Web
Huang Rui and Shi Zhongzhi
2008, 45(8):  1338-1345. 
Relevance ranking is key to Web search in determining how results are retrieved and ordered. As keyword-based search does not guarantee relevance in meaning, semantic search has been put forward as an attractive and promising approach. Recently, several kinds of semantic information have been adopted in search, such as thesauruses, ontologies, and semantic markups, as well as folksonomies and social annotations. However, although integrating more semantics would logically generate better search results, a search mechanism that fully adopts different kinds of semantic information is still absent and remains to be researched. To this end, an integrated semantic search mechanism is proposed to incorporate textual information and keyword search with heterogeneous semantic information and semantic search. A statistics-based measure of semantic relevance, defined as semantic probability, is introduced to integrate both keywords and four kinds of semantic information: thesauruses, categories, ontologies, and folksonomies. It is calculated from all textual and semantic information, and stored in a newly proposed index structure called the semantic-keyword dual index. Based on this uniform measure, a search mechanism is developed that fully utilizes existing keyword and semantic search mechanisms to enhance heterogeneous semantic search. Experiments show that the proposed approach can effectively integrate both keyword-based information and heterogeneous semantic information in search.
A General Formulation of Polynomial Smooth Support Vector Machines
Xiong Jinzhi, Yuan Huaqiang, and Peng Hong
2008, 45(8):  1346-1353. 
Yuan et al. used a polynomial function as the smoothing function and proposed a polynomial smooth support vector machine (PSSVM) in 2005, which improved the performance and efficiency of SVM classification. Using the technique of interpolation functions, Xiong et al. developed a recursive formula to obtain a new class of smoothing functions in 2007, solving the problems of the existence of, and the search for, better smoothing functions. However, finding other smooth models and a general formulation of polynomial smooth support vector machines remained open problems. In this paper, a class of polynomial functions is applied as new smoothing functions, and a dth-order polynomial smooth support vector machine (dPSSVM) is proposed using the smoothing technique, which is a general formulation of polynomial smooth support vector machines. The global convergence of dPSSVM is proved by mathematical induction, and experiments are carried out to evaluate it. The numerical results show that dPSSVM performs best, outperforming PSSVM, when its smooth order is 3; once the smooth order exceeds 3, the classification performance is almost the same while the efficiency becomes worse. The problem of a general formulation for polynomial smooth support vector machines is thus successfully solved.
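For intuition, a classic low-order member of this family is the quadratic-spline smoothing of the plus function max(x, 0), shown here as an assumed example of the general construction rather than the paper's dth-order formula:

```python
import numpy as np

def smooth_plus(x, k=10.0):
    """Quadratic polynomial smoothing of max(x, 0) with parameter k:
    exact outside [-1/k, 1/k], continuously differentiable inside;
    higher-order members of the family sharpen the approximation."""
    x = np.asarray(x, dtype=float)
    mid = (k / 4.0) * x**2 + 0.5 * x + 1.0 / (4.0 * k)
    return np.where(x > 1.0 / k, x, np.where(x < -1.0 / k, 0.0, mid))
```

Replacing the plus function in the unconstrained SVM objective with such a polynomial makes the objective smooth, enabling gradient-based solvers; raising the order improves the fit but, as the abstract reports, costs efficiency beyond order 3.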
Ensembles of Classifiers for Chinese Word Sense Disambiguation
Wu Yunfang, Wang Miao, Jin Peng, and Yu Shiwen
2008, 45(8):  1354-1361. 
Word sense disambiguation has long been a central concern in natural language processing, and ensembles of classifiers form one of the four current directions in machine learning research. This paper makes a systematic study of ensembles of classifiers for Chinese word sense disambiguation. Nine combining strategies are experimented with: product, average, max, min, majority voting, rank-based voting, weighted voting, weighted probability, and best single combining, among which the three combining methods of product, average, and max had not been applied to word sense disambiguation in previous work. Support vector machine, naïve Bayes, and decision tree are selected as the three component classifiers. Four kinds of features are used in all three classifiers: bag of words, words with position, parts of speech with position, and 2-gram collocations. Experiments are conducted on two datasets: the first consists of 18 ambiguous words selected from a Chinese semantic corpus, and the second is the multilingual Chinese-English lexical sample task at SemEval-2007. The experimental results illustrate that the three combining strategies of average, product, and max, which are applied to Chinese word sense disambiguation for the first time in this paper, exceed the accuracy of the best single classifier, the support vector machine, and also outperform the other six combining methods.
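The fixed combining rules are simple to state over per-classifier posterior estimates; a minimal sketch (the array layout and names are assumed for illustration):

```python
import numpy as np

def combine(probs, rule):
    """probs: (n_classifiers, n_senses) posteriors from SVM, naive Bayes,
    decision tree, etc.; returns the index of the winning sense."""
    scores = {"product": probs.prod(axis=0),
              "average": probs.mean(axis=0),
              "max":     probs.max(axis=0),
              "min":     probs.min(axis=0)}[rule]
    return int(np.argmax(scores))

# Three classifiers, two senses: all four rules pick sense 0 here.
p = np.array([[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]])
print(combine(p, "product"), combine(p, "average"), combine(p, "max"))
```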
A Novel Method for Pitch Contour Clustering
Huang Pingmu, Liu Gang, and Guo Jun
2008, 45(8):  1362-1370. 
In this paper, the clustering problem of syllable pitch contours is studied. By clustering and reasonable sample selection, the size of a large speech corpus can be significantly reduced. Besides, by introducing speech coding techniques, a small-size multi-sample tonal mono-syllable corpus can be built to satisfy the clarity and naturalness demands of embedded text-to-speech systems. For pitch contours of different lengths, a non-fixed-length contour clustering approach is proposed. This approach introduces the idea of dynamic programming (DP) into clustering. First, the pitch contours are normalized to zero mean. Then, the best path between two contours is found using the DP method. Finally, the distance measure of the two contours along this path is calculated. If the shapes of the two pitch contours are similar, the distance value will be very low. In the sample selection stage, the tone domain of syllables is divided by pitch means, and the typical samples are then identified according to their levels and clusters. Clustering experiments show that better clustering results can be achieved by this approach compared with traditional approaches, and the new clustering approach is also validated by synthesis experiments.
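The core of the approach is a DTW-style alignment distance between two zero-mean contours of different lengths; a minimal sketch under assumed details (absolute-difference local cost, length-normalized path cost):

```python
import numpy as np

def contour_distance(a, b):
    """DP ('best path') distance between two variable-length pitch contours."""
    a = np.asarray(a, float) - np.mean(a)   # zero-mean normalization
    b = np.asarray(b, float) - np.mean(b)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)                # normalize by path length bound
```

Similar-shaped contours yield a small value regardless of their absolute pitch levels or lengths, which is exactly the property a clustering distance needs here.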
An Intelligentized Method of XML Query for Multiobjective Optimization Combined PSO and ACO
Liu Bo, Yang Luming, Lei Gangyue, and Xie Dong
2008, 45(8):  1371-1378. 
With the fast development of Web technology and its applications, XML has become an important standard for information expression and data exchange on the Internet, and XML functions have reached every corner of the network community. Considering the characteristics of XML for multi-objective optimization and the shortcomings of XML query, an optimization method using the probabilistic rules of swarm intelligence algorithms is proposed. It adopts a path-scattered rule and combines PSO (particle swarm optimization) with ACO (ant colony optimization) to conduct dynamic swarm query, exploiting the semi-structured nature of XML and the probabilistic rules to improve XML query; PSO in particular offers fast stochastic global search but cannot exploit feedback information. The swarm actions are taken in turn towards the targets: swarm self-adaptive crossover, repeated encoding, iterative selection, etc., which leads to the following good results: widening the data search range, improving search precision and convergence efficiency, avoiding premature convergence, and reducing the complexity of the algorithm. The simulation experiments show that the proposed combined method achieves preferable query effectiveness.
Study of Query Optimization Methods for Data on Tertiary Storage
Liu Baoliang, Li Jianzhong, and Gao Hong
2008, 45(8):  1379-1385. 
The management of DBMS data on tertiary storage is becoming more and more important with the development of applications, not only because tertiary devices are used to archive data, but also because the amount of data that applications have to deal with is increasing rapidly. The cost model and query optimization methods of current disk-based database management systems cannot deal with massive data on tertiary storage. A cost model that can evaluate relational operations on tertiary-resident data is proposed. The costs of various relational operations can be deduced from the cost definitions of several basic data access patterns and the costs of two pattern combination operators. To further improve query efficiency, multiple relation copies are stored on the tertiary storage with different organization methods. The cost model can also evaluate the cost of the same relational operation on different relation copies. Two query optimization methods are also proposed, which can not only choose the most efficient implementation algorithm for relational operators, but also choose the most I/O-efficient copy of the relation on tertiary storage. The experimental results show that query efficiency for tertiary-resident data can be greatly improved by adopting the proposed cost model and query optimization methods. The introduction of relation copies demonstrates the feasibility of improving query efficiency at the cost of more storage space.
A Video Segmentation Method Based on DEMD and Its Application on Video Watermarking
Meng Yu, Li Wenhui, and Peng Tao
2008, 45(8):  1386-1394. 
In this paper, a video segmentation method based on directional empirical mode decomposition (DEMD) is proposed, which is combined with independent component analysis (ICA) to embed watermarks into videos. In the DEMD-based video segmentation method, the intrinsic mode function (IMF) with the lowest frequency is discarded and a simple and effective motion compensation method is adopted. As a result, the video segmentation method is no longer sensitive to abrupt changes of luminance, object motion in shots, or motion and zoom in/out of cameras, so the precision of detecting shot boundaries is improved. Using ICA to analyze videos segmented by this method, a series of independent component frames is obtained, and then an improved wavelet-domain quantization-based image watermarking algorithm is applied to embed watermarks into these frames, realizing the embedding of video watermarks. Experimental results show that the video watermarking method in this paper is robust against several kinds of watermark attacks, and at the same time is more robust against attacks such as frame discarding and frame decreasing than traditional histogram-based video segmentation methods.
A Geometry Simplification Method for Mobile 3D Graphics
Ma Jianping, Luo Xiaonan, Chen Bo, and Chen Huahong
2008, 45(8):  1395-1401. 
Mobile 3D graphics computing is a fast-growing domain at the intersection of networking and 3D graphics. Given the limited channels of wireless networks and the display resolutions of mobile devices, 3D graphics must be decomposed into small packets for transmission over wireless networks, and the contents must be adjusted to fit different resolutions. Provided in this paper is a novel algorithm to simplify the geometric model based on a modified loop subdivision scheme. The dense mesh is decomposed into progressive meshes with a coarse mesh and a series of offsets by repeating three processes: vertex split, odd-vertex prediction, and re-triangulation. The modified loop subdivision scheme is utilized as the predictor during the odd-vertex prediction process. Since the loop scheme involves fewer vertex affiliation combinations, it improves the speed of mesh simplification and reconstruction. The progressive meshes can easily be transmitted progressively over a wireless network, and the original mesh can be reconstructed losslessly on mobile devices. The experiments show that the proposed approach is simple yet highly efficient, and it can be widely used in mobile 3D graphics applications.
A Method of Image Reconstruction Based on Sub-Gaussian Random Projection
Fang Hong, Zhang Quanbing, and Wei Sui
2008, 45(8):  1402-1407. 
In this paper, sub-Gaussian random projection is introduced into compressed sensing (CS) theory, and two new kinds of CS measurement matrices are presented: the sparse projection matrix and the very sparse projection matrix. Using the tail bounds for sub-Gaussian random projections, it is proved that these new matrices satisfy the necessary condition for a CS measurement matrix. It is then explained that, owing to their sparseness, the new matrices greatly simplify the projection operation during image reconstruction and thereby greatly improve the speed of reconstruction. Further, it can easily be proved that the Gaussian matrix and the Bernoulli matrix are special matrices obeying the sub-Gaussian distribution, which indicates that the new measurement matrices extend the current results on CS measurement matrices. The results of both simulated and real experiments show that, with a certain number of measurements, the new matrices have good measurement effects and can achieve exact reconstruction. Finally, the reconstruction results obtained with the new matrices and with the Gaussian measurement matrix are compared and analyzed; compared with the Gaussian measurement matrix, the new matrices have a smaller average oversampling factor, which indicates a lower reconstruction complexity.
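A common concrete instance of such matrices (shown as an assumed example; the paper's exact construction and scaling may differ) draws entries from √s·{+1, 0, −1} with probabilities 1/(2s), 1−1/s, 1/(2s): s = 3 gives a sparse projection matrix, while s on the order of √n gives a very sparse one:

```python
import numpy as np

def sparse_projection(m, n, s=3.0, seed=None):
    """m x n sub-Gaussian sparse random projection; larger s = sparser."""
    rng = np.random.default_rng(seed)
    u = rng.random((m, n))
    A = np.zeros((m, n))
    A[u < 1.0 / (2.0 * s)] = np.sqrt(s)          # +sqrt(s) w.p. 1/(2s)
    A[u > 1.0 - 1.0 / (2.0 * s)] = -np.sqrt(s)   # -sqrt(s) w.p. 1/(2s)
    return A / np.sqrt(m)                        # variance normalization

# Very sparse variant: only about 1/s of the entries are nonzero,
# so each measurement touches few pixels, which speeds up reconstruction.
A = sparse_projection(64, 1024, s=np.sqrt(1024))
print(np.count_nonzero(A) / A.size)              # roughly 1/32
```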
Crosscutting Invasion and Crosscutting Invariant
Lü Jia, Ying Jing, Wu Minghui, and Jiang Tao
2008, 45(8):  1408-1416. 
Owing to the quantification and obliviousness characteristics of aspect-oriented languages, modular behavioral analysis and modular reasoning are more difficult than in traditional paradigms. To deal with crosscutting safety and crosscutting quality in aspect-oriented languages, crosscutting modules and affected modules are constrained with pre-conditions and post-conditions, but assigning blame for pre-condition and post-condition failures during crosscutting poses subtle and complex problems. To analyze the behavioral effect of a crosscutting concern, the programmer should consider the aspect itself and the part of the system it affects. Furthermore, when several aspects are woven at the same pointcut, the analysis of possible dangerous interferences becomes more complex. Analogous to the notion of behavioral subtyping in object-oriented languages, a notion of crosscutting invariant is proposed. In order to check behavioral errors that violate crosscutting invariability, along with four other simple behavioral errors, an algorithm based on software behavioral contracts is proposed. To formalize this algorithm, a crosscutting contract calculus and a set of contract elaboration rules are presented. The contract soundness theorem, which ensures the correctness of the contract elaboration process, is stated and proved. An example is also presented to show how to use these contract elaboration rules to check and analyze the behavioral errors.
Program Restructuring to Improve Efficiency of Software Model Checking
Huang Weiping
2008, 45(8):  1417-1422. 
To cope with the problem that currently available software model checkers can hardly deal with large-scale software, it is proposed to use program restructuring to pre-process the source code so as to enhance the efficiency of software model checking. Program restructuring decomposes a large-scale program into a set of small procedures while preserving the original semantics. Because the procedure summary edge can be computed separately in the model-checking algorithm, and a procedure may be called many times in a program, this pre-processing avoids unnecessary repetition of state space search and thus decreases the space and time overhead of the model-checking algorithm. According to the components of linear temporal logic, sufficient conditions for the semantic equivalence of program restructuring are given. Given programs and program properties, the experimental results before and after restructuring are compared on the model checker BLAST. Theoretical analysis and experimental results on some benchmark programs show that program restructuring can effectively decrease the overhead of model checking large-scale programs while satisfying the security requirements of software model checking.
An Acquisition Algorithm of C/A Code in GPS Receiver Under Weak Signal Environment Based on Optimum Path
Qin Xinxian, Xie Yingke, and Han Chengde
2008, 45(8):  1423-1429. 
Acquisition of the C/A code is most important for a GPS receiver in a weak signal environment. Coherent processing with a long integration time is an efficient way to improve the sensitivity of a GPS receiver; however, because of the transitions of the navigation data bits, the integration time has to be limited. How to detect the transitions of navigation data bits is the key issue in prolonging the coherent integration time in a weak signal environment. In this paper, some conventional acquisition technologies are studied carefully and the idea of the optimum path is introduced. The optimum path algorithm is employed to minimize the cost function that determines the transitions of the navigation data bits. Furthermore, the whole acquisition algorithm based on the optimum path is given. This novel algorithm processes the received data coherently with a longer integration time and without any squaring loss. Compared with other algorithms, the new method reduces the computing burden dramatically. The acquisition rate expression is also deduced. Simulations show that the new algorithm can effectively improve the acquisition rate: when the integration time is 80 ms and the SNR is -40 dB, the optimum path algorithm still achieves an acquisition rate as high as 92%, which remarkably improves the sensitivity of the GPS receiver.
Design of HighSpeed FFT Processor for Length N=q×2\+m
Deng Shanshan, Sun Yi, Zhang Lisheng, Mo Zhifeng, and Xie Yingke
2008, 45(8):  1430-1438. 
For the design of an FFT processor handling massive data with length N≠2^m, the zero-padding technique always wastes considerable hardware resources and reduces processor performance significantly. Presented in this paper is a new FFT algorithm for length N=q×2^m, where q is a prime other than 2. While offering a wider range of choices of sequence lengths, comparisons with previously reported algorithms show that this algorithm not only makes substantial savings in arithmetic, but also has a more regular computational structure, which greatly benefits hardware implementation. Based on this algorithm, a design method for a parallel-architecture FFT processor is also presented. The dedicated parallel memory mapping algorithm, featuring minimal memory size, relies on the in-place calculation property of the PCW FFT algorithm and can access q×4 or 4×q data simultaneously. A hybrid arithmetic unit unifies the arithmetic structures of the mixed-radix FFT and the q-point WFTA. Compared with separate arithmetic units, this unit saves substantial hardware resources, which makes more arithmetic units in one processor possible. The hardware simulation of the FFT processor shows that a complex 20480-point DFT operation can be done within 7181 clock cycles at a maximum clock frequency of 105 MHz.
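The arithmetic idea (the underlying mixed-radix factorization, not the paper's PCW algorithm or its memory mapping) can be sketched in a few lines: radix-2 Cooley-Tukey stages split the sequence until length q remains, where a direct q-point DFT, in hardware a q-point WFTA, finishes the job:

```python
import numpy as np

def fft_q2m(x, q):
    """FFT for length N = q * 2**m via radix-2 splits down to q-point DFTs."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == q:                                   # base case: direct q-point DFT
        k = np.arange(q)
        return np.exp(-2j * np.pi * np.outer(k, k) / q) @ x
    even, odd = fft_q2m(x[0::2], q), fft_q2m(x[1::2], q)
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd   # twiddle factors
    return np.concatenate([even + tw, even - tw])

# Sanity check against numpy for N = 5 * 2**4 = 80.
x = np.random.rand(80)
print(np.allclose(fft_q2m(x, 5), np.fft.fft(x)))             # True
```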
A Novel System-Level Communication Synthesis Methodology Containing Crossbar Bus and Shared Bus
Cao Yafei, Wang Dawei, and Li Sikun
2008, 45(8):  1439-1445. 
The on-chip communication architecture is a fabric that integrates the various SoC (system-on-chip) components and provides them with a mechanism for exchanging data; it has a significant impact on SoC performance. Bus-based communication architectures are the most popular communication style to date. There are mainly two kinds of bus topologies: the shared bus topology and the crossbar bus topology. There is a great deal of research in the field of system-level communication synthesis, but previous bus-based communication synthesis research is confined to either shared bus synthesis or crossbar bus synthesis; few works take both into account. In this paper, a novel system-level communication synthesis methodology containing crossbar buses and shared buses is presented. Starting with the communication traffic between the system-level processing elements and storage units, the proposed methodology synthesizes an optimal bus topology containing crossbar buses and shared buses. It uses a genetic algorithm to synthesize bus parameters that satisfy the communication delay constraints while considering bus contention and communication synchronization. It models the generated crossbar buses and shared buses at the transaction level and optimizes the communication architectures. Experiments show that the proposed methodology yields about 10% component savings compared with previous methodologies.
DSCF: Data Streams Clustered Forwarding for Multi-Core DSPs with Memories Shared
Wang Dong and Chen Shuming
2008, 45(8):  1446-1453. 
Multi-core digital signal processors (MC-DSPs) are new multi-core processors for the emerging next generation of high-performance embedded applications. Like multi-core general-purpose processors, MC-DSPs with memory-shared structures often suffer from the long access latency involved in cache coherency operations. Data speculation technology, which mainly consists of data fetching and data forwarding, is an efficient approach to hiding this kind of access latency. Starting from two important application features of MC-DSPs, a new data streams clustered forwarding (DSCF) technique is proposed for MC-DSPs with scalable memory-shared structures. DSCF uses data stream forwarding primitives inserted into the code of producer DSP cores to trigger a customized forwarding management unit (FMU) to forward shared data streams to the local data buffers of consumer DSP cores. The transmission of shared data streams is controlled to match their consumption, and the forwarded data streams are partitioned into multiple clusters for transmission. The DSCF method is compatible with basic shared-memory cache coherency protocols, and offers low hardware overhead, no pollution of destination DSP caches, well-matched transmission speed, and improved structural scalability. Simulation with several typical DSP benchmarks shows that DSCF can reduce the miss ratio of MC-DSP cache coherency by 44% on average and improve overall system performance by 30% to 70%.