ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 August 2007, Volume 44 Issue 8
Paper
Progress in Testing for Web Applications
Deng Xiaopeng, Xing Chunxiao, and Cai Lianhong
2007, 44(8):  1273-1283. 
Testing Web applications poses its own special challenges. Compared with design and development, relatively little work has been done on testing so far. In this paper, the factors influencing Web application testing are analyzed from the perspectives of architecture, techniques, composition elements, running environment, running mechanism, design process and development process. The main aspects of Web application testing in the design, development, running and evolution stages are summarized, as are dynamic testing and static testing. Key research results are surveyed in areas such as hyperlink testing, black-box and white-box testing for components, Web services testing, Web services composition testing and performance testing, covering the object-oriented model and the unified Markov model for hyperlink testing, data-flow and control-flow testing for white-box testing, data composition testing for black-box testing, testing of Web services, Web service groups, protocols and Web services composition, and workload modeling and approaches to creating effective user session data sets for performance testing. Limitations of some methods are pointed out. Finally, trends for future research in Web application testing are discussed, including model-driven testing, agent-based testing, Web services and service-oriented architecture testing, and performance testing.
Mining Interests and Navigation Patterns in Personalization on Portal
Wu Jing, Zhang Pin, Luo Xin, Sheng Hao, and Xiong Zhang
2007, 44(8):  1284-1292. 
Personalization services pose new challenges to interest mining on portals, such as the many powerful functions for customizing desktops. Implicitly capturing users' surfing behaviors and mining interest navigation patterns are the most demanding tasks. Based on a summary of personalized interest mapping on portals, a novel portal-independent mechanism for interest elicitation with privacy preservation is proposed, which implements both the implicit extraction of diverse access behaviors and the corresponding semantic analysis. A new schema is also presented to extend the interest representation rule efficiently during privacy preservation. With this interest-extended rule, legal, transparent and accountable interests can be inferred from the interest behavior entries. Moreover, the navigation relationships among interests describe a user's likely next interests, which especially benefits recommendation. By combining the association effect of these interests with the prediction of interest intentions, a hidden Markov model is extended with personalized portal interest descriptions to form interest navigation patterns for different users. Experiments are carried out to validate the availability and feasibility of the proposed approaches. The improved representation accuracy and mining capability for complex portal interests clearly distinguish the proposed approaches from traditional ones.
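The abstract does not give the extended HMM's exact form; as a minimal sketch of the prediction step, assuming interests are discrete states and a transition matrix has been learned from observed interest sequences (the state names and data here are hypothetical):

```python
import numpy as np

# Hypothetical interest states; the paper's actual state set and HMM
# extension are not specified in the abstract.
interests = ["news", "finance", "sports"]

def estimate_transitions(sequences, n):
    """trans[i][j] ~ P(next interest = j | current interest = i),
    estimated from interest sequences with Laplace smoothing."""
    counts = np.ones((n, n))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

sequences = [[0, 1, 1, 2], [0, 1, 2, 2], [1, 2, 0, 1]]
trans = estimate_transitions(sequences, len(interests))

current = 1  # the user is currently browsing "finance"
print("most likely next interest:", interests[int(trans[current].argmax())])
```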
Bayesian Classifier Based on Frequent Item Sets Mining
Xu Junming, Jiang Yuan, and Zhou Zhihua
2007, 44(8):  1293-1300. 
Nave Bayesian classifier provides a simple and effective way to classifier learning, but its assumption on attribute independence is often violated in real-world applications. To alleviate this assumption and improve the generalization ability of Nave Bayesian classifier, many works have been done cy researchers. AODE ensembles some one-dependence Bayesian classifiers and LB selects and combines long item sets providing new evidence to compute the class probability. Both of them achieve good performance, but higher order dependence relations may contain useful information for classification and limiting the number of item sets used in classifier may restricts the benefit of item sets. For this consideration, a frequent item sets mining-based Bayesian classifier, FISC (frequent item sets classifier), is proposed. At the training stage, FISC finds all the frequent item sets satisfying the minimum support threshold min_sup and computes all the probabilities that may be used at the classification time. At the test stage, FISC constructs a classifier for each frequent item set contained in the test instance, and then classifies the instance by ensembling all these classifiers. Experiments validate the effectiveness of FISC and show how the performance of FISC varies with different min_sup. Based on the experiment result, an experiential selection for min_sup is suggested.
A General Algorithm for Automatically Generating Composition Table in Spatio-Temporal Reasoning
Wang Shengsheng and Liu Dayou
2007, 44(8):  1301-1308. 
Spatio-temporal reasoning (STR), the research field aiming at spatial and/or temporal questions, has a wide variety of potential applications in artificial intelligence (such as the semantic Web, robot navigation, natural language processing, qualitative simulation of physical processes and common-sense reasoning) and other fields. Composition inference plays an important role in spatio-temporal reasoning, and it is a basic step of other qualitative reasoning tasks such as constraint satisfaction. A compositional inference is a deduction that derives R(a,c) from R(a,b) and R(b,c). Compositions of pairs of relations can be maintained in a composition table for lookup, but composition tables have so far been built entry by entry with manual deduction, few models have their own generation algorithms, and no general automatic algorithm supporting multiple models has existed up to now. So a general algorithm for automatically generating composition tables is proposed. First, a general representation of spatio-temporal relations based on space partition is introduced; then an algorithm is proposed that automatically generates the composition table by scene checking. Theoretical analysis and examination of over 20 representative spatio-temporal models, such as RCC, broad boundary and interval algebra, show that the algorithm can quickly and correctly generate composition tables for all spatio-temporal models based on precise regions (or intervals).
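The scene-checking idea can be illustrated on the simplest qualitative calculus, the point algebra {<, =, >}: enumerate concrete scenes (a, b, c), observe which relation holds between a and c for each pair of relations, and collect the results into a table. This toy sketch illustrates the principle only, not the paper's general algorithm:

```python
from collections import defaultdict

# Relations of the point algebra, each decided by a concrete test.
RELS = {"<": lambda x, y: x < y,
        "=": lambda x, y: x == y,
        ">": lambda x, y: x > y}

def build_composition_table(samples=range(4)):
    """Generate the composition table by scene checking: enumerate concrete
    scenes (a, b, c) and record which relation holds between a and c."""
    table = defaultdict(set)
    for a in samples:
        for b in samples:
            for c in samples:
                for r1, t1 in RELS.items():
                    if not t1(a, b):
                        continue
                    for r2, t2 in RELS.items():
                        if not t2(b, c):
                            continue
                        for r3, t3 in RELS.items():
                            if t3(a, c):
                                table[(r1, r2)].add(r3)
    return table

table = build_composition_table()
print(sorted(table[("<", ">")]))   # ['<', '=', '>'] -- the ambiguous case
```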
A Tableaux Decision Procedure for Fuzzy Description Logic FALNUI
Jiang Yuncheng, Tang Yong, Wang Ju, and Shen Yuming
2007, 44(8):  1309-1316. 
An ER model may be translated into description logic ALNUI knowledge bases, and reasoning on ER models may be reduced to reasoning on ALNUI knowledge bases. The fuzzy description logic FALNUI is the fuzzy extension of the description logic ALNUI through fuzzy logic, and its syntax and semantics are given. The relationship between FALNUI and the fuzzy ER model is investigated: a fuzzy ER model may be translated into FALNUI knowledge bases, and the satisfiability, redundancy and subsumption reasoning problems of fuzzy ER models may be translated into FALNUI subsumption reasoning problems; however, FALNUI has so far lacked reasoning algorithms for its satisfiability and subsumption reasoning tasks. A description logic tableaux-based satisfiability reasoning algorithm for FALNUI is presented, and its correctness is proved. TBox expansion and elimination methods for FALNUI are presented, and it is proved that the subsumption reasoning problem may be translated into the satisfiability reasoning problem in FALNUI as well, so a subsumption reasoning algorithm for FALNUI is obtained through TBox expansion and elimination. These tableaux reasoning algorithms thus provide the theoretical foundation for implementing automatic reasoning about satisfiability, redundancy and subsumption for fuzzy ER models.
An Automated Reasoning Expansion Method Based on Set Signs
Liu Quan, Fu Yuchen, Sun Jigui, Cui Zhiming, Gong Shengrong, and Ling Xinghong
2007, 44(8):  1317-1323. 
On the basis of many-valued tableau reasoning, an automated reasoning expansion method based on set signs is presented. By treating set signs as truth values, this approach allows methods and techniques of reasoning suited to classical logics to be applied, transferring reasoning from non-classical logics to classical logics. By rewriting the set signs and expansion rules, it is easily extended to modal logics, intuitionistic logics and so on, and can be further extended to infinite-valued logics and logics with fuzzy quantifiers (such as T-calculus and S-calculus). The theorem proving system TSetTAP is implemented in SWI-Prolog on a microcomputer. With set signs, new rules are supported simply by adding reasoning rules to the rule base, without modifying the system itself, so strategies and approaches used in classical logics can easily be applied to non-classical logics. 900 logic problems from TPTP are proved using the system. The results show that TSetTAP closes tableaux early and greatly improves the time and space efficiency of reasoning.
Selective Bayes Classifiers for Incomplete Data
Chen Jingnian, Huang Houkuan, Tian Fengzhan, and Fu Shujun
2007, 44(8):  1324-1330. 
Selective classifiers have proved to be a kind of algorithm that can effectively improve the accuracy and efficiency of classification by deleting irrelevant or redundant attributes of a data set. Though some selective classifiers have been proposed, most of them deal with complete data, owing to the complexity of handling incomplete data. Yet actual data sets are often incomplete and have many redundant or irrelevant attributes for various reasons. As with complete data, irrelevant or redundant attributes of an incomplete data set can sharply reduce the accuracy of a classifier built on it, so constructing selective classifiers for incomplete data is an important problem. After an analysis of the main methods of processing incomplete data for classification, two selective Bayes classifiers for incomplete data, denoted SRBC and CBSRBC respectively, are presented. SRBC is constructed using robust Bayes classifiers, while CBSRBC is based on SRBC and chi-squared statistics. Experiments on twelve benchmark incomplete data sets show that these two algorithms can not only greatly reduce the number of attributes but also improve the accuracy and stability of classification. On the whole, CBSRBC is more efficient than SRBC and its classification accuracy is higher, but SRBC avoids some thresholds that CBSRBC requires.
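The abstract does not detail CBSRBC's selection criterion beyond chi-squared statistics; a minimal sketch of that general recipe, assuming missing values are simply skipped in the counts and using a hypothetical significance threshold:

```python
from collections import Counter

MISSING = None  # marker for a missing attribute value

def chi_squared(column, labels):
    """Chi-squared statistic between one attribute and the class label,
    ignoring instances where the attribute value is missing."""
    pairs = [(v, y) for v, y in zip(column, labels) if v is not MISSING]
    n = len(pairs)
    joint = Counter(pairs)
    v_marg = Counter(v for v, _ in pairs)
    y_marg = Counter(y for _, y in pairs)
    stat = 0.0
    for v in v_marg:
        for y in y_marg:
            expected = v_marg[v] * y_marg[y] / n
            stat += (joint[(v, y)] - expected) ** 2 / expected
    return stat

def select_attributes(data, labels, threshold):
    """Keep attributes whose statistic exceeds a (hypothetical) threshold;
    CBSRBC's actual selection criterion may differ."""
    n_attrs = len(data[0])
    return [j for j in range(n_attrs)
            if chi_squared([row[j] for row in data], labels) > threshold]

data = [["s", 1], ["t", MISSING], ["s", 1], ["t", 0]]
labels = ["+", "-", "+", "-"]
print(select_attributes(data, labels, threshold=1.0))   # [0, 1]
```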
An Improved Adaptive Buffer Replacement Algorithm Used for Second Level Buffer
Sun Guozhong, Yuan Qingbo, Chen Mingyu, and Fan Jianping
2007, 44(8):  1331-1338. 
In a cluster or database server system, the performance of some data-intensive applications degrades badly because of limited local memory and a large number of interactions with slow disks. Over a high-speed network, using the remote memory of other nodes or a customized memory server as a second-level buffer can decrease the number of disk accesses and benefit application performance. For this second-level buffer mode, some improvements are made to a recently proposed buffer cache replacement algorithm, LIRS, yielding an adaptive algorithm, LIRS-A. LIRS-A adjusts itself according to application characteristics, thus avoiding LIRS's poor fit to temporal locality. On TPC-H benchmarks, LIRS-A improves the hit rate over LIRS by up to 7.2%. On a Group-by query in a network stream analysis database, LIRS-A improves the hit rate over LIRS by up to 31.2%. Compared with other algorithms, LIRS-A also shows similar or better performance.
An Improved Adaptive Sampling Method for Traffic Measurement
Wang Dan, Xie Gaogang, Yang Jianhua, Zhang Guangxing, and Li Zhenyu
2007, 44(8):  1339-1347. 
The emergence of high-speed links brings great challenges to online traffic monitoring and measurement. Because of the capacity limits of traffic sampling systems, an accurate and efficient sampling method is in high demand. Fixed-probability sampling is the simplest technique; it detects the larger traffic flows while discarding the smaller ones, which make up more than 80% of all traffic flows and are vital for the analysis of network traffic. Data streaming algorithms can collect data from high-speed links online and efficiently. SGS (sketch guided sampling) is based on such an algorithm and can accurately estimate the distribution of flow sizes, but its accuracy declines rapidly when the sampling rate exceeds the capacity of the monitoring system. In this paper, an adaptive sampling method for real-time traffic measurement on high-speed links, SRGS (sketch and resources guided sampling), is proposed based on SGS. SRGS takes the system capacity as an important parameter for adjusting the sampling probability. Experimental results show that SRGS can adjust the packet sampling probability in time according to the current flow sizes and the system capacity, and that it is more accurate than SGS.
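SRGS's exact adjustment rule is not given in the abstract; a toy sketch of capacity-aware, size-dependent packet sampling, where the per-packet probability shrinks for large flows (in the spirit of SGS) and is capped by the load the monitor can absorb (the `capacity_ratio` parameter is hypothetical):

```python
import random

def sampling_probability(est_flow_size, capacity_ratio):
    """Toy rule: sample small flows with high probability (their packets are
    rare but vital), and scale everything down when the monitor is
    overloaded. capacity_ratio = sustainable rate / current offered rate.
    The actual SRGS rule is not specified in the abstract."""
    p = 1.0 / max(1.0, est_flow_size ** 0.5)   # size-dependent rate
    return min(1.0, p * min(1.0, capacity_ratio))

def sample_packet(est_flow_size, capacity_ratio):
    return random.random() < sampling_probability(est_flow_size,
                                                  capacity_ratio)

print(sampling_probability(est_flow_size=1, capacity_ratio=1.0))    # 1.0
print(sampling_probability(est_flow_size=10000, capacity_ratio=0.5))  # 0.005
```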
Congestion Avoidance, Detection and Mitigation in Wireless Sensor Networks
Li Shanshan, Liao Xiangke, Zhu Peidong, and Xiao Nong
2007, 44(8):  1348-1356. 
Congestion is an essential problem in wireless sensor networks. It occurs when the offered traffic load exceeds the available capacity at any point in a network, causing overall channel quality to degrade and drop rates to rise. Furthermore, redundant transmissions are often adopted to guarantee reliable data delivery, which may worsen congestion since they create more contention, in turn hampering reliability. In this paper, a framework is proposed to avoid, detect and mitigate congestion effectively. Compared with previous works, this framework does not control congestion at the cost of the desired reliability; instead, both goals are achieved. The framework has two main parts: congestion-aware traffic allocation (COTA) and congestion detection and mitigation (CODEM). COTA is used in multipath routing to balance traffic across the whole network; it uses heuristic information to identify potential congestion regions and avoid traversing them. When congestion nevertheless happens, CODEM detects it using an accurate metric and reallocates traffic among multiple paths to mitigate it. The multi-frequency capability of the CC2420 radio is used to minimize contention between control packets and data packets. Comprehensive simulations validate the framework's strong performance in several respects.
A Method of Web Service Discovery Based on Functional Semantics
Ye Lei and Zhang Bin
2007, 44(8):  1357-1364. 
With the rapid development and application of Web service technologies, how to discover Web services based on functional semantics has become an urgent problem. In practice, however, the existing Web service discovery methods are not mature and cannot cope with such requests very well. In this paper an effective and feasible approach to Web service discovery based on functional semantics is proposed. A Web service functional semantic description model is first defined, which provides a unified way for service providers and customers to semantically describe the function of Web services. To avoid semantic heterogeneity, a domain-oriented functional ontology is built. Based on this ontology, a functional semantic annotation mechanism is proposed to further eliminate the semantic differences between the descriptive terms used by service providers and customers. On top of these efforts, the matchmaking between Web services and requests is realized. Furthermore, a Web service description language based on functional semantics is designed and a prototype system, FunWSDS, is built. Finally, the performance of this method is evaluated experimentally on the prototype system. The experimental results show that this kind of Web service discovery method is feasible and effective.
Research on a Trust Model Based on the Subjective Logic Theory
Lin Jianning and Wu Huizhong
2007, 44(8):  1365-1370. 
In a grid environment, grid nodes provide services with QoS guarantees, so how to choose a credible service is a critical problem. It is proposed that the basic probability assignment be constructed from the QoS constraints that the services provide, and that in building the basic probability assignment a partial order be introduced to remove worthless history information. Referring to the behavior of human society, a reputation model of grid nodes is established. Moreover, based on the discounting operator and the consensus operator, reputation opinions from different grid nodes can be combined. To avoid overloading some nodes, a balancing scheme implemented with a biased roulette wheel is adopted. Simulation tests show that the model can tackle the trust problem in a simple and efficient way.
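The named operators are the standard ones of subjective logic; a minimal sketch on opinions ω = (b, d, u) with b + d + u = 1 (the base-rate component is omitted for brevity, and whether the paper uses exactly this formulation is an assumption):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u = 1)

def discount(trust_in_b: Opinion, b_about_x: Opinion) -> Opinion:
    """A's opinion about x derived through B (standard discounting)."""
    return Opinion(b=trust_in_b.b * b_about_x.b,
                   d=trust_in_b.b * b_about_x.d,
                   u=trust_in_b.d + trust_in_b.u + trust_in_b.b * b_about_x.u)

def consensus(a: Opinion, b: Opinion) -> Opinion:
    """Fuse two independent opinions about the same statement."""
    k = a.u + b.u - a.u * b.u
    return Opinion(b=(a.b * b.u + b.b * a.u) / k,
                   d=(a.d * b.u + b.d * a.u) / k,
                   u=(a.u * b.u) / k)

# A trusts B moderately; B has observed the service directly, and A also
# has a first-hand opinion to fuse with the discounted one.
via_b = discount(Opinion(0.7, 0.1, 0.2), Opinion(0.8, 0.1, 0.1))
print(consensus(via_b, Opinion(0.6, 0.2, 0.2)))
```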
A Novel Gray Hole Attack Detection Scheme for Mobile Ad-Hoc Networks
Chen Wei, Long Xiang, Gao Xiaopeng, and Bai Yuebin
2007, 44(8):  1371-1377. 
Mobile ad hoc networks (MANETs) are typical distributed networks with unique characteristics and constraints, such as no centralized control, dynamically changing network topology and limited bandwidth. Owing to the absence of fixed network infrastructure, MANETs are vulnerable to various types of denial of service (DoS) attacks. The gray hole attack is one such attack, in which an adversary silently drops some or all of the data packets sent to it for further forwarding, even when no congestion occurs. Firstly, related work, the DSR protocol, the aggregate signature algorithm and the network model are introduced. Secondly, a scheme based on aggregate signatures is proposed to trace packet-dropping nodes. The proposal consists of three related algorithms: the proof creation algorithm, the checkup algorithm and the diagnosis algorithm; the first creates proofs, the second checks up on source route nodes, and the last locates the malicious nodes. Finally, the efficiency of the proposal is analyzed. Simulation results using ns-2 show that in a moderately changing network most of the malicious nodes can be detected, the routing packet overhead is low, and the packet delivery rate improves after abandoning routes containing bad nodes.
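The three algorithms' internals are left to the paper; a schematic sketch of one plausible diagnosis step, with a stubbed `proof_valid_up_to` standing in for aggregate signature verification (both the binary-search strategy and the verification interface are assumptions here):

```python
def diagnose(route, proof_valid_up_to):
    """Locate the first packet-dropping node on a source route.
    `route` is the list of nodes; `proof_valid_up_to(i)` is a stand-in for
    verifying the aggregate signature contributed by route[0..i] -- the
    real scheme's verification interface is an assumption."""
    lo, hi = 0, len(route) - 1
    while lo < hi:                     # binary search over route prefixes
        mid = (lo + hi) // 2
        if proof_valid_up_to(mid):
            lo = mid + 1               # dropping happened further downstream
        else:
            hi = mid
    return route[lo]

# Toy run: node "C" drops packets, so proofs verify only up to index 1.
route = ["S", "B", "C", "D"]
print(diagnose(route, lambda i: i <= 1))   # -> "C"
```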
Weighted Threshold Secret Sharing
Huang Dongping, Liu Duo, and Dai Yiqi
2007, 44(8):  1378-1382. 
A weighted threshold secret sharing scheme based on modular computation is proposed. Weights are used to give participants differing influence: when the sum of the weights of a group of participants reaches the threshold value they can recover the secret, and otherwise they cannot. In previous weighted secret sharing schemes based on decomposition structures, participants have to hold several sub-secrets, which suit different application circumstances and are very difficult to manage. The proposed scheme requires each participant to hold only a single sub-secret, which simplifies the management and usage of sub-secrets and makes the scheme applicable to cases where management convenience matters most. In some cases, the weights and the threshold value can be adjusted to reduce the size of the scheme while keeping the same effect as the original. For this purpose, the concept of equivalence of access structures is introduced, and a parameter adjustment algorithm based on integer programming is proposed to minimize the threshold.
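A minimal sketch of the weighted threshold access structure and the equivalence idea: two parameter sets are equivalent when they authorize exactly the same coalitions, so smaller parameters can replace larger ones (the paper's integer-programming minimization is not reproduced):

```python
from itertools import combinations

def authorized(weights, t, coalition):
    """A coalition can recover the secret iff its total weight reaches t."""
    return sum(weights[i] for i in coalition) >= t

def access_structure(weights, t):
    """The set of all authorized coalitions, by exhaustive enumeration."""
    n = len(weights)
    return {frozenset(c)
            for r in range(1, n + 1)
            for c in combinations(range(n), r)
            if authorized(weights, t, c)}

# Weights (4, 2, 2) with threshold 6 authorize the same coalitions as
# (2, 1, 1) with threshold 3 -- an equivalent but smaller scheme.
print(access_structure([4, 2, 2], 6) == access_structure([2, 1, 1], 3))
```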
Research on Local Route Repair Algorithm in Mobile Ad Hoc Networks
Xiao Bailong, Guo Wei, Liu Jun, and Zhu Silu
2007, 44(8):  1383-1389. 
Multi-hop wireless connectivity, frequently changing network topology and limited bandwidth are the main characteristics of mobile ad hoc networks, and they pose many challenges to the routing protocols of such networks. If a multi-hop route fails, the routing protocol must repair it. Previous route repair mechanisms cause high control overhead and long packet delay; the problem worsens when mobility is high, and many real-time applications cannot tolerate such long delays. In a broken route, only the nodes near the broken link may need to be replaced, while the rest of the nodes can be retained. In this paper, a new local route repair scheme, which limits repair to the vicinity of the broken link, is proposed to decrease the reaction time to route breakage and the overhead of route maintenance. It solves the problem at the least cost in terms of both bandwidth and time. Furthermore, the approach can repair failed links regardless of their position on the path, clearly improving the handling of failed links and the scalability of ad hoc networks. Simulations show that the improved routing protocol yields significant improvements in performance metrics such as packet delivery ratio and end-to-end packet delay.
Nonlinear Diffusion-Based Image Denoising Coupling Gradient Fidelity Term
Zhu Lixin, Pheng Ann Heng, and Xia Deshen
2007, 44(8):  1390-1398. 
Image denoising with second-order nonlinear diffusion PDEs often leads to an undesirable staircase effect, namely the transformation of smooth regions into piecewise constant ones. In this paper, these nonlinear diffusion models are improved by adding the Euler-Lagrange equation derived from a gradient fidelity term, which describes the similarity between the gradients of the noisy image and of the restored one. After coupling with this new restriction equation, the classical second-order PDE-based denoising models produce piecewise smooth results while preserving sharp jump discontinuities in images. The convexity of the proposed model is proved, and the existence and uniqueness of the optimal solution are ensured. The influence of introducing spatial regularization into the gradient estimation is also analyzed, and the importance of proper regularization parameter selection to the final results is emphasized theoretically and experimentally. In addition, the gradient fidelity term is integrable in the space of functions of bounded variation, which lets the model outperform fourth-order nonlinear PDE-based denoising methods, which suffer from leakage problems and sensitivity to high-frequency image components. Experimental results show that the new model alleviates the staircase effect to some extent and preserves image features such as textures and edges well.
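A plausible generic form of such a coupled model can be written down from the abstract's description (an assumption; the paper's exact functional and parameters may differ): a diffusion term, a data fidelity term, and a gradient fidelity term tying the gradient of u to that of the Gaussian-smoothed noisy image, with the corresponding Euler-Lagrange condition:

```latex
% A plausible form of the coupled model (the paper's exact functional and
% parameter choices may differ). G_\sigma * u_0 is the Gaussian-smoothed
% noisy image whose gradient field the restored image should follow.
\begin{aligned}
\min_u \; E(u) &= \int_\Omega \phi\bigl(|\nabla u|\bigr)\,dx
  + \frac{\lambda}{2}\int_\Omega (u-u_0)^2\,dx
  + \frac{\gamma}{2}\int_\Omega \bigl|\nabla u-\nabla (G_\sigma * u_0)\bigr|^2\,dx,\\
\text{Euler--Lagrange: }\; 0 &=
  -\operatorname{div}\!\Bigl(\frac{\phi'(|\nabla u|)}{|\nabla u|}\,\nabla u\Bigr)
  + \lambda\,(u-u_0)
  - \gamma\,\Delta\bigl(u-G_\sigma * u_0\bigr).
\end{aligned}
```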
Adaptive Spatial Domain Image Watermarking Based on Support Vector Machine
Li Chunhua, Ling Hefei, and Lu Zhengding
2007, 44(8):  1399-1405. 
An adaptive spatial-domain image watermarking algorithm based on the support vector machine (SVM) is proposed. Since SVMs closely resemble the human visual system (HVS) in self-learning, generalization and nonlinear approximation, the watermark embedding locations and strength can be adaptively identified by applying an SVM on top of the HVS. In this scheme, an unsupervised machine learning method, the fuzzy c-means clustering algorithm, is first used to label the pixels in a cover image. Then only those pixels whose membership values exceed a given threshold are selected from each cluster as the training sample set of the SVM. Subsequently, an SVM-based multi-classification model is established, according to which the watermark embedding locations are further optimized. Finally, each watermark bit is adaptively embedded by adjusting the corresponding pixel value according to the local correlation of the image. The presented watermarking scheme can extract the watermark without the help of the original image. Experimental results show that the proposed adaptive scheme has both sound perceptual quality and high robustness to various signal processing operations such as lossy compression, noise addition, image enhancement, filtering, cropping, mosaicing and blurring, and its watermarking performance notably outperforms that of similar algorithms.
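A minimal sketch of the pixel-selection-and-classification pipeline described above, with scikit-learn's KMeans standing in for the clustering step and fuzzy c-means-style memberships computed from the resulting centers (the features, threshold and embedding rule here are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
pixels = rng.random((5000, 3))   # stand-in per-pixel features, e.g.
                                 # luminance, local variance, edge strength
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

# Fuzzy c-means-style membership (m = 2) of each pixel in each cluster,
# computed from distances to the KMeans centers.
d = np.linalg.norm(pixels[:, None, :] - km.cluster_centers_[None, :, :],
                   axis=2)
d = np.maximum(d, 1e-12)
memb = (1.0 / d ** 2) / (1.0 / d ** 2).sum(axis=1, keepdims=True)

# Keep only pixels that clearly belong to one cluster as SVM training data.
mask = memb.max(axis=1) > 0.6            # hypothetical membership threshold
svm = SVC(kernel="rbf").fit(pixels[mask], km.labels_[mask])

# The trained model then labels every pixel; in the full scheme the labels
# would drive where and how strongly watermark bits are embedded.
classes = svm.predict(pixels)
```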
Implementation of a Kernel-Based Chinese Relation Extraction System
Liu Kebin, Li Fang, Liu Lei, and Han Ying
2007, 44(8):  1406-1411. 
Entity relation extraction (RE) is an important task in information extraction. In this paper, a novel kernel-based Chinese entity relation extraction system is presented, which applies an improved sequence kernel function with a KNN learning algorithm to fulfill the RE task. Experiments are carried out on three relation types and their six subtypes defined in the ACE guidelines. The results show that the new approach achieves an average precision of up to 88%, significantly higher than feature-based approaches and traditional kernel methods, and that it generalizes better, especially on small training sets. The system consists of eight independent modules, including named entity detection, candidate generation and others, for easy maintenance and updating. It can be deployed either as a Java application or as plug-ins on the GATE platform. It extracts not only binary relations but also their descriptions, such as the job title in an employment relation.
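The improved sequence kernel itself is paper-specific, but kernel-based KNN is straightforward: any kernel induces a distance, and neighbors vote. A minimal sketch with a trivial stand-in kernel:

```python
from collections import Counter

def kernel(a, b):
    """Stand-in sequence kernel: counts shared tokens. The paper's improved
    sequence kernel is far richer; this only fixes the interface."""
    return sum((Counter(a) & Counter(b)).values())

def kernel_distance(a, b):
    # Any positive semi-definite kernel induces a (squared) distance.
    return kernel(a, a) + kernel(b, b) - 2 * kernel(a, b)

def knn_classify(x, train, k=3):
    """train: list of (token_sequence, relation_label) pairs."""
    nearest = sorted(train, key=lambda ty: kernel_distance(x, ty[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [(["he", "works", "for", "IBM"], "employment"),
         (["she", "lives", "in", "Paris"], "located"),
         (["he", "is", "employed", "by", "ACME"], "employment")]
print(knn_classify(["they", "work", "for", "Sony"], train))  # "employment"
```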
English Topic Tracking Research Based on Query Vector
Zhao Hua, Zhao Tiejun, Yu Hao, and Zheng Dequan
2007, 44(8):  1412-1417. 
As a new area of natural language processing, topic tracking has received much attention from experts both at home and abroad and has become more and more popular. Topic tracking is the task of monitoring a stream of news stories to find those that discuss a topic known to the system. Three key problems of query-based topic tracking are studied: feature extraction, feature weight computation and similarity measurement. Firstly, a feature extraction algorithm combining word differentiation with the location property is proposed. Word differentiation divides words into capitalized words, whose initials are capital letters, and common words, whose initials are not; the location property decides the importance of words based on the inverted-pyramid structure of news stories. Secondly, a new method is proposed that computes a feature's weight by combining several different feature extraction algorithms, giving a larger weight to a feature selected by more of the algorithms. Finally, a double filtration algorithm based on the majority vote rule is proposed, which makes two judgments about the relevance of a story to a topic and successfully reduces the system's false alarm rate. Experiments indicate that the three proposed methods not only perform well but also scale well.
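A minimal sketch of the voting idea behind both the weighting and the double filtration, assuming each extractor returns a feature set and a story is accepted only if it passes both similarity judgments (the thresholds are hypothetical):

```python
from collections import Counter

def combined_weights(extractor_outputs):
    """Each extractor returns a set of selected features; a feature picked
    by more extractors gets a proportionally larger weight."""
    votes = Counter()
    for features in extractor_outputs:
        votes.update(features)
    n = len(extractor_outputs)
    return {f: c / n for f, c in votes.items()}

def track(sim_to_query, sim_to_centroid, t1=0.3, t2=0.25):
    """Double filtration: the story must pass both similarity judgments
    (thresholds here are hypothetical)."""
    return sim_to_query >= t1 and sim_to_centroid >= t2

weights = combined_weights([{"bank", "merger"}, {"merger", "stocks"},
                            {"merger", "bank"}])
print(weights["merger"])   # 1.0 -- selected by all three extractors
```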
A Self-Management Unit-Based and Differentiated Service-Enabled Web Container
Li Yang, Chen Ningjiang, Jin Beihong, Zuo Lin, and Huang Tao
2007, 44(8):  1418-1428. 
The Web container conforming to the J2EE specification, which provides the runtime environment for servlets and JSP and adopts a best-effort service mode, has become an effective platform for deploying enterprise Web applications on the Internet. However, complex business Web applications require the Web container to provide differentiated services, which traditional Web containers do not, for requests from different clients according to role, payment and so on. Approaches such as admission control and priority scheduling have been applied to provide differentiated services in Web containers, but they do not consider differentiated services at a finer granularity, so they can only provide monotonous, static differentiated service strategies. To remedy these deficiencies, the requirements of differentiated services are analyzed and a differentiated-service Web container (DSWC) based on self-management units (SMUs) is put forward. In DSWC, processing modules are encapsulated into SMUs, which are connected to compose an SMU chain; DSWC provides SMU-based and SMU chain-based differentiated services for requests according to SLAs (service level agreements). An adaptive strategy selection algorithm is also presented, which can adaptively select different differentiated service strategies in a dynamic environment. Experiments on the prototype show that DSWC can effectively provide differentiated services according to SLAs for different types of requests.
Hyperblock-Based Unified Cluster Assignment and Modulo Scheduling
Hu Dinglei, Chen Shuming, and Liu Chunlin
2007, 44(8):  1429-1438. 
To exploit instruction-level parallelism (ILP), multiple functional units with a multi-port register file are often used in very long instruction word (VLIW) processors. As the number of functional units rises, the number of register file ports grows accordingly, and at some point the multiplexing logic on the register ports comes to dominate the processor's cycle time. A reasonable solution is to partition the register file into independent clusters. Although clustered architectures reduce the register file ports per cluster without performance degradation, they present new challenges to the compiler, which must assign every operation and operand to a specific cluster and coordinate data movement between clusters to achieve good ILP. In this paper, a scheduling algorithm for clustered VLIW architectures, hyperblock-based unified cluster assignment and modulo scheduling (HBUCAMS), is proposed. Compared with a basic block, a hyperblock provides a larger scheduling region for exploiting ILP; furthermore, because loop bodies with control flow can be converted into hyperblocks, there are more opportunities to apply modulo scheduling. Instead of performing cluster assignment and modulo scheduling sequentially, HBUCAMS performs them in a single phase. This unified approach is more effective than a phase-ordered approach, since it optimizes the global code generation problem instead of seeking optimal solutions for each individual step. Experiments in the YHFT-DSP/700 compiler show that the proposed algorithm obtains better-optimized results than the ITSS algorithm.
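Modulo scheduling's starting point is standard: the initiation interval II is bounded below by resource and recurrence constraints, II >= max(ResMII, RecMII). A minimal sketch of that bound (the cluster-assignment interplay that HBUCAMS adds is not modeled here):

```python
import math

def res_mii(op_counts, unit_counts):
    """Resource-constrained lower bound: each resource class must fit its
    operations into every II cycles."""
    return max(math.ceil(op_counts[r] / unit_counts[r]) for r in op_counts)

def rec_mii(cycles):
    """Recurrence-constrained lower bound: for each dependence cycle,
    II >= total latency / total iteration distance."""
    return max(math.ceil(latency / distance) for latency, distance in cycles)

# Toy loop body: 6 ALU ops on 2 ALUs, 3 memory ops on 1 load/store unit,
# plus one recurrence of latency 4 spanning 2 iterations.
op_counts = {"alu": 6, "mem": 3}
unit_counts = {"alu": 2, "mem": 1}
cycles = [(4, 2)]
print(max(res_mii(op_counts, unit_counts), rec_mii(cycles)))   # II >= 3
```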
A Scheduling Algorithm for Dynamic Reconfigurable Computing
Qi Ji, Li Xi, Yu Haichen, Hu Nan, Gong Yuchang, and Wang Ligang
2007, 44(8):  1439-1447. 
Dynamic reconfigurable systems can be partially configured at run time without disturbing the execution of the original functions, and they have become a research focus in recent years. The scheduling of hardware tasks is one of the critical factors closely related to the performance of a dynamic reconfigurable computing system. In this paper, an MGS (minimum gap scheduling) algorithm is proposed, which employs a two-dimensional time-space coordinate system to allocate on-chip resources and execution time slices to hardware tasks. The concepts of task shadow and cost function are also presented to facilitate scheduling. The algorithm reduces the waste of reconfigurable resources and effectively improves task parallelism. It is easy to implement and suitable for both real-time and non-real-time situations. Simulation results show that the MGS algorithm not only reduces scheduling overhead but also achieves higher chip utilization and a lower task rejection ratio than existing scheduling algorithms.
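The abstract models each hardware task as a rectangle in a two-dimensional time-space plane (chip columns by execution time). A toy sketch of choosing, among candidate positions, a feasible one minimizing a gap-style cost; MGS's actual task shadow and cost function are not reproduced here:

```python
def feasible(placements, x, t, w, dur, chip_w):
    """A task occupies columns [x, x+w) during time [t, t+dur)."""
    if x + w > chip_w:
        return False
    return all(x + w <= px or px + pw <= x or     # disjoint in space, or
               t + dur <= pt or pt + pdur <= t    # disjoint in time
               for px, pt, pw, pdur in placements)

def schedule(task, placements, chip_w, horizon, arrival):
    """Greedy stand-in for MGS: scan candidate (column, start-time) slots
    and pick a feasible one minimizing start delay plus a left-packing
    bias (a hypothetical cost function)."""
    w, dur = task
    best = None
    for t in range(arrival, horizon):
        for x in range(chip_w - w + 1):
            if feasible(placements, x, t, w, dur, chip_w):
                cost = (t - arrival) + 0.01 * x
                if best is None or cost < best[0]:
                    best = (cost, x, t)
        if best is not None:
            break          # earliest feasible start already found
    return best and best[1:]

placements = [(0, 0, 4, 5)]          # an existing task: cols 0-3, time 0-5
print(schedule((3, 4), placements, chip_w=8, horizon=20, arrival=0))  # (4, 0)
```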