ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 July 2016, Volume 53 Issue 7
Energy-Efficient Fingerprint Matching Based on Reconfigurable Micro Server
Qian Lei, Zhao Jinming, Peng Dajia, Li Xiang, Wu Dong, Xie Xianghui
2016, 53(7):  1425-1437.  doi:10.7544/issn1000-1239.2016.20160076
Large-scale fingerprint-based applications need a high-performance fingerprint matching backend as support. Based on reconfigurable micro server (RMS) technology, we propose a software-hardware cooperative fingerprint matching approach. Exploiting the advantages of the reconfigurable hybrid-core computing architecture, our approach accelerates the compute-intensive part of the fingerprint matching algorithm with a highly customized hardware accelerator, while the parts that contain complex control flow and a large number of discrete memory accesses run on general-purpose processing cores. We implement an algorithm prototype and test its performance on an RMS computing node. The results show that a single RMS node achieves about 10,500 fingerprint matches per second with only 5 watts of power consumption. Compared with related work, the fingerprint matching performance of a single RMS computing node is 15.5 times that of a single X86 cluster node, and its energy efficiency is 583 times that of a single X86 cluster node and 5.4 times that of a Tesla C2075 based GPU server. Built on RMS technology, our method is more flexible and extensible than an FPGA platform, and is expected to become an effective technical solution for building large-scale fingerprint matching systems in the future.
A High Energy Efficiency FFT Accelerator on DSP Chip
Lei Yuanwu, Chen Xiaowen, Peng Yuanxi
2016, 53(7):  1438-1446.  doi:10.7544/issn1000-1239.2016.20160123
Fast Fourier transform (FFT) is one of the most time-consuming algorithms in digital signal processing (DSP), and its performance and energy efficiency have a significant effect on many DSP applications. This paper presents a high energy efficiency, variable-size FFT accelerator based on matrix transposition on a DSP chip. Several parallel schemes are employed to exploit instruction-level and task-level parallelism for batches of small-size FFTs and for big-size Cooley-Tukey FFTs. A ping-pong structured multi-bank data memory (MBDM) is presented to overlap data movement with FFT calculation. Moreover, based on the MBDM, a fast matrix transposition algorithm with basic-block transposition is presented to avoid column-wise matrix accesses and improve the utilization of DDR bandwidth. A hybrid twiddle factor generation scheme, combining a lookup table with online CORDIC calculation, is presented to reduce the hardware cost of twiddle factor generation. Experimental results show that our FFT accelerator prototype reaches a power efficiency of 146 GFLOPS/W, an energy efficiency improvement of about two orders of magnitude over multi-threaded FFTW on an Intel Xeon CPU.
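For readers unfamiliar with the decomposition behind the big-size FFT path, the NumPy sketch below shows the four-step Cooley-Tukey scheme that splits an N-point FFT into batches of small FFTs, twiddle-factor scaling, and a matrix transposition. It illustrates the algorithmic structure only, not the MBDM-based accelerator itself.

    # Four-step Cooley-Tukey FFT: an (n1*n2)-point FFT computed as small FFTs
    # plus twiddle scaling and a final transposition to restore output order.
    import numpy as np

    def four_step_fft(x, n1, n2):
        assert len(x) == n1 * n2
        a = x.reshape(n1, n2)              # view the signal as an n1 x n2 matrix
        a = np.fft.fft(a, axis=0)          # n2 column FFTs of size n1
        k1 = np.arange(n1).reshape(n1, 1)
        m2 = np.arange(n2).reshape(1, n2)
        a = a * np.exp(-2j * np.pi * k1 * m2 / (n1 * n2))  # twiddle factors
        a = np.fft.fft(a, axis=1)          # n1 row FFTs of size n2
        return a.T.reshape(-1)             # transpose to put the output in order

    x = np.random.rand(1024) + 1j * np.random.rand(1024)
    print(np.allclose(four_step_fft(x, 32, 32), np.fft.fft(x)))   # True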
P-SMART: An Energy-Efficient NoC Router Based on SMART
Li Bin, Dong Dezun, Wu Ji, Xia Jun
2016, 53(7):  1447-1453.  doi:10.7544/issn1000-1239.2016.20160150
As the number of on-chip cores in chip multiprocessors (CMPs) increases, the size of networks-on-chip (NoCs) and the network latency increase as well. NoCs consume an increasing fraction of chip power as technology and voltage continue to scale down, and static power accounts for a growing share of the total. Processor designers therefore strive to send under-utilized cores into deep sleep states to improve overall energy efficiency. However, even in state-of-the-art CMP designs, when a core goes to sleep, the router attached to it must remain active to continue forwarding packets, and such a router carries a low traffic load because no packets originate from or terminate at the sleeping core. SMART (single-cycle multi-hop asynchronous repeated traversal) is an on-chip network that aims to provide a single-cycle datapath all the way from source to destination. In this paper, we propose reducing the number of virtual channels (VCs) of routers attached to sleeping cores on top of a SMART NoC, which reduces power consumption with little performance penalty. We evaluate our network with synthetic traffic. The results show that VC power gating increases network latency by less than 2% under low workloads and reduces power by about 13.4% compared with a network without bypass paths.
PLUFS: An Overhead-Aware Online Energy-Efficient Scheduling Algorithm for Periodic Real-Time Tasks in Multiprocessor Systems
Zhang Dongsong, Wang Jue, Zhao Zhifeng, Wu Fei
2016, 53(7):  1454-1466.  doi:10.7544/issn1000-1239.2016.20160163
Although some existing multiprocessor energy-efficient approaches for periodic real-time tasks achieve additional energy savings by taking the practical processor overhead into consideration, they cannot guarantee the optimal feasibility of periodic tasks. Given the non-negligible overhead of switching processor states in embedded real-time systems, this paper proposes an overhead-aware online energy-efficient real-time scheduling algorithm for multiprocessor systems: periodic tasks with largest utilization first based on switching overhead (PLUFS). PLUFS utilizes the fluid scheduling concept of the time local (TL) remaining-execution plane and the switching overhead of processor states to schedule real-time tasks energy-efficiently on multiprocessors, both at the initial time of each TL plane and at the completion time of a periodic task within a TL plane. Consequently, PLUFS obtains a reasonable tradeoff between real-time constraints and energy saving while realizing the optimal feasibility of periodic tasks. Mathematical proof and extensive simulation results demonstrate that PLUFS guarantees the optimal feasibility of periodic tasks, saves more energy than existing algorithms on average, and improves the energy saved by some existing algorithms by about 10% to 20%.
XOS: A QoE Oriented Energy Efficient Heterogeneous Multi-Processor Schedule Mechanism
Gong Xiaoli, Yu Haiyang, Sun Chengjun, Li Tao, Zhang Jin, Ma Jie
2016, 53(7):  1467-1477.  doi:10.7544/issn1000-1239.2016.20160113
Smart mobile devices play an increasingly important part in people's daily life. However, the pursuit of ever-higher performance directly conflicts with limited battery capacity, and this contradiction has begun to hinder the development of smart mobile devices. The heterogeneous multi-processor architecture, which can balance user experience against energy consumption, has therefore become a new solution. Based on a compartmental QoE model, we present XOS, a scheduling mechanism oriented to heterogeneous multi-processor devices that provides a highly energy-efficient solution. In XOS, the operating system recognizes user-interaction tasks from cross-layer information and allocates more computing resources to them to guarantee the quality of experience, while resources for other tasks are limited to reduce energy consumption. A simulation system is built to verify the effectiveness of the XOS model and to guide a reasonable optimization. XOS is then implemented and evaluated on an Odroid-XU3 board running Android. The results show that tasks scheduled by XOS suffer only a 2.7%~7.3% QoE loss while reducing energy consumption by 8%~48%.
Optimization Research on Thermal and Power Management for Real-Time Systems
Li Tiantian, Yu Ge, Song Jie
2016, 53(7):  1478-1492.  doi:10.7544/issn1000-1239.2016.20160134
The power consumption of real-time systems has received much attention from both academia and industry because of constraints on energy, peak temperature, and the deadlines of real-time tasks, and many related studies now exist. Traditional temperature-unaware research usually adopts DVS to scale processor states for optimal power management. However, as the power density of processors keeps rising with the continuous shrinking of chip size, the mutual effect between temperature and power can no longer be ignored, and many temperature-aware optimization approaches have been derived from the traditional methods. This paper first gives an overview of the three models (task, thermal, and power) on which this research is based. Second, it divides existing work into two categories, temperature-unaware traditional research and temperature-aware optimization research, and further divides the latter into single-task optimization and multi-task scheduling. Third, it compares the studies in terms of mechanism, optimization goal and effect, and scheduling time, analyzing their advantages and disadvantages. Finally, it points out future research directions.
Green Hierarchical Management for Distributed Datacenter Containers
Hou Xiaofeng, Song Pengtao, Tang Weichao, Li Chao, Liang Xiaoyao
2016, 53(7):  1493-1502.  doi:10.7544/issn1000-1239.2016.20160119
In recent years, modular datacenters (datacenter containers) have become a promising IT infrastructure solution due to their impressive efficiency and scalability. Pre-fabricated containers can be deployed not only in existing warehouse-scale datacenter facilities for capacity expansion, but also in urban or remote areas to support onsite Internet of Things (IoT) data processing. Combining conventional centralized datacenter servers with distributed containers offers cloud providers new opportunities to exploit local renewable energy resources and to reduce unnecessary data movement overhead. This paper investigates a hierarchical management strategy for emerging geographically distributed datacenter containers. We logically group distributed datacenter containers into multiple classes with different data access patterns. At runtime, a central navigation system monitors the utilization of each container class and performs dynamic sleep scheduling to further improve overall energy efficiency. Experimental results on our scaled-down test-bed show that the proposed mechanism can save 12%~32% of energy cost while ensuring high performance.
Energy Optimization Heuristic for Deadline-Constrained Workflows in Heterogeneous Distributed Systems
Jiang Junqiang, Lin Yaping, Xie Guoqi, Zhang Shiwen
2016, 53(7):  1503-1516.  doi:10.7544/issn1000-1239.2016.20160137
Most existing energy optimization heuristics for deadline-constrained workflows in DVFS-enabled heterogeneous distributed systems tend to be trapped in local optima. In this paper, we propose a new energy optimization heuristic called backward frog-leaping global energy-conscious scheduling (BFECS). The algorithm makes full use of the surplus time between the lower bound of the workflow and the constrained deadline. Specifically, it starts from the constrained deadline and leapfrogs towards the lower bound of the workflow with varying leap intervals. Throughout the leapfrogging process, the leap intervals are continually adjusted according to the locally optimal value until the endpoint of leapfrogging is reached, and the scheduling sequence with the least running energy consumption is saved along the way. Furthermore, additional energy can be saved by leveraging slack time reclamation: the idle time slots caused by precedence constraints are absorbed by tasks running at a lower, suitable voltage/frequency via DVFS, without violating the precedence constraints of the workflow or breaking the deadline. The experimental results show that the proposed algorithm decreases energy consumption significantly.
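The slack-reclamation argument rests on the standard CMOS dynamic power model usually assumed in DVFS work (the abstract does not state the paper's exact power model):

    P_{\mathrm{dyn}} = \alpha C_{\mathrm{eff}} V_{dd}^{2} f, \qquad
    E_{\mathrm{task}} \approx P_{\mathrm{dyn}} \cdot \frac{c}{f} = \alpha C_{\mathrm{eff}} V_{dd}^{2} c

where c is the task's cycle count. Since the supply voltage can be lowered roughly in step with the frequency, stretching a task into idle slack reduces its energy superlinearly while the precedence constraints and the deadline are still respected.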
A Statistic-Based Method for Hard-Disk Power Consumption in Storage System
Sun Jian, Li Zhanhuai, Zhang Xiao, Wang Huifeng, Zhao Xiaonan
2016, 53(7):  1517-1531.  doi:10.7544/issn1000-1239.2016.20160133
With the rapid development of big data, the power consumption of storage systems has become a major issue in today's datacenters, and reducing it is both an urgent practical problem and a hot research topic in computer science. Since the hard disk drive is the primary storage medium in today's storage systems, modeling hard-disk power consumption is attracting growing attention. An accurate disk power model not only helps solve the problem of power matching among datacenter devices, but also allows the accuracy of energy-efficient solutions to be estimated. We develop a statistic-based hard-disk power modeling method that estimates the power consumption of storage workloads. The model compensates for the weaknesses of traditional fine-grained models and is more accurate than coarse-grained models. In practical applications, it requires neither recording disk-internal activities nor tracing complex parameters. Our power estimates are highly accurate, with an error of about 3%, and the model is applicable to both synchronous and asynchronous I/O. Moreover, it can also be applied to various online storage systems and datacenters.
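As a rough illustration of what a statistic-based model looks like, the sketch below fits disk power to coarse per-interval workload statistics with least squares. The feature set, the numbers, and the linear form are assumptions made for illustration, not the paper's actual model.

    # Hypothetical statistic-based disk power model: power regressed on
    # per-interval workload statistics (read/write throughput, IOPS).
    import numpy as np

    stats = np.array([[ 10.0,   5.0,  200.0],
                      [ 80.0,  20.0, 1500.0],
                      [  0.5,  60.0,  900.0],
                      [120.0,  40.0, 2500.0]])       # [read_MBps, write_MBps, IOPS]
    power_w = np.array([5.1, 6.8, 6.0, 7.9])          # measured disk power (W)

    X = np.hstack([stats, np.ones((len(stats), 1))])  # intercept models idle power
    coef, *_ = np.linalg.lstsq(X, power_w, rcond=None)

    new_interval = np.array([50.0, 10.0, 800.0, 1.0])
    print("estimated power: %.2f W" % (new_interval @ coef))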
An Energy-Efficient Routing Algorithm Based on Bezier Curve in Wireless Sensors Networks
Wan Shaohua, Zhang Yin
2016, 53(7):  1532-1543.  doi:10.7544/issn1000-1239.2016.20150172
Wireless sensor networks have tremendous value for event-based applications. However, once their battery energy is exhausted, sensor nodes fail and drop out of use, so research on energy-efficient algorithms plays a significant role in sensor networks. Multipath routing can distribute the energy load over multiple routes and thus increase the lifetime and quality of the network. Evenly spreading the routing task among more nodes also protects individual nodes from failure, since a heavily loaded node is likely to deplete its power quickly. In contrast, shipping all traffic along the shortest path corresponds to the heavily congested case: it overloads the nodes along the optimal routes between each sink and source pair and ultimately shortens the lifetime of the network. In this paper, we first propose an energy-efficient multipath routing algorithm based on Bezier curves (MPRB) that allows a given source node to send data samples to a given sink node in large-scale sensor networks; compared with typical multipath routing algorithms, the experimental results demonstrate that our algorithm achieves better energy efficiency. Second, motivated by the fact that the number of trees in the query region influences the lifetime gain, we compare two new energy-efficient spatial-temporal query algorithms and analyze, both theoretically and experimentally, how the way the query region is divided and the number of sub-query regions affect the energy consumption of the network. The results show that the algorithm with angular query-region division is energy efficient and provides a 'green' mechanism.
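The geometric core of Bezier-curve multipath routing can be sketched as follows: each route follows waypoints sampled from a Bezier curve whose control point is offset from the source-sink line, so different offsets yield spatially separated paths. How MPRB maps sensor nodes onto these waypoints (e.g., by geographic forwarding) is an assumption not covered by this sketch.

    # Waypoints on quadratic Bezier curves: different control-point offsets
    # bend the route to different sides of the source-sink line.
    import numpy as np

    def bezier_waypoints(src, dst, offset, n=20):
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        mid = (src + dst) / 2.0
        d = dst - src
        normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal
        ctrl = mid + offset * normal                          # bend the path
        t = np.linspace(0.0, 1.0, n)[:, None]
        return (1 - t) ** 2 * src + 2 * (1 - t) * t * ctrl + t ** 2 * dst

    # Three spatially separated routes between the same source and sink.
    for off in (-30.0, 0.0, 30.0):
        print(bezier_waypoints((0, 0), (100, 100), off, n=5).round(1))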
ICIC_Target: A Novel Discovery Algorithm for Local Causality Network of Target Variable
Li Yan, Wang Ting, Liu Wanwei, Zhang Xiaoyan
2016, 53(7):  1544-1560.  doi:10.7544/issn1000-1239.2016.20148251
Causality research aims to reveal the laws of evolution of nature, society, and humanity. It now receives widespread attention because of its important applications in daily life and scientific research, but many difficulties and challenges remain. This paper presents a unified model to explain stimulating and inhibiting causalities. Based on this model, we also present a framework, ICIC, and a novel algorithm, ICIC_Target, to infer the local causal structure of a target variable from observational data without restrictive assumptions such as an acyclic structure or the absence of hidden variables. Following our description of the essence and properties of causality, as well as several classical theories proposed by Judea Pearl, Gregory F. Cooper and others, we introduce the concepts of exogenous variables and clique-like structures (ICliques) to obtain a rough ordering of variables, which is necessary for revealing causality accurately and efficiently. To evaluate our approach, we conduct experiments comparing it with HITON, IC, PC, PCMB and several other methods on four datasets with different data types. The results demonstrate the higher performance and stronger robustness of our algorithm ICIC_Target. We also discuss its advantages in stability and complexity.
A Cluster Evolutionary Algorithm for Power System Economic Load Dispatch Optimization
Chen Hao, Pan Xiaoying, Zhang Jie
2016, 53(7):  1561-1575.  doi:10.7544/issn1000-1239.2016.20150378
In electric power systems, economic load dispatch (ELD) is an important topic: it not only helps build safe and stable operation plans and prolong the service life of generating units, but also saves energy and maximizes the economic benefits of power enterprises. The practical ELD problem has a non-smooth cost function with nonlinear constraints, which makes it difficult to solve effectively. In this study, a novel global optimization algorithm, the cluster evolutionary algorithm (CEA), is proposed to solve the ELD problem. In CEA, a virtual cluster organization is constructed among individuals in order to dynamically adjust the search process of the simulated evolutionary system and improve the optimization efficiency of the population. In simulations, CEA is applied to 13 test functions and 3 IEEE test systems to verify its feasibility. The experiments show that CEA obtains high-quality solutions at lower computational cost on the 13 test functions, and it outperforms other existing techniques on the 3 IEEE systems. Considering the quality of the solutions obtained, this method is a promising alternative for solving the ELD problem in practical power systems.
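For reference, the cost model commonly used in this line of work, non-smooth because of valve-point effects, is

    \min F = \sum_{i=1}^{N} F_i(P_i), \qquad
    F_i(P_i) = a_i P_i^{2} + b_i P_i + c_i + \bigl| e_i \sin\bigl(f_i (P_i^{\min} - P_i)\bigr) \bigr|

subject to the power balance \sum_{i=1}^{N} P_i = P_D + P_L and the generator limits P_i^{\min} \le P_i \le P_i^{\max}. Whether the paper's three IEEE test systems include transmission losses or valve-point terms is not stated in the abstract; the formulation above is the standard one assumed here.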
The Error Theory of Polynomial Smoothing Functions for Support Vector Machines
He Wenbin, Liu Qunfeng, Xiong Jinzhi
2016, 53(7):  1576-1585.  doi:10.7544/issn1000-1239.2016.20148462
Smoothing functions play an important role in the theory of smooth support vector machines. In 1996, Chen et al proposed a smoothing function for support vector machines, the integral of the Sigmoid function, and solved the error problem of that smoothing function. From 2005 to 2009, Yuan, Xiong and Liu proposed an infinite family of polynomial smoothing functions and the corresponding reformulations for support vector machines; however, they did not address the error functions of this class of polynomial smoothing functions. To fill this gap, this paper studies the error functions using the Newton-Hermite interpolation method. The results show that: 1) the error functions of this class of polynomial smoothing functions can be calculated with the Newton-Hermite interpolation method, and a detailed algorithm is given; 2) there are infinitely many error functions for this class of polynomial smoothing functions, and a general formulation describing them is obtained; 3) these error functions have several important properties, for which strict proofs are given. By solving the problem of the error functions and their properties, this paper establishes an error theory for this class of polynomial smoothing functions, which provides basic theoretical support for smooth support vector machines.
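For context, the 1996 smoothing function of Chen et al mentioned above approximates the plus function x_+ that appears in the unconstrained SVM reformulation:

    x_{+} = \max(x, 0), \qquad
    p(x, k) = x + \frac{1}{k} \ln\!\bigl(1 + e^{-kx}\bigr), \qquad
    0 < p(x, k) - x_{+} \le \frac{\ln 2}{k}

so the approximation error vanishes as the smoothing parameter k grows. The polynomial smoothing functions of Yuan, Xiong and Liu, and the error functions studied in this paper, are not reproduced here.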
Optimizing eSTR Algorithm for Solving Constraint Satisfaction Problems
Wang Ruiwei, Li Zhanshan, Li Hongbo
2016, 53(7):  1586-1595.  doi:10.7544/issn1000-1239.2016.20150284
Table constraints, i.e., constraints given in extension by listing all allowed (or forbidden) tuples, are straightforward and easy to understand, and they are intensively studied in constraint programming (CP) because they appear in many real-world applications from areas such as design, databases, configuration, and preference modeling. However, as the number of constraints and the number of tuples grow, the space cost of table constraints and the time cost of consistency checking become key concerns. eSTR is an algorithm that extends simple tabular reduction (STR) to higher-order consistency. After an in-depth analysis of the eSTR algorithm, this paper proposes two optimizations: the PWsup data structure and the minimal constraint scope. We prove that enforcing pairwise consistency plus generalized arc consistency (PWC+GAC) on the constraint network with the minimal constraint scope is equivalent to doing so with the original constraint scope. The minimal constraint scope further reduces the space cost of the eSTR2 algorithm by deleting columns of the constraint tables, while the PWsup data structure reduces CPU running time by avoiding unnecessary checks of pairwise support (PW-support), since tuples in a constraint's table may never lose any PW-supports on the tables of neighbouring constraints. The experimental results show that combining our methods with eSTR2 clearly outperforms eSTR2 and eSTR2w on many instances of different problems, since it reduces both the space cost and the CPU running time of the eSTR algorithm.
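As background, basic STR (which eSTR extends with pairwise-consistency checks) can be sketched in a few lines: it scans the current table, discards tuples containing a value no longer in a domain, and prunes domain values left without a supporting tuple. The eSTR-specific PW-support bookkeeping that the proposed PWsup structure optimizes is not reproduced here.

    # Basic simple tabular reduction (STR) over one table constraint.
    def str_filter(table, domains):
        """table: list of tuples; domains: list of sets, one per variable.
        Returns the reduced table and prunes unsupported values in place."""
        alive = [t for t in table
                 if all(v in domains[i] for i, v in enumerate(t))]
        supported = [set() for _ in domains]
        for t in alive:
            for i, v in enumerate(t):
                supported[i].add(v)
        for i, dom in enumerate(domains):
            dom &= supported[i]          # prune values with no supporting tuple
        return alive

    doms = [{0, 1, 2}, {0, 1}, {1, 2}]
    tab = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
    tab = str_filter(tab, doms)
    print(tab, doms)   # (2,0,3) dies (3 not in dom 2); value 2 pruned from dom 0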
An Algorithm Based on Extension Rule For Solving #SAT Using Complementary Degree
Ouyang Dantong, Jia Fengyu, Liu Siguang, Zhang Liming
2016, 53(7):  1596-1604.  doi:10.7544/issn1000-1239.2016.20150032
The #SAT problem, also called model counting, is one of the important and challenging problems in artificial intelligence and is widely used in that field. After an in-depth study of the extension-rule-based model counting algorithm CER, we propose a method that uses complementary degree for the #SAT problem. We formalize the computing procedure of solving #SAT by introducing an SE-tree that produces all the subsets of the clause set that need to be computed. As closed nodes are added to the SE-tree, most subsets that contain complementary literals are never generated, and pruning does not miss any true resolutions. We then present the concept of complementary degree and extend the nodes of the SE-tree in descending order of complementary degree. With this extension order, the subsets that contain complementary literals and have smaller size are generated early, the number of redundant generated nodes is reduced, and the calculation needed to decide the complementarity of subsets is also reduced. Results show that the corresponding algorithm runs faster than the CER algorithm and further improves the solving efficiency on problems whose complementary factor is low.
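A hypothetical sketch of the ordering heuristic is given below. It assumes that the complementary degree of a candidate clause is the number of clauses already in the node's subset with which it shares a complementary literal; this is only one possible reading of the metric described in the abstract, and the paper's exact definition may differ.

    # Literals are signed ints (x and -x are complementary); a clause is a
    # frozenset of literals. Candidates with higher complementary degree are
    # expanded first, so complementary subsets are met (and pruned) early.
    def complementary(c1, c2):
        return any(-lit in c2 for lit in c1)

    def complementary_degree(cand, chosen):
        return sum(1 for c in chosen if complementary(cand, c))

    def order_extensions(candidates, chosen):
        return sorted(candidates,
                      key=lambda c: complementary_degree(c, chosen),
                      reverse=True)

    chosen = [frozenset({1, 2}), frozenset({-3})]
    cands = [frozenset({3, 4}), frozenset({-1, -2}), frozenset({5})]
    print(order_extensions(cands, chosen))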
Distributed Low Rank Representation-Based Subspace Clustering Algorithm
Xu Kai, Wu Xiaojun, Yin Hefeng
2016, 53(7):  1605-1611.  doi:10.7544/issn1000-1239.2016.20148362
Vision problems ranging from image clustering to motion segmentation can naturally be framed as subspace segmentation problems, in which one aims to recover multiple low-dimensional subspaces from noisy and corrupted input data. The low rank representation-based subspace segmentation algorithm (LRR) formulates the problem as a convex optimization and achieves impressive results; however, it takes a long time to solve the convex program, and its clustering accuracy is not high enough. Therefore, this paper proposes a distributed low rank representation-based sparse subspace clustering algorithm (DLRRS). DLRRS adopts distributed parallel computing to obtain the coefficient matrix, then takes the absolute value of each element, retains the k largest coefficients per column while setting the other elements to 0 to form a new coefficient matrix, and finally performs spectral clustering on the new coefficient matrix. Because DLRRS has no incremental learning capability, we further present a scalable distributed low rank representation-based sparse subspace clustering algorithm (SDLRRS): when new samples arrive, SDLRRS uses the previous clustering result to classify them and obtain the final result. Experimental results on the AR and Extended Yale B datasets show that the improved algorithms not only obviously reduce the running time but also achieve higher accuracy, which verifies that they are efficient and feasible.
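The coefficient post-processing described above is simple enough to sketch directly. The sketch below assumes the LRR coefficient matrix has already been computed (distributedly or otherwise) and uses the common symmetrization W = |C| + |C|^T before spectral clustering; solving the LRR program itself and the incremental SDLRRS extension are omitted.

    # Keep the k largest magnitudes per column of the coefficient matrix,
    # then spectrally cluster the induced affinity matrix.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def sparsify_columns(C, k):
        A = np.abs(C)
        S = np.zeros_like(A)
        for j in range(A.shape[1]):
            top = np.argsort(A[:, j])[-k:]    # indices of the k largest entries
            S[top, j] = A[top, j]
        return S

    def cluster_from_coefficients(C, k, n_clusters):
        S = sparsify_columns(C, k)
        W = S + S.T                           # symmetric affinity matrix
        sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
        return sc.fit_predict(W)

    rng = np.random.default_rng(0)
    C = rng.random((60, 60))                  # stand-in for an LRR coefficient matrix
    print(cluster_from_coefficients(C, k=5, n_clusters=3))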
Non-Functional Requirements Oriented Software Process Modeling
Zhang Xuan, Li Tong, Wang Xu, Dai Fei, Xie Zhongwen, Yu Qian
2016, 53(7):  1612-1630.  doi:10.7544/issn1000-1239.2016.20150112
Software quality is closely tied to non-functional requirements (NFRs) and depends largely on the software process. Based on this viewpoint, we propose an approach to modeling NFR-oriented software processes that collects process strategies from different software engineering processes and uses aspect-oriented modeling. The purpose of the approach is to ensure the development or evolution of high-quality software throughout its whole life cycle. First, a knowledge base of process strategies is created to store the activities that ensure software quality. Based on these strategies, the aspect-oriented approach defines corresponding aspects that are composed into the base software process models; the key benefit is that the activities for NFRs and the base process models can be created separately and composed later. In addition, conflicts among aspects, and between aspects and base models, are detected and controlled. Second, a modeling aid, NPAT (non-functional requirements-oriented processes aided tool), is developed to support the modeling of NFR-oriented software processes. Finally, the theory, the approach, and the tool are applied in a case study, which shows that the theory and approach are feasible and the tool is effective. The NFR-oriented software process modeling approach can help an organization enhance software quality by adding NFR activities to its software processes.
An Approximation Algorithm of Betweenness Centrality Based on Vertex Weighted
Wang Min, Wang Lei, Feng Xiaobing, Cao Baoxiang
2016, 53(7):  1631-1640.  doi:10.7544/issn1000-1239.2016.20148355
Betweenness centrality (BC) is a widely used indicator of the importance of network vertices. The fastest known algorithm has time complexity O(V×E), and computing the BC of all vertices requires n single-source shortest-path searches. In the big data era, networks have ever more nodes and edges, so the amount of computation in the BC algorithm is huge: obtaining every vertex's BC value takes a long time, and the algorithm cannot be applied in practice. Related work focuses on approximation to reduce the running time of the BC algorithm, but it still cannot reduce the amount of computation significantly. To further reduce the computation, we propose a vertex-weighted approximation algorithm that accumulates what would otherwise be many repeated calculations into a single pass by weighting vertices and selecting high-impact vertices as sources for computing BC. In this way, we reduce the amount of computation significantly while meeting the requirements of a low error rate and high performance. The vertex-weighted approximation algorithm achieves a 25x speedup with an error rate below 0.01%.
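A sketch of the general source-sampling idea is shown below: Brandes-style dependency accumulation is run only from a few sources and the result is rescaled. Choosing sources by degree is an illustrative stand-in; the paper's vertex-weighting and accumulation scheme is more elaborate than this sketch.

    # Approximate BC on an unweighted graph from a few high-degree sources.
    from collections import deque

    def approx_bc(adj, num_sources):
        n = len(adj)
        sources = sorted(range(n), key=lambda v: len(adj[v]), reverse=True)[:num_sources]
        bc = [0.0] * n
        for s in sources:
            dist = [-1] * n
            sigma = [0] * n
            pred = [[] for _ in range(n)]
            dist[s], sigma[s] = 0, 1
            order, q = [], deque([s])
            while q:                               # BFS shortest-path counting
                v = q.popleft()
                order.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        pred[w].append(v)
            delta = [0.0] * n
            for w in reversed(order):              # back-propagate dependencies
                for v in pred[w]:
                    delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
                if w != s:
                    bc[w] += delta[w] * n / num_sources   # rescale sampled estimate
        return bc

    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a 4-node path graph
    print(approx_bc(adj, num_sources=2))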
Scheduling of Mixed Criticality Real-Time Tasks Set with Deadline as the Critical Parameter
Huang Lida, Li Renfa
2016, 53(7):  1641-1647.  doi:10.7544/issn1000-1239.2016.20150331
An increasing trend in real-time systems is to integrate multiple functionalities of different levels of criticality, or importance, on the same platform, which has led to research on mixed-criticality systems. In current mixed-criticality scheduling research, the worst-case execution times or the periods of high-criticality tasks serve as the critical parameters, differing between the high-criticality and low-criticality modes. The deadline is also an important timing parameter, especially in hard real-time systems, yet how to schedule mixed-criticality tasks with the deadline as the critical parameter has received little discussion. This paper considers mixed-criticality scheduling on a uniprocessor platform in which the deadlines of tasks are the critical parameter. Towards satisfying the schedulability of high-criticality tasks in both modes, we use response time analysis to obtain the timing demand of fixed-priority tasks and propose the raising critical-level in advance (RCLA) scheduling algorithm. RCLA lets high-criticality tasks with lower priority preempt low-criticality tasks with higher priority earlier, in a limited way. While meeting the deadlines of high-criticality tasks in both the high-criticality and low-criticality modes, RCLA schedules as many mixed-criticality tasks as possible. Simulation results illustrate the benefits of this scheme.
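In its classical fixed-priority form, the response time analysis referred to above is the fixed-point iteration

    R_i^{(m+1)} = C_i + \sum_{\tau_j \in hp(\tau_i)} \left\lceil \frac{R_i^{(m)}}{T_j} \right\rceil C_j, \qquad R_i^{(0)} = C_i

where hp(\tau_i) is the set of tasks with priority higher than \tau_i, C_j and T_j are their worst-case execution times and periods, and \tau_i is schedulable if the iteration converges to R_i \le D_i. How RCLA extends this recurrence with mode-dependent parameters is beyond what the abstract specifies.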