ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 February 2017, Volume 54 Issue 2
Scientific Big Data Management: Concepts, Technologies and System
Li Jianhui, Shen Zhihong, Meng Xiaofeng
2017, 54(2):  235-247.  doi:10.7544/issn1000-1239.2017.20160847
In recent years, as more and more large-scale scientific facilities have been built and significant scientific experiments have been carried out, scientific research has entered an unprecedented big data era. Scientific research in the big data era is a process of big science, big demand, big data, big computing, and big discovery. It is therefore of great significance to develop a full-life-cycle data management system for scientific big data. In this paper, we first introduce the background of the development of scientific big data management systems. Then we specify the concept and three key characteristics of scientific big data. After a review of scientific data resource development projects and scientific data management systems, a framework is proposed for the full-life-cycle management of scientific big data. Further, we introduce the key technologies of the management framework, including data fusion, real-time analysis, long-term storage, cloud service, and data opening and sharing. Finally, we summarize the research progress in this field and look into the application prospects of scientific big data management systems.
Data Management Challenges and Real-Time Processing Technologies in Astronomy
Yang Chen, Weng Zujian, Meng Xiaofeng, Ren Wei, Xin Rihui, Wang Chunkai, Du Zhihui, Wan Meng, Wei Jianyan
2017, 54(2):  248-257.  doi:10.7544/issn1000-1239.2017.20170005
In recent years, many large telescopes that can produce petabytes or even exabytes of data have come into operation. These telescopes benefit not only the discovery of new astronomical phenomena but also the confirmation of existing astrophysical models. However, the star tables they produce are so large that a single database cannot manage them efficiently. Take GWAC, designed in China with 40 cameras, as an example: it takes high-resolution images every 15 s, and its database must answer star-table queries within the same 15 s window. Moreover, the database has to process multi-camera data, find abnormal stars in real time, query their recent historical data very quickly, and persist star tables and support offline queries over them as fast as possible. To address these problems, we first design a distributed data generator to simulate the GWAC working process. Second, we propose a two-level cache architecture that can not only process multi-camera data and find abnormal stars in local memory, but also query star tables in a distributed memory system. Third, we propose a storage format named star cluster, which stores a group of stars in one physical file to trade off the efficiency of persistence against that of querying. Finally, our query engine, based on an index table, can query both the second-level cache and the star-cluster format. Experimental results show that our distributed system prototype can satisfy the demands of GWAC on our server cluster.
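The star-cluster idea above can be illustrated with a minimal in-memory sketch (not the paper's implementation; the class and grouping rule below are hypothetical): time-series records for a group of stars share one container, so persistence is batched per cluster while a single star's recent history stays cheap to retrieve.

```python
from collections import defaultdict

class StarClusterStore:
    """Toy 'star cluster' layout: records for a group of stars share one
    container (standing in for one physical file), trading off batched
    persistence against per-star query cost."""

    def __init__(self, cluster_size=4):
        self.cluster_size = cluster_size   # stars per cluster (illustrative)
        self.clusters = defaultdict(list)  # cluster_id -> [(star_id, ts, mag)]
        self.index = {}                    # star_id -> cluster_id

    def _cluster_of(self, star_id):
        # simplistic assignment: consecutive ids share a cluster
        return star_id // self.cluster_size

    def append(self, star_id, ts, magnitude):
        cid = self._cluster_of(star_id)
        self.index[star_id] = cid
        self.clusters[cid].append((star_id, ts, magnitude))

    def recent_history(self, star_id, n=3):
        cid = self.index[star_id]
        rows = [r for r in self.clusters[cid] if r[0] == star_id]
        return rows[-n:]

store = StarClusterStore()
for t in range(5):                 # five simulated 15-second exposures
    for star in range(8):          # eight stars -> two clusters
        store.append(star, ts=t * 15, magnitude=12.0 + star)

print(len(store.clusters))         # 2
print(store.recent_history(3, n=2))
```

Grouping stars this way means one sequential write per cluster per exposure, while a query for one star only scans its own cluster.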
Data Management Challenges and Event Index Technologies in High Energy Physics
Cheng Yaodong, Zhang Xiao, Wang Peijian, Zha Li, Hou Di, Qi Yong, Ma Can
2017, 54(2):  258-266.  doi:10.7544/issn1000-1239.2017.20160939
Nowadays, more and more scientific data is produced by new-generation high energy physics facilities. The data from even a single experiment can reach the PB or EB scale, which brings big challenges to data management technologies such as data acquisition, storage, transmission, sharing, analysis and processing. The event is the basic data unit of high energy physics, and one large high energy physics experiment can produce trillions of events. Traditional high energy physics data processing adopts the file as the basic data management unit, with each file containing thousands of events. The benefit of the file-based method is that it simplifies the data management system. However, a given physics analysis task is usually interested in only very few events, which leads to problems including the transfer of much redundant data, I/O bottlenecks and low data processing efficiency. To solve these problems, this paper proposes an event-oriented high energy physics data management method, which focuses on efficient indexing technology for massive numbers of events. In this method, event data is still stored in ROOT files, while large numbers of events are indexed by specified properties and the indexes are stored in a NoSQL database. Finally, experimental results show the feasibility of the method, and an optimized HBase system can meet the requirements of the event index.
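The split described above — event payloads in files, a property index in a NoSQL store — can be sketched in a few lines (a plain dict stands in for HBase, and the file names, property and bucketing rule are illustrative, not the experiment's schema):

```python
# Toy event index: event data stays in (simulated) ROOT files, while a
# property index -- standing in for the HBase table -- maps an indexed
# attribute to (file, offset) so a job fetches only the events it needs.
files = {
    "run1.root": [{"energy": 5.2}, {"energy": 91.0}, {"energy": 4.8}],
    "run2.root": [{"energy": 90.5}, {"energy": 3.1}],
}

index = []  # rows of (energy_bucket, file, offset), like a row key + cell
for fname, events in files.items():
    for off, ev in enumerate(events):
        index.append((int(ev["energy"] // 10), fname, off))

def query(bucket):
    """Return only the matching events, instead of scanning whole files."""
    return [files[f][off] for b, f, off in index if b == bucket]

print(query(9))   # the two events near 90 GeV, without touching the rest
```

The point of the design is visible even at this scale: the analysis touches two events out of five rather than reading both files end to end.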
Data Infrastructure for Remote Sensing Big Data: Integration, Management and On-Demand Service
Li Guoqing, Huang Zhenchun
2017, 54(2):  267-283.  doi:10.7544/issn1000-1239.2017.20160837
The rapid growth of remote sensing data and geoscience research strongly pushes the earth sciences forward and poses great challenges to data infrastructures for remote sensing big data, including collection, storage, management, analysis and delivery. The de facto remote sensing data infrastructures have become the bottleneck of remote sensing data analysis workflows because of their limited capability, scalability and performance. In this paper, data infrastructures for remote sensing big data are catalogued into six classes based on features such as basic service unit, distributivity, heterogeneity, space-time continuity and on-demand processing. Then, architectures are designed for all six classes of data infrastructures, and implementation technologies such as data collection and integration, data storage and management, data service interfaces, and on-demand data processing are discussed. With these architecture designs and implementation technologies, data infrastructures for remote sensing big data will provide PaaS (platform as a service) and SaaS (software as a service) offerings for developing many more remote sensing data analysis applications. With continuously growing data, tools and libraries in the infrastructures, users can easily develop analysis models to process remote sensing big data, create new applications based on these models, and exchange their knowledge with each other by sharing models.
Crowdsourcing-Based Scientific Data Processing
Zhao Jianghua, Mu Shuting, Wang Xuezhi, Lin Qinghui, Zhang Xi, Zhou Yuanchun
2017, 54(2):  284-294.  doi:10.7544/issn1000-1239.2017.20160850
The ultimate goal of acquiring scientific data is to extract useful knowledge from the data according to specific needs and to apply that knowledge in specific areas to help decision makers. As the volume of scientific data becomes larger and its structure more complex, for example semi-structured or unstructured data, it is difficult for computers to process the data automatically. By incorporating human computing power into data processing, crowdsourcing has become one of the solutions for big scientific data processing. By analyzing the characteristics of crowdsourcing scientific data processing tasks to citizens, this paper studies three aspects: the talent selection mechanism, the task execution mode, and the result assessment strategy. A series of crowdsourcing-based remote sensing imagery interpretation experiments is then carried out. Results show not only that scientific data can be processed through the crowdsourcing paradigm, but also that with a reasonably designed procedure, high-quality data can be obtained.
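A result assessment strategy of the kind mentioned above is often, in its simplest form, agreement-based. The sketch below (a generic illustration with hypothetical parameters, not the paper's actual strategy) accepts a crowdsourced label only when enough workers agree:

```python
from collections import Counter

def majority_label(answers, min_agreement=0.6):
    """Toy result-assessment rule: accept a crowdsourced label only when
    a clear majority of workers agree; otherwise flag for expert review."""
    top, votes = Counter(answers).most_common(1)[0]
    return top if votes / len(answers) >= min_agreement else None

# three workers interpret the same remote-sensing tile
print(majority_label(["forest", "forest", "water"]))   # 'forest'
print(majority_label(["forest", "water", "urban"]))    # None -> needs review
```

Raising `min_agreement` trades throughput for quality, which matches the paper's observation that procedure design determines the quality of crowdsourced data.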
A Secure Index Against Statistical Analysis Attacks
Hui Zhen, Feng Dengguo, Zhang Min, Hong Cheng
2017, 54(2):  295-304.  doi:10.7544/issn1000-1239.2017.20150751
Most current searchable encryption schemes suffer from the threat of statistical analysis attacks. Some related works design their keyword/document trapdoors in a one-to-one manner to avoid the threat, but this can lead to severe search overhead. In this paper, we design an efficient secure index to defend against a class of statistical analysis attacks. The scheme uses a Bloom filter to build an index for each document. To save search cost, one unique trapdoor is built for each word. To satisfy the security requirement, the scheme treats the indexes of all documents as a matrix, and then adopts forged indexes and interpolation to ensure that the frequencies of different words are close and that all indexes in the matrix are indistinguishable from each other. As a result, a particular word in the matrix cannot be recognized, and the statistical analysis attack is resisted. In implementation, the scheme uses inverted indexes to further improve query performance. The scheme is proved to be semantically secure. Experimental results show that the query performance of our scheme is double that of Z-IDX on large datasets and that words cannot be recognized from their frequencies.
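The per-document Bloom-filter index with one trapdoor per word can be sketched as follows. This is a plaintext illustration of the data structure only — the keyed-hash "trapdoor" below is a stand-in, and the security features (forged indexes, interpolation) are omitted:

```python
import hashlib

M, K = 256, 3   # filter size in bits, number of hash functions

def positions(word, trapdoor_key="k"):
    """One deterministic trapdoor per word: K bit positions derived by
    keyed hashing (a stand-in for the paper's trapdoor construction)."""
    return [int(hashlib.sha256(f"{trapdoor_key}|{word}|{i}".encode())
                .hexdigest(), 16) % M for i in range(K)]

def build_index(words):
    """Per-document index: set the K bits of every contained word."""
    bits = [0] * M
    for w in words:
        for p in positions(w):
            bits[p] = 1
    return bits

def search(index_bits, word):
    """A word matches iff all of its trapdoor positions are set."""
    return all(index_bits[p] for p in positions(word))

idx = build_index(["cloud", "storage", "security"])
print(search(idx, "cloud"))     # True
print(search(idx, "quantum"))   # False unless a rare Bloom collision occurs
```

Because the same word always maps to the same K positions, one trapdoor serves every document, which is exactly what makes word frequencies observable and motivates the paper's countermeasures.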
A New Verifiably Encrypted Signature Scheme from Lattices
Zhang Yanhua, Hu Yupu
2017, 54(2):  305-312.  doi:10.7544/issn1000-1239.2017.20150887
Verifiably encrypted signatures (VES) can effectively ensure the fairness of Internet exchange processes. In a VES system, a signer generates an ordinary signature on a given message using the signer's secret key and then encrypts it under the public key of the adjudicator. A verifier can check that this encrypted signature is indeed an encryption of the signer's ordinary signature, but cannot extract the ordinary signature; only the adjudicator can recover the ordinary signature from the encrypted one. Using the technique of basis delegation in fixed dimension suggested by Agrawal et al at CRYPTO 2010, the lattice-based preimage sampling algorithm, and a non-interactive zero-knowledge proof for the learning with errors (LWE) problem, this paper constructs a new verifiably encrypted signature scheme from lattices. Based on the hardness of the short integer solution (SIS) problem and the LWE problem, the proposed construction is provably strongly unforgeable in the random oracle model. Compared with current verifiably encrypted signature schemes, this scheme requires that the public-private key pair of the signer be generated according to the public key of the adjudicator, but it can resist quantum attacks and enjoys a simpler construction, shorter public-private keys, smaller signature size and higher efficiency.
Android Static Taint Analysis of Dynamic Loading and Reflection Mechanism
Yue Hongzhou, Zhang Yuqing, Wang Wenjie, Liu Qixu
2017, 54(2):  313-327.  doi:10.7544/issn1000-1239.2017.20150928
Privacy leakage is one of the most important issues in current Android security, and taint analysis is the principal method for detecting it. Because of its high code coverage and low false-negative rate, static taint analysis is widely used in the detection of Android privacy leakage. However, existing static taint analysis tools cannot effectively analyze Android's dynamic loading and reflection mechanisms. Given that dynamic loading and reflection are used more and more widely, we focus on how to enable static taint analysis tools to handle them effectively. We modify the Android source code so that the system stores, during the running of an app, the loaded dex files and the reflection invocation information. This information is then used to guide the static taint analysis of the app, and a policy of replacing reflective method invocations with non-reflective ones is proposed. Based on these ideas, a taint analysis tool named DyLoadDroid is proposed; it improves on the state-of-the-art static taint analysis tool FlowDroid and can effectively analyze Android dynamic loading and reflection. Sufficient experimental results show that DyLoadDroid is very effective in tackling the problem of static taint analysis of Android dynamic loading and reflection mechanisms.
A Time-Bound Hierarchical Access Control Scheme for Ubiquitous Sensing Network
Ma Jun, Guo Yuanbo, Ma Jianfeng, Zhang Qi
2017, 54(2):  328-337.  doi:10.7544/issn1000-1239.2017.20150925
To realize effective access control over the sensitive data captured by sensor nodes, researchers have made great achievements in secure and efficient hierarchical access control that fits the features of ubiquitous sensing networks: wide distribution, a large universe, and the limited computation and storage capacity of sensor nodes. However, time is the main factor that distinguishes the requirements of hierarchical access control in ubiquitous sensing networks from those in traditional Internet settings, and ignoring it limits the practical application scenarios. According to users' requirements for accessing the resources gathered by nodes, an efficient and secure time-bound hierarchical access control scheme is presented in this paper. Based on the characteristics of perception nodes in ubiquitous sensing networks, including limited power, computation capability and storage resources, the scheme optimizes users' key storage, key derivation time, and public information. Its advantages are that 1) only one piece of key material is required for each user access; 2) a balance can be achieved between key acquisition time and the amount of public information; and 3) the scheme is provably secure without the random oracle model. Theoretical analysis indicates that the proposed scheme suits the access control requirements of ubiquitous sensing networks.
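Time-bound key management is classically built on one-way hash chains, and a minimal sketch helps fix the idea (this illustrates the generic hash-chain approach, not this paper's construction): the key for time slot t is obtained by hashing a seed t times, so handing a user the key for slot a lets her derive keys forward to the end of the lifetime but never backward.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def chain(seed, n):
    """Apply the one-way hash n times: H^n(seed)."""
    k = seed
    for _ in range(n):
        k = h(k)
    return k

# The authority keeps seed K0; the key for slot t is K_t = H^t(K0).
# Giving a user K_a lets her derive K_t for every t >= a (forward only),
# so her access is naturally bounded to the interval starting at slot a.
K0 = b"master-seed"          # illustrative seed
K3 = chain(K0, 3)            # material handed to a user joining at slot 3
assert chain(K3, 2) == chain(K0, 5)   # she derives the slot-5 key herself
print("slot-5 keys match")
```

The storage profile matches the goals stated in the abstract: the user stores one piece of key material, and key derivation costs only repeated hashing.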
FuzzerAPP: The Robustness Test of Application Component Communication in Android
Zhang Mi, Yang Li, Zhang Junwei
2017, 54(2):  338-347.  doi:10.7544/issn1000-1239.2017.20150993
The study of Android security has attracted wide attention because of Android's huge share of the mobile operating system market. Aiming at the security issues of Android applications, this paper presents a robustness testing scheme for application components based on fuzz testing. First, a test set and the corresponding test cases are designed; these cases are sent to a target application so that test data can be collected and analyzed. Considering time, efficiency and other factors, each test case is sent to the application components under test. Then, the interaction information of the target component during testing is examined and the output data is statistically analyzed. According to this design, a platform named FuzzerAPP is implemented, which can test the robustness of common applications in the Android system. Many applications from well-known Android application markets are tested on FuzzerAPP and the results are collected. Analysis of the test data shows that if FuzzerAPP sends a particular Intent to the target application, it can make the application crash or even lead to a cascading breakdown of system services. Besides, many applications in the test set suffer from a component exposure problem, which can cause serious security issues such as privacy leaks and DoS (denial of service) attacks. Finally, a comparison with similar schemes in terms of component support, test performance, test objectives and Intent construction categories shows the effectiveness of the test method and the practicability of the test platform.
Multi-Keyword Fuzzy Search over Encrypted Data
Wang Kaixuan, Li Yuxi, Zhou Fucai, Wang Quanqi
2017, 54(2):  348-360.  doi:10.7544/issn1000-1239.2017.20151125
Cloud computing is one of the most important and promising technologies. Data owners can outsource their sensitive data to a cloud and retrieve it whenever and wherever they want. To protect data privacy, however, sensitive data has to be encrypted before being stored, which invalidates traditional data utilization based on plaintext keyword search. To address the problems of multi-keyword fuzzy matching and data security protection, we propose a multi-keyword fuzzy search method over encrypted data. Based on the Bloom filter, our scheme uses a dual coding function and a locality-sensitive hash function to build the file index. Meanwhile, it uses a distance-recoverable encryption algorithm to encrypt the file index, thereby supporting multi-keyword fuzzy search over the encrypted data. The scheme does not need to set aside index storage space in advance, which greatly reduces the complexity of the search, and unlike existing solutions it needs no predefined dictionary, which lowers the storage overhead. Experimental analysis and security analysis show that the proposed scheme not only achieves multi-keyword fuzzy search over encrypted data, but also guarantees confidentiality and privacy.
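The key idea that makes fuzzy matching possible — encoding a keyword so that a small typo changes only a small part of its representation — can be shown in plaintext (this sketch uses character bigrams and Jaccard similarity; the actual scheme compares Bloom-filter/LSH encodings under encryption):

```python
def bigrams(word):
    """Represent a keyword by its character bigrams, so a one-letter
    typo still shares most elements with the correct spelling."""
    w = f"#{word}#"
    return {w[i:i + 2] for i in range(len(w) - 1)}

def similarity(a, b):
    """Jaccard similarity of bigram sets (a plaintext stand-in for the
    encrypted-index distance comparison)."""
    ga, gb = bigrams(a), bigrams(b)
    return len(ga & gb) / len(ga | gb)

def fuzzy_match(query_words, doc_words, threshold=0.5):
    """Multi-keyword query: every query word must fuzzily match some
    document keyword. The threshold is illustrative."""
    return all(max(similarity(q, d) for d in doc_words) >= threshold
               for q in query_words)

doc = ["network", "security", "protocol"]
print(fuzzy_match(["netwrk", "security"], doc))  # tolerates the typo
print(fuzzy_match(["biology"], doc))             # no match
```

Because similar keywords produce overlapping bigram sets, a locality-sensitive encoding of those sets preserves enough distance information for the server to rank matches without ever seeing the plaintext.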
tsk-shell: An Algorithm for Finding Topic-Sensitive Influential Spreaders
Gou Chengcheng, Du Pan, He Min, Liu Yue, Cheng Xueqi
2017, 54(2):  361-368.  doi:10.7544/issn1000-1239.2017.20150819
Discovering influential spreaders is a valuable task in social networks, especially for the popularity prediction and analysis of online content on microblogs such as Twitter and Weibo. The k-shell decomposition (k-core), which identifies influential spreaders located in the core of a network, has attracted much attention due to its simplicity and effectiveness compared with related methods such as in-degree, betweenness centrality and PageRank. However, the k-shell method only considers the network position of nodes and ignores the impact of the content itself on information diffusion. Content plays an important role in the diffusion process; for example, users only retweet the tweets they are interested in. The spreading ability of users depends not only on topological structure but also on the published content, and therefore a unified model considering both aspects simultaneously is proposed to model users' influence. Specifically, the topics hidden in user-generated content (UGC) are exploited to model users' propagation probability, and a topic-sensitive k-shell (tsk-shell) decomposition algorithm is proposed. Experimental studies on a real Twitter dataset show that tsk-shell outperforms the traditional k-shell by 40% on average in the task of finding the top-k influential users, which proves the effectiveness of the tsk-shell algorithm.
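The baseline k-shell decomposition that tsk-shell extends is easy to state: repeatedly peel off the lowest-degree nodes, and a node's shell number is the round index at which it is removed. A minimal sketch of the plain (topic-insensitive) version, on an undirected graph given as an adjacency dict:

```python
def k_shell(adj):
    """Plain k-shell decomposition: iteratively remove every node whose
    remaining degree is <= k, for k = 1, 2, ...; the k at which a node
    is removed is its shell number (higher = closer to the core)."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    shell, k = {}, 0
    while adj:
        k += 1
        while True:
            low = [u for u, vs in adj.items() if len(vs) <= k]
            if not low:
                break
            for u in low:
                shell[u] = k
                for v in adj[u]:
                    adj[v].discard(u)   # detach u from its neighbors
                del adj[u]
    return shell

# a triangle (2-core) with one pendant node hanging off it
graph = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
print(k_shell(graph))   # {'d': 1, 'a': 2, 'b': 2, 'c': 2}
```

tsk-shell keeps this peeling structure but weights each node's contribution by a topic-dependent propagation probability, so the same user can sit in different shells for different topics.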
Multi-Feature Based Message Transmitting in Mobile Social Network
Zhu Ziqing, Cao Jiuxin, Zhou Tao, Xu Shuai, Ma Zhuo, Liu Bo
2017, 54(2):  369-381.  doi:10.7544/issn1000-1239.2017.20151020
Building on the features of delay tolerant networks (DTN), mobile social networks (MSN) use a "store-carry-forward" approach for message transmission between nodes. How to select a suitable relay node for efficient message transmission is an urgent issue in current research. This paper approaches the problem by analyzing the social characteristics of the network from different perspectives. First, based on the interaction between nodes, a model of the social relations between nodes is constructed. Second, the paper defines the neighbor set and local community based on the network topology and establishes the community relationships between nodes. Furthermore, it defines social activity based on node behavior and uses the PageRank algorithm to obtain PR values from multiple node features. Transmission values of nodes are then defined using the PR values, from which different utility values of nodes can be obtained. On this basis, considering both the community relations and the different transmission utility values of nodes, the paper designs and implements a message transmission algorithm for mobile social networks. Finally, experiments show that the algorithm has advantages in delivery ratio, overhead ratio and average delay.
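The PR values mentioned above come from ordinary iterative PageRank applied to the node interaction graph. A minimal sketch of that baseline (the paper's multi-feature weighting is omitted; the toy graph is illustrative):

```python
def pagerank(adj, d=0.85, iters=50):
    """Plain power-iteration PageRank over a directed interaction graph.
    Dangling nodes simply leak their mass in this sketch."""
    nodes = list(adj)
    pr = {u: 1 / len(nodes) for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / len(nodes) for u in nodes}
        for u, outs in adj.items():
            share = pr[u] / len(outs) if outs else 0
            for v in outs:
                nxt[v] += d * share   # u passes its rank to its targets
        pr = nxt
    return pr

# b receives interactions from both a and c, so it ranks highest
pr = pagerank({"a": ["b"], "b": ["c"], "c": ["b"]})
print(max(pr, key=pr.get))   # 'b'
```

In the paper's setting the edges carry interaction-derived weights and the resulting PR values feed into each node's transmission utility; the iteration itself is unchanged.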
An Optimized Credit Distribution Model in Social Networks with Time-Delay Constraint
Deng Xiaoheng, Cao Dejuan, Pan Yan, Shen Hailan, Chen Zhigang
2017, 54(2):  382-393.  doi:10.7544/issn1000-1239.2017.20151118
Research on influence maximization in social networks is emerging as a promising opportunity for successful viral marketing. Influence maximization with time-delay constraint (IMTC) is to identify a set of initial individuals who will influence others and lead to the maximum influence spread under a time-delay constraint. Most existing models focus on optimizing the simulated influence spread, while time-delay factors and constraints are ignored. The credit distribution with time-delay constraint (CDTC) model incorporates meeting and activation probabilities to optimize the distribution of credit under a time-delay constraint, and utilizes the optimized relationship between meeting and activation probabilities to evaluate the ability to influence adjacent individuals. Furthermore, the obstructive effect of repeated meeting and activation attempts is reflected in the length of the extended propagation paths. After assigning credit along the propagation paths learned from users' action logs, the nodes with maximal marginal gain are selected into the seed set by a greedy algorithm with time-delay constraint (GA-TC). Experimental results on real datasets show that the proposed approach is more accurate and efficient than other related methods.
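The greedy seed selection at the end of the abstract follows the standard pattern for (sub)modular influence functions: repeatedly add the node with the largest marginal gain. A generic sketch (the toy "influence" here is just coverage of followers, not the paper's credit function):

```python
def greedy_seeds(candidates, influence, k):
    """Generic greedy maximization: at each step add the candidate with
    the largest marginal gain of the influence function."""
    seeds = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in seeds),
                   key=lambda c: influence(seeds | {c}) - influence(seeds))
        seeds.add(best)
    return seeds

# toy influence: number of distinct followers reached by the seed set
followers = {"a": {1, 2, 3}, "b": {3, 4}, "c": {1, 2}}

def cover(seeds):
    reached = set()
    for u in seeds:
        reached |= followers[u]
    return len(reached)

print(sorted(greedy_seeds(followers, cover, 2)))   # ['a', 'b']
```

Note that c is skipped even though it reaches two followers: its audience is already covered by a, so its marginal gain is zero — exactly the effect marginal-gain selection is designed to capture.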
Circle-Based and Social Connection Embedded Recommendation in LBSN
Li Xin, Liu Guiquan, Li Lin, Wu Zongda, Ding Junmei
2017, 54(2):  394-404.  doi:10.7544/issn1000-1239.2017.20150788
With the pervasiveness of GPS-enabled smart phones, people tend to share their locations online or check in somewhere by commenting on merchants, giving rise to LBSNs (location-based social networks), which take POIs (points of interest) as their center. A typical application in social networks is the recommendation system, and its most common problem is cold start: how to recommend for users who rarely comment on items or share comments. In this paper, we propose a recommendation algorithm based on circles and social connections in social networks. A circle is made up of all users who visit a particular category of items, together with their social connections; a user's visits to items of a category indicate interest in that category. Our algorithm incorporates different social connections and circles into traditional matrix factorization. The social connections we use include relationships between friends (explicit relations) and relevant experts (implicit relations), which serve as rules to optimize the matrix factorization model. Experiments are conducted on datasets from the 5th Yelp Challenge Round and Foursquare. Experimental results demonstrate that our approach outperforms traditional matrix factorization based methods, especially in solving the cold-start problem.
GeoPMF: A Distance-Aware Tour Recommendation Model
Zhang Wei, Han Linyu, Zhang Dianlei, Ren Pengjie, Ma Jun, Chen Zhumin
2017, 54(2):  405-414.  doi:10.7544/issn1000-1239.2017.20150822
Although people can use Web search engines to explore scenic spots for traveling, they often find it difficult to discover the sites that match their personalized needs well. Tour recommendation systems can be used to solve this issue. A good tour recommendation system should provide personalized recommendations and take time and cost factors into account. Furthermore, our investigation shows that a user will often consider the distance between her/his habitual residence and the tour destination when making a travel plan, because the travel distance indirectly reflects the effects of time and cost. Therefore, we propose a distance-aware tour recommendation model named GeoPMF (geographical probabilistic matrix factorization), developed from the Bayesian model and PMF (probabilistic matrix factorization). The main idea of GeoPMF is that for each user we mine her/his past tour records for the most preferred travel distance span, and then use it as a weight factor added to the traditional PMF model. Experiments on travel data from Ctrip show that our method can decrease RMSE (root mean square error) by nearly 10% compared with some baseline methods, and by nearly 3.5% on average compared with the traditional PMF model, by virtue of the distance factor.
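One plausible way to read "distance as a weight factor" is a preference kernel centered on the user's mined travel-distance span. The sketch below is hypothetical (a Gaussian weight and a dot-product rating are stand-ins for GeoPMF's actual Bayesian formulation), but it shows how a distance term can scale a PMF-style prediction:

```python
import math

def distance_weight(dist, preferred, sigma=100.0):
    """Hypothetical weighting: destinations near the user's preferred
    travel distance (mined from past tours) get weight close to 1."""
    return math.exp(-((dist - preferred) ** 2) / (2 * sigma ** 2))

def predict(user_vec, item_vec, dist, preferred):
    """PMF-style prediction (dot product of latent factors), scaled by
    the user's distance preference."""
    dot = sum(u * v for u, v in zip(user_vec, item_vec))
    return distance_weight(dist, preferred) * dot

# a destination 120 km away, for a user who prefers ~100 km trips
print(predict([0.5, 1.0], [1.0, 0.8], dist=120, preferred=100))
```

Under this reading, two destinations with identical latent-factor scores are ranked apart purely by how far each sits from the user's habitual travel range.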
Budget Constraint Auction Mechanism for Online Video Advertisement
Yang Xue, Dong Hongbin, Teng Xuyang
2017, 54(2):  415-427.  doi:10.7544/issn1000-1239.2017.20160491
As an important segment of the IT industry, online advertising brings huge revenue to publishers. Most video advertisement auctions are traded like keyword auctions, yet because the objects sold in video ad auctions are divisible, the two are inherently different problems. Hence, we formulate a novel market model that allocates video advertisements, in playing order, as a pre-roll ad sequence. In this model each bidder holds various ads with diverse durations, private valuations and a public budget limit. It has been proved that no deterministic mechanism is individually rational, free of positive transfers and Pareto optimal under the private-budget assumption. Hence, for this heterogeneous-commodity allocation problem with budget constraints, we develop a randomized mechanism based on the "clinching auction" framework. In particular, we study the setting with no restriction on the valuation distribution and show that the mechanism is incentive compatible, individually rational and free of positive transfers. Furthermore, compared with the fixed-price revenue-optimizing auction, our mechanism has a lower bound on revenue based on a dominance parameter which measures the budget of a single agent relative to the maximum revenue. The effectiveness of the H-Clinching auction in revenue and efficiency is demonstrated by several experiments.
Deadlock Avoiding Based on Future Lockset
Yu Zhen, Su Xiaohong, Qi Peng, Ma Peijun
2017, 54(2):  428-445.  doi:10.7544/issn1000-1239.2017.20150701
Existing dynamic methods for deadlock avoidance have four main drawbacks: limited capability, passive or blind avoidance algorithms, large performance overhead, and no guarantee of the correctness of target programs. To solve these problems, a combined static and dynamic avoidance method based on future locksets, named Flider, is proposed. The key idea is that, for a lock operation, if none of its future locks are occupied, then executing this lock operation will not lead the current thread into a deadlock state. The future lockset of a lock operation is the set of locks that will be requested by the current thread before the corresponding unlock operation is reached. First, Flider statically computes lock effects for lock operations and function calls, and inserts them before and after the corresponding operations. Second, Flider dynamically intercepts each lock operation and computes its future lockset using the lock effects inserted by the static analysis. Flider permits a lock operation to execute if and only if no lock in its future lockset is held by another thread; otherwise, the lock operation waits until the condition is satisfied. Evaluation and comparison experiments verify that this method can efficiently avoid multiple types of deadlocks in an active, intelligent, scalable and correctness-guaranteed way.
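The runtime rule can be sketched without threads: before granting a lock, check that the lock itself and everything in its future lockset is free or already owned by the requester. This is a single-process illustration of the admission condition only (the class and method names are ours, and the real tool computes future locksets from statically inserted lock effects):

```python
class FutureLocksetGuard:
    """Sketch of the future-lockset rule: grant a lock acquisition only
    if every lock the thread will need before the matching unlock is
    currently free or already held by that same thread."""

    def __init__(self):
        self.owner = {}   # lock name -> owning thread id

    def try_acquire(self, tid, lock, future_lockset):
        needed = {lock} | set(future_lockset)
        if any(self.owner.get(l) not in (None, tid) for l in needed):
            return False          # would risk deadlock: caller must wait
        self.owner[lock] = tid
        return True

    def release(self, tid, lock):
        if self.owner.get(lock) == tid:
            del self.owner[lock]

g = FutureLocksetGuard()
print(g.try_acquire(1, "A", future_lockset={"B"}))   # True: A and B free
print(g.try_acquire(2, "B", future_lockset={"A"}))   # False: A held by t1
g.release(1, "A")
print(g.try_acquire(2, "B", future_lockset={"A"}))   # True after release
```

The second call is exactly the classic lock-order-inversion scenario: thread 2 is made to wait before taking B, instead of taking it and deadlocking against thread 1.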
Resource-Delay-Aware Scheduling for Real-Time Tasks in Clouds
Chen Huangke, Zhu Jianghan, Zhu Xiaomin, Ma Manhao, Zhang Zhenshi
2017, 54(2):  446-456.  doi:10.7544/issn1000-1239.2017.20151123
Green cloud computing has become a central issue, and dynamic consolidation of virtual machines (VMs) and turning off idle hosts are promising ways to reduce the energy consumption of cloud data centers. When the workload of the cloud platform increases rapidly, more hosts are powered on and more VMs are deployed to provide more available resources. However, the time overheads of powering on hosts and starting VMs delay the start of tasks, which may violate the deadlines of real-time tasks. To address this issue, three novel startup-time-aware policies are developed to mitigate the impact of machine startup time on the timing requirements of real-time tasks. Based on these policies, we propose an algorithm called STARS to schedule real-time tasks and resources, making a good trade-off between the schedulability of real-time tasks and energy saving. Finally, we conduct simulation experiments comparing STARS with two existing algorithms on Google's workload trace, and the results show that STARS outperforms those algorithms with respect to guarantee ratio, energy saving and resource utilization.
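The core timing constraint behind startup-time awareness is simple to state: a task scheduled onto a cold host pays the host boot time plus the VM start time before it can run, and must still finish by its deadline. A minimal admission check (the overhead values and function names are illustrative, not STARS itself):

```python
def earliest_start(now, host_on, vm_running, host_boot=60, vm_start=30):
    """Earliest time a task can begin, accounting for machine startup.
    The 60 s / 30 s overheads are illustrative placeholders."""
    delay = 0
    if not host_on:
        delay += host_boot
    if not vm_running:
        delay += vm_start
    return now + delay

def admit(task_exec, deadline, now, host_on, vm_running):
    """Admit a real-time task only if startup overheads still leave
    enough time to finish before its deadline."""
    return earliest_start(now, host_on, vm_running) + task_exec <= deadline

print(admit(40, deadline=100, now=0, host_on=True,  vm_running=True))   # True
print(admit(40, deadline=100, now=0, host_on=False, vm_running=False))  # False: 90 + 40 > 100
```

A startup-time-aware scheduler uses a check like this to prefer warm hosts for tight-deadline tasks while still letting loose-deadline tasks absorb the boot cost, which is the trade-off between schedulability and energy saving described above.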