ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 February 2014, Volume 51 Issue 2
Survey of Data-Centric Smart City
Wang Jingyuan, Li Chao, Xiong Zhang, and Shan Zhiguang
2014, 51(2):  239-259. 
Motivated by the sustainable development requirements of the global environment and modern cities, the concept of the Smart City has been introduced as a strategy for future urbanization on a global scale. At the same time, modern cities have built up developed information infrastructure and gathered massive city-running data, and are therefore ready for the arrival of Smart City concepts, technologies and applications. An important characteristic of the Smart City is that its technology system is data-centric: data science and technologies, such as big data, data vitalization, and data mining, play pivotal roles in Smart City related technologies. In this paper, we provide a comprehensive survey of the most recent research activities in the data-centric Smart City. The survey takes an informatics perspective, and all of the summarized Smart City work is based on data science and technologies. The paper first summarizes the variety and analyzes the features of the urban data used in existing Smart City research and applications. Then, the state-of-the-art progress in data-centric Smart City research is surveyed from two aspects: research activities and research specialties. The research activities are introduced in terms of system architectures, smart transportation, urban computing, and human mobility. The research specialties are introduced in terms of core technologies and theory, interdisciplinarity, the data-centric character, and regional features. Finally, the paper raises some directions for future work.
An Integration and Sharing Method for Heterogeneous Sensors Oriented to Emergency Response in Smart City
Hu Chuli, Chen Nengcheng, Guan Qingfeng, Li Jia, Wang Xiaolei, and Yang Xunliang
2014, 51(2):  260-277. 
It can be said that the smart city will be built on the observations of sensors. Nowadays, city sensors are diverse in type, different in observation mechanism and huge in quantity, and they form closed, isolated and autonomous observation scenarios. Faced with complex city emergency events, it is inefficient to manage these heterogeneous city sensors via the World Wide Web. The scarcity of real-time, accurate and reliable data sourced from physical sensors and the inefficiency of emergency response decision-making seriously hinder the "smart" process of emergency response in the smart city. We propose a framework for the integration and sharing of heterogeneous city sensors oriented to emergency response. Firstly, the heterogeneous sensors are uniformly described; secondly, they are registered into a standard Web-based catalogue service so that the registered sensor resources can be discovered on demand; thirdly, an integration and sharing platform for heterogeneous city sensors is constructed. Finally, we use waterlogging emergency response in Wuhan as the disaster application to verify the feasibility and extensibility of the integration and sharing method for heterogeneous flood-related sensors. The results show that the proposed framework promotes the shift of heterogeneous waterlogging sensors from isolated observation islands to integrated management, which lays a solid basis for the sensor sharing and observation planning required in smart city emergency response.
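As a rough illustration of the registration and on-demand discovery steps described above, the following sketch shows a minimal uniform sensor record and an in-memory catalogue queried by observed property and region. The field names, the SensorCatalogue class and the example sensors are hypothetical; the paper's actual catalogue is a standard Web-based service.

```python
# Illustrative sketch only: a minimal in-memory sensor catalogue with a uniform
# metadata record and on-demand discovery. Field names and the query interface
# are assumptions, not the paper's catalogue service.
from dataclasses import dataclass

@dataclass
class SensorRecord:
    sensor_id: str
    sensor_type: str          # e.g. "rain_gauge", "water_level", "camera"
    observed_property: str    # e.g. "precipitation", "water_depth"
    location: tuple           # (longitude, latitude)
    protocol: str             # access mechanism, e.g. "SOS", "HTTP", "MQTT"

class SensorCatalogue:
    """Uniform registry for heterogeneous city sensors."""
    def __init__(self):
        self._records = {}

    def register(self, record: SensorRecord):
        self._records[record.sensor_id] = record

    def discover(self, observed_property=None, bbox=None):
        """On-demand discovery by observed property and bounding box."""
        hits = []
        for r in self._records.values():
            if observed_property and r.observed_property != observed_property:
                continue
            if bbox:
                min_lon, min_lat, max_lon, max_lat = bbox
                lon, lat = r.location
                if not (min_lon <= lon <= max_lon and min_lat <= lat <= max_lat):
                    continue
            hits.append(r)
        return hits

# Example: register two waterlogging-related sensors and discover rain gauges
# inside a (hypothetical) Wuhan bounding box.
catalogue = SensorCatalogue()
catalogue.register(SensorRecord("rg-01", "rain_gauge", "precipitation", (114.30, 30.59), "SOS"))
catalogue.register(SensorRecord("wl-07", "water_level", "water_depth", (114.27, 30.55), "MQTT"))
print([r.sensor_id for r in catalogue.discover("precipitation", bbox=(114.0, 30.4, 114.6, 30.8))])
```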
Node Deployment Optimization of Wireless Network in Smart City
Huang Shuqiang, Wang Gaocai, Shan Zhiguang, Deng Yuhui, Li Yang, and Chen Qinglin
2014, 51(2):  278-289. 
In a smart city, the deployment of wireless network nodes has a direct effect on the network quality of service. The problem can be described as deploying appropriate APs as access nodes and special nodes as gateway nodes to aggregate traffic to the Internet in a given geometric plane. In this paper, taking a wireless mesh network as an example, the number and deployment locations of AP nodes are determined from regional pedestrian-flow statistics, and gateway node deployment is abstracted as a geometric K-center problem. To solve the geometric K-center problem, an improved adaptive PSO algorithm is proposed to minimize the coverage radius. The fitness function is redesigned, and random inertia weight adjustment, adaptive learning factors, and a neighborhood searching strategy are introduced into the improved PSO to broaden the solution space. Simulation results show that, compared with the GA and K-means algorithms, the improved PSO algorithm is more stable and obtains shorter path lengths, and thus the network quality of service can be improved.
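To make the gateway-placement formulation concrete, the sketch below applies a basic adaptive PSO to the geometric K-center objective (minimizing the largest AP-to-gateway distance). The fitness definition follows the abstract; the random inertia weight range, learning-factor schedule and all parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only: adaptive PSO for the geometric K-center problem
# (gateway placement minimizing the maximum AP-to-gateway distance).
# Parameter values and schedules are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def coverage_radius(gateways, aps):
    """Fitness: the largest distance from any AP to its nearest gateway."""
    d = np.linalg.norm(aps[:, None, :] - gateways[None, :, :], axis=2)
    return d.min(axis=1).max()

def pso_k_center(aps, k, n_particles=30, iters=200, bounds=(0.0, 1000.0)):
    lo, hi = bounds
    # Each particle encodes k gateway positions in the plane, shape (k, 2).
    pos = rng.uniform(lo, hi, size=(n_particles, k, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([coverage_radius(p, aps) for p in pos])
    g = pbest_fit.argmin()
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for t in range(iters):
        w = rng.uniform(0.4, 0.9)                 # random inertia weight (assumed range)
        c1 = 2.5 - 2.0 * t / iters                # adaptive learning factors (assumed schedule)
        c2 = 0.5 + 2.0 * t / iters
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([coverage_radius(p, aps) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if fit.min() < gbest_fit:
            gbest, gbest_fit = pos[fit.argmin()].copy(), fit.min()
    return gbest, gbest_fit

# Example: place 3 gateways for 100 randomly scattered APs.
aps = rng.uniform(0, 1000, size=(100, 2))
gateways, radius = pso_k_center(aps, k=3)
print("minimum coverage radius:", round(radius, 2))
```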
A Novel Framework of Data Sharing and Fusion in Smart City—SCLDF
Chen Zhenyong, Xu Zhouchuan, Li Qingguang, Lü Weifeng, and Xiong Zhang
2014, 51(2):  290-301. 
The smart city is a new concept and model of urban development, combining urbanization with a new generation of information technologies such as the Internet of Things, cloud computing, mobile networks and big data. With the explosive growth of data in cities, how to share and fuse the massive, heterogeneous, multi-source data in a smart city becomes a core issue that must be solved. In this paper, the characteristics and drawbacks of traditional data sharing and fusion technologies are first described and analyzed in detail, and then frameworks and ideas that may resolve the data sharing and fusion problems in the smart city, such as the semantic Web, data vitalization and the Internet of data, are introduced. Based on these studies, a new framework of data sharing and fusion in the smart city, the smart city linked data framework (SCLDF), is proposed. Its overall layered structure and its advantages over other relevant frameworks and technologies are described briefly, and then the functions, technologies and challenges of each layer are described in detail. The concept of the data semantic annotation tag (DSAT) is proposed, and the technologies, methods and classifications of DSAT are described in detail. Finally, the relevant issues of the data linked layer are analyzed in detail.
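As one hypothetical illustration of what a data semantic annotation tag might look like, the sketch below attaches a concept, source, spatial and temporal reference to raw records so that records from different city systems can be linked by a shared concept. The field names and the linking rule are assumptions for illustration only, not the SCLDF or DSAT specification.

```python
# Illustrative sketch only: one possible shape of a data semantic annotation
# tag (DSAT). All fields and the linking rule are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class DSAT:
    concept_uri: str     # shared vocabulary term, e.g. an ontology concept
    source_system: str   # which city system produced the record
    spatial_ref: str     # e.g. a road segment or grid cell identifier
    temporal_ref: str    # observation time, ISO 8601

def link_by_concept(tagged_records, concept_uri):
    """Fuse records from heterogeneous sources that share one concept."""
    return [rec for rec, tag in tagged_records if tag.concept_uri == concept_uri]

# Example: a traffic-camera record and a taxi-GPS record both annotated with
# a hypothetical "TrafficFlow" concept can be retrieved and fused together.
records = [
    ({"speed_kmh": 23}, DSAT("city:TrafficFlow", "camera_net",  "road:1024", "2014-02-15T08:00:00")),
    ({"gps_speed": 21}, DSAT("city:TrafficFlow", "taxi_gps",    "road:1024", "2014-02-15T08:00:30")),
    ({"pm25": 96},      DSAT("city:AirQuality",  "env_station", "grid:88",   "2014-02-15T08:00:00")),
]
print(link_by_concept(records, "city:TrafficFlow"))
```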
Smart City Guide Using Mobile Augmented Reality
Zhang Yunchao, Chen Jing, Wang Yongtian, and Liu Yue
2014, 51(2):  302-310. 
A new technique for smart city guidance using mobile augmented reality is proposed, which satisfies users' personalized, multi-scale and comprehensive needs and presents an active interface with virtual-real fusion. The mobile side is limited in computing power and storage capacity, but mobile devices usually integrate multiple inertial sensors, are portable, and are convenient for display. The server side performs city-scale location recognition based on the vocabulary tree method. A dynamic partition method using GPS information reduces the range of image retrieval, and hierarchical k-means clustering on binary BRISK descriptors improves the real-time performance of the vocabulary tree. Hybrid features based on BRISK and optical flow are executed in parallel for real-time and robust tracking: regular re-initialization with BRISK features reduces the errors accumulated by optical flow, matching point-set mapping eliminates the drift of feature points during BRISK initialization, and sequence-frame and keyframe information reduces jitter in pose estimation. Experimental results on UKbench and in real environments demonstrate the advantage of virtual-real fusion for city-scale smart guidance. Users can easily interact with the surrounding real environment. The prototype system has been successfully applied to the smart guide system of the Shanghai Telecom Experience Venue and other such guide systems.
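The tracking idea can be illustrated with OpenCV: detect BRISK keypoints on a keyframe, track them with pyramidal Lucas-Kanade optical flow, and re-initialize from BRISK when too few points survive. The threshold, the synthetic frames and the overall structure are assumptions; the paper's full pipeline (vocabulary-tree retrieval, drift elimination, pose smoothing) is not reproduced here.

```python
# Illustrative sketch only: hybrid BRISK + optical-flow tracking with periodic
# re-initialization. Thresholds and synthetic frames are assumptions.
import cv2
import numpy as np

MIN_TRACKED = 30  # assumed re-initialization threshold

def detect_brisk_points(gray, max_points=200):
    brisk = cv2.BRISK_create()
    kps = sorted(brisk.detect(gray, None), key=lambda k: -k.response)[:max_points]
    return np.float32([k.pt for k in kps]).reshape(-1, 1, 2)

def track(frames):
    """frames: iterable of grayscale uint8 images; yields tracked points per frame."""
    frames = iter(frames)
    prev = next(frames)
    pts = detect_brisk_points(prev)
    for cur in frames:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)
        if len(pts) < MIN_TRACKED:          # regular re-initialization from BRISK
            pts = detect_brisk_points(cur)
        prev = cur
        yield pts                            # candidate points for pose estimation

# Tiny synthetic demo: a textured image translated by 2 pixels per frame.
base = np.random.default_rng(1).integers(0, 255, (480, 640), dtype=np.uint8)
frames = [np.roll(base, shift=2 * i, axis=1) for i in range(5)]
for i, pts in enumerate(track(frames), start=1):
    print(f"frame {i}: tracking {len(pts)} points")
```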
Community-Based Bidirectional Feedback System for Hybrid Worm Containment in Mobile Internet
Yang Hailu, Zhang Jianpei, and Yang Jing
2014, 51(2):  311-324. 
Aiming at the problem that existing worm containment methods cannot counter mobile Internet worm attacks that mix long-range and short-range propagation, this paper proposes a community-based bidirectional feedback containment system for hybrid worms in the mobile Internet. The system consists of an SIN (social information network) containment unit and a GIN (geographic information network) feedback unit. The SIN containment unit is an online community quarantine strategy, which contains worms within a community by identifying the access nodes between communities and designing a corresponding worm-label delivery algorithm. The GIN feedback unit collects users' short-range communication records, GPS location data and the historical security information submitted by the SIN to perform trust assessment. By feeding the results back to the SIN containment unit, the GIN constrains the next communication decisions of nodes inside the community, which reduces the spreading speed of worms within the community and realizes a bidirectional loop between the SIN containment unit and the GIN feedback unit. Simulation experiments demonstrate the feasibility and effectiveness of the proposed method.
Provably Secure Certificateless Trusted Access Protocol for WLAN Without Pairing
Ma Zhuo, Zhang Junwei, Ma Jianfeng, and Ji Wenjiang
2014, 51(2):  325-333. 
A pairing-free certificateless trusted access protocol for WLAN is proposed based on certificateless public key cryptography and trusted computing technologies. The protocol does not require the use of certificates and yet does not have the inherent key escrow feature of identity-based public key cryptography (ID-PKC). Taking efficiency into consideration, the following strategies are adopted in the protocol design. The platform authentication and integrity verification of the station (STA) to the authentication server (AS) are achieved during the authentication procedure. In addition, explicit key agreement between the STA and the access point (AP) is adopted without the 4-way handshake. Therefore, mutual authentication and unicast session key agreement between the STA and the AP, as well as platform trust verification, are realized within 3 protocol rounds. In particular, point multiplication on the elliptic curve is used instead of the bilinear pairing, which causes significant computation overhead in traditional certificateless public key cryptography. The security properties of the new protocol are examined using a very strong security model, the extended Canetti-Krawczyk (eCK) model, and the results show that the protocol is secure under the assumption that the Gap Diffie-Hellman problem is hard. Analytic comparisons show that the new protocol is very efficient in both computation and communication costs.
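For intuition about the efficiency claim, the sketch below shows the kind of operation that replaces the bilinear pairing: an elliptic-curve Diffie-Hellman exchange built purely on scalar (point) multiplication, here via the Python cryptography package. It is not the paper's certificateless 3-round protocol; the curve, KDF and message flow are illustrative assumptions.

```python
# Illustrative sketch only: a pairing-free ECDH key agreement using scalar
# (point) multiplication. NOT the paper's certificateless protocol; curve and
# KDF choices are assumptions.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# STA and AP each generate an ephemeral EC key pair (scalar + public point).
sta_priv = ec.generate_private_key(ec.SECP256R1())
ap_priv = ec.generate_private_key(ec.SECP256R1())

# Each side performs one scalar multiplication with the peer's public point.
sta_shared = sta_priv.exchange(ec.ECDH(), ap_priv.public_key())
ap_shared = ap_priv.exchange(ec.ECDH(), sta_priv.public_key())
assert sta_shared == ap_shared

# Derive the unicast session key from the shared secret (KDF choice assumed).
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"sta-ap-session").derive(sta_shared)
print("session key:", session_key.hex())
```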
An Improved Direct Anonymous Attestation Scheme
Tan Liang, Meng Weiming, and Zhou Mingtian
2014, 51(2):  334-343. 
DAA (direct anonymous attestation), which not only resolves the bottleneck of the privacy CA (certificate authority) but also realizes anonymity in attestation, is one of the best schemes among current attestation-of-identity schemes. However, the complexity and time consumption of the original DAA scheme largely hinder its application. A new improved direct anonymous attestation scheme based on the discrete logarithm problem on elliptic curves is presented. The scheme still belongs to ECC (elliptic curve cryptography)-DAA, and its process and framework are almost the same as those of other schemes. However, its main operations are point addition and scalar multiplication on elliptic curves, so the overall complexity is greatly reduced and the key and signature lengths are much shorter. Meanwhile, the scheme reduces the computational cost of each entity, including the TPM (trusted platform module), Host, Issuer and Verifier, in the Join protocol, Sign protocol and Verify algorithm. It gives a practical solution for an ECC-based TPM to protect its privacy. The paper gives a detailed security proof of the proposed scheme in the ideal-system/real-system security model, which shows that the scheme meets the security requirements of unforgeability, variable anonymity and unlinkability.
Game-Theoretic Mechanism for Cryptographic Protocol
Tian Youliang, Peng Chenggen, Ma Jianfeng, Jiang Qi, and Zhu Jianming
2014, 51(2):  344-352. 
Both game theory and secure communication protocols focus on designing and analyzing mechanisms for parties that interact in a collaborative manner, yet the two fields have developed very different sets of goals and formalisms. This paper studies the secure communication protocol problem in a game-theoretic setting. The goal is to formulate the computation and communication rules of a secure communication protocol based on Nash equilibrium in the game-theoretic framework. We first propose a game-theoretic model of secure protocols, including the player set, information sets, available actions, action sequences, player functions, and utility functions, using ideas from universally composable security. Since our model incorporates the universally composable ideal, secure protocols can be run concurrently within it. Secondly, a formal definition of secure protocols is given according to the concept of Nash equilibrium. Thirdly, we give an instance of a secure protocol under the game-theoretic mechanism. Finally, the analysis shows that our mechanism is effective.
UC Security Model of Position-Based Key Exchange
Zhang Junwei, Ma Zhuo, Ma Jianfeng, and Ji Wenjiang
2014, 51(2):  353-359. 
The goal of position-based cryptography is to use the geographical position of a party as its only credential to achieve cryptographic tasks, such as position-based encryption. Position-based key exchange should have the property that if there is a prover at the claimed position, then at the end of the protocol the verifiers share a uniform key with it, while to any group of colluding adversaries the key looks indistinguishable from one drawn uniformly at random. The provable security of key exchange in position-based cryptography is investigated in this paper. In the universally composable framework, a provably secure model of position-based key exchange is proposed. According to the security requirements of position-based key exchange, the ideal functionality of position-based key exchange is presented: for any group of colluding adversaries, the shared key derived from the ideal functionality is indistinguishable from a random key. At the same time, the ideal functionality of the bounded retrieval model is designed as one of the set-up assumptions in position-based cryptography. In addition, a position-based key exchange protocol in 1-dimensional space is given as an example that securely realizes the functionality of position-based key exchange in the bounded retrieval model.
A Trusted Recovery Model for Assurance of Integrity Policy Validity
Yuan Chunyang, Xu Junfeng, and Zhu Chunge
2014, 51(2):  360-372. 
Access control is one of the most important protection mechanisms of current mainstream operating systems. It is the process of mediating every request to the resources and data maintained by a system and determining whether the request should be granted or denied. The access control decision is enforced by a mechanism implementing regulations established by a security policy, and there are several typical security policies for access control. Mainstream operating systems are inadequate for supporting multiple policies at the same time to enforce different access control decisions, and the integrity of multi-policy support is an important part of access control research in secure systems. Trusted recovery is a necessary function of a high-level secure operating system; its objective is to maintain the security and accountability properties of a system in the face of failures. This paper presents a trusted recovery monitoring model, which overcomes some limits of strict security policies for access control. Firstly, the framework of the model is given. The formal Clark-Wilson model and its improved model PCW (Povey's Clark-Wilson) are implemented by configuring the TE (type enforcement) and RBAC (role-based access control) models. Secondly, combining the characteristics of the file system in an operating system, the paper presents how to recover the file system to its last consistent secure state, under conservative and optimistic recovery policies respectively, by analyzing audit logs and undoing malicious operations. This method can recover the system to a secure state in the face of failures and improves the availability of the system. It provides an important exploration for the design and implementation of the trusted recovery mechanisms of our own high-level secure operating system.
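A toy sketch of the optimistic recovery idea: scan the audit log and undo, in reverse order, only the operations attributed to untrusted subjects, restoring the last consistent secure state. The log format, before-image undo and taint rule are assumptions, not the paper's mechanism.

```python
# Illustrative sketch only: optimistic recovery by undoing malicious writes
# recorded in an audit log. The log schema and taint rule are assumptions.
from dataclasses import dataclass

@dataclass
class LogEntry:
    seq: int
    subject: str      # process / role that performed the operation
    path: str
    old_value: str    # before-image recorded for undo
    new_value: str

def optimistic_recover(files, log, untrusted_subjects):
    """files: dict path -> content of the current (possibly tainted) state."""
    for entry in sorted(log, key=lambda e: e.seq, reverse=True):
        if entry.subject in untrusted_subjects:
            files[entry.path] = entry.old_value   # undo the malicious write only
    return files

# Example: undo the writes of a compromised subject while keeping legitimate ones.
state = {"/etc/passwd": "root:x:0:0:evil", "/var/app.cfg": "mode=debug"}
audit_log = [
    LogEntry(1, "admin",    "/var/app.cfg", "mode=prod",   "mode=debug"),
    LogEntry(2, "malwared", "/etc/passwd",  "root:x:0:0:", "root:x:0:0:evil"),
]
print(optimistic_recover(state, audit_log, {"malwared"}))
```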
Malware Classification Approach Based on Valid Window and Naive Bayes
Zhu Kenan, Yin Baolin, Mao Yaming, and Hu Yingnan
2014, 51(2):  373-381. 
Malware classification is a key problem in the field of malicious code analysis and intrusion detection. Existing malware classification approaches have low efficiency and poor accuracy because the raw behavior analysis data is large in scale, noisy, and affected by random factors. To solve these issues, taking malware behavior reports as raw data, this paper analyzes the malware behavior characteristics, operation similarity, the interference of random factors, and noisy behavior data. It then proposes a parameter valid window model for system calls, which improves the ability of operation sequences to describe behavior similarity. On this basis, the paper presents a malware classification approach based on the naive Bayes machine learning model and the parameter valid window. Moreover, an automatic malware behavior classifier prototype called MalwareFilter is designed and implemented. In a case study, we evaluate the prototype using system call sequence reports generated by real malware. The experimental results show that our approach is effective, and that the parameter valid window improves the performance and accuracy of training and classification.
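As a simplified illustration of behavior-report classification with naive Bayes, the sketch below treats short system-call n-grams as features, a crude stand-in for the parameter valid window (which actually scopes call parameters rather than call n-grams). The window length, features and toy reports are assumptions.

```python
# Illustrative sketch only: naive Bayes over system-call n-grams as a stand-in
# for the paper's parameter valid window. Window length and data are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each report is the system-call sequence extracted from one behavior report.
train_reports = [
    "NtCreateFile NtWriteFile NtSetValueKey NtClose",
    "NtCreateFile NtWriteFile NtSetValueKey NtCreateProcess",
    "connect send recv send recv closesocket",
    "connect send recv gethostbyname send closesocket",
]
train_labels = ["dropper", "dropper", "botnet", "botnet"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+"),  # windowed call n-grams
    MultinomialNB(),
)
clf.fit(train_reports, train_labels)

print(clf.predict(["NtCreateFile NtWriteFile NtClose"]))   # file-dropping behavior
print(clf.predict(["connect send recv closesocket"]))      # network-centric behavior
```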
Key Technology in Distributed File System Towards Big Data Analysis
Zhou Jiang, Wang Weiping, Meng Dan, Ma Can, Gu Xiaoyan, and Jiang Jie
2014, 51(2):  382-394. 
With the arrival of the big data era, data analysis and processing are becoming increasingly important technologies that data centers and Internet companies depend on. Mass data storage is a hot topic in big data analysis, given the expansion of information and the variety of data structures. Traditional distributed file systems fall short of the new demands in scalability, reliability and performance. In this paper, a cluster file system for big data analysis, named Clover, is designed. Clover uses namespace management based on directory sharding and consistent hashing to solve the problem of metadata scaling, and provides metadata consistency for distributed transactions through a modified two-phase commit protocol. Moreover, Clover presents a high-availability mechanism based on a shared storage pool, achieving metadata reliability with hot standby and a global state recovery mechanism. The evaluation results reveal that Clover improves metadata performance linearly, with an average improvement of 5.13% to 159.32% when adding one metadata server. Namespace management and distributed transactions cause some performance degradation with multiple metadata servers, but the influence is negligible (less than 10%). Compared with HDFS, Clover keeps a similar throughput and quickly recovers from metadata server failures. Practical application tests show that Clover is suitable for building highly scalable and highly available storage systems.
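The namespace-distribution idea can be sketched with a consistent-hash ring that maps directory shards to metadata servers, so that adding a server remaps only a fraction of the namespace. The virtual-node count, hash function and directory naming below are assumptions, not Clover's implementation.

```python
# Illustrative sketch only: mapping directory shards onto metadata servers with
# a consistent-hash ring. Virtual-node count and MD5 are assumptions.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, servers, vnodes=64):
        self._ring = []                       # sorted list of (hash, server)
        for s in servers:
            for v in range(vnodes):
                self._ring.append((self._hash(f"{s}#{v}"), s))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, directory):
        """Return the metadata server responsible for a directory shard."""
        h = self._hash(directory)
        i = bisect.bisect_right(self._keys, h) % len(self._ring)
        return self._ring[i][1]

# Example: adding a metadata server remaps only a fraction of the namespace.
ring3 = ConsistentHashRing(["mds1", "mds2", "mds3"])
ring4 = ConsistentHashRing(["mds1", "mds2", "mds3", "mds4"])
dirs = [f"/data/user{i}" for i in range(1000)]
moved = sum(ring3.locate(d) != ring4.locate(d) for d in dirs)
print(f"{moved / len(dirs):.0%} of directories remapped after adding mds4")
```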
Characteristics Research on Modern Data Center Network
Deng Gang, Gong Zhenghu, and Wang Hong
2014, 51(2):  395-407. 
In recent years, with the rapid development of cloud computing and the extensive application of virtualization, the composition, structure, function, scale and application mode of the data center network are undergoing profound changes. Thorough analysis of its characteristics is basic research on the modern data center network and is of great significance: it can help us understand its working mechanisms, improve its performance, and reasonably design its resource management schemes and systems. However, comprehensive analysis and research on the characteristics of the data center network is still lacking. In this paper, we study basic characteristics of the modern data center network from several important aspects, including structure, application properties, flow characteristics and virtualization features. Preliminary conclusions are obtained for each aspect: the modern data center network is mainly constructed from commodity Ethernet switches; its application mode is changing from a monopolized or physically isolated mode to cloud computing; its traffic exhibits locality and dynamics; and, to support the migration of virtual machines, the data center network should be a layer-2 network. Finally, the influences of these characteristics on network resource management, which are also important future research directions, are further studied. We hope these studies can offer useful support for theoretical study and system design.
A Topic-Oriented Clustering Approach for Domain Services
Li Zheng, Wang Jian, Zhang Neng, Li Zhao, He Chengwan, and He Keqing
2014, 51(2):  408-419. 
With the development of SOA and SaaS technologies, the number of services on the Internet is growing rapidly. Faced with abundant and heterogeneous services, how to efficiently and accurately discover the services users desire becomes a key issue in service-oriented software engineering. Service clustering is an important technique for facilitating service discovery. However, existing clustering approaches handle only a single type of service document and do not consider the domain characteristics of services. To overcome these limitations, on the basis of domain-oriented service classification, this paper proposes a services clustering model named DSCM based on probability and domain characteristics, and then proposes a topic-oriented clustering approach for domain services based on the DSCM model. The proposed approach can cluster services described in WSDL, OWL-S, and plain text, which effectively solves the problem of single service document types. Finally, experiments are conducted on real services from ProgrammableWeb to demonstrate the effectiveness of the proposed approach. Experimental results show that the proposed approach clusters services more accurately: compared with classical latent Dirichlet allocation (LDA) and K-means, it achieves better cluster purity and F-measure, which can greatly promote on-demand service discovery and composition.
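For context, the sketch below shows the plain LDA baseline the paper compares against: vectorize service descriptions, fit a topic model, and assign each service to its dominant topic as a cluster. The proposed DSCM additionally exploits domain characteristics, which this sketch does not model; the topic count and toy descriptions are assumptions.

```python
# Illustrative sketch only: topic-based clustering of service descriptions with
# plain LDA (the comparison baseline). Topic count and data are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

services = [
    "send sms message to mobile phone number",
    "sms gateway api for text message delivery",
    "get weather forecast temperature and humidity by city",
    "current weather conditions api with wind and rain data",
    "map geocoding api convert address to latitude longitude",
    "route planning on map with driving directions",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(services)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)

# Assign each service to its dominant topic as its cluster label.
clusters = doc_topics.argmax(axis=1)
for desc, c in zip(services, clusters):
    print(c, desc)
```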
Dynamic Service Composition Mechanism Based on OSGi
Luo Juan, Zhou Feng, and Li Renfa
2014, 51(2):  420-428. 
The Internet of Things has developed rapidly and become popular in recent years, but the tight coupling between applications and sensing devices makes development processes difficult and complicated. Because the original OSGi is limited in sharing services among different nodes, a distributed lightweight middleware structure based on OSGi is proposed. In this distributed OSGi structure, the various functions provided by networked devices are abstracted as services, so that SOA can be exploited to manage all networked nodes/devices in the form of services for decoupling. A single service provided by a node/device may offer only limited functionality, and networked devices are constrained by mobility, battery endurance, and so on. In order to adapt to dynamically changing networks and application requirements, a service composition mechanism named DscGOM is designed, which includes a service composition path choosing mechanism and a service redirection mechanism. The experimental results show that the DscGOM mechanism generates service composition paths that satisfy dynamic network demands faster and more effectively than traditional methods. When a network device dies or exits unexpectedly, the mechanism can quickly find an alternative composition path and restore execution.
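In the spirit of composition-path choosing and redirection, the sketch below picks the cheapest path through a graph of candidate services and recomputes it when a hosting device dies. The graph, edge costs and Dijkstra-based selection are illustrative assumptions, not the DscGOM algorithm.

```python
# Illustrative sketch only: composition-path choosing over a service graph and
# redirection around a dead service. Graph, costs and rule are assumptions.
import heapq

def choose_path(graph, start, goal, dead=frozenset()):
    """Dijkstra over service nodes; services in 'dead' are excluded."""
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if v in dead:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None

# Candidate services: sensing -> (filter variants) -> aggregation, with costs.
graph = {
    "sense":   [("filterA", 1), ("filterB", 2)],
    "filterA": [("aggregate", 1)],
    "filterB": [("aggregate", 1)],
}
print(choose_path(graph, "sense", "aggregate"))                    # via filterA
print(choose_path(graph, "sense", "aggregate", dead={"filterA"}))  # redirected via filterB
```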
A VM-centric Approach for Dynamic Layer Binding
Zhu Changpeng, Zhao Yinliang, Han Bo, Zeng Qinghua, and Liu Songjia
2014, 51(2):  429-444. 
Several context-oriented programming languages have been implemented, but in these languages all layers are compiled into the executable code of programs, which increases the size of the executable code and restricts the range of applications. In this paper, a VM-centric approach is proposed to address these issues. It incorporates object composition and delegation into the VM to implement layer activation, and extends existing VM services to support dynamic layer binding. To ensure that the approach preserves the type safety properties of programs, a calculus built on Featherweight Java is developed to describe it. Based on the calculus, the influence of the approach on program type safety is formally analyzed, constraints on the approach are proposed, and a formal proof is presented which ensures that the approach preserves type safety when these constraints are satisfied. Under the guidance of the calculus, an implementation of the approach is presented and evaluated. The calculus and the implementation illustrate how to extend Java-like languages to support dynamic layer binding in a type-safe way.
Research on New Non-Volatile Storage
Shen Zhirong, Xue Wei, and Shu Jiwu
2014, 51(2):  445-453. 
Recently, the performance gap between the CPU and the storage system has been continually increasing, so the storage system has become the bottleneck for performance improvement of the overall computer system. With the rapid development of microelectronics technology, new non-volatile storage devices that offer non-volatility, low power consumption, good scalability and shock resistance are attracting great attention from academia and industry. This paper introduces several new non-volatile storage devices (i.e., STT-RAM, RRAM, PCRAM and FeRAM) and compares their performance characteristics with those of traditional storage devices. We further discuss current exploratory work that seeks lower power consumption, higher reliability and better scalability by applying the new non-volatile storage devices to the three current levels of the storage architecture (i.e., cache level, main-memory level and external-storage level). A detailed analysis is then presented of strategies to mitigate the inherent drawbacks of the new non-volatile storage devices in practice, such as limited write endurance and the performance imbalance between read and write operations. Finally, a panoramic summary is given and possible future development trends are discussed.
A Fault-Tolerant Deflection Router with Reconfigurable Bidirectional Link for NoC
Feng Chaochao, Zhang Minxuan, Li Jinwen, and Dai Yi
2014, 51(2):  454-463. 
As CMOS technology scales down to the nanometer domain, the continuing decrease in the feature size of integrated circuits increases susceptibility to transient and permanent faults. Supporting fault tolerance in NoC is highly important for reliable data transmission on chip multiprocessors. A fault-tolerant deflection router with reconfigurable bidirectional links for NoC (called BiFTDR) is proposed to protect against transient and permanent link faults. A pair of reconfigurable bidirectional links connects two neighboring BiFTDR routers, and the direction of the bidirectional links can be reconfigured dynamically according to the fault status of the link and the information of the arriving packets. The BiFTDR router can achieve fault tolerance without misrouting in the case of unidirectional link faults. In addition, the router does not need a routing table, which significantly reduces the hardware overhead. Simulation results illustrate that under synthetic traffic patterns the BiFTDR router achieves 10% and 19% lower average latency than a reinforcement-learning-based fault-tolerant deflection router under 5 and 15 permanent faulty links respectively. Under real application traffic workloads, compared with the average latency of the network without faulty links, the performance degradation of the BiFTDR router is less than 1%. For transient faults, the performance of the BiFTDR router degrades gracefully even at high fault rates. The BiFTDR router is synthesized in 65nm technology and achieves a frequency of 500MHz with small area and power consumption overhead.
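As a rough illustration of deflection routing's port-allocation step, the sketch below sends each flit out of its preferred productive port when possible and deflects it to any remaining healthy port otherwise, which is how faulty or reconfigured links can be avoided without buffering. The priorities, fault model and port set are assumptions, not the BiFTDR microarchitecture.

```python
# Illustrative sketch only: the port-allocation step of a deflection router.
# Assumes the number of incoming flits never exceeds the healthy output ports;
# priorities and the fault model are assumptions.
def allocate_outputs(flits, faulty_ports):
    """flits: list of (flit_id, preferred_port); ports are 'N', 'E', 'S', 'W'."""
    free = [p for p in ("N", "E", "S", "W") if p not in faulty_ports]
    assignment = {}
    # Earlier flits get priority (assumed), later ones are deflected greedily.
    for flit_id, preferred in flits:
        if preferred in free:
            port = preferred                 # productive routing
        else:
            port = free[0]                   # deflection (or faulty-link avoidance)
        free.remove(port)
        assignment[flit_id] = port
    return assignment

# Example: two flits both prefer 'E' while the 'S' link is marked faulty;
# the second flit and the 'S'-bound flit are deflected to healthy ports.
print(allocate_outputs([("f1", "E"), ("f2", "E"), ("f3", "S")], faulty_ports={"S"}))
```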