ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 April 2015, Volume 52 Issue 4
Study of the Long-Range Evolution of Online Human-Interest Based on Small Data
Li Yong, Meng Xiaofeng, Liu Ji, Wang Changqing
2015, 52(4):  779-788.  doi:10.7544/issn1000-1239.2015.20148336
The availability of network big data, such as online surfing logs, e-commerce records and communication logs, makes it possible to probe into and quantify the dynamics of human interest. These online behavioral data are called “small data” in the era of big data, and they can help explain many complex socio-economic phenomena. A fundamental assumption of Web user behavioral modeling is that user behavior is consistent with a Markov process: the user’s next behavior depends only on the current behavior, regardless of past history. However, Web user behavior is a complex process, often driven by human interests, and little is known about its regular patterns. In this paper, using a CNNIC behavioral log dataset of more than 30,000 online users, we explore the use of block entropy as a dynamics classifier for human-interest behaviors. We synthesize several entropy-based approaches, applying information-theoretic measures of randomness and memory to the stochastic and deterministic processes of human interest by using discrete derivatives and integrals of the entropy growth curve. Our results, though preliminary, indicate that Web user behavior is not a Markov process but an aperiodic power-law process with long-range memory. Further analysis finds that the predictability gain can exceed 95.3% once a user clicks 7 consecutive points online, which can provide theoretical guidance for accurate prediction of online users’ interests in the era of big data.
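As a rough illustration of the block-entropy analysis described above, the following minimal Python sketch estimates the entropy growth curve H(n) of a symbolized clickstream and its discrete derivative h(n) = H(n) - H(n-1); a flat derivative beyond n = 1 would be consistent with a Markov process, while a slowly decaying one indicates longer-range memory. The toy sequence and function names are illustrative, not taken from the paper.

```python
from collections import Counter
from math import log2

def block_entropy(seq, n):
    """Shannon entropy H(n) of length-n blocks in a symbol sequence."""
    blocks = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    counts, total = Counter(blocks), len(seq) - n + 1
    return -sum(c / total * log2(c / total) for c in counts.values())

def entropy_rate_estimates(seq, max_n):
    """Discrete derivative h(n) = H(n) - H(n-1) of the entropy growth curve."""
    H = [0.0] + [block_entropy(seq, n) for n in range(1, max_n + 1)]
    return [H[n] - H[n - 1] for n in range(1, max_n + 1)]

# toy clickstream over interest categories (illustrative only)
clicks = list("ABABCABABCABABD" * 10)
print(entropy_rate_estimates(clicks, 5))
```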
Automatic Selection of Paper Reviewer Based on Scientific Collaboration Network
Wan Meng, He Lianghua
2015, 52(4):  789-797.  doi:10.7544/issn1000-1239.2015.20148407
In this paper, we study two tightly coupled topics in selecting paper reviewers from an authors’ scientific collaboration network (SCN): network construction and community detection. Based on the fact that the authors of a journal can be selected as reviewers and that the reviewers of a manuscript should come from different research communities, we first evaluate the collaboration among all authors according to their co-signatures and construct a normalized collaboration network. For the second key problem, detecting the communities of a scientific collaboration network, and considering that such a network is very sparse and each vertex has few inter-community connections, we apply the method of orthogonal matching pursuit to compute compressive collaboration information. We conduct several experiments on simulated and real journal author datasets. Although there is no standard for evaluating different kinds of scientific collaboration networks, community detection accuracy and the stability of all authors are used to evaluate the performance of the proposed method. The vertex linkage matrix shows that the designed scientific collaboration network has good vertex-grouping characteristics. An extensive study of our detection method on simulated data shows that it has a clear advantage in detection rate and stability, with an improvement of about 60% over classic methods.
A Data Driven Cognitive Routing Protocol for Information-Centric Networking
Cao Jian, Wang Xingwei, Zhang Jinhong, Huang Min
2015, 52(4):  798-805.  doi:10.7544/issn1000-1239.2015.20148404
With the rapid development of network technology and the rise of new types of network applications, the amount of data in the network is growing dramatically, which brings significant challenges to the TCP/IP-based Internet. In order to support user access to this massive data, information-centric networking (ICN) has been proposed and has become a hot research topic for the future Internet. In this paper, a data-driven cognitive routing protocol for ICN is proposed. Each routing node is endowed with the following cognitive behaviors: perception, inquiry, learning, reasoning and feedback. It obtains information about its local topology, its processed routing requests and the popular contents by perception. It builds the neighbor caching table by inquiry. It remembers information about experienced paths by learning. It analyzes the relationships among nodes with the friend caching table built by reasoning. It makes routing decisions under the guidance of feedback. Simulation results show that the proposed protocol is feasible and effective with good performance.
Energy Efficient Routing Algorithm Based on Software Defined Data Center Network
Dong Shi, Li Ruixuan, Li Xiaolin
2015, 52(4):  806-812.  doi:10.7544/issn1000-1239.2015.20148419
The data center network is a key platform for cloud computing and next-generation network technology; the growing volume of network data can meet users’ requirements, but at the same time it greatly increases the energy consumption of data centers. Many energy-saving strategies for data center networks have been studied, mainly using combined hardware and software strategies to design energy-saving models. In order to further improve energy efficiency, a new energy-efficient routing algorithm is presented from the combined perspective of network load balancing and energy-saving routing. Its basic idea is first to make a quantitative analysis of load balancing, and then to put forward an energy-saving routing algorithm that combines load balancing and energy saving under bandwidth constraints, while fully considering overall network reachability and reliability. The algorithm provides a new perspective for energy-efficient data centers. Compared with traditional energy-saving routing, it can guarantee high reliability and low energy consumption of the network. Some useful conclusions are obtained through the analysis of the experimental data, which lay a solid foundation for further research.
MIL-RoQ: Monitoring, Identifying and Locating the RoQ Attack in Backbone Network
Wen Kun, Yang Jiahai, Cheng Fengjuan, Yin Hui, Wang Jianfeng
2015, 52(4):  813-822.  doi:10.7544/issn1000-1239.2015.20148347
Reduction of quality (RoQ) attack is an atypical denial of service (DoS) attack that exploits the vulnerability of TCP’s adaptive behavior and can seriously reduce or inhibit the throughput of TCP flows. While most defensive methods are studied on a single network access link (router), the RoQ attack can be launched not only on a single network link but also against several links or even an entire network, which causes more severe consequences. In order to obtain a global perspective of the network and identify the attack, in this paper we propose a traffic anomaly analysis method to monitor, identify and locate RoQ attacks in the backbone network on the basis of principal component analysis (PCA) and spectrum analysis techniques. Experimental results demonstrate that our method can find anomalies in the traffic from several downstream links in the backbone network, and can also locate and identify RoQ attacks accurately. Meanwhile, our method significantly reduces computation and complexity, as it only needs to analyze local traffic data about anomalous links.
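The PCA subspace idea the method builds on can be sketched as follows: project a time-by-links traffic matrix onto its top principal components and flag time bins whose residual (the part unexplained by the normal subspace) is large. This is a generic illustration with synthetic data, not the paper’s exact detector, which additionally applies spectrum analysis.

```python
import numpy as np

def pca_residuals(X, k):
    """Project a (time x links) traffic matrix onto its top-k principal
    components; large residual norms flag anomalous time bins."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                      # basis of the "normal" subspace
    residual = Xc - Xc @ P @ P.T      # part unexplained by normal traffic
    return np.linalg.norm(residual, axis=1)

# toy example: 200 time bins, 20 links, one injected anomaly
rng = np.random.default_rng(0)
X = rng.normal(10, 1, size=(200, 20))
X[120, :5] += 15                      # burst on a few links at bin 120
scores = pca_residuals(X, k=3)
print(scores.argmax())                # expected: 120
```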
petaPar: A Scalable and Fault Tolerant Petascale Meshfree Simulation System
Li Leisheng, Wang Chaowei, Ma Zhitao, Huo Zhigang, Tian Rong
2015, 52(4):  823-832.  doi:10.7544/issn1000-1239.2015.20131332
With the emergence of petaflops (10^15 FLOPS) systems, numerical simulation has entered a new era, one that opens the possibility of using 10^4 to 10^6 processor cores in a single run of parallel computing. In order to take full advantage of the power of petaflops and post-petaflops supercomputing infrastructures, two grand challenges, scalability and fault tolerance, must be addressed in a domain application. petaPar is a highly scalable and fault tolerant meshfree/particle simulation code dedicated to petascale computing. Two popular particle methods, smoothed particle hydrodynamics (SPH) and the material point method (MPM), are implemented in a unified object-oriented framework. The parallelization of both SPH and MPM consistently starts from the domain decomposition of a regular background grid. The scalability of the code is assured by fully overlapping inter-MPI-process communication with computation and by a dynamic load balance strategy. petaPar supports both flat MPI and MPI+Pthreads hierarchical parallelization. Application-specific lightweight checkpointing is used in petaPar to deal with fault tolerance: petaPar can automatically restart from any number of MPI processes, allowing a dynamic change of computing resources in scenarios such as node failure and connection timeout. Experiments are performed on the Titan petaflops supercomputer. It is shown that petaPar scales linearly up to 2.6×10^5 CPU cores with excellent parallel efficiency of 100% and 96% for the multithreaded SPH and the multithreaded MPM, respectively, and that the performance of the multithreaded SPH is improved by up to 30% compared with the flat MPI implementation.
Heterogeneous Computing and Optimization on Tianhe-2 Supercomputer System for High-Order Accurate CFD Applications
Wang Yongxian, Zhang Lilun, Che Yonggang, Xu Chuanfu, Liu Wei, Cheng Xinghua
2015, 52(4):  833-842.  doi:10.7544/issn1000-1239.2015.20131922
Great challenges remain in simulating large-scale computational fluid dynamics (CFD) applications on contemporary supercomputer systems with many-core heterogeneous architectures such as Tianhe-2, and this is one of the research hotspots in the field. In this paper, we focus on techniques for efficient parallel simulation of large-scale CFD applications with high-order accurate schemes on heterogeneous high-performance computing (HPC) platforms. Approaches and strategies for performance optimization, matched to both the characteristics of the CFD application and the architecture of the heterogeneous HPC platform, are proposed from the perspectives of task decomposition, exploitation of parallelism, optimization of multi-threaded execution, vectorization using single-instruction multiple-data (SIMD) units, optimization of the cooperation between CPUs and co-processors, and so on. To evaluate the performance of these techniques, numerical experiments are performed on the Tianhe-2 supercomputer system with up to 1.228×10^11 grid points and a total of 590,000 processors and co-processors. Such a large-scale CFD simulation with a high-order accurate scheme has, to the best of our knowledge, never been attempted before. The optimized code achieves a speedup of 2.6X on the hybrid CPU/co-processor platform over the CPU-only platform, and perfect scalability is also observed in the test results. The present work redefines the frontier of high performance computing for fluid dynamics simulations on heterogeneous platforms.
GPU-Accelerated Incomplete Cholesky Factorization Preconditioned Conjugate Gradient Method
Chen Yao, Zhao Yonghua, Zhao Wei, Zhao Lian
2015, 52(4):  843-850.  doi:10.7544/issn1000-1239.2015.20131919
The incomplete Cholesky factorization preconditioned conjugate gradient (ICCG) method is effective for solving large sparse symmetric positive definite linear systems. However, ICCG requires solving two sparse triangular systems in each iteration, and the inherent serialism of sparse triangular solves becomes a bottleneck that prevents efficient parallelization of ICCG on GPU platforms. In this paper, an effective method for accelerating sparse triangular solves on GPUs is proposed. To increase multi-thread parallelism, level scheduling is applied to the sparse triangular matrices generated by the incomplete Cholesky factorization. To further improve parallel performance, the approximate minimum degree (AMD) algorithm is used to reorder the coefficient matrix before level scheduling. Moreover, a novel method that uses the level information to reorder the sparse triangular matrices after level scheduling is applied. These two methods respectively decrease the number of levels produced by level scheduling and optimize the GPU memory access pattern to exploit memory coalescing in the triangular solves. Numerical experiments indicate that, compared with an ICCG implementation based on NVIDIA CUSPARSE, applying the above methods obtains more than 100% performance improvement on average.
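Level scheduling itself is a standard technique and can be sketched briefly: rows of the triangular factor are grouped into levels by dependency depth, and all rows within one level can be solved in parallel. The following Python/SciPy sketch computes such a schedule for a toy lower-triangular matrix (a host-side illustration only; the paper’s implementation runs on the GPU).

```python
import numpy as np
from scipy.sparse import csr_matrix

def level_schedule(L):
    """Group rows of a sparse lower-triangular matrix into levels:
    level(i) = 1 + max(level(j)) over off-diagonal nonzeros L[i, j]."""
    n = L.shape[0]
    level = [0] * n
    for i in range(n):
        start, end = L.indptr[i], L.indptr[i + 1]
        deps = [level[j] for j in L.indices[start:end] if j < i]
        level[i] = 1 + max(deps, default=-1)
    nlev = max(level) + 1
    return [[i for i in range(n) if level[i] == lev] for lev in range(nlev)]

# toy pattern: rows within one level have no mutual dependencies
L = csr_matrix(np.array([[1., 0, 0, 0],
                         [1., 1, 0, 0],
                         [0., 0, 1, 0],
                         [0., 1, 1, 1]]))
print(level_schedule(L))   # [[0, 2], [1], [3]]
```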
Performance Characterization and Efficient Parallelization of MASNUM Wave Model
Zhang Zhiyuan, Zhou Yufeng, Liu Li, Yang Guangwen
2015, 52(4):  851-860.  doi:10.7544/issn1000-1239.2015.20131415
Marine science and numerical modeling (MASNUM) is a numerical wave model developed in China, which has been widely used in wave forecasting for ocean disaster prevention and reduction, ocean transportation and military activities. With increasing demands for higher forecasting precision and for climate research, higher and higher resolution has become the mainstream in wave model development. Although the fast development of high-performance computers provides increasing computing power for high-resolution models, the parallel version of a model is often too inefficient to achieve the performance acceleration needed to improve parallel efficiency and shorten the running wall time. In this paper, we first characterize the performance of the MASNUM model on a modern high-performance computer to reveal several performance bottlenecks. Then, we propose several parallel optimizations that dramatically improve the communication performance, I/O performance and load balance of the two-dimensional parallel decomposition, and consequently the overall parallel efficiency and scaling performance of the MASNUM model. Using 960 CPU cores, the improved parallel version achieves a 4315-fold speedup over the sequential baseline. Based on our experiments, we suggest several parallelization strategies for achieving high parallel efficiency in other numerical models.
HDF5 Based Parallel I/O Techniques for Multi-Zone Structured Grids CFD Applications
Yang Lipeng, Che Yonggang
2015, 52(4):  861-868.  doi:10.7544/issn1000-1239.2015.20131920
Computational fluid dynamics (CFD) is one of the most important high performance computing (HPC) areas, and CFD applications commonly access large volumes of data. In large-scale CFD parallel computing, serial I/O performance does not match the computing performance and hence becomes the performance bottleneck; parallel I/O is an effective way to solve this problem. HDF5 (hierarchical data format v5) provides excellent mechanisms for managing scientific data as well as effective ways to implement parallel I/O. High-order simulator for aerodynamics (HOSTA) is a multi-zone structured grids CFD application that can solve real-world flow problems. This paper implements a parallel I/O method in HOSTA based on the HDF5 file format and the corresponding parallel I/O application programming interface. A detailed performance evaluation is performed with real CFD simulation cases on an HPC system equipped with 6 I/O service nodes. The results show that our method is both scalable and efficient. For a delta wing test case, parallel I/O achieves a speedup of 21.27 over serial I/O, a maximal I/O throughput of 5.81GBps, and an application performance improvement of over 10% compared with the original code. For a simple airfoil test case with a larger grid size, our parallel I/O achieves a maximal I/O throughput of 6.72GBps.
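For readers unfamiliar with HDF5’s parallel interface, the pattern the paper relies on, every MPI rank writing its own zone into disjoint regions of one shared file, looks roughly like the following sketch using the h5py Python bindings (HOSTA itself would use the native HDF5 API); the file and dataset names are illustrative, and h5py must be built against a parallel HDF5.

```python
# Run with: mpiexec -n 4 python write_blocks.py
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank, nproc = comm.Get_rank(), comm.Get_size()

zone_size = 1024                                 # grid points per zone (illustrative)
local = np.full(zone_size, rank, dtype='f8')     # this rank's zone data

# every rank opens the same file via the MPI-IO driver and
# writes its own disjoint hyperslab of the shared dataset
with h5py.File('flow.h5', 'w', driver='mpio', comm=comm) as f:
    dset = f.create_dataset('density', (nproc * zone_size,), dtype='f8')
    dset[rank * zone_size:(rank + 1) * zone_size] = local
```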
A Highly Scalable Parallel Algorithm for 3D Prestack Kirchhoff Time Migration
Zhao Changhai, Wang Shihu, Luo Guoan, Wen Jiamin, Zhang Jianlei
2015, 52(4):  869-878.  doi:10.7544/issn1000-1239.2015.20131915
To support increasing survey sizes and processing complexity, we propose a practical approach that implements large-scale parallel processing of 3D prestack Kirchhoff time migration (PKTM) on clusters of multi-core nodes. The parallel algorithm is based on a three-level decomposition of the imaging space. First, the imaging space is partitioned by offsets. Each node runs exactly one process, and all processes are divided into several distinct groups; the imaging work of a common-offset space is assigned to a group, and the common-offset input traces are dynamically distributed to the processes of that group. Once all input traces are migrated, the local imaging sections of all the processes in a group are added to form the final common-offset image. Within a node, the common-offset imaging section is further partitioned equally by common middle point (CMP) into as many blocks as there are CPU cores, and the computing threads share the same input traces while spreading the sampled points to different sets of imaging points. If the size of a common-offset imaging section exceeds the total physical memory of the compute node, the whole imaging space is first partitioned along the in-line direction so that each common-offset imaging space fits in memory. The algorithm greatly reduces the memory requirement, does not introduce overlapping input traces between processes, and makes it easy to implement fault tolerance. An implementation of the algorithm demonstrates high scalability and excellent performance in our experiments with actual data, scaling efficiently to 497 nodes and 7,552 threads.
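A minimal sketch of the decomposition’s mapping logic, offsets to process groups and CMP blocks to threads, with all sizes and the round-robin group assignment purely illustrative rather than taken from the paper:

```python
def assign_offset_groups(offsets, ngroups):
    """Level 1: partition the imaging space by offset; each offset class
    is handled by one process group (round-robin here, illustrative)."""
    return {off: i % ngroups for i, off in enumerate(sorted(offsets))}

def cmp_block(cmp_index, ncmp, nthreads):
    """Level 3: split a common-offset section evenly by CMP so each
    compute thread owns one contiguous block of imaging points."""
    block = (ncmp + nthreads - 1) // nthreads    # ceiling division
    return cmp_index // block

groups = assign_offset_groups(offsets=range(0, 4000, 100), ngroups=8)
print(groups[300])                               # group imaging offset class 300
print(cmp_block(70000, ncmp=120000, nthreads=16))  # thread owning this CMP
```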
Privacy Requirement Description and Checking Method in Cloud Computing
Ke Changbo, Huang Zhiqiu
2015, 52(4):  879-888.  doi:10.7544/issn1000-1239.2015.20131906
Cloud computing has become a computing paradigm for providing services to users. However, it is difficult to control and protect personal privacy information because of its openness, virtualization, multi-tenancy and service outsourcing characteristics. Therefore, how to prevent user privacy information from being used and propagated illegally in cloud computing has become a research focus. In this work, we propose a semantics-oriented privacy requirement description method and checking mechanism. First of all, we describe the user’s privacy requirements and the service provider’s privacy policy based on description logic. Secondly, we present a privacy requirement checking framework: we build the knowledge base by mapping the user’s privacy disclosure assertions to the TBox and the service provider’s privacy disclosure assertions to the ABox, and then reason over the TBox and ABox using the Tableau algorithm. Finally, we check whether there are conflicts between the user’s privacy requirements and the service provider’s privacy policy through experiments and case analysis: we build the privacy requirement checking model with Protégé from Stanford University, and verify the consistency of the concepts in the model and the satisfiability between the concepts and the logic axioms with the Pellet reasoner. Thereby, the correctness and feasibility of our method are demonstrated.
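At its core, Tableau reasoning expands the knowledge base and then looks for a clash, i.e., some individual asserted to belong to both a concept and its negation. The toy sketch below shows only that final clash check over atomic assertions, as a hedged illustration of how a user requirement can contradict a provider policy; real reasoners such as Pellet also apply the full expansion rules, and all names here are hypothetical.

```python
def has_clash(user_assertions, provider_assertions):
    """Toy clash detection: a conflict exists when the combined knowledge
    base asserts both C(a) and not-C(a) for some concept C and individual a.
    Assertions are (concept, individual, polarity) triples."""
    asserted = set(user_assertions) | set(provider_assertions)
    return any((concept, individual, not pos) in asserted
               for concept, individual, pos in asserted)

user = {("DiscloseLocation", "alice", False)}      # user forbids disclosure
provider = {("DiscloseLocation", "alice", True)}   # policy would disclose it
print(has_clash(user, provider))                   # True: requirement violated
```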
Towards Runtime Dynamic Provision of Virtual Resources Using Feedforward and Feedback Control
Yu Ling, Xie Yi, Chen Bihuan, Peng Xin, Zhao Wenyun
2015, 52(4):  889-897.  doi:10.7544/issn1000-1239.2015.20131908
With the prevalence and development of cloud computing, more and more applications are deployed on cloud servers so as to utilize virtual resources, which scale on demand and are priced on a pay-per-use basis. Thus, ensuring the optimal operation of applications while achieving cost-effective provision and utilization of virtual resources has become an important research problem. Traditional manual adjustment not only increases the burden on system administrators, but also suffers from poor accuracy and a certain delay. Existing dynamic resource provision methods are mostly triggered after a runtime quality problem happens, thus involving an adaptation delay, and they neglect the negative impact of the heterogeneity of virtual resources. To address these problems, in this paper we propose a method for dynamic provision of virtual resources based on control theory. The method combines a feedforward controller and a feedback controller to respectively tune the number of virtual resources and the load on each virtual resource, in order to achieve the optimal operation of application systems and highly efficient use of virtual resources. An experimental study demonstrates that our method can effectively achieve optimal operation of applications as well as improve the utilization ratio of virtual resources.
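A minimal sketch of the feedforward-plus-feedback idea: a workload-model-based feedforward term sizes the pool before quality degrades, and an integral feedback correction on measured response time compensates for model error. All gains, rates, and names below are assumptions for illustration, not the paper’s controller design.

```python
def feedforward_vms(arrival_rate, per_vm_capacity):
    """Feedforward: provision from a workload model (ceiling division)."""
    return max(1, -(-arrival_rate // per_vm_capacity))

class FeedbackCorrector:
    """Integral feedback: nudge the VM count when measured response
    time drifts from the SLO target (gain ki is illustrative)."""
    def __init__(self, target_rt, ki=5.0):
        self.target_rt, self.ki, self.acc = target_rt, ki, 0.0

    def correction(self, measured_rt):
        self.acc += self.ki * (measured_rt - self.target_rt)
        return round(self.acc)

ff = feedforward_vms(arrival_rate=900, per_vm_capacity=200)   # -> 5 VMs
fb = FeedbackCorrector(target_rt=0.3)
for rt in (0.45, 0.50, 0.42):        # sustained SLO violations observed
    vms = ff + max(0, fb.correction(rt))
print(vms)                            # feedback adds capacity on top: 7
```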
Detection of JNI Memory Leaks Based on Extended Bytecode
Jiang Tingyu, Wang Peng, Yang Shu, Ru Zhen, Dong Yuan, Wang Shengyuan, Ji Zhiyuan
2015, 52(4):  898-906.  doi:10.7544/issn1000-1239.2015.20131909
The Java native interface (JNI) enables Java code running in a Java virtual machine (JVM) to interoperate with native code, but the difference in security features between the languages makes it a security weakness that cannot be detected by existing analysis methods. Commonly used detection methods are mainly based on the analysis of an intermediate language, which is invalid in the JNI case owing to the lack of an intermediate representation that bridges Java and C++. This paper analyzes JNI from a Java/C++ cross-language perspective and focuses on memory leaks, which frequently occur in JNI calls. In order to overcome the language barrier, this paper proposes extended Bytecode (Bytecode*) instructions as an interpretation of C++ semantics. Our contributions are as follows: 1) define a block memory model that is compatible with both Java and C++; 2) design translation rules from C++ to extended Java Bytecode based on LLVM/LLJVM; 3) construct a method call graph, extract abstractions, and detect memory leaks in JNI calls by interprocedural analysis. Experiments on typical JNI code with memory leak features show that our analysis can detect memory leaks across Java/C++ accurately, and is of significant value for cross-language programming and vulnerability analysis.
A Mechanism for Transparent Data Caching
Wang Yanshi, Wang Wei, Liu Zhaohui, Wei Jun, Huang Tao
2015, 52(4):  907-917.  doi:10.7544/issn1000-1239.2015.20131910
Data caching is an important technology for improving system performance. However, most existing data caching solutions need application developers to rewrite the application and take great effort to manually manage the cached data, which raises the cost of cache deployment and management. A new caching mechanism named EasyCache is proposed in this work, which aims at integrating with existing applications transparently. EasyCache, a key/value store, scales easily for big data and is compatible with the common data access interfaces and SQL syntax. SQL statements are translated into a sequence of predicates specifically designed for key/value stores, and a rule-based optimization model is proposed at the same time. EasyCache supports automatic loading of cached data and provides different policies to guarantee data consistency. Developers can finish the deployment of EasyCache simply by replacing the original database driver with EasyCache’s driver, without any modification of the existing application’s source code. The effectiveness of EasyCache is illustrated via a detailed set of experiments using the TPC-W benchmark. The numerical results show that EasyCache improves the response speed and throughput by up to 10x and 1x respectively as the number of table data entries or the number of concurrent users increases.
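The transparency idea can be illustrated with a minimal read-through, write-invalidate cache wrapper; EasyCache itself interposes at the database-driver level and handles SQL-to-predicate translation, which this hypothetical sketch omits.

```python
class ReadThroughCache:
    """Transparent read-through cache: reads populate the key/value store
    automatically; writes invalidate, one simple consistency policy."""
    def __init__(self, backend):
        self.backend = backend        # the real database; a dict stands in here
        self.store = {}               # the key/value cache

    def get(self, key):
        if key not in self.store:     # miss: load from the backend automatically
            self.store[key] = self.backend[key]
        return self.store[key]

    def put(self, key, value):
        self.backend[key] = value     # write through to the database
        self.store.pop(key, None)     # invalidate the stale cached copy

db = {"user:1": "alice"}
cache = ReadThroughCache(db)
print(cache.get("user:1"))   # miss, loads "alice" from db
cache.put("user:1", "bob")   # write invalidates the cache entry
print(cache.get("user:1"))   # reloads fresh value: "bob"
```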
Representation and Compound Reasoning of Vague Region Relations and Direction Relations
Li Song, Zhang Liping, Hao Xiaohong, Hao Zhongxiao
2015, 52(4):  918-928.  doi:10.7544/issn1000-1239.2015.20131352
The representation of and reasoning about Vague region relations and direction relations have important significance in spatial databases, network information security, data mining and artificial intelligence. To deal with the complex representation and compound reasoning of Vague region relations and direction relations, they are systematically analyzed based on Vague sets, which can handle a great deal of uncertain information. Based on Vague sets, the intersection matrices and the representation model of Vague regions are given. To handle the uncertainty of direction relations caused by the ambiguity of Vague regions, Vague direction points and the Vague direction space are defined based on Vague sets, and the intersection matrices of the direction relations are studied. To analyze and reason about dynamic Vague direction relations, the dynamic adjacency table of the Vague direction space is given. Furthermore, methods for reverse direction relations and the related reasoning over Vague region relations and Vague direction relations are also studied. Theoretical research and experimental analysis show that the results of this work can deal with the key problems of Vague region relations and Vague direction relations and can handle complex reasoning.
A High Performance Management Schema of Metadata Clustering for Large-Scale Data Storage Systems
Xiao Zhongzheng, Chen Ningjiang, Wei Jun, Zhang Wenbo
2015, 52(4):  929-942.  doi:10.7544/issn1000-1239.2015.20131911
An efficient, decentralized metadata management schema plays a vital role in large-scale distributed storage systems. Hash-based and tree-based partition schemas pay a huge cost for expansion and are sensitive to changes in the cluster. In response to these problems, CH-MMS (consistent-Hash-based metadata management schema) is proposed. Virtual MDS (metadata server) nodes are introduced in CH-MMS and are shown to be effective for cluster load balance. Combining a standby mechanism with a lazy-update policy, CH-MMS achieves fast failover and zero migration when the cluster changes, and its distributed metadata structure gives it fast metadata lookup. In order to solve the problem that the Hash structure damages the hierarchical semantics of the file system, a simple and flexible mechanism based on regular expression matching is introduced. The following work is presented in the paper: 1) expound the architecture of CH-MMS; 2) introduce the core data structures of the layout table, virtual MDS and the lazy-update policy, and their relevant algorithms; 3) qualitatively analyze scalability and fault tolerance. The prototype system and simulation show that CH-MMS balances metadata and provides fast failover, flexible expansion and zero migration when the cluster changes, and it can meet the needs of flexible, efficient metadata management for large-scale storage systems with ever-increasing data.
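The consistent-hashing-with-virtual-nodes mechanism that CH-MMS builds on can be sketched as follows; the virtual-node count and hash function are illustrative choices, not the paper’s parameters.

```python
import hashlib
from bisect import bisect, insort

class ConsistentHashRing:
    """Each physical MDS appears as many virtual nodes on the ring, which
    smooths load; a lookup walks clockwise to the first virtual node."""
    def __init__(self, vnodes=100):
        self.vnodes, self.ring, self.keys = vnodes, {}, []

    def _hash(self, s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def add_mds(self, name):
        for i in range(self.vnodes):
            h = self._hash(f"{name}#{i}")   # one hash per virtual node
            self.ring[h] = name
            insort(self.keys, h)

    def locate(self, path):
        h = self._hash(path)
        idx = bisect(self.keys, h) % len(self.keys)   # clockwise successor
        return self.ring[self.keys[idx]]

ring = ConsistentHashRing()
for mds in ("mds-a", "mds-b", "mds-c"):
    ring.add_mds(mds)
print(ring.locate("/home/user/data.bin"))   # owning metadata server
```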
Noisy Image Super-Resolution Reconstruction Based on Sparse Representation
Dou Nuo, Zhao Ruizhen, Cen Yigang, Hu Shaohai, Zhang Yongdong
2015, 52(4):  943-951.  doi:10.7544/issn1000-1239.2015.20140047
In traditional methods for noisy image super-resolution reconstruction, denoising and super-resolution reconstruction are performed separately, while in the method based on sparse representation and dictionary learning the two processes are combined. Since an image patch can be well represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary, two dictionaries are trained, from noisy low-resolution and clean high-resolution image patches respectively, by enforcing the similarity of the two sparse representations with respect to their own dictionaries. Given a noisy low-resolution image, the sparse representations of its low-resolution patches are computed over the trained low-resolution dictionary; the high-resolution image is then reconstructed from high-resolution patches obtained from these sparse representations and the trained high-resolution dictionary. After global optimization, a clean high-resolution image is obtained, accomplishing image super-resolution and denoising simultaneously. The experiments show that zooming the low-resolution image to a middle resolution with a locally adaptive zooming algorithm for feature extraction yields a better reconstructed image than bicubic interpolation. By tuning the parameter λ, we obtain the best performance in both super-resolution and denoising, with clear advantages in image quality and visual effect, which demonstrates the validity and robustness of our algorithm.
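The coupled-dictionary data flow can be sketched as follows: a low-resolution patch feature is sparse-coded against the low-resolution dictionary, and the same code is applied to the high-resolution dictionary to synthesize the patch. The sketch below uses scikit-learn’s orthogonal matching pursuit and random dictionaries purely to show the data flow; in the actual method the two dictionaries are trained jointly, and the sparse coding is λ-regularized.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n_atoms, lo_dim, hi_dim = 256, 36, 81   # e.g. 6x6 feature vs 9x9 patch

# coupled dictionaries; random here, for data flow only
D_lo = rng.normal(size=(lo_dim, n_atoms))
D_lo /= np.linalg.norm(D_lo, axis=0)
D_hi = rng.normal(size=(hi_dim, n_atoms))

y_lo = rng.normal(size=(lo_dim, 1))                   # a noisy low-res feature
alpha = orthogonal_mp(D_lo, y_lo, n_nonzero_coefs=5)  # shared sparse code
x_hi = D_hi @ alpha                                   # high-res patch estimate
print(np.count_nonzero(alpha), x_hi.shape)            # 5 (81, 1)
```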
Multi-Band Image Fusion Based on Embedded Multi-Scale Transform
Lin Suzhen, Zhu Xiaohong, Wang Dongjuan, Wang Xiaoxia
2015, 52(4):  952-959.  doi:10.7544/issn1000-1239.2015.20131736
Multi-band image fusion can improve the effect of target detection. Since sequential fusion often reduces the differences among multi-band images, a multi-band image fusion method based on embedded multi-scale transform (EMT) and local difference features is proposed. The detailed procedure is as follows. Firstly, the multi-band images are decomposed respectively with the support value transform (SVT). Secondly, using the quad-tree (QT) method, the last-layer low-frequency image with the most dispersed grey values is decomposed into blocks, which serve as the standard for decomposing the last-layer low-frequency images of the other bands. Thirdly, using the disjunctive combination of possibility theory, the corresponding blocks of the multi-band images are fused at the feature level. Then, all blocks are traversed to obtain the fused low-frequency blocks, which are mosaicked together. Lastly, the final image is obtained through the inverse transformation of the mosaicked image and the support-sequence fused image. The fused results of a visible image, an infrared medium-wave image and an infrared long-wave image show that the quad-tree-based decomposition is effective: compared with simple quad-tree decomposition fusion, the EMT method increases the edge intensity by 13.31%, the contrast ratio by 2.63% and the entropy by 4.26%, while decreasing the running time by 87.11%, which proves the validity of the method.
Edge Cluster Based Large Graph Partitioning and Iterative Processing in BSP
Leng Fangling, Liu Jinpeng, Wang Zhigang, Chen Changning, Bao Yubin, Yu Ge, Deng Chao
2015, 52(4):  960-971.  doi:10.7544/issn1000-1239.2015.20131343
With the development of the Internet and the gradual maturity of related techniques in recent years, the processing of large graphs has become a new research hotspot. Since traditional cloud computing platforms such as Hadoop are not well suited to processing graph data iteratively, researchers have proposed solutions based on the BSP model, such as Pregel, Hama and Giraph. However, since graph algorithms need to frequently exchange intermediate results in accordance with the graph’s topological structure, the tremendous communication overhead greatly impacts the processing performance of BSP-based systems. In this paper, we first analyze the solutions proposed by the well-known BSP-based systems for reducing communication overhead, and then propose a graph partition strategy named edge cluster based vertically hybrid partitioning (EC-VHP), building a cost-benefit model to study its effect on communication overhead. Based on EC-VHP, we then propose a vertex-edge computation model, and design both a plain hash index structure and a multi-queue parallel sequential index structure to further improve the processing efficiency of message communication. Finally, experiments on real and synthetic datasets demonstrate the efficiency and accuracy of EC-VHP and the index mechanism.
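For context, a vertex-centric BSP computation proceeds in supersteps separated by barriers, with messages exchanged along edges between supersteps; it is exactly this inter-vertex messaging that partitioning strategies such as EC-VHP try to reduce. Below is a minimal, generic single-machine sketch (not the paper’s system), propagating the maximum vertex value.

```python
def bsp_max_value(graph, values):
    """Minimal Pregel-style BSP: in each superstep every active vertex
    combines its inbox with its value and sends updates to neighbors;
    messages are delivered at the superstep barrier."""
    inbox = {v: [] for v in graph}
    active = set(graph)
    while active:
        outbox = {v: [] for v in graph}
        for v in active:                      # superstep: local compute
            new = max([values[v]] + inbox[v])
            if new != values[v] or not inbox[v]:
                values[v] = new
                for u in graph[v]:            # messages along edges
                    outbox[u].append(new)
        active = {v for v in graph if outbox[v]}   # barrier + delivery
        inbox = outbox
    return values

g = {1: [2], 2: [1, 3], 3: [2]}
print(bsp_max_value(g, {1: 7, 2: 3, 3: 5}))   # {1: 7, 2: 7, 3: 7}
```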
DSlT: An Evidence Reasoning Method for Information Fusion in Wireless Sensor Networks
Chen Hao, Wang Rui, Sun Rongli, Xiao Kejiang, Cui Li
2015, 52(4):  972-982.  doi:10.7544/issn1000-1239.2015.20131527
Information fusion in wireless sensor networks has recently been a focal point of research, with many open challenges. The major challenges include fusing highly conflicting information and the requirement for light-weight algorithms with low computational complexity. In this paper, an evidence reasoning method based on logic expressions, namely DSlT, is proposed. By defining a new evidence combination rule based on logic operations and by strictly preserving local conflict, DSlT handles the fusion of highly conflicting information. By defining new focal elements, the number of focal element combinations is lowered greatly, and accordingly the computation cost is reduced dramatically. To verify the performance of DSlT, we conduct two experiments. The first example experiment shows that our approach can effectively deal with highly conflicting information fusion; in addition, compared with DSmT, the computation cost of DSlT is reduced by 81.08% in the process of 3-dimensional evidence fusion. In the real-scene experiment, with vehicle classification as the application background, a traffic information acquisition platform based on an image sensor network is used to collect image data of vehicles. The comparison results further indicate the efficiency and advancement of DSlT. The experiments fully reveal the potential application prospects of DSlT in information fusion for wireless sensor networks.
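DSlT’s own logic-based combination rule is not reproduced here, but the classic Dempster-Shafer baseline it departs from can be sketched briefly; note the conflict mass K, which Dempster’s rule normalizes away and which DSlT instead preserves locally.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classic Dempster's rule over frozenset focal elements: intersect
    focal elements, accumulate agreeing mass, normalize away conflict K.
    (Shown as the baseline; DSlT handles conflict differently.)"""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass on empty intersections
    return {s: w / (1 - conflict) for s, w in combined.items()}, conflict

car, truck = frozenset({"car"}), frozenset({"truck"})
either = car | truck
m1 = {car: 0.8, either: 0.2}             # sensor 1: mostly "car"
m2 = {truck: 0.7, either: 0.3}           # sensor 2: mostly "truck" (conflict)
fused, K = dempster_combine(m1, m2)
print(round(K, 2), {tuple(s): round(w, 2) for s, w in fused.items()})
```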
An Energy-Efficient and Privacy-Preserving Range Query Processing in Two-Tiered Wireless Sensor Networks
Dai Hua, Yang Geng, Xiao Fu, Zhou Qiang, He Ruiliang
2015, 52(4):  983-993.  doi:10.7544/issn1000-1239.2015.20140066
Applying range query processing in wireless sensor networks (WSNs) while preserving data privacy is a challenge. This paper proposes an energy-efficient and privacy-preserving range query processing scheme for two-tiered wireless sensor networks, denoted EPRQ. In the data storing phase, each sensor node in the query range first encrypts its collected data and then encodes them into minimized comparison factors using the 0-1 encoding and hashed message authentication coding mechanisms, after which it transmits the encoded and encrypted data to the corresponding storage node. When the base station issues a range query, the bounds of the range are encoded into comparison factors, which are then disseminated to the corresponding storage nodes. According to the numerical comparison property of the 0-1 encoding verification mechanism, these storage nodes generate an encrypted data set containing the query result, even without knowing the actual values of the collected data or the queried range. The storage nodes send this encrypted data set to the base station as the query response, and the final query result is obtained after decryption by the base station. Theoretical analysis and experimental results show that EPRQ ensures the privacy of the collected data, the query result and the query range, and it outperforms existing methods in energy consumption.
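The numeric-comparison trick behind the scheme is the standard 0-1 encoding: x > y exactly when the 1-encoding of x and the 0-encoding of y share an element. The sketch below shows the plaintext version; in EPRQ every prefix would additionally pass through an HMAC, so storage nodes compare only hashed values without learning x or y.

```python
def one_encoding(x, bits):
    """Prefixes of x's binary form that end in 1."""
    s = format(x, f"0{bits}b")
    return {s[:i + 1] for i in range(bits) if s[i] == "1"}

def zero_encoding(x, bits):
    """Prefixes of x's binary form with a trailing 0 flipped to 1."""
    s = format(x, f"0{bits}b")
    return {s[:i] + "1" for i in range(bits) if s[i] == "0"}

def greater_than(x, y, bits=8):
    """x > y iff E1(x) and E0(y) intersect; HMACing both sets (as in
    EPRQ) keeps the comparison working over hashed prefixes."""
    return bool(one_encoding(x, bits) & zero_encoding(y, bits))

print(greater_than(13, 9), greater_than(9, 13))   # True False
```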