ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 May 2017, Volume 54 Issue 5
Edge Computing—An Emerging Computing Model for the Internet of Everything Era
Shi Weisong, Sun Hui, Cao Jie, Zhang Quan, Liu Wei
2017, 54(5):  907-924.  doi:10.7544/issn1000-1239.2017.20160941
With the proliferation of the Internet of things (IoT) and the growth of 4G/5G networks, we are entering the era of the Internet of everything (IoE), in which a huge volume of data will be generated by things immersed in our daily life, and hundreds of applications will be deployed at the edge to consume these data. Cloud computing, as the de facto centralized big data processing platform, is not efficient enough to support these emerging IoE applications: 1) the computing capacity available in the centralized cloud cannot keep up with the explosively growing computational needs of the massive data generated at the edge of the network; 2) data movement between the edge and the cloud lengthens user-perceived latency; 3) data owners at the edge have privacy and security concerns; 4) edge devices are energy-constrained. These issues have pushed forward a new computing paradigm, edge computing, which calls for processing data at the edge of the network. Leveraging the power of cloud computing, edge computing has the potential to address the limitations of computing capability, response time requirements, bandwidth cost, data safety and privacy, and battery life. The "edge" in edge computing is defined as any computing and network resources along the path between data sources and cloud data centers. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to the smart home and city, as well as collaborative edge, to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will draw attention from the community and inspire more research in this direction.
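The latency trade-off the abstract describes (edge compute vs. data movement to the cloud) can be sketched as a simple offloading decision. All parameter names and numbers below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: offload a task to the cloud only when upload delay
# plus cloud computation time beats computing locally at the edge.

def should_offload(data_mb, bandwidth_mbps, local_mips, cloud_mips, workload_mi):
    """Return True when the cloud path (including upload) is faster."""
    local_time = workload_mi / local_mips                  # seconds at the edge
    transfer_time = data_mb * 8 / bandwidth_mbps           # upload delay, seconds
    cloud_time = transfer_time + workload_mi / cloud_mips  # end-to-end cloud path
    return cloud_time < local_time
```

With a fast link and a small payload the cloud wins; with a large payload over a slow link, the data movement cost dominates and processing stays at the edge, which is exactly the case edge computing targets.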
A Survey on Sensor-Cloud
Zeng Jiandian, Wang Tian, Jia Weijia, Peng Shaoliang, Wang Guojun
2017, 54(5):  925-939.  doi:10.7544/issn1000-1239.2017.20160492
Wireless sensor networks (WSNs) have extended people's ability to gather information and have integrated the physical world with the information world. In recent years, the emerging cloud computing paradigm has made remarkable progress, injecting new vitality into WSNs: it enables new applications and services, expands the data processing and storage capabilities of WSNs, and has given rise to the sensor-cloud system. A sensor-cloud processes the information generated by the underlying sensor network and provides remote services for upper-layer users, enabling users to gather, process, analyze, store and share sensed data according to their demands. In this paper, we investigate existing sensor-cloud systems in detail. We first introduce the background, system architecture and applications of the sensor-cloud, and then summarize the characteristics of existing sensor-cloud systems. We reveal the main problems of existing systems, such as poor bandwidth, high latency and high failure rates. Moreover, we study a sensor-cloud structure based on fog computing, design basic methods for solving the aforementioned problems, and discuss future research directions.
A Formal General Framework of Internet Address Mechanisms
Zhu Liang, Xu Ke, Xu Lei
2017, 54(5):  940-951.  doi:10.7544/issn1000-1239.2017.20151139
The address mechanism is the most essential part of the Internet architecture, and its evolution determines the capacity of the Internet to accommodate innovative applications. The traditional IP-based address strategy has led the current Internet into ossification, which has made architectural innovation a consensus. Many novel address strategies significantly extend or innovate on the traditional model but lack common design principles and a consistent expression model; their diversity and heterogeneity make it difficult to gain insight into the future evolution of address schemes. Moreover, we believe that diverse address mechanisms may coexist in the Internet architecture to support the ecological evolution of network applications. To tackle these problems, by studying the evolution of Internet address mechanisms and abstracting a minimal architectural core, this paper proposes a general framework for accommodating the diversity and heterogeneity of address strategies, including: 1) a formal and verifiable conceptual model that forms a consistent theoretical framework within which invariants and design constraints can be expressed; 2) abstract, multi-dimensional and extensible interface primitives and interaction patterns with communication axioms that provide a proof framework for Internet address schemes; 3) a derived working prototype implementation, the Universal Engine of Address Schemes, which allows us to construct various address mechanisms flexibly and supports the evaluation, evolution and coexistence of Internet address strategies, in order to meet the ecological evolution of network applications.
Research on Improving the Control Plane’s Reliability in SDN Based on Byzantine Fault-Tolerance
Li Junfei, Hu Yuxiang, Wu Jiangxing
2017, 54(5):  952-960.  doi:10.7544/issn1000-1239.2017.20160055
Software defined networking (SDN) separates the control logic from the forwarding devices in a network, which provides open APIs for free programming and enables finer-grained network management. However, while the centralized control of SDN brings innovation and convenience for network applications, it also raises other problems, such as reliability and scalability. To address the reliability problem of the SDN control plane, this paper proposes a method, different from the current OpenFlow protocol, that combines multiple controllers into a quorum view and votes on their handling of the same OpenFlow messages in order to tolerate Byzantine faults. Firstly, we describe the network structure, workflow and exception handling when applying a Byzantine fault-tolerance algorithm to SDN, and establish an analytical model of multi-controller deployment. Secondly, we design a heuristic algorithm to solve the multi-controller deployment problem. Finally, we verify the fault-tolerance method and the deployment algorithm by simulation. Experimental results show that the method can effectively handle controller faults and improve the reliability of the control layer, though at some cost in system performance. Meanwhile, the deployment algorithm can effectively reduce the transmission delay of processing OpenFlow requests.
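The quorum-voting idea the abstract describes can be sketched minimally: several controllers process the same OpenFlow request, and a value is accepted only when enough controllers agree to outvote up to f Byzantine (arbitrarily faulty) controllers. The f + 1 matching-replies threshold is a common choice in BFT reads; the paper's exact protocol may differ.

```python
from collections import Counter

def vote_on_replies(replies, f):
    """Return the agreed reply, or None when no value has f + 1 supporters.

    With 2f + 1 or more controllers, up to f Byzantine controllers cannot
    forge a quorum of f + 1 identical wrong replies.
    """
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None
```

For f = 1 and three controllers, two matching replies suffice to mask one faulty controller; three pairwise-different replies yield no decision, which would be handled as an exception in the workflow.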
Resource Allocation Algorithm Based on D2D Pairs Grouping in TDD System
Zhang Zufan, Wang Lisha, Chen Meiling
2017, 54(5):  961-968.  doi:10.7544/issn1000-1239.2017.20151128
Since multiple D2D users and one cellular user share the same downlink channel resources in TDD systems, a resource allocation algorithm based on grouping D2D pairs is proposed to maximize system throughput. The algorithm consists of three parts. It first determines the number of D2D groups from the number of system channels and the group center positions from the distances among D2D pairs, and assigns the remaining D2D pairs to groups according to their impact on the communication outage probability of users within the same group. Then, by comparing the interference that each D2D group imposes on the cellular user, a matching algorithm determines which cellular channel resource each D2D group shares. Finally, according to the different QoS requirements of cellular users and D2D pairs, the D2D pairs causing serious interference are removed, and the D2D pairs sharing the cellular channel resources are finally determined. Simulation results show that the proposed algorithm allows more D2D pairs to access the system and improves system throughput.
A Probabilistic Barrier Coverage Model and Effective Construction Scheme
Fan Xinggang, Xu Junchao, Che Zhicong, Ye Wenhao
2017, 54(5):  969-978.  doi:10.7544/issn1000-1239.2017.20151182
Barrier coverage is one of the hot topics in wireless sensor networks. The probabilistic sensing model is closer to the actual situation than the 0-1 sensing model, yet studies on probabilistic barrier coverage remain scarce. This paper derives a virtual radius from the probabilistic sensing model and the required detection distance, and proposes a binary probabilistic barrier coverage model in which neighboring virtual sensing circles are tangent. Based on this model, the CPBMN (construction of probabilistic barrier with minimum nodes) scheme is proposed. Firstly, the optimal target locations are determined by the binary probabilistic barrier coverage model. Secondly, the Hungarian algorithm selects the optimal mobile nodes to move to these target locations. Thirdly, vertical barriers between horizontally adjacent probabilistic barrier segments are created, and the K-probabilistic barriers over the whole area are formed by joining the 1-probabilistic barriers of each subarea. Simulation results show that our method can effectively construct probabilistic barrier coverage and, compared with other methods, can reduce energy consumption by 70%.
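The node-to-target matching step above is a classic assignment problem: choose which mobile node moves to which target location so total movement (and hence energy) is minimized. The paper uses the Hungarian algorithm; for a tiny illustrative instance, brute-force search over permutations finds the same optimum and keeps the sketch self-contained.

```python
import math
from itertools import permutations

def min_move_matching(nodes, targets):
    """Return (assignment, cost): assignment[i] is the target index for node i.

    Brute-force stand-in for the Hungarian algorithm, viable only for
    small instances (O(n!) permutations).
    """
    best, best_cost = None, math.inf
    for perm in permutations(range(len(targets))):
        cost = sum(math.dist(nodes[i], targets[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```

For nodes at (0,0) and (5,0) with targets at (4,0) and (1,0), swapping the naive pairing cuts total movement from 8 to 2, which is the kind of saving the energy results rely on.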
Indoor Positioning Algorithm for WLAN Based on KDDA and SFLA-LSSVR
Zhang Yong, Li Feiteng, Wang Yujie
2017, 54(5):  979-985.  doi:10.7544/issn1000-1239.2017.20160025
The time-varying received signal strength (RSS) degrades indoor positioning accuracy in wireless local area networks (WLANs). To address this problem, a novel indoor positioning algorithm based on kernel direct discriminant analysis (KDDA) and least squares support vector regression optimized by the shuffled frog leaping algorithm (SFLA-LSSVR) is proposed. The algorithm first employs a kernel function to map the RSS signals sampled from each access point (AP) into a nonlinear feature space, where it effectively extracts nonlinear features, reconstructs the positioning information, and discards redundant features and noise. It then uses LSSVR to build a mapping model between positioning features and physical locations, with SFLA optimizing the model parameters, and predicts the locations of test points with this model. Experimental results show that the positioning accuracy of the proposed algorithm is superior to that of the WKNN, ANN and LSSVR algorithms given the same number of samples, that the number of RSS samples needed from each AP is significantly reduced at the same positioning accuracy, and that the proposed algorithm therefore performs well for WLAN indoor positioning.
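The LSSVR model the abstract builds on has a closed-form fit: with an RBF kernel K, training reduces to solving one bordered linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]. A stdlib-only sketch for 1D inputs follows; the kernel width sigma and regularization gamma are illustrative assumptions (the paper tunes such hyperparameters with SFLA instead).

```python
import math

def rbf(u, v, sigma=1.0):
    return math.exp(-(u - v) ** 2 / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvr_fit(xs, ys, gamma=1000.0):
    """Fit LSSVR on 1D data; return the prediction function f(x)."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]                      # bordered system, bias row
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))
```

With a large gamma the model nearly interpolates the training points, so fitting a simple linear relation recovers it closely; the real positioning model maps KDDA features to 2D coordinates rather than scalars.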
Semantic Event Region Query Processing in Sensor Networks
Li Yinglong, Zhu Yihua, Lü Mingqi
2017, 54(5):  986-997.  doi:10.7544/issn1000-1239.2017.20160629
Sensor networks can be viewed as resource-constrained distributed database systems, and a significant challenge is to develop reliable, energy-efficient methods for extracting useful information from distributed sensor data. Most existing event (region) detection approaches rely on raw sensory data, which entails a large amount of time-consuming data transmission, and the imprecision and uncertainty of raw sensor data make accurate results difficult to guarantee. In many cases, users care about neither the raw sensory data nor the data format used during in-network filtering or fusion; instead, they want natural-language-like semantic event information, such as "how serious is it?" or "is it credible?" Moreover, the main technique in existing event detection is neighbor cooperation, which requires extensive data exchange between neighboring nodes and is costly in both energy and time. This paper proposes a novel semantic event region query processing approach based on fuzzy methodology. Semantic event information, instead of raw sensor data, is used for in-network fusion, and fuzzy-method-based distributed approaches for describing, filtering and fusing semantic event information are devised. Experimental evaluation on a real dataset shows that the proposed approach performs well in terms of energy efficiency and reliability.
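The fuzzy step the abstract relies on can be sketched as mapping a raw reading to a natural-language-like label via membership functions, so only the semantic label needs to travel through the network. The triangular shapes and the labels ("normal", "serious") are illustrative assumptions, not the paper's actual membership design.

```python
def tri_mf(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c], zero outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def semantic_label(reading):
    """Map a raw reading to the linguistic label with the highest membership."""
    grades = {
        "normal":  tri_mf(reading, 0, 25, 50),
        "serious": tri_mf(reading, 40, 70, 100),
    }
    return max(grades, key=grades.get)
```

A node would transmit the label (and possibly its membership grade as a credibility measure) rather than the raw value, which is what makes in-network semantic fusion cheap.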
Knowledge Embedded Bayesian MA Fuzzy System
Gu Xiaoqing, Wang Shitong
2017, 54(5):  998-1011.  doi:10.7544/issn1000-1239.2017.20160011
The most distinctive characteristic of fuzzy systems is their high interpretability. However, the fuzzy rules obtained by classical clustering-based fuzzy systems usually need to cover all features of the input space and often overlap each other. In particular, for high-dimensional problems, the fuzzy rules become complicated because too many features are involved in the antecedent parameters. To overcome these shortcomings, the knowledge embedded Bayesian Mamdani-Assilian type fuzzy system (KE-B-MA) is proposed, built on the Bayesian inference framework and focusing on the Mamdani-Assilian (MA) type fuzzy system. First, a DC (don't care) approach is incorporated into the selection of fuzzy membership centers and input-space features. Second, to enhance classification performance, KE-B-MA learns the antecedent and consequent parameters of the fuzzy rules simultaneously by a Markov chain Monte Carlo (MCMC) method, and the obtained parameters can be guaranteed to be globally optimal. Experimental results on a synthetic dataset and several UCI machine learning datasets show that the classification accuracy of KE-B-MA is comparable to that of several classical fuzzy systems while providing explicit knowledge in the form of interpretable fuzzy rules. Rather than being rivals, fuzziness and probability are well integrated in KE-B-MA.
Multi-Objective Particle Swarm Optimization Based on Grid Ranking
Li Li, Wang Wanliang, Xu Xinli, Li Weikun
2017, 54(5):  1012-1023.  doi:10.7544/issn1000-1239.2017.20160074
Most multi-objective evolutionary algorithms are Pareto-based; however, the efficiency of Pareto optimality in the objective space deteriorates when there are numerous weak dominance relations. To address this problem, this paper presents a grid-based ranking framework. By integrating a grid strategy, which captures both convergence and distribution, with particle swarm optimization (PSO), we propose a novel grid-based ranking multi-objective particle swarm optimization (MOPSO) algorithm. Unlike Pareto dominance, which performs pairwise comparisons between individuals, the grid-based ranking mechanism aggregates dominance information over the entire solution space and uses it for sorting, so the relative merits of individuals in the population are obtained effectively and efficiently. By incorporating the distance between particles and the approximate optimal front, we further sharpen the judgment of the relative merits of particles in the solution space. Experimental assessment indicates that the proposed method has advantages in both convergence and distribution. On this basis, we discuss the influence of grid partitioning on efficiency in terms of the distribution of ranks over the evolutionary process, which verifies the efficiency of the algorithm from another perspective.
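A minimal sketch of grid-based ranking in the spirit of grid-based MOEAs: discretize each objective into a fixed number of divisions and rank a solution by the sum of its grid coordinates, so solutions nearer the ideal corner of a minimization problem rank better. This simplifies the paper's exact mechanism, which also folds in dominance information and distances to the approximate front.

```python
def grid_ranks(points, divisions=4):
    """Rank each objective vector (minimization) by summed grid coordinates."""
    dims = len(points[0])
    lo = [min(p[d] for p in points) for d in range(dims)]
    hi = [max(p[d] for p in points) for d in range(dims)]

    def coord(v, d):
        width = (hi[d] - lo[d]) / divisions or 1.0  # guard degenerate dimension
        return min(int((v - lo[d]) / width), divisions - 1)

    return [sum(coord(p[d], d) for d in range(dims)) for p in points]
```

Unlike pairwise Pareto comparison, this produces a total order in one pass, which is what makes it robust when weak dominance relations abound.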
Mutual Information Based Granular Feature Weighted k-Nearest Neighbors Algorithm for Multi-Label Learning
Li Feng, Miao Duoqian, Zhang Zhifei, Zhang Wei
2017, 54(5):  1024-1035.  doi:10.7544/issn1000-1239.2017.20160351
In traditional kNN-based multi-label learning algorithms, all features contribute equally when computing the distance between any pair of instances to find nearest neighbors. Furthermore, most of these algorithms transform the multi-label problem into a set of single-label binary problems, ignoring label correlation. The performance of a multi-label learning algorithm greatly depends on the input features, and different features carry different amounts of knowledge about the label classification, so features should be given different importance. Mutual information is a widely used measure of dependency between variables and can evaluate the knowledge a feature contains about the label classification. We therefore propose a granular feature weighted k-nearest neighbors algorithm for multi-label learning based on mutual information, which assigns feature weights according to the knowledge each feature contains. The proposed algorithm first granulates the label space into several label information granules to avoid label combination explosion, and then calculates feature weights for each label information granule, taking label combinations into consideration so as to merge label correlations into the feature weights. Experimental results show that the proposed algorithm achieves better performance than other common multi-label learning algorithms.
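The weighting idea can be sketched with stdlib tools: the mutual information between a (discretized) feature and the labels measures how much the feature says about the classification, and serves as that feature's weight in the kNN distance. Feature discretization and the label-granule machinery of the paper are omitted here.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Discrete mutual information I(X; Y) in nats."""
    n = len(feature)
    joint = Counter(zip(feature, labels))
    px, py = Counter(feature), Counter(labels)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def weighted_distance(u, v, weights):
    """Euclidean distance with per-feature weights, as in weighted kNN."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, u, v)))
```

A feature identical to the labels gets the maximum weight log 2 (for balanced binary labels), while an independent feature gets weight 0 and drops out of the distance entirely.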
Joint Acoustic Modeling of Multi-Features Based on Deep Neural Networks
Fan Zhengguang, Qu Dan, Yan Honggang, Zhang Wenlin
2017, 54(5):  1036-1044.  doi:10.7544/issn1000-1239.2017.20160031
In view of the complementary information and the correlations among acoustic models trained on different acoustic features, a joint acoustic modeling method for multiple features based on deep neural networks is proposed. In this method, similar to DNN multimodal and multitask learning, some of the DNN hidden layers are shared to associate the acoustic models built with different features. Training the acoustic models together exploits the common hidden explanatory factors among the different learning tasks, allowing knowledge to transfer across tasks. Moreover, the number of model parameters is decreased by low-rank matrix factorization to reduce training time. Lastly, the recognition results from the different acoustic features are combined using the recognizer output voting error reduction (ROVER) algorithm to further improve performance. Experimental results on continuous speech recognition with the TIMIT database show that joint acoustic modeling outperforms modeling each feature independently. In terms of phone error rate (PER), the ROVER-combined result based on the joint acoustic models yields a relative gain of 4.6% over the result based on the independent acoustic models.
A New Active Contour Model Based on Adaptive Fractional Order
Zhang Guimei, Xu Jiyuan, Liu Jianxin
2017, 54(5):  1045-1056.  doi:10.7544/issn1000-1239.2017.20160301
The region scalable fitting (RSF) active contour model has limitations in segmenting images with weak texture and weak edges, and it suffers from a tendency to fall into local minima and from slow evolution. To address these problems, this paper proposes a new active contour model with a fractional order derivative operator whose order is adjusted adaptively. Firstly, the global Grünwald-Letnikov (G-L) fractional gradient is integrated into the RSF model, strengthening the gradient in regions with intensity inhomogeneity and weak texture; this improves both robustness to the initial location of the evolution curve and the efficiency of image segmentation. Secondly, the Gaussian kernel in the local fitting term is replaced by bilateral filtering, which avoids the boundary blurring that the Gaussian kernel causes during curve evolution. Lastly, an adaptive fractional order model is constructed from the gradient magnitude and information entropy of the image, so the optimal fractional order is adjusted adaptively. Theoretical analysis and experimental results show that the proposed algorithm can segment images with intensity inhomogeneity and weak texture, computes the optimal fractional order adaptively, avoids falling into local optima, and improves segmentation efficiency.
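The G-L machinery behind the adaptive fractional gradient is a convolution: the order-v G-L derivative uses coefficients c_0 = 1, c_k = c_{k-1} * (1 - (v + 1) / k). Integer orders fall out as special cases (v = 1 gives the first difference, v = 0 the identity), which makes a quick sanity check. Choosing v per pixel from gradient magnitude and entropy is the paper's contribution and is not reproduced here.

```python
def gl_coefficients(v, n):
    """First n Grünwald-Letnikov coefficients for fractional order v."""
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(coeffs[-1] * (1 - (v + 1) / k))
    return coeffs

def gl_derivative(signal, v):
    """Order-v G-L derivative of a 1D signal (one-sided, unit step)."""
    c = gl_coefficients(v, len(signal))
    return [sum(c[k] * signal[i - k] for k in range(i + 1))
            for i in range(len(signal))]
```

For a fractional order between 0 and 1 the coefficients decay slowly, so the operator blends a long history of neighboring pixels, which is what strengthens weak-texture gradients relative to a plain first difference.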
A Fast Discriminant Feature Extraction Framework Combining Implicit Spatial Smoothness with Explicit One for Two-Dimensional Image
Zhu Kuaikuai, Tian Qing, Chen Songcan
2017, 54(5):  1057-1066.  doi:10.7544/issn1000-1239.2017.20160158
Images have inherent two-dimensional spatial structure, and pixels that are spatially close to each other have similar gray values, which means images are locally spatially smooth. To extract features, traditional methods usually convert an image into a vector, destroying this spatial structure. 2D image-based feature extraction methods, typified by 2DLDA and 2DPCA, have therefore emerged and reduce time complexity significantly. However, 2D-based methods operate on whole rows (or columns) of an image, which leads to spatial under-smoothing. To overcome this shortcoming, explicit spatial regularization (ESR) imposes a Laplacian penalty that constrains the projection coefficients to be spatially smooth; it achieves better performance than 2D-based methods but shares the inherent high computing cost of 1D methods. Implicit spatial regularization (ISR) instead constrains spatial smoothness within each local image region by dividing and reshaping the image and then executing a 2D-based feature extraction method, which allows bi-side 2DLDA to outperform SSSL (a typical ESR method). However, ISR obtains spatial smoothness only implicitly and lacks explicit spatial constraints, so the feature space it obtains is still not smooth enough; moreover, the optimization criteria of bi-side 2DLDA are not jointly convex, resulting in high computing cost without a guarantee of a globally optimal solution. Inspired by these observations, we introduce a fast discriminant feature extraction framework combining implicit spatial smoothness with explicit smoothness for two-dimensional image recognition (2D-CISSE). The key step of 2D-CISSE is to spatially smooth the images in advance and then execute ISR, so it not only retains implicit spatial smoothness but also reinforces the explicit spatial constraints. It achieves a globally optimal solution and is general: any off-the-shelf image smoothing method and 2D-based feature extraction method can be embedded into the framework. Finally, experimental results on four face datasets (Yale, ORL, CMU PIE and AR) and two handwritten digit datasets (MNIST and USPS) demonstrate the effectiveness and superiority of 2D-CISSE.
Group and Rank Fusion for Accurate Image Retrieval
Liu Shenglan, Feng Lin, Sun Muxin, Liu Yang
2017, 54(5):  1067-1076.  doi:10.7544/issn1000-1239.2017.20150949
A single feature is not discriminative enough to describe the informational content of an image, which has always been a shortcoming in traditional image retrieval. Since one image can be described by different but complementary features, multi-feature fusion ranking methods deserve further research to improve the ranking list of a query in image retrieval. Most existing multi-feature fusion methods focus only on the nearest neighbors; when the ranking result of each neighbor graph is poor, it is hard to obtain an ideal retrieval result after graph fusion. To solve this problem, this paper proposes a novel multi-feature fusion method for image retrieval, GRF (group ranking fusion). The proposed method divides similar images of a dataset into groups and improves the retrieval result of each neighbor graph through these groups, expanding the fusion scope while preserving retrieval precision. Experimental results on three standard datasets demonstrate that GRF can effectively exploit multi-feature graphs to improve image retrieval performance.
Optimizing and Implementing the High Dynamic Range Video Algorithm
Wu An, Jin Xi, Du Xueliang, Zhang Kening, Yao Chunhe, Ma Shufen
2017, 54(5):  1077-1085.  doi:10.7544/issn1000-1239.2017.20160122
In contrast to HDR image processing algorithms, the computational complexity of HDR video processing algorithms makes their hardware implementations consume far more logic and storage resources, which poses an enormous obstacle for existing algorithms to achieve real-time processing; a new algorithm suited to real-time hardware implementation is therefore needed. In this paper, we propose a fully pipelined hardware system that processes HDR video in real time, taking advantage of the parallel, configurable characteristics of FPGAs. Our system captures a series of low dynamic range (LDR) images with a varying exposure time algorithm and places their camera response curves in the FPGA look-up table (LUT). The converted floating-point data is then stored in BRAM or FIFO modules in a parallel pipeline. Finally, the image is displayed by applying a rapid global tone mapping algorithm. The entire HDR video processing system is realized on a Xilinx Kintex-7 FPGA board. Results show that processing can reach 65 f/s for 1920×1080 video at a 120 MHz system clock, which is sufficient for real-time processing requirements.
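A rapid *global* tone-mapping operator of the kind the abstract mentions can be sketched in a few lines: scale each luminance by the frame's log-average luminance, then compress with L / (1 + L), as in Reinhard's global operator. The key value a = 0.18 is the usual default, assumed here rather than taken from the paper; the FPGA version would evaluate this per pixel in the pipeline.

```python
import math

def tone_map(luminance, a=0.18, eps=1e-6):
    """Map HDR luminances to display range [0, 1) with a global operator."""
    n = len(luminance)
    # Log-average (geometric mean) luminance of the frame.
    log_avg = math.exp(sum(math.log(eps + L) for L in luminance) / n)
    scaled = [a * L / log_avg for L in luminance]
    return [L / (1.0 + L) for L in scaled]
```

Being global, the operator needs only one per-frame statistic (the log-average), which is why it maps well onto a streaming hardware pipeline: one pass to accumulate the statistic, one pass to remap pixels.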
An Elastic Scalable Stream Processing Framework Based on Actor Model
Zhan Hanglong, Liu Lantao, Kang Lianghuan, Cao Donggang, Xie Bing
2017, 54(5):  1086-1096.  doi:10.7544/issn1000-1239.2017.20151044
In the era of big data, stream processing is widely applied in finance, advertising, the Internet of things, social networks and many other fields. In streaming scenarios, the generation speed of stream data tends to fluctuate and is difficult to predict. If the streaming peak exceeds system capacity, the system may slow down or even crash, causing job failures; if excessive resources are provisioned for the peak, they are wasted under light load. To match stream processing load with resources, a stream processing system should be elastically scalable, meaning that the provided resources are adjusted automatically according to real-time changes in stream flow. Although research on stream processing has made great progress, how to design an elastically scalable system remains an open problem. This paper introduces eSault, an elastically scalable stream processing framework based on the Actor model. eSault manages the processing units hierarchically based on the Actor model and realizes scalability with a two-layer routing mechanism. On this basis, eSault uses an overload judgment algorithm based on data processing delay and a light-load judgment algorithm based on data processing speed to allocate resources efficiently and achieve elastically scalable stream processing. Experiments show that eSault performs well and achieves good elastic scalability.
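The two elasticity triggers the abstract describes can be sketched as one decision function: scale out when observed processing delay signals overload, scale in when processing speed comfortably exceeds the arrival rate. The threshold values are illustrative assumptions, not eSault's actual parameters.

```python
def scaling_decision(avg_delay_ms, arrival_rate, service_rate,
                     delay_limit_ms=100.0, low_utilization=0.3):
    """Return 'scale_out', 'scale_in', or 'hold' for one processing stage."""
    if avg_delay_ms > delay_limit_ms:
        return "scale_out"   # overload judgment: delay-based
    if arrival_rate / service_rate < low_utilization:
        return "scale_in"    # light-load judgment: speed-based
    return "hold"
```

In an Actor-model framework this check would run periodically per stage, with scale-out adding actor instances behind the routing layer and scale-in draining and retiring them.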
An Orthogonal Decomposition Based Design Method and Implementation for Big Data Processing System
Xiang Xiaojia, Zhao Xiaofang, Liu Yang, Gong Guanjun, Zhang Han
2017, 54(5):  1097-1108.  doi:10.7544/issn1000-1239.2017.20151062
Big data has stimulated a revolution in data storage and processing, leading to thriving big data processing systems such as Hadoop and Spark, which provide a brand-new platform with platform independence, high throughput and good scalability. On the other hand, the substrate platforms underpinning these systems are ignored, because their design and optimization focus mainly on the processing model and the related frameworks and algorithms. We present a new loosely coupled, platform-dependent design and optimization method for big data processing systems that can exploit the power of the underpinning platform, including the OS and hardware, and draw more benefit from these local infrastructures. Furthermore, based on the local OS and hardware, two strategies are proposed: lock-free storage and a super-optimization-based data processing execution engine. Guided by these methods and strategies, we present Arion, a modified version of vanilla Hadoop, which shows a promising new way to optimize Hadoop while keeping its high scalability and upper-layer platform independence. Our experiments show that the Arion prototype can accelerate big data processing jobs by up to 7.7%.
Addressing Transient and Intermittent Link Faults in NoC with Fault-Tolerant Method
Ouyang Yiming, Sun Chenglong, Li Jianhua, Liang Huaguo, Huang Zhengfeng, Du Gaoming
2017, 54(5):  1109-1120.  doi:10.7544/issn1000-1239.2017.20151017
As links are the critical paths between routers in an NoC, faults in a link seriously affect network performance. For this reason, we propose a highly reliable fault-tolerant method addressing transient and intermittent link faults. The method detects data errors occurring in the network in real time and then determines whether the fault is transient or intermittent, thereby realizing fault tolerance; it not only alleviates network congestion and decreases data delay, but also ensures correct data transmission, effectively guaranteeing high system reliability. When a transient fault occurs in a link, it corrupts data in a way that cannot be corrected in place, so the proposed method sets up a retransmission buffer from which the backup data is retransmitted. If an intermittent fault occurs, packet transmission is truncated; to handle this, the method adds a pseudo head flit and a pseudo tail flit to the truncated data, re-routing begins, and the occupied resources are released. Experimental results show that, under different fault conditions, this method outperforms the comparison schemes with a significant reduction in average packet latency and an obvious improvement in throughput. In short, the scheme effectively improves network reliability while maintaining network performance.
Hierarchical Configuration Memory Design for Coarse-Grained Reconfigurable SoC
Shen Jianliang, Li Sikun, Liu Lei, Wang Guanwu, Wang Xin, Liu Qinrang
2017, 54(5):  1121-1129.  doi:10.7544/issn1000-1239.2017.20150889
The generation efficiency and quality of configuration information directly affect the operation of a coarse-grained reconfigurable SoC. The traditional approach treats the configuration memory as a whole, and each processing unit reads its configuration information from that memory, so operation efficiency is low and power consumption is large. In this paper, a low-power hierarchical configuration information storage architecture is designed, which divides configuration information into separate operation configuration information and interconnect configuration information, and then generates the configuration information based on context. Experimental results show that the proposed configuration generation method reduces power consumption by 23.7%-32.6% while maintaining the same performance. Moreover, because operation and interconnect configuration information are separated, the configuration capacity is small, giving a great advantage in configuration speed and performance.