2016 Vol. 53 No. 1
Abstract:
Nowadays, the Internet has grown from a small laboratory network into a super-complex system, and its performance has become a matter of great concern. Network performance is an important indicator for evaluating network service quality, and performance measurement is widely used in service selection, congestion control, routing selection, network performance optimization, future network architecture design, and so on. Many Internet performance measurement technologies have been developed for these application requirements. In this paper, we systematically summarize the development of existing network performance measurement technologies. First, these technologies are classified into different models, and their advantages and disadvantages are examined from different points of view. Then, their development is divided into three stages: measurement based on "what you see is what you get", large-scale distributed measurement based on path composition, and big-data-driven QoE measurement, so that the development and evolution of performance measurement technologies can be clearly understood. Finally, the challenges of network performance measurement are analyzed in depth, and, in view of the rapid development of Internet applications, future research topics and directions are pointed out.
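As a rough illustration of the simplest "what you see is what you get" style of active measurement mentioned in this abstract, the sketch below estimates end-to-end latency by timing TCP connection setup. It is not a method from the surveyed paper; the target host and port are arbitrary examples.

```python
# Minimal active latency probe: time TCP connection setup as an RTT proxy.
import socket
import statistics
import time

def tcp_rtt(host: str, port: int = 80, probes: int = 5) -> float:
    """Estimate RTT (seconds) by timing TCP connection establishment."""
    samples = []
    for _ in range(probes):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    print(f"median RTT to example.com: {tcp_rtt('example.com') * 1000:.1f} ms")
```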
Abstract:
Using a virtual backbone in wireless sensor networks can effectively save energy, reduce interference, and prolong network lifetime, and it has wide applications in geometric routing and topology control. A virtual backbone can be modeled as a connected dominating set (CDS) in a graph. This paper introduces the state of the art of approximation algorithms for the CDS problem and its variations, with a focus on theoretical results and methods. The purpose is to provide a reference for researchers who are interested in this field.
2016, 53(1): 26-37.
DOI: 10.7544/issn1000-1239.2016.20150654
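To make the virtual-backbone idea in the abstract above concrete, here is a minimal greedy sketch that grows a connected dominating set from a highest-degree node, always adding the neighbor of the current set that dominates the most still-uncovered nodes (in the spirit of the classic Guha-Khuller greedy heuristic). It is an illustration only, not one of the specific approximation algorithms surveyed in the paper.

```python
# Greedy CDS construction on an undirected graph given as an adjacency dict.
def greedy_cds(adj):
    nodes = set(adj)
    start = max(nodes, key=lambda v: len(adj[v]))     # highest-degree seed
    cds = {start}
    covered = {start} | set(adj[start])
    while covered != nodes:
        # candidates adjacent to the current CDS, which keeps it connected
        frontier = {u for v in cds for u in adj[v]} - cds
        best = max(frontier, key=lambda u: len(set(adj[u]) - covered))
        cds.add(best)
        covered |= {best} | set(adj[best])
    return cds

# toy example: a path 0-1-2-3-4 plus a chord 1-3
graph = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(greedy_cds(graph))  # {1, 3} dominates all nodes and is connected
```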
Abstract:
In the information era, the great diversity of application demands calls for the adoption of different wireless communication protocols. As the Internet of things (IoT) has developed dramatically in recent years, these wireless protocols have been brought into a common networking framework. With IoT applications proliferating, we will witness the co-existence of multiple wireless protocols in the same space, especially in indoor environments. Because of their different communication standards, co-existing protocols generally cannot share information with each other directly, leading to inevitable interference and degraded network performance. The co-existence of wireless protocols has thus become a hot topic in both academia and industry. Based on a survey of recent studies on wireless network co-existence, this article illuminates the root causes of the co-existence problem and analyzes its impact on network design and performance. A taxonomy of wireless network co-existence is presented, which categorizes the existing work into three classes: elimination of homogeneous interference, identification of heterogeneous interference, and cross-protocol communication. Potential research directions in this area are further discussed.
2016, 53(1): 38-52.
DOI: 10.7544/issn1000-1239.2016.20150652
Abstract:
With the growing deployment of wireless communication technologies, radio spectrum is becoming a scarce resource. The current static spectrum management leads to low spectrum utilization in both the spatial and temporal dimensions. Auction mechanisms are believed to be among the most effective tools to solve or relieve the problem of radio spectrum shortage. However, designing a practical spectrum auction mechanism has to consider five major challenges: strategic behaviors of rational users, channel heterogeneity, channel spatial reusability, preference diversity, and social welfare maximization. In this paper, we give a thorough literature survey of spectrum auction mechanism design and point out the disadvantages of existing work. We also present our recent work on heterogeneous spectrum management: we model the problem of heterogeneous spectrum allocation as a combinatorial auction and, by jointly considering the five design challenges, propose an efficient channel allocation mechanism and a price calculation scheme. We also prove that the proposed mechanism satisfies strategy-proofness and achieves approximately efficient social welfare.
2016, 53(1): 53-67.
DOI: 10.7544/issn1000-1239.2016.20150656
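The strategy-proofness property claimed in the abstract above is easiest to see in the textbook single-channel case: the highest bidder wins but pays the second-highest bid, so no bidder can gain by misreporting. The paper's mechanism handles a far richer combinatorial setting (heterogeneous channels, spatial reuse); this sketch only shows the critical-price principle.

```python
# Second-price (Vickrey) auction for a single channel: truthful bidding is optimal.
def second_price_auction(bids):
    """bids: dict bidder -> bid value. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"u1": 7.0, "u2": 5.5, "u3": 9.0}
print(second_price_auction(bids))  # ('u3', 7.0): u3 wins and pays u1's bid
```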
Abstract:
Data transfers, such as the common shuffle and incast communication patterns, contribute most of the network traffic in MapReduce-like working paradigms and thus have severe impacts on application performance in modern data centers. This motivates us to create opportunities for performing inter-flow data aggregation during the transmission phase as early as possible, rather than only at the receiver side. In this paper, we first examine the gain and feasibility of inter-flow data aggregation in novel data center network structures. To achieve such a gain, we model the minimal incast tree problem. We propose two approximate incast tree construction methods, RS-based and ARS-based incast trees, which generate an efficient incast tree solely from the labels of the incast members and the data center topology. We further present incremental methods to tackle the dynamic and fault-tolerance issues of the incast tree. Based on a prototype implementation and large-scale simulations, we demonstrate that our approach can significantly decrease the amount of network traffic, save data center resources, and reduce job processing delay. Our approach for BCube and FBFLY can be adapted to other data center structures with minimal modifications.
2016, 53(1): 68-79.
DOI: 10.7544/issn1000-1239.2016.20150663
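The general idea behind an incast aggregation tree, as described in the abstract above, is to route every sender to the receiver and merge the routes, so partial aggregation can be performed wherever flows join. The sketch below does this with plain BFS shortest paths on an abstract topology; it is a generic illustration, not the paper's RS-based or ARS-based construction for BCube/FBFLY.

```python
# Build an incast aggregation structure by merging sender-to-receiver paths.
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph given as an adjacency dict."""
    prev = {src: None}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            break
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                q.append(u)
    path, v = [], dst
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def incast_tree(adj, senders, receiver):
    """Edge set from merging shortest paths (a tree when shared segments agree)."""
    edges = set()
    for s in senders:
        p = shortest_path(adj, s, receiver)
        edges |= {tuple(sorted(e)) for e in zip(p, p[1:])}
    return edges

# toy 6-node topology; s1, s2, s3 send to r and can be aggregated at a and b
adj = {"s1": ["a"], "s2": ["a"], "s3": ["b"], "a": ["s1", "s2", "b"],
       "b": ["a", "s3", "r"], "r": ["b"]}
print(incast_tree(adj, ["s1", "s2", "s3"], "r"))
```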
Abstract:
Deep neural networks (DNNs) and their learning algorithms are well known in academia and industry as the most successful methods for big data analysis. Compared with traditional methods, deep learning methods are data-driven and can extract features (knowledge) from data automatically. They have significant advantages in analyzing big data that is unstructured, has unknown or varying models, or spans multiple fields. At present, the most widely used deep neural networks in big data analysis are feedforward neural networks (FNNs). They work well in extracting correlations from static data and are suitable for classification-oriented applications. However, limited by their intrinsic structure, feedforward neural networks are weak at extracting time-sequence features. Infinite deep neural networks, i.e., recurrent neural networks (RNNs), are essentially dynamical systems: their defining characteristic is that the network state changes with time and is coupled to the time parameter, which makes them well suited to extracting time-sequence features and thus to prediction over big data. If the recurrent structure of an RNN is unrolled along the time dimension, the depth of the network grows without bound as time runs, which is why such networks are called infinite deep neural networks. In this paper, we focus on the topology and some learning algorithms of infinite deep neural networks, and introduce some successful applications in speech recognition and image understanding.
2016, 53(1): 80-92.
DOI: 10.7544/issn1000-1239.2016.20150636
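The recurrence that makes an RNN "infinitely deep" when unrolled in time is shown below: the hidden state at step t depends on the state at step t-1, so running longer adds more layers of computation. The weights are random placeholders; this only illustrates the forward pass, not the learning algorithms discussed in the paper.

```python
# Vanilla RNN forward pass: h_t = tanh(W_x x_t + W_h h_{t-1} + b)
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, steps = 4, 8, 10

W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)

x_seq = rng.normal(size=(steps, input_dim))  # a toy input sequence
h = np.zeros(hidden_dim)                     # initial hidden state

for t in range(steps):
    # one "layer" of the unrolled network per time step
    h = np.tanh(W_x @ x_seq[t] + W_h @ h + b)

print("final hidden state:", h)
```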
Abstract:
Affective computing (AC) is a new field of emotion research that has emerged along with the development of computing and human-machine interaction technology. Emotion recognition is a crucial part of the AC research framework. Emotion recognition based on physiological signals provides richer and less easily faked information than techniques based on facial expression, tone of voice, or gestures. Many studies of emotion recognition have been conducted, but the reported classification accuracy varies widely due to variability in stimuli, emotion categories, devices, feature extraction, and machine learning algorithms. This paper reviews the works that cite the DEAP dataset (a publicly available dataset that uses music videos to induce emotion and records EEG and peripheral physiological signals) and describes their methods and algorithms for feature extraction, normalization, dimension reduction, emotion classification, and cross-validation in detail. Finally, this work presents applications of AC in game development, multimedia production, interactive experience, and social networks, as well as current limitations and directions for future investigation.
2016, 53(1): 93-112.
DOI: 10.7544/issn1000-1239.2016.20150403
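A hedged sketch of the typical recognition pipeline the abstract above enumerates (feature normalization, dimension reduction, classification, cross-validation), written with scikit-learn. Random numbers stand in for features extracted from EEG and peripheral signals; real DEAP features and labels would replace X and y.

```python
# Normalization -> PCA -> SVM -> k-fold cross-validation on placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(320, 160))      # 320 trials x 160 placeholder features
y = rng.integers(0, 2, size=320)     # binary labels, e.g. high/low valence

clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print("mean accuracy:", scores.mean())      # near chance here, since X is random
```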
Abstract:
Human action recognition is an important issue in the field of computer vision. Compared with object recognition in still images, human action recognition is more concerned with the spatio-temporal motion of objects of interest in image sequences. The extension from 2D images to 3D spatio-temporal image sequences greatly increases the complexity of action recognition; meanwhile, it also provides a wide space for attempts at different solutions and techniques. Recently, many new algorithms and systems for human action recognition have emerged, indicating that it has become one of the hottest topics in computer vision. In this paper, we propose a taxonomy of human action recognition in chronological order, classify action recognition methods into different periods, and give general summaries of them. Compared with other surveys, the proposed taxonomy introduces human action recognition methods and summarizes their characteristics by analyzing the evolution of action datasets and the corresponding recognition methods. Furthermore, this dataset-centered view coincides with the trend toward big-data-driven research. Through this summary of related work, we also give some prospects for future work.
Abstract:
Vision plays an important role in both human-human interaction and human-nature interaction. Equipping terminals with intelligent visual recognition and interaction is one of the core challenges in artificial intelligence and computer technology, and also one of their lofty goals. With the rapid development of visual recognition techniques, new techniques and problems have emerged in recent years. Correspondingly, applications with intelligent interaction also present new characteristics, which are changing our original understanding of visual recognition and interaction. We give a survey of image recognition techniques, covering recent advances in visual recognition, visual description, and visual question answering (VQA). Specifically, we first focus on deep learning approaches to image recognition and scene classification. Next, the latest techniques in visual description and VQA are analyzed and discussed. Then we introduce visual recognition and interaction applications on mobile devices and robots. Finally, we discuss future research directions in this field.
2016, 53(1): 123-137.
DOI: 10.7544/issn1000-1239.2016.20150662
Abstract:
Change detection in remote sensing imagery aims to detect the changes that occur in the same area over a period of time. Change detection based on synthetic aperture radar (SAR) imagery has attracted wide attention in recent years because SAR imaging is independent of daylight and weather conditions. This paper first gives a brief introduction to the classical processing steps along with some traditional methods, and then focuses on a summary of methods proposed recently. Improving on the traditional methods, these state-of-the-art algorithms generate a difference image and analyze it using thresholding, clustering, graph cut, and level set methods, obtaining satisfactory results and contributing to accurate detection. The algorithms are introduced from the elementary to the advanced, and their performance is compared theoretically. To demonstrate their effectiveness, two datasets are tested on these algorithms and an objective comparison is made to show their different properties. Finally, several viewpoints on practical problems for future change detection research are proposed, throwing light on further research directions.
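The classical two-step pipeline this abstract describes can be sketched as follows: (1) generate a difference image from two co-registered SAR images, here with the common log-ratio operator, and (2) analyze it with a simple unsupervised method, here 2-class k-means on pixel values. The two input images are random placeholders; real co-registered SAR intensity images would be used instead.

```python
# Log-ratio difference image + two-class k-means analysis (a toy pipeline).
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image, robust to multiplicative speckle."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

def two_means_threshold(di, iters=20):
    """Split difference-image pixels into changed/unchanged by k-means (k=2)."""
    c = np.array([di.min(), di.max()], dtype=float)      # initial cluster centers
    for _ in range(iters):
        labels = np.abs(di[..., None] - c).argmin(axis=-1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = di[labels == k].mean()
    return labels.astype(bool)                           # True = changed pixels

rng = np.random.default_rng(1)
img_t1 = rng.gamma(shape=2.0, scale=1.0, size=(64, 64))
img_t2 = img_t1.copy()
img_t2[20:30, 20:30] *= 4.0                              # simulate a changed patch

change_map = two_means_threshold(log_ratio(img_t1, img_t2))
print("changed pixels detected:", int(change_map.sum()))
```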
Abstract:
There are many different kinds of cloud computing platforms, such as CloudStack, OpenStack, and Eucalyptus, which differ from each other in management capabilities and management styles. Even within a particular cloud platform, there are different virtualization technologies, such as Xen, KVM, and VMware. In recent years, with the rapid development of private and hybrid clouds, the heterogeneity of the infrastructure has increased. Fault tolerance (FT) mechanisms usually depend on the management capability and management style of the infrastructure. As a result, a fault-tolerance mechanism has to be re-implemented repeatedly on different platforms, which directly increases the difficulty and time consumption of FT development. To achieve FT mechanisms across different platforms, we propose a model-based, cross-platform FT mechanism development approach. To validate the effectiveness and practicability of this model-based approach, we implement seven fault tolerance mechanisms on CloudStack and OpenStack. A series of experiments shows that failover is implemented effectively by these FT mechanisms, and that the reliability and availability of the FT target are improved. With high code reusability (over 90%), the FT mechanisms developed with our approach can work across different platforms. Analysis of a questionnaire survey conducted among developers shows that our approach improves the development experience and development efficiency.
2016, 53(1): 155-164.
DOI: 10.7544/issn1000-1239.2016.20150669
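The decoupling goal described above (write FT logic once, run it on heterogeneous platforms) can be illustrated with a plain adapter pattern: the FT mechanism is written against an abstract platform interface, and thin platform-specific adapters map it onto concrete clouds. The adapters below are simulated stand-ins, not real CloudStack/OpenStack API calls, and the paper's approach is model-based rather than this simple pattern, so treat this only as an illustration of the goal.

```python
# Platform-independent restart-on-failure FT logic over an abstract interface.
from abc import ABC, abstractmethod

class CloudPlatform(ABC):
    @abstractmethod
    def is_alive(self, vm_id: str) -> bool: ...
    @abstractmethod
    def restart(self, vm_id: str) -> None: ...

class SimulatedOpenStack(CloudPlatform):
    def __init__(self): self.down = {"vm-2"}          # pretend vm-2 has failed
    def is_alive(self, vm_id): return vm_id not in self.down
    def restart(self, vm_id):
        print(f"[openstack-sim] restarting {vm_id}")
        self.down.discard(vm_id)

class SimulatedCloudStack(CloudPlatform):
    def __init__(self): self.down = set()
    def is_alive(self, vm_id): return vm_id not in self.down
    def restart(self, vm_id): print(f"[cloudstack-sim] restarting {vm_id}")

def failover(platform: CloudPlatform, vm_ids):
    """The FT mechanism itself: written once, reused on any adapter."""
    for vm in vm_ids:
        if not platform.is_alive(vm):
            platform.restart(vm)

failover(SimulatedOpenStack(), ["vm-1", "vm-2"])   # restarts vm-2
failover(SimulatedCloudStack(), ["vm-1", "vm-2"])  # nothing to do
```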
Abstract:
With the rapid growth of computer software in both scale and complexity, software developers are paying more and more attention to reliability and security issues. Null-pointer dereference is a kind of error that occurs frequently in programs. This paper proposes a CEGAR-based null-pointer dereference verification approach for C programs. With this method, a linear temporal logic (LTL) formula is first used to specify the null-pointer dereference property; then, whether a null-pointer dereference can occur in a program is checked by a CEGAR-based model checking approach. In order to verify the null-pointer dereference property fully automatically, this paper also studies how to generate the corresponding temporal logic formulas automatically. Experimental results show that the proposed approach is useful in practice for checking null-pointer dereference in large-scale C programs.
2016, 53(1): 165-192.
DOI: 10.7544/issn1000-1239.2016.20150661
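To give a feel for the formula-generation step mentioned above, the toy sketch below scans a line-numbered C-like program, finds pointer-dereference writes, and emits one LTL safety property per dereference site, stating that whenever control reaches that line the dereferenced pointer is non-null. Real tools work on a parsed AST/CFG and feed the formulas to a CEGAR model checker; this regex-based toy only illustrates the shape of the generated properties and is not the paper's algorithm.

```python
# Emit one LTL safety property per pointer-dereference site in a toy program.
import re

program = [
    (1, "p = malloc(sizeof(int));"),
    (2, "if (p == NULL) return -1;"),
    (3, "*p = 42;"),
    (4, "q = NULL;"),
    (5, "*q = 7;"),          # a genuine null dereference
]

DEREF = re.compile(r"\*\s*([A-Za-z_]\w*)\s*=")   # matches "*p = ..." style writes

def nullptr_properties(prog):
    """Yield one LTL formula per dereference site: G(at line -> pointer non-null)."""
    for line_no, text in prog:
        m = DEREF.search(text)
        if m:
            yield f"G(pc == {line_no} -> {m.group(1)} != NULL)"

for phi in nullptr_properties(program):
    print(phi)
# G(pc == 3 -> p != NULL)
# G(pc == 5 -> q != NULL)
```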
Abstract:
Entity alignment on knowledge bases has been a hot research topic in recent years. The goal is to link multiple knowledge bases effectively and create a large-scale, unified knowledge base from the top level to enrich the knowledge bases, which can be used to help machines understand the data and build more intelligent applications. However, there are still many research challenges regarding data quality and scalability, especially in the context of big data. In this paper, we present a survey of the techniques and algorithms of entity alignment on knowledge bases over the past decade, and we aim to provide alternative options for further research by classifying and summarizing the existing methods. Firstly, the entity alignment problem is formally defined. Secondly, the overall architecture is summarized and research progress is reviewed in detail with respect to alignment algorithms, feature matching, and indexing. The entity alignment algorithms are the key to this problem and can be divided into pair-wise methods and collective methods; the most commonly used collective entity alignment algorithms are discussed in detail from both local and global perspectives. Some important experimental and real-world datasets are introduced as well. Finally, open research issues are discussed and possible future research directions are outlined.
2016, 53(1): 193-205.
DOI: 10.7544/issn1000-1239.2016.20148143
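A hedged sketch of the pair-wise side of entity alignment with a simple indexing (blocking) step, as surveyed above: entities are grouped by a cheap key (here the first character of the name) so that only candidates in the same block are compared, and pairs above a string-similarity threshold are aligned. Collective methods additionally exploit relationships between entities; the names and threshold here are toy examples.

```python
# Pair-wise entity alignment with blocking and string similarity.
from collections import defaultdict
from difflib import SequenceMatcher

kb1 = {"e1": "Barack Obama", "e2": "New York City", "e3": "Apple Inc."}
kb2 = {"f1": "B. Obama", "f2": "New York", "f3": "Apple Incorporated"}

def block_key(name):
    return name[0].lower()          # a deliberately crude blocking key

def align(kb_a, kb_b, threshold=0.6):
    index = defaultdict(list)       # blocking index over the second KB
    for eid, name in kb_b.items():
        index[block_key(name)].append((eid, name))
    matches = []
    for eid, name in kb_a.items():
        for fid, other in index[block_key(name)]:
            sim = SequenceMatcher(None, name.lower(), other.lower()).ratio()
            if sim >= threshold:
                matches.append((eid, fid, round(sim, 2)))
    return matches

print(align(kb1, kb2))   # aligns e1-f1, e2-f2, e3-f3 with their similarity scores
```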
Abstract:
Due to user mobility and the preference for collective activities, the distribution of users in WLANs is highly uneven and changeable. When many users crowd into a WLAN, its performance degrades and the user experience worsens. Existing solutions to such dynamic congestion in a WLAN are impractical. In this paper, by introducing shadow access points (SAPs) and station mapping, a solution called splitting and restructuring dynamically (SRD) is proposed, a formal analysis of station mapping and performance is conducted, and an algorithm for the optimal mapping is devised. According to changes in WLAN status, SRD can dynamically split an overcrowded WLAN into multiple sub-WLANs and restructure them into a centralized WLAN, so that the distribution of stations across all sub-WLANs can be monitored and controlled centrally. SRD reduces the number of stations in each sub-WLAN, improves per-user throughput, and alleviates the impact of both collisions and multi-rate operation. Simulation results show that SRD can significantly improve WLAN throughput. Moreover, SRD requires no modifications to user devices.
2016, 53(1): 206-215.
DOI: 10.7544/issn1000-1239.2016.20148120
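The station-mapping step described above can be illustrated with a simple load-balancing rule: once an overcrowded WLAN is split into several sub-WLANs, assign each station (in descending order of traffic demand) to the currently least-loaded sub-WLAN. This greedy rule is a stand-in for the paper's optimal mapping algorithm; the station demands and number of sub-WLANs are made-up examples.

```python
# Greedy "largest demand to least-loaded sub-WLAN" station mapping.
import heapq

def map_stations(station_demand, num_sub_wlans):
    """Assign stations by descending demand to the least-loaded sub-WLAN."""
    heap = [(0.0, i, []) for i in range(num_sub_wlans)]   # (load, id, stations)
    heapq.heapify(heap)
    for sta, demand in sorted(station_demand.items(), key=lambda kv: -kv[1]):
        load, idx, members = heapq.heappop(heap)
        members.append(sta)
        heapq.heappush(heap, (load + demand, idx, members))
    return {idx: (load, members) for load, idx, members in heap}

demands = {"sta1": 5.0, "sta2": 3.0, "sta3": 3.0, "sta4": 2.0, "sta5": 1.0}
for idx, (load, members) in sorted(map_stations(demands, 2).items()):
    print(f"sub-WLAN {idx}: load={load}, stations={members}")
```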
Abstract:
Understanding how communication networks form and evolve is a crucial research issue in complex network analysis. Various methods have been proposed to explore network generation and evolution mechanisms. However, previous methods usually pay more attention to macroscopic characteristics than to microscopic ones, which may lose much information about individual patterns. Since a communication network is closely associated with user behaviour, a model of a communication network should also take individual patterns into consideration. By implicitly labeling each network node with a latent attribute, its activity level, we introduce an efficient approach for the simulation and modeling of communication networks based on a topic model. We illustrate our model on a real-world email network obtained from email logs. Experimental results show that the synthetic network preserves some of the global characteristics and individual behaviour patterns of the real network. Moreover, due to privacy policies and restricted permissions, it is arduous to collect a real large-scale communication network dataset in a short time, and much research is constrained by the absence of such datasets. To address this problem, our model can generate a large-scale synthetic communication network from a small amount of captured communication traffic. It also has linear runtime complexity and can be parallelized easily.
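A hedged illustration of the core idea in this abstract: give every node a latent activity level and let the chance of communication between two nodes grow with both activity levels. This simple fitness-style generator is not the paper's topic-model-based approach; it only shows how a per-node latent attribute can drive the synthesis of a communication network.

```python
# Synthetic network generation driven by latent per-node activity levels.
import numpy as np

rng = np.random.default_rng(7)
n = 50
activity = rng.beta(a=2.0, b=5.0, size=n)            # latent activity level per node

edges = []
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < activity[i] * activity[j]:  # more active pairs talk more
            edges.append((i, j))

degrees = np.zeros(n, dtype=int)
for i, j in edges:
    degrees[i] += 1
    degrees[j] += 1
print("edges:", len(edges), "max degree:", degrees.max(), "mean degree:", degrees.mean())
```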
Abstract:
Localization is one of the important preconditions for wireless sensor network (WSN) applications. Traditional range-based localization algorithms need a large number of pair-wise distance measurements between sensor nodes. However, noise and missing data are inevitable in distance ranging and may degrade localization accuracy drastically. To address this challenge, a novel localization algorithm for WSNs based on L1-norm regularized matrix completion (L1NRMC) is proposed in this paper. By exploiting the inherent low-rank property of the Euclidean distance matrix (EDM) between nodes, the recovery of the partially sampled, noisy distance matrix is formulated as an L1-norm regularized matrix completion problem, which is solved by the alternating direction method of multipliers (ADMM) and operator splitting. Based on the reconstructed EDM, the classical MDS-MAP algorithm is applied to obtain the coordinates of all the unknown nodes. This algorithm can not only detect and remove outliers but also implicitly smooth common Gaussian noise. Simulation results demonstrate that, compared with traditional node localization algorithms, our algorithm achieves high accuracy from only a small fraction of the distance measurements and resists various types of ranging noise, which makes it suitable for resource-limited WSNs.
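A sketch of the final step mentioned in this abstract: once the EDM has been completed and denoised, classical MDS (the core of MDS-MAP) recovers node coordinates up to rotation and translation. The matrix-completion and L1-regularized ADMM steps of the paper are not shown; here the EDM is computed directly from made-up ground-truth positions.

```python
# Classical MDS: coordinates from a full Euclidean distance matrix.
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates (up to rigid transform) from a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]      # largest eigenpairs
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

rng = np.random.default_rng(3)
true_pos = rng.uniform(0, 100, size=(20, 2))     # 20 nodes in a 100x100 field
D = np.linalg.norm(true_pos[:, None, :] - true_pos[None, :, :], axis=-1)

est_pos = classical_mds(D)
# pairwise distances of the estimate should match the input EDM closely
D_est = np.linalg.norm(est_pos[:, None, :] - est_pos[None, :, :], axis=-1)
print("max distance error:", np.abs(D - D_est).max())
```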