ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 November 2017, Volume 54 Issue 11
Trajectory Prediction Algorithm in VANET Routing
Li Yang, Wang Zhe, Zhang Chuwen, Dai Huichen, Xu Wenquan, Ji Xuefeng, Wan Ying, Liu Bin
2017, 54(11):  2421-2433.  doi:10.7544/issn1000-1239.2017.20170359
In vehicular ad hoc networks (VANETs), geographic routing protocols adapt well to frequent topology changes and unstable link quality. Beacon messages are needed to share the positions of neighboring nodes, so forwarding decisions made in the interval between successive beacons may be inaccurate due to the movement of the vehicle nodes. In this situation, trajectory prediction is needed to correct the positions of the vehicle nodes. Existing prediction algorithms either lack universality or suffer from large prediction errors. To solve these problems, this paper proposes a new trajectory prediction algorithm based on the measurement result that vehicle accelerations obey a normal distribution. The new algorithm uses linear regression for prediction and applies a feedback mechanism to correct errors, and it greatly improves prediction accuracy in several tests on real trajectory traces. The paper then proposes a new position-based instant routing protocol, in which a forwarder uses the predicted positions of the neighboring nodes and the destination node to calculate the next hop. We apply our new trajectory prediction algorithm in instant routing to predict and update vehicle positions in real time. We use SUMO to generate real maps and vehicle trajectory traces, and NS3 to run the simulation. Experimental results show that instant routing with the new trajectory prediction algorithm outperforms the traditional GPSR protocol and instant routing without trajectory prediction in terms of packet delivery ratio and network latency, while remarkably reducing protocol processing overhead.
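The linear-regression step of such a predictor can be sketched in one dimension. This is an illustrative reduction, not the paper's algorithm: the beacon times, positions, and the `predict_position` helper are invented for the example, and the feedback correction is omitted.

```python
import numpy as np

def predict_position(times, positions, t_next):
    """Fit a linear model x = a*t + b to recent beacon samples
    and extrapolate the vehicle position at time t_next."""
    a, b = np.polyfit(times, positions, deg=1)  # least-squares line fit
    return a * t_next + b

# Beacons received at t = 0..3 s while the vehicle moves at roughly 10 m/s.
times = np.array([0.0, 1.0, 2.0, 3.0])
xs = np.array([0.0, 10.1, 19.9, 30.0])
x_pred = predict_position(times, xs, 4.0)  # position estimate between beacons
```

A feedback mechanism in the paper's spirit would compare `x_pred` with the position carried by the next beacon and fold the residual into the following prediction.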
Adaptive Trajectory Prediction for Moving Objects in Uncertain Environment
Xia Zhuoqun, Hu Zhenzhen, Luo Junpeng, Chen Yueyue
2017, 54(11):  2434-2444.  doi:10.7544/issn1000-1239.2017.20170309
Existing trajectory prediction methods have difficulty accurately describing the trajectories of moving objects in complex and uncertain environments. To solve this problem, this paper proposes a self-adaptive trajectory prediction method for moving objects based on a variational Gaussian mixture model (VGMM) in dynamic environments (ESATP). First, building on the traditional Gaussian mixture model, we use approximate variational Bayesian inference to handle the Gaussian mixture distribution during model training. Second, variational Bayesian expectation-maximization iteration is used to learn the model parameters, and prior information is incorporated to obtain a more precise prediction model. Finally, for the input trajectories, a parameter-adaptive selection algorithm automatically adjusts the combination of parameters, including the number of Gaussian mixture components and the segment length. Experimental results show that the ESATP method achieves high predictive accuracy while maintaining high time efficiency. The model can be used in mobile vehicle positioning products.
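The core of fitting a Gaussian mixture can be illustrated with plain expectation-maximization on synthetic 1-D data. Note the paper uses a variational Bayesian variant; this sketch shows only the standard E/M skeleton, and the data and initial values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D samples from two components (stand-ins for trajectory features).
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.8, 300)])

# Standard EM for a 2-component Gaussian mixture.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])          # mixture weights
for _ in range(50):
    # E-step: responsibility of each component for each point.
    dens = pi * np.exp(-(data[:, None] - mu) ** 2 / (2 * sigma ** 2)) \
           / (sigma * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances.
    nk = resp.sum(axis=0)
    pi = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
```

The variational Bayesian version replaces the M-step point estimates with posterior updates over the parameters, which is what lets ESATP fold in prior information.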
Joint Routing and Scheduling in Cognitive Radio Vehicular Ad Hoc Networks
Zhang Huyin, Wang Jing, Tang Xing
2017, 54(11):  2445-2455.  doi:10.7544/issn1000-1239.2017.20170377
Cognitive radio vehicular ad hoc networks (CR-VANETs) have been envisioned to alleviate spectrum scarcity and improve spectrum resource efficiency in vehicle-to-vehicle communication by bringing cognitive radio into vehicular ad hoc networks. Most existing routing protocols for cognitive radio networks or vehicular ad hoc networks cannot be applied to CR-VANETs directly because of the high-speed mobility of vehicles and the dynamically changing availability of cognitive radio channels. At present, routing research on CR-VANETs is relatively scarce: how to utilize spectrum resources effectively while reducing the spectrum consumption caused by routing hops remains an open problem. To meet these demands and challenges, this paper presents a joint routing and scheduling scheme that combines the scheduling of spectrum resources with the goal of minimizing routing hops in CR-VANETs. To achieve this goal, we first establish a network model and a CR spectrum model to predict the contact duration between vehicles and the probability of spectrum availability, and we define the communication link consumption and channel weight from these parameters. We then transform the optimization objective into a routing scheme that minimizes hop count subject to spectrum-resource scheduling constraints, and prove that this routing problem is NP-hard. To tackle it, we compose a hybrid heuristic algorithm from a particle swarm optimization with fast convergence and a genetic algorithm with population diversity. Simulation results demonstrate that our proposal achieves better routing hop counts than other CR-VANET protocols.
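The particle swarm half of such a hybrid heuristic follows a standard skeleton. This sketch minimizes a toy quadratic stand-in for the routing cost; the real objective, the constraint handling, and the genetic-algorithm hybridization from the paper are all omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Toy stand-in for the routing cost (the real objective scores hop
    # count under spectrum-availability and scheduling constraints).
    return np.sum(x ** 2, axis=1)

n, dim = 30, 5
pos = rng.uniform(-5, 5, (n, dim))      # candidate solutions
vel = np.zeros((n, dim))
pbest = pos.copy()                      # per-particle best positions
pbest_val = objective(pos)
gbest = pbest[pbest_val.argmin()].copy()  # swarm-wide best

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Inertia plus attraction toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

A GA layer, as in the paper, would periodically recombine and mutate particles to restore population diversity when the swarm converges prematurely.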
Attribute Based Encryption Method with Revocable Dynamic and Static Attributes for VANETs
He Qian, Liu Peng, Wang Yong
2017, 54(11):  2456-2466.  doi:10.7544/issn1000-1239.2017.20170373
Secure data sharing in vehicular ad hoc networks (VANETs) usually uses a group encryption mode; however, it is difficult to construct groups and manage group keys for vehicular terminals with high mobility. Ciphertext-policy attribute-based encryption (CP-ABE) is a new kind of solution for VANET communication security, but the traditional CP-ABE strategy has several shortcomings, such as high decryption computation complexity, attribute revocation that requires re-encrypting the whole ciphertext, and inflexible construction of the access policy tree. These shortcomings limit the application of CP-ABE in VANETs. To solve these problems, an ABE scheme with revocable dynamic and static attributes (ABE-RDS) is proposed for secure data sharing in VANET cloud storage. In ABE-RDS, dynamic and static attributes are managed separately, a combined policy tree is constructed, and the main decryption part with high computation cost is delegated to servers through a decryption proxy. In addition, the vehicular terminal can revoke attributes and refresh dynamic attributes through global and local trusted authorities. The proposed ABE-RDS is secure and has lower space and time complexity than traditional CP-ABE. Its performance in vehicular terminal decryption, attribute revocation, and system concurrency is evaluated with experiments.
Trajectory Privacy Protection Based on Road Segment Report in VANETs
Wu Xuangou, Wang Pengfei, Zheng Xiao, Fan Xu, Wang Xiaolin
2017, 54(11):  2467-2474.  doi:10.7544/issn1000-1239.2017.20170371
Vehicular ad hoc networks (VANETs) provide techniques and solutions for intelligent transportation, urban planning, pollution reduction and other issues. VANET applications usually require vehicle users to continuously report road location information, which poses a serious threat to personal trajectory privacy. However, existing trajectory protection techniques mainly focus on location-based protection and cannot be applied effectively to road-segment-based trajectory privacy protection. In this paper, we propose a new road segment data gathering framework for VANETs that takes trajectory privacy protection into account. Within this framework, we give a definition of trajectory privacy protection, formulate the problem model of road-segment-based data reporting, and prove that the problem is NP-hard. We also present approximation algorithms to solve the problem. The experimental results show that our algorithms perform well both in protecting users' trajectories and in maintaining the coverage rate of data gathering.
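NP-hard coverage problems of this flavor are commonly approximated greedily. The following is a generic greedy set-cover sketch, not the paper's specific algorithm; the segment IDs and vehicle names are invented for illustration.

```python
def greedy_cover(universe, subsets):
    """Pick reporting vehicles greedily: at each step choose the vehicle
    whose reported road segments cover the most still-uncovered segments."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda v: len(subsets[v] & uncovered))
        if not subsets[best] & uncovered:
            break  # remaining segments cannot be covered by anyone
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen, uncovered

# Road segments to monitor, and the segments each vehicle could report.
segments = {1, 2, 3, 4, 5}
vehicles = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5}}
chosen, left = greedy_cover(segments, vehicles)
```

A privacy-aware variant would additionally constrain how many consecutive segments any single vehicle is asked to report, which is where the paper's formulation departs from plain set cover.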
Emergency Message Broadcast Method Based on Huffman-Like Coding
Wu Libing, Fan Jing, Wang Jing, Nie Lei, Wang Hao
2017, 54(11):  2475-2486.  doi:10.7544/issn1000-1239.2017.20170368
Urban development greatly promotes the application of vehicular ad hoc networks, among which safety-related emergency message broadcast is one of the key research topics. Emergency message broadcast must meet quality-of-service requirements such as low latency, high reliability and high scalability. When selecting the next-hop forwarding node, most existing emergency message broadcasting methods assume that every location in the relay area has an approximately equal probability of being selected and treat nodes at all positions equally; lacking a study of the distribution of optimal node positions, they cannot adapt well to the actual distribution of optimal forwarding nodes. Yet the key to reducing delay in emergency messaging is to quickly determine the appropriate relay forwarding node. Therefore, to further improve the timeliness of emergency message broadcasting and reduce propagation delay, this paper proposes a Huffman-coding-based emergency message broadcasting method. We first analyze the probability distribution of the optimal forwarding nodes on urban roads. Based on this distribution, we then use the principle of Huffman coding to design a fast partition method that quickly selects the optimal relay node, reduces the delay of emergency message broadcast, and improves the speed of emergency message transmission by minimizing the optimal-node selection time. Our simulation results show that the proposed method reduces the delay of emergency message broadcasts in different scenarios by 5.3%~18.0% and improves the speed of emergency message transmission by 8.9%~24.5%.
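The Huffman principle behind such a partition can be shown directly: zones that are more likely to contain the best forwarder receive shorter codes, i.e. they are resolved in fewer partition steps. The zone names and probabilities below are invented for illustration.

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codes for forwarding zones from their probabilities
    of containing the optimal relay node."""
    # Each heap entry: (probability, tie-breaker, {zone: partial code}).
    heap = [(p, i, {zone: ""}) for i, (zone, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least likely subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {z: "0" + c for z, c in c1.items()}
        merged.update({z: "1" + c for z, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Probability of each relay zone holding the best forwarder (illustrative).
codes = huffman_codes({"far": 0.5, "mid": 0.3, "near": 0.2})
```

Each code bit corresponds to one binary partition step, so the expected number of steps to isolate the optimal relay is minimized exactly as the expected Huffman code length is.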
Check Algorithm of Data Integrity Verification Results in Big Data Storage
Xu Guangwei, Bai Yanke, Yan Cairong, Yang Yanbin, Huang Yongfeng
2017, 54(11):  2487-2496.  doi:10.7544/issn1000-1239.2017.20160825
Cloud storage is one of the most widely used applications in cloud computing. It makes it convenient for users to access and share data, yet it also produces data integrity issues such as data corruption and loss. Existing remote data verification algorithms rely on a trusted third party who works as a public verifier to verify the integrity of outsourced data. In this case, the verifier may provide false verification results, so the reliability of data verification cannot be ensured; the situation is even worse when the verifier colludes with the cloud storage providers. In this paper, we propose a check algorithm for incredible verification results in data integrity verification (CIVR) to resist forgery and deception attacks through untrustworthy verification results. We utilize two proofs, an integrity verification proof and an incredible check proof, to execute a cross check: the integrity verification proof verifies whether the data are intact, while the incredible check proof checks whether the verification results themselves are correct. Moreover, the algorithm constructs a check tree to ensure the reliability of verification results. Theoretical analysis and simulation results show that the proposed algorithm can guarantee the reliability of verification results and increase efficiency by improving valid verification.
A High Performance and Reliable Hybrid Host Cache System
Li Chu, Feng Dan, Wang Fang
2017, 54(11):  2497-2507.  doi:10.7544/issn1000-1239.2017.20160793
Modern data centers widely use network storage systems as shared storage solutions. Storage servers typically deploy the redundant array of independent disks (RAID) technique to provide high reliability; e.g., RAID5/6 can tolerate one/two disk failures. Compared with traditional hard disk drives (HDDs), solid-state drives (SSDs) have lower access latency but a higher price, and as a result, client-side SSD-based caching has gained more and more popularity. A write-back policy can significantly accelerate storage I/O performance, but it fails to ensure data consistency and durability under SSD failures; a write-through policy simplifies the consistency model, but fails to accelerate write accesses. In this paper, we design and implement a new hybrid host cache (HHC). HHC selectively stores mirrored dirty cache blocks in HDDs in a log-structured manner, and uses write barriers to guarantee data consistency and durability. Through reliability analysis, we show that the HHC layer has a much longer mean time to data loss (MTTDL) than the corresponding back-end storage array. In addition, we implement a prototype of HHC and evaluate its performance against competing policies using Filebench. The experimental results show that under various workloads, HHC achieves performance comparable to the write-back policy and significantly outperforms the write-through policy.
DCuckoo: An Efficient Hash Table with On-Chip Summary
Jiang Jie, Yang Tong, Zhang Mengyu, Dai Yafei, Huang Liang, Zheng Lianqing
2017, 54(11):  2508-2515.  doi:10.7544/issn1000-1239.2017.20160795
Hash tables are extensively used in many computer-related areas because of their efficient query and insertion operations. However, hash tables have two disadvantages: collisions and memory inefficiency. To address these, a minimal perfect hash table (MPHT) uses N locations to store N incoming elements, but it does not support incremental updates. Therefore, in this paper, combining Cuckoo hashing and d-left hashing, we propose a novel hash table architecture called DCuckoo, which ensures fast query speed, fast worst-case update speed, efficient memory utilization and dynamic capacity change. In DCuckoo, multiple sub-tables and Cuckoo hashing's mechanism of relocating existing elements are used to improve the load factor. Pointers, except for those in the last sub-table, are eliminated to reduce wasted space. In addition, to optimize query performance, fingerprints and bitmaps are kept as a summary in on-chip memory to reduce off-chip memory accesses: a bucket is probed only if the corresponding fingerprint matches in on-chip memory. We conduct a series of experiments to compare the performance of DCuckoo with five other hash table schemes. The results demonstrate that DCuckoo eliminates the shortcomings of both Cuckoo hashing and d-left hashing, and thus achieves all four design goals.
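The fingerprint-summary access pattern can be sketched with a toy single-slot table. This is not DCuckoo itself (no sub-tables, no cuckoo relocation); the class and field names are invented, and on-chip/off-chip memory is only simulated by two Python lists.

```python
import hashlib

class FingerprintTable:
    """Toy table: a small 'on-chip' fingerprint summary is checked first,
    and the 'off-chip' bucket is probed only on a fingerprint match."""
    def __init__(self, n_buckets=64):
        self.n = n_buckets
        self.fingerprints = [None] * n_buckets  # simulated on-chip summary
        self.buckets = [None] * n_buckets       # simulated off-chip storage
        self.offchip_probes = 0                 # count expensive accesses

    def _hash(self, key):
        h = hashlib.md5(key.encode()).digest()
        return h[0] % self.n, h[1]              # bucket index, 8-bit fingerprint

    def insert(self, key, value):
        i, fp = self._hash(key)
        self.fingerprints[i] = fp
        self.buckets[i] = (key, value)

    def lookup(self, key):
        i, fp = self._hash(key)
        if self.fingerprints[i] != fp:
            return None                         # resolved entirely on-chip
        self.offchip_probes += 1                # only now touch off-chip memory
        entry = self.buckets[i]
        return entry[1] if entry and entry[0] == key else None

t = FingerprintTable()
t.insert("flow-1", "portA")
hit = t.lookup("flow-1")
miss = t.lookup("flow-2")
```

Most negative lookups never reach the off-chip probe, which is the memory-access saving the on-chip summary in DCuckoo is designed around.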
Improving Cloud Platform Based on the Runtime Resource Capacity Evaluation
Zhou Mosong, Dong Xiaoshe, Chen Heng, Zhang Xingjun
2017, 54(11):  2516-2533.  doi:10.7544/issn1000-1239.2017.20160700
There is a mismatch between computing resource supply and demand in cloud platform resource management, which leads to performance degradation. This paper establishes a runtime evaluation model of available computing resource capacity based on similar tasks. The model exploits a characteristic of cloud workloads, namely that similar tasks share the same execution logic, and evaluates available computing resource capacity from those similar tasks, avoiding the resource consumption of running benchmarks. Applying this model, the paper proposes a computing resource capacity evaluation method called RCE, which considers many factors and evaluates runtime available capacity by resource type, obtaining accurate evaluation results in a timely manner at little cost. We apply RCE results in several algorithms to match computing resource supply with demand and improve cloud platform performance. We test the RCE method and the algorithms based on it in dedicated and real cloud computing environments. The results show that the RCE method produces runtime evaluation results in time and that these results accurately reflect available computing resource capacity. Moreover, the RCE method effectively supports the optimization of algorithms and the platform, and the algorithms based on RCE resolve the mismatch between resource supply and demand and significantly improve the performance of the cloud computing platform.
A Sliced Multi-Rail Interconnection Network for Large-Scale Clusters
Shao En, Yuan Guojun, Huan Zhixuan, Cao Zheng, Sun Ninghui
2017, 54(11):  2534-2546.  doi:10.7544/issn1000-1239.2017.20151069
In large-scale clusters, the design of the interconnection network faces great challenges. First, the increasing computing capacity of a single node requires the network to provide higher bandwidth and lower latency. Second, the increasing number of nodes requires the network to scale extremely well. Third, the increasing system scale degrades the performance of collective communication, which harms the performance and scalability of applications. Fourth, the increasing number of devices requires better network reliability. As the performance of computing nodes keeps increasing, the interconnection network has gradually become the bottleneck of large-scale computing systems, yet the switch chip, the core component of the interconnection network, can offer only limited aggregate bandwidth because of the constraints of physical processes and packaging technologies. Through co-design of the network architecture and the switch micro-architecture, this paper proposes a sliced multi-rail network architecture for a given aggregate bandwidth. Through mathematical modeling and network simulation, we study the performance boundaries of the sliced multi-rail network. Evaluation results show that the average latency of short messages (less than 128B) can be improved by more than 10 times.
Self-Adaptive Streaming Big Data Learning Algorithm Based on Incremental Tangent Space Alignment
Tan Chao, Ji Genlin, Zhao Bin
2017, 54(11):  2547-2557.  doi:10.7544/issn1000-1239.2017.20160712
Manifold learning aims to find low-dimensional embeddings of observed data in a high-dimensional space. As an effective nonlinear dimension reduction method, it has been widely applied in machine learning fields such as data mining and pattern recognition. However, when processing large-scale data streams, the time complexity of many traditional manifold learning algorithms, including out-of-sample learning, incremental learning and online learning algorithms, is too high. This paper presents a novel self-adaptive learning algorithm based on incremental tangent space alignment (SLITSA) for big data stream processing. SLITSA adopts incremental PCA to construct the subspace incrementally, and can detect the intrinsic low-dimensional manifold structure of data streams online or incrementally. To ensure convergence and reduce the reconstruction error, it can also construct a new tangent space for adjustment during the iterative process. Experiments on artificial and real data sets show that the classification accuracy and time efficiency of the proposed algorithm are better than those of other manifold learning algorithms, and that it can be extended to streaming data and real-time big data analytics.
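The incremental-PCA ingredient can be illustrated with a streaming mean/covariance update. This is a generic Welford-style subspace tracker on synthetic 2-D data, not the paper's SLITSA update; the class and variable names are invented for the example.

```python
import numpy as np

class StreamingPCA:
    """Maintain mean and covariance incrementally over data chunks, then
    read principal directions off the covariance matrix."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))  # running scatter (M2) matrix

    def partial_fit(self, chunk):
        for x in chunk:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            # Welford-style rank-1 update of the scatter matrix.
            self.cov += np.outer(delta, x - self.mean)

    def components(self, k):
        vals, vecs = np.linalg.eigh(self.cov / max(self.n - 1, 1))
        return vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors

rng = np.random.default_rng(0)
# Stream of 2-D points varying mostly along the x-axis.
stream = rng.normal(0, [3.0, 0.3], size=(1000, 2))
pca = StreamingPCA(2)
for chunk in np.array_split(stream, 10):  # data arrives chunk by chunk
    pca.partial_fit(chunk)
pc1 = pca.components(1)[:, 0]
```

SLITSA goes further by aligning the per-point tangent spaces derived from such local subspaces, but the streaming statistics above are the part that keeps the cost per new sample constant.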
Random Search Learning Algorithm of BN Based on Super-Structure
Lü Yali, Wu Jiajie, Liang Jiye, Qian Yuhua
2017, 54(11):  2558-2566.  doi:10.7544/issn1000-1239.2017.20160715
Bayesian networks (BNs) play a vital role in knowledge representation and probabilistic inference, and BN structure learning is crucial to research on BN inference. However, most two-stage hybrid methods for learning BN structure have two disadvantages: in the first stage, when the super-structure is learned, edges with weak relationships are easily lost; in the second stage, hill-climbing search easily falls into local optima. To avoid both, the super-structure of the BN is first learned by the Opt01ss algorithm, which misses as few edges as possible. Then, based on the super-structure, three search operators are given to analyze the random selection rule for the initial network and to define a random optimization strategy for it, and the SSRandom algorithm for learning BN structure is proposed. The algorithm offers a good way to escape local optima to a certain extent. Finally, the learning performance of the proposed SSRandom algorithm is verified by experiments on the standard Survey, Asia and Sachs networks, in comparison with three other hybrid algorithms on four evaluation indexes: sensitivity, specificity, Euclidean distance and the percentage of overall accuracy.
The Category Representation of Machine Learning Algorithm
Xu Xiaoxiang, Li Fanzhang, Zhang Li, Zhang Zhao
2017, 54(11):  2567-2575.  doi:10.7544/issn1000-1239.2017.20160350
For a long time, representation has been regarded as one of the bottleneck problems in machine learning: the performance of machine learning methods depends heavily on the choice of data representation. The rapidly developing field of representation learning is concerned with how we can best learn meaningful and useful representations of data. We take a broad view of the field, including topics such as deep learning, feature learning, metric learning, compositional modeling, structured prediction and reinforcement learning. The range of domains to which these techniques apply is also very broad, from vision to speech recognition and text understanding. Research on new representation methods for machine learning is therefore long-term, exploratory and meaningful work. On this basis, we propose several basic concepts for the category representation of machine learning methods via category theory. We analyze decision trees, support vector machines, principal component analysis and deep neural networks in terms of category representation and give the corresponding representation for each algorithm: the category representation of the decision tree, the slice category representation of the support vector machine, and the functor representation of the neural network. We also give the corresponding theoretical proofs and feasibility analysis. Through further research on the category representation of machine learning algorithms, we find an essential relationship between support vector machines and principal component analysis. Finally, we confirm the feasibility of the category representation method with simulation experiments.
Calculate Semantic Similarity Based on Large Scale Knowledge Repository
Zhang Libo, Sun Yihan, Luo Tiejian
2017, 54(11):  2576-2585.  doi:10.7544/issn1000-1239.2017.20160578
With the continuous growth of the total of human knowledge, semantic analysis based on the structured big data generated by humans is becoming more and more important in application fields such as recommender systems and information retrieval, where calculating semantic similarity is a key problem. Previous studies achieved certain breakthroughs by applying large-scale knowledge repositories, represented by Wikipedia, but the path information in Wikipedia was not fully utilized. In this paper, we summarize and analyze previous algorithms for evaluating semantic similarity based on Wikipedia. On this foundation, we present a bilateral shortest-path algorithm that evaluates the similarity between words and between texts in a way modeled on how human beings think, so that it can take full advantage of the path information in the knowledge repository. We extract from Wikipedia the hyperlink structure among nodes, whose granularity is finer than that of articles, then verify the universal connectivity of Wikipedia and evaluate the average shortest path between any two articles. The presented algorithm evaluates word similarity and text similarity on public datasets, and the results indicate its strong effectiveness. At the end of the paper, the advantages and disadvantages of the proposed algorithm are summarized, and directions for follow-up study are proposed.
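A bilateral (bidirectional) shortest-path search can be sketched as two breadth-first frontiers that stop when they meet. The tiny link graph below is an invented stand-in for the Wikipedia hyperlink structure and is treated as undirected for simplicity; the similarity formula is illustrative, not the paper's exact measure.

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """Bidirectional BFS: expand frontiers from both endpoints and
    return the path length where they first meet."""
    if src == dst:
        return 0
    dist_s, dist_d = {src: 0}, {dst: 0}
    q_s, q_d = deque([src]), deque([dst])
    while q_s and q_d:
        # Alternate one BFS layer from each side.
        for q, dist, other in ((q_s, dist_s, dist_d), (q_d, dist_d, dist_s)):
            for _ in range(len(q)):
                node = q.popleft()
                for nb in graph.get(node, []):
                    if nb in other:                   # frontiers met
                        return dist[node] + 1 + other[nb]
                    if nb not in dist:
                        dist[nb] = dist[node] + 1
                        q.append(nb)
    return None  # disconnected

# Toy undirected stand-in for the Wikipedia hyperlink graph.
links = {
    "cat": ["mammal"],
    "mammal": ["cat", "dog", "animal"],
    "dog": ["mammal"],
    "animal": ["mammal"],
}
d = shortest_path_len(links, "cat", "dog")
similarity = 1.0 / (1 + d) if d is not None else 0.0
```

Expanding from both endpoints roughly halves the search depth, which matters on a graph the size of Wikipedia.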
Rumor Propagation Analysis Model Inspired by Gravity Theory for Online Social Networks
Tan Zhenhua, Shi Yingcheng, Shi Nanxiang, Yang Guangming, Wang Xingwei
2017, 54(11):  2586-2599.  doi:10.7544/issn1000-1239.2017.20160434
The influence of rumor propagation in online social networks (OSNs) can do great damage to social life, and discovering rumor propagation patterns has become a hot topic. Traditional epidemic-like rumor propagation models based on SIR are generally too coarse-grained for OSNs and do not fully consider their features, such as the personalization of users' behavior and the attributes of the information itself. Inspired by gravity theory, this paper proposes a novel rumor propagation analysis model named the gravity-inspired rumor propagation model (GRPModel), and tries to find a new pattern of rumor propagation from the perspectives of both user properties and rumor attributes. In GRPModel, user influence and rumor influence are modeled mathematically from user relations and information attributes, fully accounting for their personalized features. We collect real experimental data from Sina Weibo, a famous OSN in China, and investigate the features of users and real rumors. Experiments prove the model's effectiveness and efficiency.
Dual Fine-Granularity POI Recommendation on Location-Based Social Networks
Liao Guoqiong, Jiang Shan, Zhou Zhiheng, Wan Changxuan
2017, 54(11):  2600-2610.  doi:10.7544/issn1000-1239.2017.20160502
Point-of-interest (POI) recommendation is a popular new form of recommendation in location-based social networks (LBSNs). Utilizing the rich information contained in an LBSN for personalized recommendation can effectively enhance user experience and strengthen users' reliance on the LBSN. Facing the challenges in LBSNs, such as the absence of explicit user preferences, non-consistency of interest and data sparseness, a dual fine-granularity POI recommendation strategy is proposed: on the one hand, each user's historical check-in information is divided into 24 hourly time periods; on the other hand, each POI is divided into a number of latent topics and their distribution. Both users' check-in and comment information are used to mine user topic preferences in different time periods for Top-N recommendation of POIs. To realize this idea, first, according to the comment information on visited POIs, we use the LDA topic generation model to extract the topic distribution of each POI. Second, for each user, we divide the check-in data into 24 time periods and connect them with the topic distributions of the corresponding POIs to map the user's interest preference for each topic in different periods. Finally, to address data sparseness, we use a higher-order singular value decomposition algorithm to decompose the user-topic-time third-order tensor and obtain more accurate interest scores of users for each topic in all time periods. Experiments on a real dataset show that the proposed approach outperforms state-of-the-art POI recommendation methods.
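The higher-order SVD (HOSVD) step can be sketched with NumPy: unfold the tensor along each mode, take the left singular vectors as factor matrices, and contract them with the tensor to get the core. The toy random tensor stands in for the user-topic-time scores; a real use would truncate the factors to smooth the sparse entries rather than reconstruct exactly.

```python
import numpy as np

def unfold(tensor, mode):
    """Matricize the tensor along the given mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-order SVD: mode-wise SVDs give the factor matrices; the
    core is the tensor contracted with their transposes."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

# Toy user x topic x time score tensor (illustrative, not the paper's data).
rng = np.random.default_rng(0)
scores = rng.random((4, 3, 5))
core, factors = hosvd(scores)

# With full-rank factors, applying them back reconstructs the tensor exactly.
recon = core
for m, u in enumerate(factors):
    recon = np.moveaxis(
        np.tensordot(u, np.moveaxis(recon, m, 0), axes=1), 0, m)
```

Truncating each factor matrix to a few leading columns yields a low-rank approximation whose filled-in entries serve as the smoothed interest scores.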
Dynamic Social Network Community Detection Algorithm Based on Hidden Markov Model
Yi Peng, Zhou Qiao, Men Haosong
2017, 54(11):  2611-2619.  doi:10.7544/issn1000-1239.2017.20160741
With the continuous development of the Internet, most social networks have gradually demonstrated dynamic characteristics, and dynamic analysis of social network communities is very important for understanding the structure and function of social networks in real life. The HMM_DC algorithm (hidden-Markov-model-based dynamic community detection) is proposed to detect communities in dynamic social networks. The algorithm transforms community detection into finding the optimal state chain in a hidden Markov model (HMM), considering the historical information and characteristics of dynamic social networks: it uses the observation chain and the state chain to represent node information and community structure, and can identify the community structure without extra information. Finally, this algorithm and three other algorithms are compared in simulation experiments on the VAST, ENRON and Facebook social network datasets. Experimental results show that HMM_DC identifies community structure in dynamic social networks effectively and accurately, and that its Q and NMI values are greatly improved compared with the other three algorithms.
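Finding the optimal hidden state chain is the classic Viterbi decoding problem, which can be sketched directly. The two hidden states stand in for communities and the emissions for observed interaction types; all parameters below are invented toy values, not the paper's model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability, path) of the most likely hidden state chain
    for an observation sequence."""
    # V[t][s] = (best probability of ending in s at step t, path so far).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1])
                for ps in states)
            V[-1][s] = (prob, path + [s])
    return max(V[-1].values())

# Two "communities" as hidden states; observed interaction types as emissions.
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.3, "B": 0.7}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
prob, path = viterbi(["x", "x", "y"], states, start, trans, emit)
```

The decoded chain assigns each time step to the community whose emission model best explains the observed interactions, given the transition history.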
Detect-Defray Mechanism Based Motivation Scheme for Selfish Nodes of Network Coding
Zhang Xiaoyu, Shang Tao, Liu Jianwei
2017, 54(11):  2620-2627.  doi:10.7544/issn1000-1239.2017.20160777
Network coding is a new technology in the field of data transmission. In network coding, nodes are allowed to encode the data received from different nodes, so bandwidth utilization and network throughput can be improved. However, if there are selfish nodes in the network, they will delay or refuse coding and forwarding, which increases the delay of network coding and reduces communication efficiency; when the problem gets serious, network coding breaks down and communication chaos arises. To solve this problem, a motivation scheme based on a detect-defray mechanism for selfish nodes in network coding is proposed. In this scheme, the delay of nodes in encoding and forwarding is first detected and taken as the cost of network coding. Then the defrayment part operates: network coding is regarded as a kind of economic behavior in which source nodes are consumers and intermediate nodes are providers, and the consumers pay remuneration to the providers. Since intermediate nodes benefit from network coding when providing services, their enthusiasm for network coding is improved and the delay of network coding is reduced. Scheme analysis shows that the motivation scheme can reduce the delay of network coding, control the nodes' selfishness effectively, and thereby improve the effectiveness of network coding.
A Flow Table Usage-Aware QoS Routing Mechanism in Software Defined VANET
Fu Bin, Zha Lijia, Li Renfa, Xiao Xiongren
2017, 54(11):  2628-2638.  doi:10.7544/issn1000-1239.2017.20160922
VANETs can provide a wide range of safety-related and non-safety-related services, but existing VANETs find it difficult to guarantee the QoS of these services. Software-defined networking (SDN) can control the network flexibly and separates the data plane from the control plane, bringing programmability to the network. First, this paper designs a software-defined VANET architecture for heterogeneous multi-network access. Second, a flow-table usage-aware dynamic QoS provisioning framework is proposed, which allows us to manage the network in a modular way and supports the dynamic entry and exit of service flows. Finally, this paper establishes a flow-table usage-aware QoS routing model with multiple service flows and multiple constraints. The model considers not only link-state parameters such as packet loss, delay and throughput, but also the service requirements and the flow table usage, and provides concurrent dynamic QoS routing for VANET application services. Experiments show that the proposed QoS routing mechanism not only meets the services' packet loss, latency and throughput requirements, but is also capable of perceiving flow table usage so as to avoid the influence of flow table overflow on QoS routing, further improving network QoS performance.
Output Feedback Control Based on Event-Based Sample in Wireless Sensor Networks
Xie Chenghan, Lu Saijie, Wang Hao, Peng Li
2017, 54(11):  2639-2645.  doi:10.7544/issn1000-1239.2017.20160643
With the development of sensors, actuators and wireless network technology, wireless sensor networks have enabled a series of new applications in the past decade. However, excessive energy consumption and bandwidth occupancy are great challenges in these applications because of wireless channel transmission; in general, transmission accounts for 90% of a node's total battery energy, so studying energy saving in nodes' data transmission has great practical significance. In this paper, we consider the problem of feedback control with limited actuation and transmission rate. We study output feedback control based on an innovative event-triggered transmission scheme in a class of linear time-invariant discrete systems. A good trade-off between actuator performance and communication rate can be achieved with this transmission policy, which decides the transfer time of each data packet. The transmission strategy is designed by proving an upper bound on the system performance, and the corresponding output feedback control gain matrix is then calculated in detail. Finally, a numerical example is given to verify the potential and effectiveness of this theoretical transmission scheme.
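The transmission-saving idea behind event triggering can be shown with a minimal send-on-delta rule: a sample is transmitted only when it deviates from the last transmitted value by more than a threshold. This illustrates only the trigger, not the paper's output-feedback gain design; the sample values and threshold are invented.

```python
def event_triggered_send(samples, threshold):
    """Return the samples that would actually be transmitted under a
    send-on-delta event trigger."""
    last_sent = None
    sent = []
    for x in samples:
        # Transmit only on the first sample or a large enough deviation.
        if last_sent is None or abs(x - last_sent) > threshold:
            sent.append(x)       # radio transmission happens here
            last_sent = x
    return sent

samples = [0.0, 0.05, 0.12, 0.5, 0.52, 1.0]
sent = event_triggered_send(samples, threshold=0.3)
```

Only 3 of the 6 samples are transmitted; the controller holds the last received value in between, which is exactly the communication-rate versus performance trade-off the scheme analyzes.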