2016 Vol. 53 No. 4
2016, 53(4): 729-741.
DOI: 10.7544/issn1000-1239.2016.20151146
Abstract:
“Internet+” is the sublimation and development of the Internet: it aims to promote the Internet’s deep integration with the economy and society so as to propel economic and social innovation and development. Under “Internet+”, the Internet serves not only as a kind of information infrastructure, but more importantly as an innovation element for improving the production, trade and management of economic and social entities. Against this background, this paper analyzes the origin and meaning of “Internet+” and describes the national actions taken to push it forward. Various new networking paradigms suitable for “Internet+” are then presented, covering the promotion of massive information interconnection and access, the improvement of network management and performance, support for convenient network access and interaction, and adaptation to the integration of industrialization and informationization as well as production orientation. We then discuss the significant challenges “Internet+” faces in networking scalability, heterogeneity, performance and security, as well as in networked applications. Finally, we draw some conclusions.
2016, 53(4): 742-751.
DOI: 10.7544/issn1000-1239.2016.20151143
Abstract:
Internet plus TV platforms tend to consume excessive storage space to achieve a higher cache hit ratio. A novel cache scheduling algorithm called PPRA (popularity prediction replication algorithm), based on program popularity forecasting, is proposed in this paper. Firstly, guided by statistical analysis of actual measurements, we apply the random forests (RF) algorithm to construct a forecasting model of program popularity. Subsequently, we use principal component analysis (PCA) to overcome the curse of dimensionality and accelerate forecasting. Finally, we validate PPRA with authentic behavior data of 1.3 million users of a cable operator over a period of 120 days. Our experimental results show that PPRA consumes only 30% of the storage space required by the LRU and LFU algorithms to achieve a given cache hit ratio, thereby reducing the cost of an Internet plus TV platform.
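A minimal sketch of the forecasting step, assuming scikit-learn and synthetic data: PCA compresses the per-program history features before a random-forest regressor predicts future popularity, and the programs with the highest forecasts are chosen for replication. The feature layout, sizes and cache capacity below are hypothetical, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 40))                            # per-program history features (hypothetical)
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 500)   # future-popularity proxy

# PCA fights the curse of dimensionality before the random forest.
model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=100, random_state=0))
model.fit(X, y)

# Replicate the programs forecast to be most popular into the cache.
cache_size = 50
predicted = model.predict(X)
cached = np.argsort(predicted)[::-1][:cache_size]
print("programs selected for caching:", cached[:10])
```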
Abstract:
With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important means of helping people discover attractive locations. However, most existing POI recommendation models on LBSNs improve recommendation quality by exploiting users’ check-in history and contextual information (e.g., geographical information and social correlations), while ignoring the review texts that accompany rating information. In reality, users check in at only a few POIs in an LBSN, which makes the user-POI check-in records and contextual information highly sparse and poses a big challenge for POI recommendation. To tackle this challenge, a novel POI recommendation model called GeoSoRev is proposed in this paper; it combines users’ preferences for POIs with geographical information, social correlations and review texts on the basis of the classic matrix-factorization-based recommendation model. Experimental results on two real-world datasets collected from Foursquare show that GeoSoRev achieves significantly better precision and recall than other state-of-the-art POI recommendation models.
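The matrix-factorization core that GeoSoRev extends can be sketched as follows. This toy version includes only a social regularization term that pulls a user's factors toward the average of his friends' factors; the paper's geographical and review-text terms are omitted, and all sizes, learning rates and the synthetic check-in data are illustrative.

```python
import numpy as np

n_users, n_pois, k = 100, 200, 8
rng = np.random.default_rng(1)
U = rng.normal(0, 0.1, (n_users, k))   # user latent factors
V = rng.normal(0, 0.1, (n_pois, k))    # POI latent factors
checkins = [(rng.integers(n_users), rng.integers(n_pois), 1.0) for _ in range(1000)]
friends = {u: rng.integers(0, n_users, 3) for u in range(n_users)}  # toy social links

lr, lam, beta = 0.02, 0.05, 0.1        # step size, L2 weight, social weight
for epoch in range(20):
    for u, p, r in checkins:
        err = r - U[u] @ V[p]          # squared-error SGD step
        Uu = U[u].copy()
        U[u] += lr * (err * V[p] - lam * U[u])
        V[p] += lr * (err * Uu - lam * V[p])
    for u in range(n_users):           # social term: pull users toward friends
        U[u] += lr * beta * (U[friends[u]].mean(axis=0) - U[u])

score = U @ V.T                        # higher score = stronger predicted preference
print("top-5 POIs for user 0:", np.argsort(score[0])[::-1][:5])
```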
2016, 53(4): 764-775.
DOI: 10.7544/issn1000-1239.2016.20151079
Abstract:
The wide spread and pervasion of social networks bring both new opportunities and novel problems for research on signed networks, in which link prediction is one of the key problems. Interactional opinions and status theory help explain the construction and sign of link relations, and provide theoretical principles for improving prediction quality. This paper therefore investigates link prediction in signed networks from the perspective of interactional opinions and status theory, and builds a link prediction model by studying the strong correlation between these two factors and link relationships. Firstly, it explores interactional opinions to enhance the reliability of the factorized matrix and to make up for the limitations of status theory. Then it models interactional opinions as a reliability-enhancing factor of the matrix, and models status theory as regularization terms. Finally, we construct the link prediction model for signed networks, namely MF-SI. Experimental results demonstrate that MF-SI achieves the best prediction quality compared with other baseline methods, showing that integrating interactional opinions with status theory effectively realizes link prediction in signed networks.
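A toy sketch of sign prediction by matrix factorization with a status-style regularizer: a positive edge from u to v nudges v's scalar status above u's, loosely mirroring status theory. The loss form, the handling of the interactional-opinion reliability factor (omitted here) and all parameters are assumptions, not the paper's MF-SI formulation.

```python
import numpy as np

n, k = 50, 4
rng = np.random.default_rng(2)
P = rng.normal(0, 0.1, (n, k))     # source-role factors
Q = rng.normal(0, 0.1, (n, k))     # target-role factors
s = np.zeros(n)                    # scalar status score per node
edges = [(rng.integers(n), rng.integers(n), rng.choice([-1.0, 1.0]))
         for _ in range(400)]

lr, lam, gamma = 0.05, 0.01, 0.1
for _ in range(50):
    for u, v, sign in edges:
        pred = np.tanh(P[u] @ Q[v])
        g = (sign - pred) * (1 - pred ** 2)    # tanh-link gradient
        Pu = P[u].copy()
        P[u] += lr * (g * Q[v] - lam * P[u])
        Q[v] += lr * (g * Pu - lam * Q[v])
        # status-style regularizer: positive edges should point "uphill"
        if sign * (s[v] - s[u]) < 1:
            s[v] += lr * gamma * sign
            s[u] -= lr * gamma * sign

u0, v0, sign0 = edges[0]
print("predicted sign:", np.sign(P[u0] @ Q[v0]), "actual:", sign0)
```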
2016, 53(4): 776-784.
DOI: 10.7544/issn1000-1239.2016.20151172
Abstract:
Social networks built from paper co-authorship have recently received a great deal of attention, but they suffer from inaccurate entity recognition, failure to update data in time, and uncertain data quality. In view of this, this paper builds the cooperation network from historical project applications instead, and reduces the entity recognition problem to a clustering problem. The computational complexity of the problem is proved, an algorithm is proposed to solve it, and the efficiency of the algorithm is verified by experiments on real data.
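As a rough illustration of reducing entity recognition to clustering, the sketch below greedily merges applicant records whose name and affiliation similarities exceed a threshold. The similarity measure, weights, threshold and sample records are hypothetical stand-ins for the paper's own definitions.

```python
from difflib import SequenceMatcher

records = [
    ("Zhang Wei", "Tsinghua Univ."),
    ("Zhang W.", "Tsinghua University"),
    ("Li Na", "Peking Univ."),
    ("Zhang Wei", "Tsinghua University"),
]

def similar(a, b, threshold=0.6):
    # Blend of string similarities over the name and the affiliation.
    name_sim = SequenceMatcher(None, a[0], b[0]).ratio()
    org_sim = SequenceMatcher(None, a[1], b[1]).ratio()
    return 0.5 * name_sim + 0.5 * org_sim >= threshold

# Greedy single-link clustering: put a record into the first cluster
# that already contains a similar record, otherwise start a new cluster.
clusters = []
for rec in records:
    for cl in clusters:
        if any(similar(rec, other) for other in cl):
            cl.append(rec)
            break
    else:
        clusters.append([rec])

print(clusters)   # the "Zhang Wei" variants end up in one cluster
```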
2016, 53(4): 785-797.
DOI: 10.7544/issn1000-1239.2016.20151134
Abstract:
In a cloud computing environment, a service provider (SP) can pay an infrastructure provider (InP) for resources on demand to deploy its services. The SP can thus focus on its service business without maintaining physical infrastructure or the expertise to operate it. Traditional InPs, which provide resources only in terms of virtual machines, ensure neither network performance nor bandwidth isolation. With the development of network virtualization, and especially of SDN, some researchers advocate that InPs provide resources in terms of virtual data centers (VDCs) to overcome these limits. Despite the many advantages of VDCs, they raise a new challenge: the VDC embedding problem, which is known to be NP-hard and which, under the goals of minimal cost and maximal revenue, concerns allocating resources to fulfill the SPs’ requirements. Considering the tradeoff between VDC reliability and embedding cost, a VDC embedding algorithm based on topological potential and modularity is proposed to improve the acceptance ratio and the InP’s revenue. Moreover, we further optimize the algorithm with a given threshold by selecting VDCs with high revenue-cost ratios. Extensive simulations show that, compared with existing algorithms, our approach reduces core bandwidth consumption in the data center, accepts more VDCs, and obtains more revenue.
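One common form of topological potential, which a node-ranking step in such an embedding algorithm could use, sums Gaussian-decayed contributions from nearby nodes: phi(v) = sum over u of exp(-(d(v,u)/sigma)^2). The sketch below assumes networkx and an illustrative random graph; the paper's exact potential function and its modularity step are not reproduced.

```python
import math
import networkx as nx

G = nx.erdos_renyi_graph(30, 0.15, seed=3)
sigma = 1.5   # controls how quickly influence decays with hop distance

def topological_potential(G, v, sigma):
    # Contributions beyond a few hops are negligible, so cut off the BFS.
    dists = nx.single_source_shortest_path_length(G, v, cutoff=3)
    return sum(math.exp(-(d / sigma) ** 2) for u, d in dists.items() if u != v)

potential = {v: topological_potential(G, v, sigma) for v in G}
ranked = sorted(potential, key=potential.get, reverse=True)
print("highest-potential nodes:", ranked[:5])
```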
2016, 53(4): 798-810.
DOI: 10.7544/issn1000-1239.2016.20151163
Abstract:
In domains such as the energy Internet and smart cities, massive numbers of smart devices collect large amounts of data every day, and traditional enterprises need to perform extensive multi-dimensional analysis on these data to support decision-making. Recently, these enterprises have tried to solve the big data problem with technologies from Internet companies, such as Hadoop and Hive. However, Hive has limited multi-dimensional indexing ability and cannot satisfy the high-performance analysis requirements of traditional enterprises. In this paper, we propose DGFIndex, a distributed-grid-file-based multi-dimensional index, to improve the multi-dimensional query performance of Hive. DGFIndex requires the user to specify a splitting policy when creating the index, which is not trivial for users unfamiliar with the data and query patterns. To solve this, we propose a novel MapReduce cost model that measures DGFIndex-based query performance under a specific splitting policy, and a two-phase simulated annealing algorithm that searches for a suitable splitting policy for DGFIndex, thereby decreasing the total cost of the query set. The experimental results show that DGFIndex improves query performance by 50%~114% over Hive’s original Compact Index. For a static query set, compared with a manually specified partition policy, our algorithm chooses a suitable interval size for each index dimension and decreases the cost of the query set by up to 30%.
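A single-phase sketch of the simulated-annealing search over splitting policies; cost() is a hypothetical stand-in for the paper's MapReduce cost model, which would estimate the query set's running time for a candidate set of interval sizes, and the paper's two-phase refinement is omitted. Interval sizes, moves and the cooling schedule are illustrative.

```python
import math, random

random.seed(4)
DIMS = 3

def cost(policy):
    # Toy convex surrogate: pretend each dimension has a sweet-spot interval.
    return sum((p - best) ** 2 for p, best in zip(policy, (10, 40, 25)))

def neighbor(policy):
    # Perturb the interval size of one randomly chosen dimension.
    p = list(policy)
    i = random.randrange(DIMS)
    p[i] = max(1, p[i] + random.choice((-5, -1, 1, 5)))
    return tuple(p)

policy = (50, 50, 50)              # initial interval size per dimension
T, alpha = 100.0, 0.95
while T > 0.1:
    cand = neighbor(policy)
    delta = cost(cand) - cost(policy)
    if delta < 0 or random.random() < math.exp(-delta / T):
        policy = cand              # accept improvements and some uphill moves
    T *= alpha
print("chosen splitting policy:", policy, "cost:", cost(policy))
```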
2016, 53(4): 811-823.
DOI: 10.7544/issn1000-1239.2016.20151150
Abstract:
Data delivery in traditional vehicular ad hoc networks (VANETs) based on dedicated short range communication (DSRC) can hardly meet transmission quality-of-service (QoS) requirements. Data transmission through mobile gateways can extend the broadcast area and significantly reduce remote transmission delay. This paper proposes a novel VANET architecture, and a corresponding data delivery method, that incorporates the idea of mobile cloud computing. We first describe the registration procedure for gateway servers (GWSs). Then, by jointly considering historical data and real-time information, we propose a cloud-based GWS selection method that dynamically decides the participating GWSs and their service areas. After acquiring service information from GWSs, a gateway consumer (GWC) chooses the optimal GWS from its GWS list by jointly considering communication load, link stability, channel quality, etc., and transmits its data to the selected GWS, which then forwards the data to the cloud. Simulations of different scenarios in OMNeT++, together with mathematical analysis, demonstrate that the proposed method achieves lower transmission delay and a higher delivery success ratio.
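The GWC-side choice can be pictured as scoring each candidate GWS by a weighted combination of its metrics and picking the best, as in this sketch. The metrics, weights and candidates are illustrative assumptions; the paper's actual selection criteria are richer.

```python
# Candidate gateway servers as seen by one gateway consumer (illustrative).
candidates = [
    {"id": "gws1", "load": 0.7, "stability": 0.9, "channel": 0.6},
    {"id": "gws2", "load": 0.3, "stability": 0.8, "channel": 0.8},
    {"id": "gws3", "load": 0.5, "stability": 0.6, "channel": 0.9},
]
weights = {"load": -0.4, "stability": 0.3, "channel": 0.3}  # lower load is better

def score(gws):
    return sum(w * gws[metric] for metric, w in weights.items())

best = max(candidates, key=score)
print("selected gateway:", best["id"])
```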
2016, 53(4): 824-833.
DOI: 10.7544/issn1000-1239.2016.20151106
Abstract:
The Internet is examined here by analogy with biological evolution, which makes it possible to predict the Internet’s future. IPv4 and IPv6 whole-network data from 2010 to 2014, authorized by CAIDA, are adopted in this paper. Building on previous research, this study combines Eigen’s definition of ‘vital signs’ with Internet topology characteristics: the metabolism of the Internet is measured by standard Internet structure entropy, the process of Internet self-replication is represented by the degree distributions of k-core networks, and the errors arising during self-replication are characterized by changes in the average clustering coefficient and average path length over the course of evolution. Vital signs are observed in both the IPv4 and IPv6 topologies, and IPv6, whose vital signs are more prominent, behaves with more vitality. Detecting vital signs in Internet topology marks a successful introduction of biological principles into research on Internet topology. For further studies that investigate or predict the dynamics of the Internet, this discovery calls for treating the Internet as a living organism; it enriches the available research methods and opens more room for development in Internet research. Some suggestions on research directions for redesigning and rebuilding the Internet are also put forward.
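The metabolism indicator can be illustrated with a normalized degree-distribution entropy, as below; the paper's "standard structure entropy" may normalize differently, so treat this as a sketch of the general idea on a toy degree sequence.

```python
import math

def normalized_degree_entropy(degrees):
    """Shannon entropy of the degree-importance distribution,
    scaled by its maximum ln(n); a proxy for structure entropy."""
    total = sum(degrees)
    probs = [d / total for d in degrees if d > 0]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(len(probs))

print(normalized_degree_entropy([3, 3, 3, 3]))   # regular network: entropy 1.0
print(normalized_degree_entropy([9, 1, 1, 1]))   # hub-dominated: lower entropy
```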
2016, 53(4): 834-844.
DOI: 10.7544/issn1000-1239.2016.20151165
Abstract:
As one of the most significant parts of “Internet+”, the Internet of things (IoT) is being widely applied in various aspects of human society. IPv6 identification is the basis of large-scale IoT deployment and interconnection. Nevertheless, the 128-bit IPv6 address imposes extra storage and bandwidth consumption on resource-constrained IoT devices. A novel compression mechanism named IACH is proposed for the hierarchical IoT forwarding architecture; it mainly includes removing invalid routing information at the end of the IPv6 address, address stripping, and address extension. Moreover, irregular outside IPv6 addresses can be translated into virtual addresses of IoT subnets by means of address mapping and then compressed with the described mechanism. IACH is fully compatible with 6LoWPAN. Experiments and performance analysis show that IACH can significantly increase the effective upper-layer payload in packet transmission. In particular, the forwarding delay of IACH is shorter than that of standard 6LoWPAN for packets with the same length of IP-upper payload.
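A plain-Python illustration of the address-mapping idea: an irregular outside IPv6 address is assigned a short virtual identifier for use inside the IoT subnet and expanded back at the border. The table layout, identifier width and class name are hypothetical; IACH's actual stripping and extension operations on the address bits are not reproduced.

```python
import ipaddress

class AddressMapper:
    """Hypothetical border-node table: full IPv6 address <-> short virtual id."""
    def __init__(self):
        self.table = {}
        self.reverse = {}
        self.next_id = 1

    def compress(self, addr: str) -> int:
        full = ipaddress.IPv6Address(addr)
        if full not in self.table:
            vid = self.next_id            # e.g. a 16-bit id instead of 128 bits
            self.next_id += 1
            self.table[full] = vid
            self.reverse[vid] = full
        return self.table[full]

    def expand(self, vid: int) -> str:
        return str(self.reverse[vid])

m = AddressMapper()
vid = m.compress("2001:db8::aa:1")
print(vid, "->", m.expand(vid))
```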
2016, 53(4): 845-860.
DOI: 10.7544/issn1000-1239.2016.20151121
Abstract:
In the inter-domain routing system, the border gateway protocol (BGP) runs on the assumption that ASes trust each other, and there is no effective verification of the validity of routing information, so publishers of false information can seriously threaten the security of the inter-domain routing system. Existing work cannot effectively limit the generation and propagation of false routing information, so this paper presents a trust model for the inter-domain routing system that evaluates the trustworthiness of ASes’ routing behavior. In this model, the evaluator’s direct evaluation of the evaluated AS’s routing behavior and the direct evaluations by the evaluated AS’s neighbors are combined, with a weight assigned to each direct evaluation, to compute the trust degree of the evaluated AS. A routing announcement behavior prediction method is used to make the direct evaluation result accurately reflect the evaluated AS’s future probability of sending true routing information. In addition, to encourage ASes to participate actively in trust recommendation, an incentive mechanism is used in which every AS evaluates the other ASes’ historical recommendation behavior and computes a corresponding recommendation probability for them. The simulation results show that, compared with other trust models for the inter-domain routing system, the trust evaluation result of our model more accurately reflects the evaluated AS’s future probability of sending true routing information.
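The trust combination step can be sketched as a weighted blend of the evaluator's own direct evaluation and credibility-weighted neighbor evaluations. The weighting scheme and the numbers below are illustrative assumptions, not the paper's exact formulas.

```python
def trust_degree(direct_eval, neighbor_evals, alpha=0.6):
    """direct_eval: evaluator's own observation in [0, 1];
    neighbor_evals: list of (evaluation, credibility) pairs from
    the evaluated AS's neighbors; alpha weights the direct part."""
    if neighbor_evals:
        total_w = sum(c for _, c in neighbor_evals)
        recommended = sum(e * c for e, c in neighbor_evals) / total_w
    else:
        recommended = direct_eval   # no recommendations: fall back on direct view
    return alpha * direct_eval + (1 - alpha) * recommended

# One trusted neighbor agrees, one low-credibility neighbor disagrees.
print(trust_degree(0.8, [(0.9, 0.7), (0.4, 0.2)]))
```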
2016, 53(4): 861-872.
DOI: 10.7544/issn1000-1239.2016.20151037
Abstract:
With the growth of the mobile Internet market, many customers use smartphones and tablets instead of desktops as their default Internet access tools, and the demand for mobile data is increasing rapidly. Despite the increasing popularity of mobile computing, exploiting its full potential is still difficult because Internet access from mobile clients is expensive. This demand-cost conflict has impeded the development of the mobile Internet to some extent, and designing an optimal data allocation mechanism for all participants from a global perspective has become a critical issue. In this paper, we investigate a novel data allowance (DA) model that enables seamless collaboration between Internet content providers (CPs) and Internet service providers (ISPs). Taking Alibaba’s real-world deployment as an example, we provide a detailed economic analysis of this business model and reveal the following findings. Firstly, the model enables a more flexible relationship between ISPs and their customers, which can efficiently increase the active online time of mobile users. Secondly, the proposed CP-provided subsidization policy leads to a win-win solution for both CPs and users. Thirdly, the subsidization policy is restrained by constraints that ensure the validity of the subsidizing process. We believe our findings provide important insights for CPs and ISPs in designing effective subsidization mechanisms for mobile users in the mobile Internet market.
2016, 53(4): 873-883.
DOI: 10.7544/issn1000-1239.2016.20148455
Abstract:
The coding mode space of high efficiency video coding (HEVC) is extremely large, so HEVC encoders need a huge amount of computation for mode decision (MD). Parallelizing HEVC encoding on many-core platforms is an efficient and promising approach to fulfilling these computational demands. Traditional coarse-grained parallelization schemes such as Tiles and wavefront parallel processing (WPP) either cause too much quality loss or cannot afford a high degree of parallelism. In this paper, the potential parallelism in the HEVC intra MD process is exploited, and a multi-level, fine-grained, highly parallel intra MD method operating within a coding tree unit (CTU) is proposed. Specifically, the intra MD process in a CTU is divided into six types of sub-tasks, and the data dependencies among adjacent blocks that hinder parallel processing, including intra prediction dependency, prediction mode dependency and entropy coding dependency, are analyzed and removed; consequently, the MD computations for all fine-grained coding blocks at different levels within the same CTU can proceed concurrently. The proposed parallel MD method is implemented on the Tile-Gx36 platform. Experimental results show that it achieves an overall speedup of more than 18x with acceptable quality loss (about a 3% bit-rate increase) compared with the non-parallel HM baseline.
2016, 53(4): 884-891.
DOI: 10.7544/issn1000-1239.2016.20140726
Abstract:
Super-resolution (SR) reconstruction based on sparse representation and dictionary learning does not decompose the image first; it reconstructs the image directly from its entire information. According to low-rank matrix theory, however, an image can be decomposed into a low-rank part and a sparse part, and applying different methods according to the characteristics of each part exploits the image’s properties more effectively. This paper proposes a super-resolution reconstruction method based on low-rank matrix decomposition and dictionary learning. The method first obtains the low-rank and sparse parts of the original image via low-rank decomposition. The low-rank part retains most of the image’s information, and only this part is reconstructed with the dictionary learning method; the sparse part is not involved in learning and is instead reconstructed directly by linear interpolation. Experimental results show that the method not only enhances reconstruction quality but also reduces reconstruction time. Compared with existing algorithms, our method obtains better results in visual effect, peak signal-to-noise ratio, and running speed.
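A compact sketch of the decompose-then-reconstruct pipeline under stated substitutions: truncated SVD stands in for the paper's low-rank decomposition, cubic zoom stands in for the dictionary-learning branch, and the residual part is upscaled by linear interpolation as the paper describes. Image, rank and scale are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(5)
img = rng.random((64, 64))               # stand-in for a low-resolution image

U, s, Vt = np.linalg.svd(img, full_matrices=False)
r = 10                                   # illustrative rank cut-off
low_rank = (U[:, :r] * s[:r]) @ Vt[:r]   # keeps most of the image's energy
sparse = img - low_rank                  # residual "sparse" part

scale = 2
sr_low = zoom(low_rank, scale, order=3)  # placeholder for the dictionary-based SR
sr_sparse = zoom(sparse, scale, order=1) # linear interpolation, as in the paper
sr = sr_low + sr_sparse
print(sr.shape)                          # (128, 128)
```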
2016, 53(4): 892-903.
DOI: 10.7544/issn1000-1239.2016.20140508
Abstract:
The thresholds obtained by three classical global thresholding algorithms, the Otsu algorithm, the maximum entropy algorithm and the minimum error algorithm, deviate from the optimum to a certain extent. To solve this problem, a threshold optimization framework (TOF) for global thresholding algorithms using Gaussian fitting is proposed. Firstly, the framework uses a global thresholding method to obtain an initial threshold and roughly divides the image into background and object. Then two Gaussian distributions are fitted by calculating the mean and variance of each part. Since the optimal threshold lies at the intersection of the two Gaussian distributions, the framework refines the threshold iteratively until it converges to the optimal position. To improve noise robustness, we combine the reconstruction of the three-dimensional histogram with dimensionality reduction and propose a robust threshold optimization framework (RTOF) for global thresholding algorithms using Gaussian fitting. Finally, extensive experiments show that the thresholds derived from the Otsu, maximum entropy and minimum error schemes under the proposed optimization framework converge to the optimal threshold position, and that the presented algorithm is robust to noise and highly efficient.
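The TOF iteration is concrete enough to sketch directly: split the histogram at the current threshold, fit a Gaussian to each side from its mean and variance, move the threshold to the crossing point of the two weighted Gaussians, and repeat until convergence. The synthetic bimodal data and the numerical crossing search below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
pixels = np.concatenate([rng.normal(70, 12, 60000),    # background mode
                         rng.normal(170, 18, 40000)])  # object mode
pixels = np.clip(pixels, 0, 255)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

T = 128.0                                  # stand-in for the Otsu/entropy seed
for _ in range(50):
    lo, hi = pixels[pixels <= T], pixels[pixels > T]
    w1, w2 = len(lo) / len(pixels), len(hi) / len(pixels)
    m1, s1 = lo.mean(), lo.std() + 1e-6
    m2, s2 = hi.mean(), hi.std() + 1e-6
    xs = np.linspace(m1, m2, 512)          # the intersection lies between the means
    diff = np.abs(w1 * gauss(xs, m1, s1) - w2 * gauss(xs, m2, s2))
    newT = xs[np.argmin(diff)]             # where the weighted Gaussians cross
    if abs(newT - T) < 0.5:
        break
    T = newT
print("converged threshold:", round(T, 1))
```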
2016, 53(4): 904-920.
DOI: 10.7544/issn1000-1239.2016.20150158
Abstract:
Cloud storage is a novel data storage architecture that faces challenges in data security and manageability; the cloud must provide secure and reliable data access services for users. Because of the variety and volume of data in the cloud, a fine-grained access control mechanism named attribute-based encryption (ABE) has been proposed to ensure data security. In ABE, the data owner describes the access privileges of the data with an access policy and encrypts the data under that policy; a user can recover the data if and only if his attributes match the policy. For various reasons, access privileges are dynamic and changeable, which increases the difficulty of data management and consumes considerable system resources in the cloud. We therefore construct a cloud storage architecture with a fine-grained ciphertext access control mechanism based on ABE, which supports efficient, secure and manageable data access services. Firstly, we propose a transformation method among the common types of access policy, so that access policies can be expressed more generally. Secondly, we provide three methods for managing access policies: privilege updating, privilege delegation, and temporary privileges; all of them greatly reduce the computation and communication cost of policy updating. Finally, we give an analysis and simulation of our scheme. The results show that our cloud storage architecture is secure, efficient and manageable.
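This is not cryptography, only a sketch of the policy semantics that ABE enforces cryptographically: a tree of AND/OR gates over attributes decides whether a user's attribute set matches the policy, i.e., whether decryption would succeed. The attribute names and the example policy are illustrative.

```python
def satisfies(policy, attrs):
    """policy: ("attr", name) | ("and", p1, p2, ...) | ("or", p1, p2, ...);
    attrs: the user's attribute set."""
    kind = policy[0]
    if kind == "attr":
        return policy[1] in attrs
    if kind == "and":
        return all(satisfies(p, attrs) for p in policy[1:])
    if kind == "or":
        return any(satisfies(p, attrs) for p in policy[1:])
    raise ValueError(f"unknown gate: {kind}")

# "cardiology AND (doctor OR nurse)" -- hypothetical hospital policy.
policy = ("and", ("attr", "cardiology"),
                 ("or", ("attr", "doctor"), ("attr", "nurse")))
print(satisfies(policy, {"doctor", "cardiology"}))   # True: could decrypt
print(satisfies(policy, {"doctor", "radiology"}))    # False: could not
```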
2016, 53(4): 921-931.
DOI: 10.7544/issn1000-1239.2016.20150682
Abstract:
With the rapid development of integrated circuit technology, the number of components integrated on a chip continues to increase, and efficient interconnection among the on-chip processing units becomes a key issue. System-on-chip (SoC) and then two-dimensional networks-on-chip (2D NoC) were proposed to cope with this problem, but 2D NoC has now reached bottlenecks in many aspects, so the design of three-dimensional networks-on-chip (3D NoC) is inevitable, and 3D NoC has attracted attention from both academia and industry. One of the key issues in 3D NoC is low-power mapping. We previously proposed a 3D NoC low-power mapping algorithm based on an improved genetic algorithm, with good results; but as the problem scale grows, the amount of computation increases and operating efficiency drops significantly. To solve this problem, this paper proposes a new power-optimized 3D NoC task mapping algorithm based on a doubly improved genetic algorithm, and simulation experiments are conducted to validate it. The results show that, under a large population size, the proposed algorithm not only reduces power but also significantly reduces running time.
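A toy genetic-algorithm sketch of low-power mapping: a chromosome is a permutation assigning tasks to 3D-mesh nodes, and fitness is the power proxy, the sum of traffic times Manhattan hop distance. Mesh size, traffic matrix, operators and GA parameters are illustrative, not the paper's doubly improved design.

```python
import random

random.seed(7)
X = Y = Z = 2
N = X * Y * Z                      # 8 mesh nodes, 8 tasks
coords = [(x, y, z) for x in range(X) for y in range(Y) for z in range(Z)]
traffic = [[random.randint(0, 9) * (i != j) for j in range(N)] for i in range(N)]

def power(mapping):                # mapping[task] = mesh node index
    return sum(traffic[i][j] *
               sum(abs(a - b) for a, b in zip(coords[mapping[i]], coords[mapping[j]]))
               for i in range(N) for j in range(N))

def crossover(p1, p2):             # order crossover keeps the permutation valid
    cut = random.randrange(N)
    head = p1[:cut]
    return head + [g for g in p2 if g not in head]

pop = [random.sample(range(N), N) for _ in range(40)]
for _ in range(100):
    pop.sort(key=power)
    elite = pop[:10]
    children = [crossover(random.choice(elite), random.choice(elite)) for _ in range(30)]
    for c in children:             # mutation: swap two task assignments
        if random.random() < 0.2:
            a, b = random.randrange(N), random.randrange(N)
            c[a], c[b] = c[b], c[a]
    pop = elite + children

best = min(pop, key=power)
print("best mapping:", best, "power:", power(best))
```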
2016, 53(4): 932-940.
DOI: 10.7544/issn1000-1239.2016.20148278
Abstract:
Urban organization and residents’ behavior are key research topics in urban geography. With the rapid development of information technology, the impact of residents’ spatial and temporal behavior on urban spatial organization and structure shows a growing trend, so in-depth analysis of the spatio-temporal behavior of city space and urban residents has high research value. Using Hangzhou mobile network traffic logs, we study the gathering patterns of urban residents with spatial point pattern analysis and analyze the features of moving distance and direction. Using a grid approach, we divide the urban space into blocks and focus on the emergence of hotspots, the change rate of human flow, and tidal effects on weekdays; we also present the concept of a block difference index, which is used to cluster blocks and to analyze the relationship between the correlation of blocks and their distances. Since our research data come from mobile network traffic logs, they have wide coverage and large volume, which is ideal for research on resident and city behavior at large spatio-temporal scales.
2016, 53(4): 941-948.
DOI: 10.7544/issn1000-1239.2016.20140806
Abstract:
Attribute-value extraction is an important and challenging task in information extraction; it aims to automatically discover the values of attributes of named entities. In this paper, we focus on extracting these values from Chinese unstructured text. To keep models easy to compute, current mainstream attribute-value extraction methods use only local features, and may therefore fail to make full use of global information related to attribute values. We propose a novel approach based on global features to enhance the performance of attribute-value extraction. Two types of global features are defined to capture extra information beyond local features: a boundary distribution feature and a value-name dependency feature. To our knowledge, this is the first attempt to acquire attribute values using global features. We then propose a new perceptron algorithm that can use all types of global features and learns the parameters of local and global features simultaneously. Experiments are carried out on different kinds of attributes of several entity categories. Experimental results show that both the precision and the recall of our approach are significantly higher than those of a CRF model and of an averaged perceptron with only local features, and that the approach generalizes well to the open domain.
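A minimal structured-perceptron sketch with one local and one global feature: the update promotes the gold structure's features and demotes the predicted structure's. The feature extractors are hypothetical stand-ins for the paper's boundary-distribution and value-name dependency features.

```python
from collections import defaultdict

def features(sentence, tagging):
    f = defaultdict(float)
    for word, tag in zip(sentence, tagging):   # local features
        f[f"local:{word}:{tag}"] += 1
    values = [w for w, t in zip(sentence, tagging) if t == "VALUE"]
    f["global:num_values"] = len(values)       # a whole-structure (global) feature
    return f

def score(w, f):
    return sum(w[k] * v for k, v in f.items())

def update(w, sentence, gold, predicted, lr=1.0):
    """Standard structured-perceptron update: promote the gold
    structure's features, demote those of the wrong prediction."""
    for k, v in features(sentence, gold).items():
        w[k] += lr * v
    for k, v in features(sentence, predicted).items():
        w[k] -= lr * v

w = defaultdict(float)
sent = ["Yao", "Ming", "height", "2.26m"]
gold = ["O", "O", "ATTR", "VALUE"]
pred = ["O", "O", "O", "O"]
update(w, sent, gold, pred)
print(score(w, features(sent, gold)) > score(w, features(sent, pred)))  # True
```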