ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 August 2016, Volume 53, Issue 8
A Ranking Based Poisson Matrix Factorization Model for Point-of-Interest Recommendation
Yu Yonghong, Gao Yang, Wang Hao
2016, 53(8):  1651-1663.  doi:10.7544/issn1000-1239.2016.20160202
With the rapid growth of location-based social networks (LBSNs), point-of-interest (POI) recommendation has become an important means to meet users' personalized demands and alleviate the information overload problem. However, traditional POI recommendation algorithms have the following problems: 1) most existing POI recommendation algorithms oversimplify users' check-in frequencies, i.e., regardless of how many times a user checks in at a location, they use only a binary value to indicate whether the user has visited that POI; 2) matrix factorization based POI recommendation algorithms treat check-in frequencies like the ratings of traditional recommender systems and model check-in behaviors with a Gaussian distribution; 3) traditional POI recommendation algorithms ignore the fact that check-in feedback is implicit and that only positive examples are observed. In this paper, we propose a ranking based Poisson matrix factorization model for POI recommendation. Specifically, we first use the Poisson distribution instead of the Gaussian distribution to model users' check-in behaviors. Then, we use the Bayesian personalized ranking metric to optimize the loss function of Poisson matrix factorization and fit the partial order of POIs. Finally, we leverage a regularization term encoding geographical influence to constrain the Poisson matrix factorization process. Experimental results on real-world datasets show that our approach outperforms traditional POI recommendation algorithms.
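To make the ranking idea concrete, the following is a minimal Python sketch of a BPR-style gradient step over non-negative (Poisson-rate-like) latent factors; the factor sizes, learning rate, and clipping are illustrative assumptions, and the paper's geographical regularizer is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_pois, k = 100, 200, 16
U = rng.gamma(1.0, 0.1, (n_users, k))   # non-negative user factors
V = rng.gamma(1.0, 0.1, (n_pois, k))    # non-negative POI factors

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One SGD step pushing checked-in POI i above unvisited POI j for user u."""
    uu, vi, vj = U[u].copy(), V[i].copy(), V[j].copy()
    g = 1.0 / (1.0 + np.exp(uu @ (vi - vj)))        # -d/dx log sigmoid(x)
    U[u] = np.maximum(uu + lr * (g * (vi - vj) - reg * uu), 1e-10)
    V[i] = np.maximum(vi + lr * (g * uu - reg * vi), 1e-10)
    V[j] = np.maximum(vj + lr * (-g * uu - reg * vj), 1e-10)  # keep rates positive

before = U[3] @ V[7] - U[3] @ V[42]
bpr_step(u=3, i=7, j=42)
after = U[3] @ V[7] - U[3] @ V[42]
print(f"preference gap moved from {before:.4f} to {after:.4f}")  # gap widens
```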
Collaborative Filtering Recommendation Algorithm Combining Community Structure and Interest Clusters
Guo Hongyi, Liu Gongshen, Su Bo, Meng Kui
2016, 53(8):  1664-1672.  doi:10.7544/issn1000-1239.2016.20160175
Traditional collaborative filtering recommendation algorithms suffer from data sparsity, which results in poor recommendation accuracy. Social connections among users reflect their interactions and can be incorporated into recommendation algorithms to improve accuracy, yet most current social recommendation algorithms use only direct social connections, ignoring users' latent interests and cluster information. In response, this paper proposes a collaborative filtering recommendation algorithm combining community structure and interest clusters. Firstly, an overlapping community detection algorithm is used to detect the community structure of the user social network, so that users in the same community share certain common characteristics. Meanwhile, we design a customized fuzzy clustering algorithm to discover users' interest clusters, taking the item-category relationship and users' activity history as input; users in the same cluster have similar generalized interests. We quantify each user's preference for every social community and interest cluster the user belongs to. Then we incorporate these two types of user group information into a matrix factorization model by adding a clustering-based regularization term to the objective function. Experiments on the Yelp dataset show that, compared with other methods, including both traditional and social recommendation algorithms, our approach achieves better recommendation accuracy.
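A minimal sketch of the clustering-based regularization idea: the matrix factorization objective is augmented with a term pulling each user's latent vector toward the centroid of every community or interest cluster the user belongs to. The membership lists, weights, and hyperparameters below are hypothetical.

```python
import numpy as np

def objective(R, mask, U, V, groups, beta=0.1, lam=0.01):
    """R: rating matrix; mask: 1 where observed; groups: list of
    (membership, weight) pairs, where membership is a list of user-index lists."""
    err = np.sum(mask * (R - U @ V.T) ** 2)              # squared fit error
    reg = lam * (np.sum(U ** 2) + np.sum(V ** 2))        # standard L2 term
    for membership, weight in groups:                    # clustering-based term
        for users in membership:
            if len(users) > 1:
                centroid = U[users].mean(axis=0)
                reg += beta * weight * np.sum((U[users] - centroid) ** 2)
    return err + reg

rng = np.random.default_rng(1)
R = rng.integers(0, 6, (50, 40)).astype(float)
mask = (rng.random((50, 40)) < 0.1).astype(float)
U, V = rng.normal(0, 0.1, (50, 8)), rng.normal(0, 0.1, (40, 8))
communities = [list(range(0, 25)), list(range(25, 50))]  # toy social communities
clusters = [list(range(0, 10)), list(range(10, 50))]     # toy interest clusters
print(objective(R, mask, U, V, [(communities, 1.0), (clusters, 0.5)]))
```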
Online Consumptions Prediction via Modeling User Behaviors and Choices
Zeng Xianyu, Liu Qi, Zhao Hongke, Xu Tong, Wang Yijun, Chen Enhong
2016, 53(8):  1673-1683.  doi:10.7544/issn1000-1239.2016.20160103
The rise of e-commerce sites and the formation of users' online shopping habits have brought a huge amount of online consumer behavioral data. Mining users' preferences from these behavioral logs (e.g., clicking data) and then predicting their final consumption choices are of great importance for improving the conversion rate of e-commerce. Along this line, this paper proposes a way of combining users' behavioral data with a choice model to predict which item each user will finally consume. Specifically, we first estimate the optimum substitute in each consumption session by a utility function over users' behavioral sequences, and then build a latent factor based choice model (LF-CM) over the consumed items and the substitutes. In this way, users' preferences can be computed and future consumptions predicted. Going one step further, to make full use of users' choice information and improve prediction precision, we also propose a learning-to-rank model (the latent factor and sequence based choice model, LFS-CM), which considers all the items in a session. By integrating latent factors with the utility function of users' behavioral sequences, LFS-CM improves prediction precision. Finally, using a real-world Tmall dataset, we evaluate the performance of our methods in a distributed environment. The experimental results show that both LF-CM and LFS-CM perform well in predicting online consumption behaviors.
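A minimal sketch of a latent-factor choice model: the probability that a user consumes an item in a session is a softmax over item utilities, combining a latent-factor score with a score derived from the behavioral sequence. The utility form and the sequence bonus below are illustrative assumptions, not the paper's exact LF-CM/LFS-CM formulation.

```python
import numpy as np

def choice_probabilities(user_vec, session_item_vecs, seq_bonus):
    """seq_bonus: per-item score derived from the behavioral sequence,
    e.g. normalized recency/frequency of clicks within the session."""
    utility = session_item_vecs @ user_vec + seq_bonus
    exp_u = np.exp(utility - utility.max())      # numerically stable softmax
    return exp_u / exp_u.sum()

rng = np.random.default_rng(2)
user = rng.normal(size=8)
items = rng.normal(size=(5, 8))                  # 5 items browsed in one session
bonus = np.array([0.1, 0.4, 0.0, 0.9, 0.2])      # e.g. normalized click counts
p = choice_probabilities(user, items, bonus)
print("predicted consumption:", int(np.argmax(p)), p.round(3))
```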
Feature Selection Based on the Measurement of Correlation Information Entropy
Dong Hongbin, Teng Xuyang, Yang Xue
2016, 53(8):  1684-1695.  doi:10.7544/issn1000-1239.2016.20160172
Feature selection aims to select a smaller feature subset from the original feature set, such that the subset provides approximate or even better performance in data mining and machine learning. Without transforming the physical characteristics of features, fewer features give a more powerful interpretation. Traditional information-theoretic methods tend to measure feature relevance and redundancy separately, ignoring the combined effect of the whole feature subset. In this paper, correlation information entropy, a technique from data fusion, is applied to feature selection. Based on this measure, we quantify the degree of independence and redundancy among features. A correlation matrix is then constructed from the mutual information between features and their class labels and between pairs of features. Furthermore, to account for the multivariate correlation of different features in a subset, the eigenvalues of the correlation matrix are calculated. On this basis, a feature sorting algorithm and an adaptive, parameterized feature subset selection algorithm are proposed. Experimental results show the effectiveness and efficiency of the proposed algorithms on classification tasks.
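A minimal sketch of the measurement: a correlation matrix is built from normalized pairwise mutual information, and its eigenvalue spectrum is turned into a correlation information entropy score (1 for fully independent features, approaching 0 for fully redundant ones). The paper's exact normalization and use of class labels may differ.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def correlation_entropy(X_discrete):
    n = X_discrete.shape[1]
    Rm = np.eye(n)
    for a in range(n):
        for b in range(a + 1, n):
            mi = mutual_info_score(X_discrete[:, a], X_discrete[:, b])
            ha = mutual_info_score(X_discrete[:, a], X_discrete[:, a])  # = H(a)
            hb = mutual_info_score(X_discrete[:, b], X_discrete[:, b])  # = H(b)
            Rm[a, b] = Rm[b, a] = mi / max(np.sqrt(ha * hb), 1e-12)     # in [0,1]
    lam = np.clip(np.linalg.eigvalsh(Rm), 1e-12, None) / n
    return -np.sum(lam * np.log(lam) / np.log(n))   # 1 = independent, ->0 = redundant

rng = np.random.default_rng(3)
f1 = rng.integers(0, 4, 500)
X = np.column_stack([f1, f1, rng.integers(0, 4, 500)])  # two redundant features
print(correlation_entropy(X))                            # clearly below 1
```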
Consistent Collective Entity Linking Algorithm
Liu Qiao, Zhong Yun, Liu Yao, Wu Zufeng, Qin Zhiguang
2016, 53(8):  1696-1708.  doi:10.7544/issn1000-1239.2016.20160192
The goal of entity linking is to link entity mentions in a document to their corresponding entities in a knowledge base. Prevalent approaches can be divided into two categories, similarity-based approaches and graph-based collective approaches, each with its own pros and cons. Similarity-based approaches are good at distinguishing entities from the semantic perspective but usually ignore the relationships between entities, while graph-based approaches make better use of the relations between entities but usually discriminate poorly among similar entities. In this work, we present a consistent collective entity linking algorithm that takes full advantage of the structured relationships between entities contained in the knowledge base to improve discrimination among similar entities. We extensively evaluate the performance of our method on two public datasets, and the experimental results show that our method effectively improves the precision and recall of the entity linking results. The overall performance of the proposed algorithm significantly outperforms other state-of-the-art algorithms.
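A minimal sketch of the collective idea: each joint assignment of mentions to candidate entities is scored by mention-entity similarity plus pairwise entity coherence from the knowledge base. The toy scores below are fabricated for illustration, and the exhaustive scoring loop stands in for the graph-based inference a real system would use.

```python
from itertools import product

similarity = {            # sim(mention, candidate entity), toy values
    ("Jordan", "Michael_Jordan"): 0.6, ("Jordan", "Jordan_(country)"): 0.5,
    ("Bulls", "Chicago_Bulls"): 0.9, ("Bulls", "Bull_(animal)"): 0.4,
}
coherence = {             # relatedness of entity pairs in the knowledge base
    frozenset(("Michael_Jordan", "Chicago_Bulls")): 0.95,
}

def link(mentions, candidates, alpha=0.5):
    best, best_score = None, float("-inf")
    for assign in product(*(candidates[m] for m in mentions)):
        s = sum(alpha * similarity[(m, e)] for m, e in zip(mentions, assign))
        s += sum((1 - alpha) * coherence.get(frozenset((a, b)), 0.0)
                 for i, a in enumerate(assign) for b in assign[i + 1:])
        if s > best_score:
            best, best_score = assign, s
    return dict(zip(mentions, best))

cands = {"Jordan": ["Michael_Jordan", "Jordan_(country)"],
         "Bulls": ["Chicago_Bulls", "Bull_(animal)"]}
print(link(["Jordan", "Bulls"], cands))  # coherence selects Michael_Jordan
```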
Influence Maximization Across Multi-Channels in Social Network
Li Xiaokang, Zhang Xi, Sun Hao, Sun Guangzhong
2016, 53(8):  1709-1718.  doi:10.7544/issn1000-1239.2016.20160211
Social networks have attracted wide research interest in recent years because of their popularity. Influence maximization, one of the most studied problems in this field, asks to pick k seed users from a social network so that, when influence propagates through the network from these seeds, the number of influenced users is maximized. Most previous work assumes a single propagation channel; in the real world, however, information propagates through multiple channels. This paper considers information spread across multiple networks and formulates the influence maximization problem across multi-channels: pick k seed users from multiple networks so that, when influence propagates simultaneously across these networks, the number of users influenced by the seed set is maximized. We prove the problem is NP-hard under the independent cascade model by reduction from single-network influence maximization. Exploiting the properties of the problem, we put forward three efficient and effective approximation methods. Experiments show that our proposed methods are effective on four real social networks.
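A minimal sketch of greedy influence maximization under the independent cascade (IC) model with Monte Carlo spread estimation; a multi-channel variant would run the cascade over several edge sets simultaneously. The toy graph and propagation probability are illustrative.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One IC cascade: each newly activated node tries each out-edge once."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = [v for u in frontier for v in graph.get(u, [])
               if v not in active and random.random() < p]
        active.update(nxt)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, runs=200):
    seeds = set()
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):                  # add the node with best marginal spread
        gain = {v: sum(simulate_ic(graph, seeds | {v}) for _ in range(runs)) / runs
                for v in nodes - seeds}
        seeds.add(max(gain, key=gain.get))
    return seeds

toy = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
random.seed(0)
print(greedy_im(toy, k=2))
```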
Self-Adaptive Clustering Based on Local Density by Descending Search
Xu Zhengguo, Zheng Hui, He Liang, Yao Jiaqi
2016, 53(8):  1719-1728.  doi:10.7544/issn1000-1239.2016.20160136
Cluster analysis is an important research domain of data mining. In an unsupervised setting, it aims to automatically figure out the class attributes of samples in a mixed data set. Over the decades a large number of clustering algorithms have been proposed, each relying on different kinds of prior knowledge. However, some knotty problems remain unsolved when clustering complex data sets, such as an unknown number of clusters, miscellaneous cluster patterns, unbalanced sample sizes between clusters, and varied densities within clusters; these problems have become difficult and central points in current research. Facing these challenges, a novel clustering method is introduced. Based on a definition of local density and the intuition of ordered density within clusters, the method finds natural partitions by adaptively searching the boundaries of clusters. In the clustering process it overcomes the difficulties mentioned above while avoiding noise disturbance and misclassification. The method is tested on six typical and markedly different data sets, and the results show good feasibility and performance. Moreover, compared with classic clustering methods and a recently presented algorithm, the new method outperforms them on two different evaluation indexes.
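A minimal sketch of the local-density intuition: estimate each point's density from its k nearest neighbors, then visit points in descending density order, attaching each point to the cluster of its nearest denser neighbor. This density-peaks-style stand-in illustrates descending search by density; it is not the paper's exact boundary-search procedure.

```python
import numpy as np

def density_clustering(X, k=5, new_cluster_gap=1.5):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    knn = np.sort(d, axis=1)[:, 1:k + 1]
    rho = 1.0 / (knn.mean(axis=1) + 1e-12)          # local density estimate
    order = np.argsort(-rho)                        # descending density
    labels, next_label = -np.ones(len(X), int), 0
    for idx in order:
        denser = np.where(rho > rho[idx])[0]
        if len(denser) == 0 or d[idx, denser].min() > new_cluster_gap:
            labels[idx] = next_label; next_label += 1   # start a new cluster
        else:                                           # join nearest denser point
            labels[idx] = labels[denser[np.argmin(d[idx, denser])]]
    return labels

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, .3, (40, 2)), rng.normal(4, .6, (60, 2))])
print(np.bincount(density_clustering(X)))   # two dominant clusters emerge
```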
Tensor Representation Based Dynamic Outlier Detection Method in Heterogeneous Network
Liu Lu, Zuo Wanli, Peng Tao
2016, 53(8):  1729-1739.  doi:10.7544/issn1000-1239.2016.20160178
Mining the rich semantic information hidden in heterogeneous information networks is an important task in data mining. The values, data distribution, and generation mechanism of outliers all differ from those of normal data, so analyzing their generation mechanism, or even eliminating them, is of great significance. Outlier detection in homogeneous information networks has long been studied and explored, but little work targets dynamic outlier detection in heterogeneous networks, and many issues remain to be settled. Because heterogeneous information networks are dynamic, normal data may become outliers over time. This paper proposes a dynamic tensor representation based outlier detection method, called TRBOutlier. It constructs a tensor index tree for the high-order data represented by the tensor, and adds features to a direct item set and an indirect item set respectively when searching the tree. Meanwhile, we describe a clustering method based on the correlation of short texts to judge whether objects in the dataset leave their original clusters, and thereby detect outliers dynamically. The model preserves the semantic relationships in heterogeneous networks as much as possible while greatly reducing time and space complexity. The experimental results show that our proposed method detects outliers in dynamic heterogeneous information networks effectively and efficiently.
Mining Patent Knowledge for Automatic Keyword Extraction
Chen Yiqun, Zhou Ruqi, Zhu Weiheng, Li Mengting, Yin Jian
2016, 53(8):  1740-1752.  doi:10.7544/issn1000-1239.2016.20160195
Keywords are important clues that help a user quickly decide whether to skip, scan, or read an article, and keyword extraction plays an increasingly crucial role in information retrieval, natural language processing, and several other text related research areas. This paper addresses automatic keyword extraction and designs a novel approach that makes use of patent knowledge, helping the computer learn and understand a document through background knowledge, as a human reader would, and finally pick out keywords automatically. The patent data set is chosen as the external knowledge repository because of its huge volume, rich content, accurate expression, and professional authority. An algorithm is designed to construct a background knowledge repository from the patent data set, and a keyword extraction method with novel word features is provided. The paper discusses the characteristics of patent data and mines the relations between different patent files to construct a background knowledge repository for the target document: the related patent files of the target document form the repository, the information on patent inventors, assignees, citations, and classifications is used to mine hidden knowledge and relationships between patent files, and this related knowledge is imported to extend the repository. Novel word features are then derived from the different kinds of background knowledge supplied by the patent data; reflecting the document's background knowledge, these features offer valuable indications of individual words' importance in the target document. Keyword extraction can then be cast as a classification problem, and a support vector machine (SVM) is used to extract the keywords. Experiments on a patent data set and an open data set show that, with these novel word features, the approach achieves superior performance in keyword extraction compared with other state-of-the-art approaches.
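A minimal sketch of the classification step: each candidate word is represented by a feature vector (the TF-IDF, position, and patent-knowledge features below are hypothetical stand-ins for the paper's word features) and an SVM separates keywords from non-keywords.

```python
import numpy as np
from sklearn.svm import SVC

#                   tfidf  position  patent_score   (hypothetical features)
X_train = np.array([[0.9, 0.1, 0.8],    # keyword
                    [0.7, 0.2, 0.9],    # keyword
                    [0.2, 0.8, 0.1],    # non-keyword
                    [0.1, 0.9, 0.2],    # non-keyword
                    [0.8, 0.3, 0.7],    # keyword
                    [0.3, 0.7, 0.2]])   # non-keyword
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = SVC(kernel="rbf").fit(X_train, y_train)
candidates = {"matrix factorization": [0.85, 0.15, 0.75],
              "the": [0.05, 0.5, 0.0]}
for word, feats in candidates.items():
    label = int(clf.predict([feats])[0])
    margin = float(clf.decision_function([feats])[0])  # >0 leans keyword
    print(word, label, round(margin, 2))
```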
Protein Function Prediction Using Positive and Negative Examples
Fu Guangyuan, Yu Guoxian, Wang Jun, Guo Maozu
2016, 53(8):  1753-1765.  doi:10.7544/issn1000-1239.2016.20160196
Predicting protein function is one of the key challenges in the post-genome era. Functional annotation databases of proteins mainly provide positive examples, i.e., knowledge that a protein carries out a given function, and rarely record negative examples, i.e., knowledge that a protein does not carry out a given function. Current computational models almost exclusively use positive examples for function prediction and seldom pay attention to the scarce but informative negative examples, although it is well recognized that both positive and negative examples should be used to achieve a discriminative predictor. Motivated by this recognition, we propose a protein function prediction approach using positive and negative examples (ProPN) to bridge this gap. ProPN first uses a directed signed hybrid graph to describe the positive examples, negative examples, interactions between proteins, and correlations between functions; it then employs label propagation on the graph to predict protein function. Experimental results on several publicly available proteomic datasets demonstrate that ProPN not only outperforms state-of-the-art algorithms in predicting negative examples for proteins whose functional annotations are partially known, but also performs better than related approaches in predicting functions of proteins whose functional annotations are completely unknown.
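A minimal sketch of label propagation with positive (+1) and negative (-1) examples on a protein interaction graph, using the generic update F ← αSF + (1−α)Y over a symmetrically normalized adjacency matrix; ProPN's signed hybrid graph and function correlations are not reproduced here.

```python
import numpy as np

def propagate(W, Y, alpha=0.8, iters=50):
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d) + 1e-12)    # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse labels, anchor knowns
    return F                                    # sign(F) -> predicted label

W = np.array([[0, 1, 1, 0, 0],                 # toy protein interaction network
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
Y = np.array([+1, 0, 0, 0, -1], float)         # one positive, one negative example
print(np.round(propagate(W, Y), 3))            # unknown proteins get signed scores
```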
Petri Nets Based Recognition of Model Deviation Domains and Model Repair
Du Yuyue, Sun Ya'nan, Liu Wei
2016, 53(8):  1766-1780.  doi:10.7544/issn1000-1239.2016.20160099
Process mining techniques can discover process models from event logs, and conformance checking techniques can contrast event logs with a process model to detect deviations between observed behaviors and the model. However, existing process mining techniques concentrate on discovering these deviations and offer little support for repairing the process model easily so that it better reflects the real process. In this paper we therefore propose a token-based method for identifying model deviation domains and a token-based technique for model repair (both static and dynamic model repair), built on conformance checking techniques and the dynamic behaviors of workflow nets. Model deviation domains are identified effectively through the flow direction of tokens, and the process model is repaired according to these domains. The approach can also accurately repair real, complex processes containing complicated loop and choice structures. The effectiveness and correctness of the techniques are illustrated through comparative experiments and analysis against other techniques.
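A minimal sketch of token replay, the conformance-checking primitive behind token-based deviation detection: each logged transition consumes tokens from its input places and produces tokens on its output places; missing and remaining tokens mark candidate deviation domains. The net and log below are toys.

```python
net = {  # transition -> (input places, output places)
    "register": ((), ("p1",)),
    "check":    (("p1",), ("p2",)),
    "decide":   (("p2",), ("end",)),
}

def replay(trace):
    marking, missing = {}, []
    for t in trace:
        ins, outs = net[t]
        for p in ins:
            if marking.get(p, 0) > 0:
                marking[p] -= 1                  # consume an available token
            else:
                missing.append((t, p))           # forced token: a deviation
        for p in outs:
            marking[p] = marking.get(p, 0) + 1   # produce output tokens
    remaining = {p: c for p, c in marking.items() if c and p != "end"}
    return missing, remaining                    # both flag deviation domains

print(replay(["register", "check", "decide"]))   # fits the model: ([], {})
print(replay(["register", "decide"]))            # deviation around place p2
```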
Online Transfer Learning for Mining Recurring Concept in Data Stream Classification
Wen Yimin, Tang Shiqi, Feng Chao, Gao Kai
2016, 53(8):  1781-1791.  doi:10.7544/issn1000-1239.2016.20160223
In the age of big data, data stream classification is applied in many fields, such as spam filtering, market prediction, and weather forecasting, in which recurring concepts are an important characteristic. To reduce the influence of negative transfer and the detection lag of concept drift, RC-OTL is proposed for mining recurring concepts in data streams based on an online transfer learning strategy. When a concept drift is detected, RC-OTL stores one current base classifier, then computes the domain similarities between the current training samples and the stored classifiers in order to select the most appropriate source classifier to combine with a new classifier for learning the upcoming samples; this results in knowledge transfer from the source domain to the target domain. In addition, RC-OTL can select an appropriate classifier for classification when the current classification accuracy falls below a given threshold before concept drift is detected. A preliminary theoretical analysis explains why RC-OTL effectively reduces negative transfer, and experimental results further illustrate that RC-OTL efficiently improves the cumulative classification accuracy on data streams and adapts faster to the samples of a new concept after concept drift takes place.
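A minimal sketch of reusing stored classifiers for recurring concepts: when the concept changes, the current model is stored, and the stored model that scores best on the newest samples is chosen as the transfer source. The drift schedule below is simulated, and RC-OTL's domain-similarity weighting and ensemble rule are simplified away.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(5)
pool = []                                # classifiers kept from past concepts

def new_batch(concept, n=200):           # toy stream: labels flip on drift
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) ^ concept
    return X, y

for concept in [0, 1, 0]:                # concept 0 recurs at the end
    X, y = new_batch(concept)
    if pool:                             # pick the best stored source model
        scores = [m.score(X[:20], y[:20]) for m in pool]
        src = int(np.argmax(scores))
        print(f"concept {concept}: best stored source is model {src} "
              f"(acc={scores[src]:.2f})")
    clf = SGDClassifier().fit(X, y)      # learn the current concept
    pool.append(clf)                     # store for future recurrences
```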
HSSM: A Hierarchical Method for Streaming Submodular Maximization
Zhang Fenxiang, Chen Huahui, Qian Jiangbo, Dong Yihong
2016, 53(8):  1792-1805.  doi:10.7544/issn1000-1239.2016.20160140
Extracting k elements from a large data stream according to some utility function can be reduced to maximizing a submodular set function. Traditional algorithms already give good solutions for summarizing a static data set by submodular methods, the best known being the standard greedy algorithm. Recent work has also presented new algorithms, with corresponding distributed solutions, that meet the access pattern of streaming data and real-time response limits, but those algorithms are not good enough at maximizing the utility gain. In this paper, we propose a new algorithm called HSSM that runs as a pipelined distributed framework and requires only a single pass over the data set, with a utility gain close to that of the standard greedy solution. We also propose using utility vectors to compress the summarization set and filtering out low-gain objects to improve the original HSSM algorithm. This approach applies to many related problems, such as representative article selection and the k-medoid selection problem. Experimental results show that the improved Spark-HSSM+ method increases the summarization speed in direct proportion to k² compared with the traditional method. Compared with other distributed algorithms, the result of Spark-HSSM+ is the closest to the standard greedy solution.
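A minimal single-pass sketch in the sieve-streaming style, used here as a stand-in for HSSM's pipelined framework: several utility thresholds are maintained, and an element is added to a candidate set when its marginal gain beats that threshold's share. The coverage utility and the toy stream are illustrative.

```python
def coverage(S):                        # a simple submodular utility:
    return len(set().union(*S)) if S else 0   # words covered by chosen docs

def sieve_stream(stream, k, thresholds):
    sieves = {v: [] for v in thresholds}
    for e in stream:                    # a single pass over the data
        for v, S in sieves.items():
            gain = coverage(S + [e]) - coverage(S)
            # sieve rule: keep e if its gain reaches the threshold's fair share
            if len(S) < k and gain >= (v / 2 - coverage(S)) / (k - len(S)):
                S.append(e)
    return max(sieves.values(), key=coverage)

docs = [{"a", "b"}, {"b"}, {"c", "d"}, {"a", "e"}, {"e"}]
best = sieve_stream(docs, k=2, thresholds=[4, 6, 8])
print(best, coverage(best))             # two docs covering the most words
```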
IncPR: An Incremental Parallel PageRank Algorithm
Jiang Shuangshuang, Liao Qun, Yang Yulu, Li Tao
2016, 53(8):  1806-1818.  doi:10.7544/issn1000-1239.2016.20160210
The PageRank algorithm plays an important role in areas such as information retrieval and social network analysis. The continuously increasing size of networks and timeliness requirements make it enormously expensive to repeatedly recompute PageRank for a massive, evolving network. Therefore IncPR, a PageRank algorithm based on the incremental computation model, is proposed. IncPR reuses existing results from previous computations and modifies PageRank values accumulatively according to the changed part of the data set via an extended Monte Carlo method, which effectively avoids duplicated computation and improves the performance of PageRank computation with no additional storage overhead. Theoretical analysis shows that the computational complexity of IncPR processing an evolving network with m changed nodes and n changed edges is O((cm+n)R/c²), where c is the reset probability and R is the number of random walks starting from each node; this is lower than that of other existing related algorithms. Our evaluations with typical real-world graphs on a Hama cluster also demonstrate that IncPR obtains PageRank values with accuracy equal to (or even higher than) the baseline Monte Carlo PageRank algorithm while reducing the amount of computation significantly. In experiments with 0.01% of the data changed, IncPR reduces the amount of computation by over 99.9%.
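A minimal sketch of Monte Carlo PageRank, the basis IncPR extends: R random walks start at each node, terminate with the reset probability c at every step, and visit counts estimate PageRank. An incremental variant would rerun only the walks touched by changed nodes or edges and adjust the counts, rather than restarting from scratch.

```python
import random
from collections import Counter

def mc_pagerank(graph, c=0.15, R=200):
    visits = Counter()
    for u in graph:                      # R walks start from every node
        for _ in range(R):
            node = u
            while True:
                visits[node] += 1
                if random.random() < c or not graph[node]:
                    break                # walk ends with reset probability c
                node = random.choice(graph[node])
    total = sum(visits.values())
    return {v: n / total for v, n in visits.items()}  # visit-frequency estimate

toy = {0: [1], 1: [2], 2: [0, 1], 3: [1]}
random.seed(1)
pr = mc_pagerank(toy)
print({v: round(p, 3) for v, p in sorted(pr.items())})
```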
SparkCRF: A Parallel Implementation of CRFs Algorithm with Spark
Zhu Jizhao, Jia Yantao, Xu Jun, Qiao Jianzhong, Wang Yuanzhuo, Cheng Xueqi
2016, 53(8):  1819-1828.  doi:10.7544/issn1000-1239.2016.20160197
Conditional random fields (CRFs) have been successfully applied to various text analysis tasks, such as sequence labeling, Chinese word segmentation, named entity recognition, and relation extraction in natural language processing. Traditional single-node CRF tools face many challenges when dealing with large-scale text: a personal computer hits a performance bottleneck, a single server cannot handle the analysis efficiently, and upgrading server hardware to increase computing capability is not always feasible due to cost constraints. To tackle these problems, following the idea of divide and conquer, we design and implement SparkCRF, a distributed CRF implementation that runs on a cluster on top of Apache Spark. We perform three experiments, using the NLPCC2015 and the 2nd International Chinese Word Segmentation Bakeoff datasets, to evaluate SparkCRF in terms of performance, scalability, and accuracy. Results show that: 1) compared with CRF++, SparkCRF runs almost 4 times faster on our cluster for sequence labeling; 2) it scales well as the number of working cores is adjusted; and 3) SparkCRF achieves accuracy comparable to state-of-the-art CRF tools, such as CRF++, on text analysis tasks.
Survey of Memory Address Leakage and Its Defense
Fu Jianming, Liu Xiuwen, Tang Yi, Li Pengwei
2016, 53(8):  1829-1849.  doi:10.7544/issn1000-1239.2016.20150526
With a memory address leak, an attacker can bypass the ASLR (address space layout randomization) mechanism, deploy ROP (return-oriented programming) chains to disable DEP (data execution prevention), and divert the program to execute shellcode. Focusing on memory address leakage, this paper gathers information on vulnerability instances and classifies the vulnerabilities that lead to memory address leakage according to the stages of the leak. The paper analyzes the various illegal pointer and object operations that cause out-of-bounds memory access, as well as side-channel information leakage. Meanwhile, it divides the defenses against memory address leakage into four categories according to the stages of a memory corruption attack: memory layout randomization, object boundary protection, object content protection, and randomization of critical address information. These protections make the memory layout vague, memory objects unavailable, memory objects unreadable, and critical memory addresses untraceable, respectively. Finally, the paper points out that, from the perspective of programming language design, we need support for memory layout randomization, fine-grained memory address randomization, and object content protection, adapting the operating system to establish a collaborative defense mechanism in order to build a robust defense system in depth.
Atomic Algorithm Against Simple Power Attack of SM2
Han Xiaowei, Wu Liji, Wang Beibei, Wang An
2016, 53(8):  1850-1856.  doi:10.7544/issn1000-1239.2016.20150052
The SM2 algorithms are commercial elliptic curve public-key algorithms, released by the Chinese State Cryptography Administration and similar to ECC. Traditional implementations of cryptographic algorithms often have security flaws: attackers exploit these weaknesses to analyze and recover the secret key, posing a great threat to cryptographic systems and people's property. There are various kinds of attacks, such as power attacks, fault attacks, and electromagnetic attacks. Among these, the power attack is the most classic, with advantages such as a small key search space and high analysis efficiency; it typically exploits the power leakage during the operation of a cryptographic algorithm, acquires power traces, and retrieves the secret key. To resist power attacks and enhance the security of SM2, this article draws on elliptic curve cryptography, applies the side-channel atomicity concept to SM2, and proposes a novel atomic algorithm. Theoretical comparison with earlier algorithms shows that the proposed algorithm saves 27.4% of the computation of the double-and-add-always algorithm and requires less computation than other atomic algorithms. Furthermore, an implementation has been completed on a SAKURA-G FPGA board, and the results demonstrate that the proposed algorithm successfully resists simple power attacks.
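A minimal sketch of the double-and-add-always baseline the proposed atomic algorithm is compared against: scalar multiplication performs a doubling and an addition for every key bit, so the operation sequence no longer depends on the bit values. The toy short-Weierstrass curve below is illustrative, not the real SM2 curve parameters.

```python
P_MOD, A = 97, 2                       # toy curve y^2 = x^3 + 2x + 3 (mod 97)

def ec_add(P, Q):
    if P is None: return Q             # None is the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                    # P + (-P) = infinity
    if P == Q:                         # doubling slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:                              # addition slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    lam %= P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult_always(k, P, bits=8):
    R = None
    for i in reversed(range(bits)):
        R = ec_add(R, R)               # always double
        tmp = ec_add(R, P)             # always add ...
        if (k >> i) & 1:
            R = tmp                    # ... but keep the sum only if bit is 1
    return R

G = (3, 6)                             # on the curve: 6^2 = 27 + 6 + 3 (mod 97)
print(scalar_mult_always(0b1011, G))   # 11*G with a uniform operation sequence
```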
XOR-Based Region Incrementing Visual Cryptography Scheme by Random Grids
Hu Hao, Shen Gang, Yu Bin, Mahao Junfu
2016, 53(8):  1857-1866.  doi:10.7544/issn1000-1239.2016.20150058
To address the problem that the white pixels in secret images cannot be correctly recovered, which results from the OR operation used in existing region incrementing visual cryptography schemes, a definition of region incrementing visual cryptography based on the XOR operation is proposed for the first time. By iterating the random-grid-based (k,k) single-secret visual cryptography scheme and exploiting the algebraic properties of the group ({0,1}, XOR), a share generation algorithm for a (k,n) single-secret-sharing scheme using XOR is designed, and the secret sharing and recovery procedures for the region incrementing scheme are further proposed. For each original secret pixel s, according to its security level, s is reassigned to a randomly chosen qualified set Q in the sharing procedure, and then encoded by the proposed (k,n) single-secret-sharing scheme. The recovery procedure is the same as in previous schemes. The effectiveness is theoretically verified. Experimental results show that the present scheme not only incurs no pixel expansion, but also achieves perfect recovery of white pixels when all the shares are stacked.
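A minimal sketch of XOR-based (k,k) secret sharing with random grids, the building block the scheme iterates: k−1 shares are uniformly random bit grids and the last share is the secret XORed with all of them, so XOR-stacking all k shares recovers every pixel, white pixels included, exactly.

```python
import numpy as np

def share(secret_bits, k, rng):
    shares = [rng.integers(0, 2, secret_bits.shape) for _ in range(k - 1)]
    last = secret_bits.copy()
    for s in shares:
        last ^= s                      # fold each random grid into the last share
    return shares + [last]

def recover(shares):
    out = shares[0].copy()
    for s in shares[1:]:
        out ^= s                       # XOR-stacking inverts the sharing exactly
    return out

rng = np.random.default_rng(6)
secret = rng.integers(0, 2, (4, 4))    # 0 = white pixel, 1 = black pixel
shares = share(secret, k=3, rng=rng)
print("lossless recovery:", np.array_equal(recover(shares), secret))  # True
```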
Attribute-Based Encryption Scheme Resilient Against Continuous Auxiliary-Inputs Leakage
Ma Haiying, Zeng Guosun, Bao Zhihua, Chen Jianping, Wang Jinhua, Wang Zhanjun
2016, 53(8):  1867-1878.  doi:10.7544/issn1000-1239.2016.20140787
Regarding the leakage of secret keying material under side-channel attacks in attribute-based encryption, existing solutions only allow attackers to obtain length-bounded leakage on the secret key. First, we combine the continuous auxiliary-input leakage model with dual system encryption to achieve strong leakage resilience. Secondly, we carefully design the key generation to reduce the key size, and devise the first attribute-based encryption scheme that remains secure even if the attacker obtains continuous auxiliary-input leakage. Finally, our scheme is proved fully secure in the standard model under reasonable assumptions, even if the attacker learns leakage on the secret key from an auxiliary-input function. The scheme tolerates continuous and unbounded leakage of both the master key and the private keys, does not assume full erasure of old secret keys during key-update queries, and has a desirable composition feature. Compared with related solutions, it not only achieves the best leakage resilience, but also has shorter master keys and private keys.
Provably Secure Identity-Based Multi-Proxy Signature Scheme in Standard Model
Chen Ming, Yuan Shaoliang
2016, 53(8):  1879-1892.  doi:10.7544/issn1000-1239.2016.20150197
Multi-proxy signature schemes are quite useful tools when a signer needs to delegate his signing right to a group of proxy signers. There are two main types of formal security models for multi-proxy signatures, but each has deficiencies: one is complicated and does not model chosen-warrant attacks; the other gives an incomplete definition of the adversary. Meanwhile, there has so far been no provably secure identity-based multi-proxy signature scheme. In this paper, we give a formal security model for identity-based multi-proxy signature schemes and propose such a scheme. Our security model compensates for the deficiencies of existing models: it defines a more powerful adversary, formalizes the adversaries' behaviors, and adopts a simple, clear proof structure. The proposed identity-based multi-proxy signature scheme is based on the well-studied CDH (computational Diffie-Hellman) assumption and is proven existentially unforgeable against chosen message/warrant attacks in our security model. In addition, we point out three security flaws in a recently proposed identity-based multi-proxy signature scheme and its security model. Comparative analysis shows that the new security model is more complete, and the new multi-proxy signature scheme is a truly provably secure identity-based cryptosystem in the standard model.