ISSN 1000-1239 CN 11-1777/TP

Table of Contents

15 December 2012, Volume 49 Issue 12
ACR: An Ant-Colony-Based Routing in Delay Tolerant Networks
Yang Zhenguo, Huang Liusheng, Xiao Mingjun, Huang He, Zhang Yindong, and Zhu Youwen
2012, 49(12):  2501-2514. 
Due to frequent partitions and intermittent connections, routing is one of the most challenging problems in delay tolerant networks (DTNs), which can be applied in many domains such as interplanetary networks, mobile ad hoc networks, and networks in remote areas. Ant colony optimization (ACO), a probabilistic technique for finding an optimal path in a graph, is widely used in many other areas. Its advantages can be summarized as positive feedback, distributed computation, and intelligent optimization. To improve adaptation to dynamic topologies, this paper investigates an ant-colony-based routing algorithm that reduces delivery delay through intelligent, self-adapting optimization. We first model message delivery in DTNs, and then propose our ant-colony-based routing algorithm (ACR), which guides message delivery in both forwarding-based and replication-based schemes. To show the performance of our algorithm, we finally evaluate ACR on the publicly available Infocom and RollerNet traces against MED, SimBet, Spray & Wait, and EBR. The simulation results show that forwarding-based ACR achieves at least 25.8% shorter delivery delay than other forwarding-based algorithms, and replication-based ACR achieves at least 22.5% shorter delivery delay.
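The pheromone mechanics that any ACO-style router builds on can be sketched as follows. This is a generic illustration, not the ACR algorithm itself: the evaporation rate `rho`, the `deposit` amount, and proportional next-hop selection are standard ACO assumptions.

```python
import random

def update_pheromone(pheromone, path, rho=0.1, deposit=1.0):
    """Evaporate all pheromone, then reinforce links on a delivered path."""
    for link in pheromone:
        pheromone[link] *= (1.0 - rho)
    for link in zip(path, path[1:]):
        pheromone[link] = pheromone.get(link, 0.0) + deposit
    return pheromone

def choose_next_hop(pheromone, current, neighbors):
    """Pick a neighbor with probability proportional to link pheromone."""
    weights = [pheromone.get((current, n), 1e-6) for n in neighbors]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r <= acc:
            return n
    return neighbors[-1]
```

Positive feedback arises because delivered paths receive deposits while unused links decay, so later "ants" are biased toward links that recently worked.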
Cost-Optimizing Adaptive Location Service Protocol in MANET
Zong Ming, Wang Xiaodong, and Zhou Xingming
2012, 49(12):  2515-2528. 
Location service protocols in mobile ad hoc networks maintain nodes' positions, and the performance of upper-layer protocols depends on the performance of the location service. Protocol cost, defined as the number of one-hop location querying packages and location updating packages, is one of the most important performance indices of a location service protocol and determines its scalability and other performance. Without considering location requests, existing location service protocols optimize protocol cost only for networks with specific communication features. To optimize location service protocol cost in networks with different communication features, this paper proposes a minimum cost location updating tree, which optimizes the protocol cost of multi-home-region location service protocols according to the number of location queries to each home region and the protocol package transmission costs. It is then proved that the minimum cost location updating tree problem is NP-hard. Based on this, the paper proposes a demand-driven multi-home-region location service protocol. This protocol uses a real-time algorithm to compute the minimum cost location updating tree just before location updating, and then forwards location updating packages according to the tree. The experimental results show that the proposed protocol decreases the protocol cost and is more scalable.
Study on Network Cascading Failures Based on Load-Capacity Model
Guo Chi, Wang Lina, Li Yu, and Zhou Furong
2012, 49(12):  2529-2538. 
Cascading failures occur in computer networks when traffic to or between large sections of the network is severely impaired or halted by failing or disconnected hardware or software. "Load-capacity" models are usually used for solving network traffic problems and exploring the mechanisms of cascading failures. Centering on cascading failures in complex networks, the following work is done. Firstly, the effect of network traffic load on cascading failures is analyzed. It indicates that the communication activity among network nodes presents a self-organized criticality phenomenon, and that the influence on network robustness of traffic changes at originally inactive nodes is much greater than that of changes at active ones. Secondly, under economic and technological constraints, a cost factor is introduced to model the relationship between network capacity and load, so as to reveal their constraint relationship. Finally, centering on the issue of how to allocate limited redundant resources to a network with a specific structure in order to improve its robustness, an evolutionary algorithm is proposed to search for an optimized capacity-allocation strategy, which makes the network achieve optimal robustness with the same resources. Experiments show that our algorithm finds better capacity allocations than linear or preference-attached strategies.
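A common load-capacity formulation (in the style of Motter and Lai) sets each node's capacity proportional to its initial load through a tolerance factor, which plays the role of the cost factor above. The sketch below is an assumption-laden simplification, not the paper's model: it sheds a failed node's load evenly onto surviving neighbors and propagates overloads.

```python
def cascade(loads, alpha, graph, failed_node):
    """Load-capacity cascade sketch: capacity C_v = (1 + alpha) * L_v.
    A failed node sheds its load evenly onto surviving neighbors;
    any neighbor pushed over capacity fails in turn.
    Returns the set of surviving nodes."""
    capacity = {v: (1.0 + alpha) * l for v, l in loads.items()}
    loads = dict(loads)                       # work on a copy
    alive = set(loads) - {failed_node}
    queue = [failed_node]
    while queue:
        v = queue.pop()
        shed, loads[v] = loads[v], 0.0
        nbrs = [u for u in graph[v] if u in alive]
        for u in nbrs:
            loads[u] += shed / len(nbrs)
            if loads[u] > capacity[u]:        # overload: u fails too
                alive.discard(u)
                queue.append(u)
    return alive
```

A larger tolerance `alpha` buys robustness at higher capacity cost, which is exactly the trade-off the evolutionary search above optimizes under a fixed resource budget.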
A Game-Based Interaction Scheme among Mobile Nodes in Wireless Access Networks
Gui Jinsong, Chen Zhigang, and Deng Xiaoheng
2012, 49(12):  2539-2548. 
In order to curb the cheating behavior of mobile nodes in wireless access networks when they relay data for each other, a forwarding policy game scheme is presented in this paper. It consists of a method based on the enforcement of punishments by neighboring nodes and a method based on the enforcement of punishments by the system. The scheme makes good use of the data flow direction in wireless access networks, takes account of rational nodes' contribution to the system and their desire for future return, and suppresses selfish nodes' cheating behavior by reducing their future payoff expectations and exploiting their fear of punishment. The former method can restrain cheating behavior with a smaller punishment parameter, but it is difficult to successfully punish cheating nodes when they frequently replace their neighboring nodes. The latter method requires a larger punishment parameter to completely remove the motivation to cheat, but it leaves cheating nodes no way to escape punishment. In simulation and analysis, reasonable parameter values are obtained by integrating the two methods, which both moderately punish the cheating nodes and effectively reduce the cheating occurrence ratio while improving the successful packet delivery ratio.
Pay-As-You-Go Web Services Discovery
Pan Ying and Tang Yong
2012, 49(12):  2549-2558. 
With the increasing popularity of Web services, extensive efforts have been made to assist in Web service discovery. Web services are usually described by WSDL and advertised in UDDI registries. UDDI provides limited keyword-based search, which is not powerful enough. To address this problem, information retrieval techniques have been exploited to assess the similarity between two service descriptions. However, Web service description languages are schema-first, and such systems require a hard up-front investment before offering powerful Web service discovery functionality; that is, current research does not study how to discover Web services in a pay-as-you-go fashion. In this paper, a framework based on dataspace techniques is proposed to discover Web services in a pay-as-you-go fashion. A loosely structured data model is presented to describe Web services and the relationships among them, and ways to lazily compute and query this model are discussed. Furthermore, similarity is defined as intensional edges in the data model; the service information used to measure the degree of similarity can be obtained lazily, and thus the similarity can be computed in a pay-as-you-go fashion. An algorithm to support similarity-based service discovery is also presented along with a proof of its correctness. Finally, the validity of the framework is demonstrated by experiments.
Algorithms of Spanning Tree Based on the Stability Probability and Contribution Link of Nodes for Application Layer Multicast
Huo Lin, Li Deshun, and Tan Yinglu
2012, 49(12):  2559-2567. 
Application layer multicast (ALM), which relies on independent terminal hosts to relay multicast data, has emerged as a viable solution to most problems associated with IP multicast. However, ALM faces new challenges, for instance, the stability of the ALM system and the minimum delay spanning tree of ALM. In this paper, the research focuses on algorithms for high-stability, minimum-delay spanning trees for application layer multicast. Firstly, the paper analyzes the factors influencing ALM stability and delay, and attributes them to the stability probability of terminal hosts, the degree constraints of terminal hosts, and the unicast cost between pairs of terminal hosts. The problem of the high-stability, minimum-delay spanning tree is modeled as a stability-probability-based, degree-constrained, edge-weighted spanning tree for application layer multicast (T-SDE). We propose a theorem on stability degree under T-SDE, and show that T-SDE is NP-hard. Secondly, by analyzing the contribution of nodes to application layer multicast in stability and delay, three spanning tree algorithms are given based on the stability probability and contribution links of nodes. Finally, experiments show that the spanning trees of these algorithms have a great advantage in average delay, maximum delay, and stability degree.
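Since T-SDE is NP-hard, practical algorithms are heuristic. The sketch below is a hypothetical greedy illustration of the problem's ingredients (stability probability, degree constraint, unicast delay), not one of the paper's three algorithms: it repeatedly attaches an out-of-tree host to the most stable in-tree parent with spare degree, breaking ties by lowest unicast delay.

```python
def build_multicast_tree(root, hosts, stability, delay, max_degree):
    """Greedy degree-constrained tree sketch.
    stability: host -> probability of staying online.
    delay: (parent, child) -> unicast cost.
    Returns child -> parent (root maps to None)."""
    tree = {root: None}
    degree = {root: 0}
    for _ in hosts:
        best = None
        for child in hosts:
            if child in tree:
                continue
            for parent in tree:
                if degree[parent] >= max_degree:
                    continue                      # degree constraint
                key = (-stability[parent], delay[(parent, child)])
                if best is None or key < best[0]:
                    best = (key, parent, child)
        if best is None:
            break                                 # no parent with spare degree
        _, parent, child = best
        tree[child] = parent
        degree[parent] += 1
        degree[child] = 0
    return tree
```

Preferring stable parents keeps unstable hosts near the leaves, where their departure disconnects the fewest descendants.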
A Highly Efficient Boolean Polynomial Representation
Li Xin, Lin Dongdai, and Xu Lin
2012, 49(12):  2568-2574. 
Solving Boolean equations has important practical significance for cryptanalysis. However, the contradiction between limited computer storage space and the growing memory demand of existing solving algorithms is the major bottleneck to further progress. This paper presents a highly efficient Boolean polynomial representation, BanYan. BanYan is designed for Boolean equation solving algorithms based on leading-term elimination, such as F4, F5, and XL. The essence of BanYan is that it stores only the information about how intermediate polynomials are generated, not the intermediate polynomials themselves. The original polynomials in the polynomial ring for Gröbner basis computation are called root polynomials. New polynomials are generated from root polynomials by term multiplication and polynomial addition, so we only store the corresponding terms and polynomials connected with the root polynomials. As the intermediate polynomials grow, the whole storage structure comes to resemble a tree, which is why we call this method BanYan. Although the scale of intermediate polynomials grows exponentially in the process of Gröbner basis computation, the generation information remains much simpler, so BanYan can greatly reduce the space requirement and thereby improve solving ability. Analysis and experiments show that, compared with traditional term-based representations, the space requirement of BanYan is reduced by a factor of l in both the average and worst cases, where l is the average length of the intermediate polynomials.
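The store-the-recipe idea can be sketched as follows; this is a minimal assumed reconstruction, not BanYan's actual data structure. A node records only its generating operation (a root polynomial, a parent multiplied by a term, or the sum of two parents), and the expanded polynomial is recomputed on demand. Boolean polynomials are modeled as sets of monomials over GF(2), each monomial a frozenset of variables.

```python
class BanYanNode:
    """Lazy Boolean polynomial: store how it is generated, not its expansion.
    op is 'root' (args: set of monomials), 'mul' (args: (parent, term)),
    or 'add' (args: (parent_a, parent_b))."""
    def __init__(self, op, args):
        self.op, self.args = op, args

    def expand(self):
        if self.op == "root":
            return set(self.args)
        if self.op == "mul":
            parent, term = self.args
            out = set()
            for mono in parent.expand():
                out ^= {mono | term}   # x*x = x; XOR keeps GF(2) parity
            return out
        a, b = self.args               # 'add': addition over GF(2) is XOR
        return a.expand() ^ b.expand()
```

Storage per node is constant regardless of how large the expanded polynomial would be, which is the source of the space savings the abstract claims.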
Practical Multi-Coupon Schemes with Strong Unsplittability
Liu Xin and Xu Qiuliang
2012, 49(12):  2575-2590. 
So far, one main obstacle in constructing multi-coupon schemes has been devising an efficient issue protocol in which the size of the multi-coupon can be chosen freely and the complexity of the resulting protocol does not depend on the size of the multi-coupon. Another obstacle is providing efficient and flexible mechanisms for the redemption protocol. This paper overcomes these problems by proposing two revised schemes with improved efficiency and functionality. In order to specify the size of multi-coupons flexibly, the new schemes employ the discrete-logarithm-based range proof by Chaabouni et al. and the knowledge proof of committed values by Canard et al., respectively. In addition, the computational complexity of the redemption protocols is optimized by making use of the batch zero-knowledge proof and verification by Peng et al. It can be proved that the new schemes are secure in Nguyen's security model for multi-coupon schemes. Moreover, the new schemes for the first time achieve all the desirable features required in applications, i.e., concurrent issuing, compact storage, batch redeeming, and support for a coupon's object and its expiration date. Furthermore, performance comparison shows that their communication and computation overheads are significantly lower than those of the previous two schemes with strong unsplittability.
Symbolic Algorithmic Verification of Generalized Noninference
Zhou Conghua, Wu Hailing, and Ju Shiguang
2012, 49(12):  2591-2602. 
In multi-level security systems the leakage of confidential data is, in essence, the illegal flow of information. Generalized noninference characterizes the legal information flow between subjects with different security levels. Before a security system is deployed, verifying whether it satisfies generalized noninference can eliminate various hidden data leakages and protect the confidentiality of data. At present the traditional verification approach for generalized noninference, i.e., "unwinding", only checks a condition that is sufficient but not necessary for generalized noninference to hold; the traditional approach is therefore not complete. A complete verification approach for generalized noninference is proposed, which verifies generalized noninference by searching for counterexamples step by step. In order to guarantee the completeness of the approach, a double construction of the finite state transition system is given, and based on graph-structure theory an approximate computation of the shortest counterexample is given. Further, to increase the efficiency of the approach, the search for counterexamples and the computation of the threshold are reduced to the quantified Boolean formula satisfiability problem, and a symbolic verification procedure is established with a quantified Boolean formula solver. Finally, a case study of the disk-arm covert channel explains how to apply the approach to search for illegal information flow.
Personalized l-Diversity Algorithm for Multiple Sensitive Attributes Based on Minimum Selected Degree First
Yang Jing and Wang Bo
2012, 49(12):  2603-2610. 
Privacy preserving in data publishing (PPDP) has always been an important issue in the fields of data mining and information security. So far, most research on privacy preserving technology is limited to a single sensitive attribute, but much real-life data contains multiple sensitive attributes. Meanwhile, more and more researchers are paying attention to personalized service in privacy preserving as personalization requirements continue to grow. To extend privacy preserving technology beyond a single sensitive attribute and to satisfy the requirement of personalized service, personalized privacy preserving approaches for multiple sensitive attributes in data publishing are studied. Based on the single-sensitive-attribute l-diversity principle, a personalized multiple-sensitive-attribute l-diversity model is defined by introducing a personalized customization scheme based on domain hierarchy partitions. Meanwhile, a multiple sensitive personalized l-diversity algorithm based on minimum selected degree first (MSFMPL-diversity) is presented. The experimental results show that the proposed method not only satisfies individuals' personalized privacy requirements, but also protects data privacy effectively and reduces the information hiding rate, which ensures the usability of the published data.
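The property being enforced can be illustrated with a small checker; this is a sketch of the distinct-values form of l-diversity extended to several sensitive attributes with a per-attribute personalized l, and is an assumption rather than the MSFMPL-diversity algorithm itself.

```python
def is_l_diverse(eq_class, l_values):
    """Check that each sensitive attribute column in an equivalence class
    (a list of record dicts) carries at least its required number of
    distinct values. l_values maps attribute name -> personalized l."""
    for attr, l in l_values.items():
        distinct = {record[attr] for record in eq_class}
        if len(distinct) < l:
            return False
    return True
```

A publishing algorithm would generalize quasi-identifiers until every equivalence class passes this test, with each attribute's l drawn from the user's personalized customization.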
An Enhanced Dynamic Password Generation Algorithm and Resynchronization Scheme
Liu Xiao, Liu Weiran, and Li Weiyu
2012, 49(12):  2611-2618. 
Based on the HOTP authentication framework, an enhanced dynamic password generation algorithm, EOTP, is designed, which combines a key generation algorithm, the EHMAC algorithm, and a dynamic truncation function. At the same time, a detailed authentication protocol based on the EOTP algorithm is proposed to meet the basic requirements of an authentication system. Besides, in order to solve the problem of the token and server stepping out of synchronization, a resynchronization scheme based on a window mechanism is designed to provide both inside-window and outside-window resynchronization. Finally, the security and performance of the proposed algorithm and protocol are analyzed. Results show that the algorithm has fast computing speed and high security, and is easy to implement with token or smart card hardware. The protocol effectively resists attacks such as brute-force attack, password theft, dictionary attack, and interception/replay attack, and has very high security.
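EOTP itself is not specified in this abstract, but the baseline it enhances is standard HOTP (RFC 4226): an HMAC-SHA1 over a counter, dynamically truncated to a short decimal code. The resynchronization helper below illustrates the window idea; its `window` parameter and return convention are assumptions, not the paper's scheme.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 of the 8-byte counter, then
    dynamic truncation to a fixed number of decimal digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def resynchronize(secret: bytes, received: str, server_counter: int,
                  window: int = 10):
    """Window-based resync sketch: accept a code generated anywhere in the
    look-ahead window and return the new synchronized counter, else None."""
    for c in range(server_counter, server_counter + window):
        if hmac.compare_digest(hotp(secret, c), received):
            return c + 1
    return None
```

With the RFC 4226 test secret (the ASCII digits "12345678901234567890"), counters 0 and 1 yield the published vectors 755224 and 287082, which makes the implementation easy to sanity-check.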
Design of Key Exchange Protocol Based on Short Group Signature
Sun Yu, Han Qingtong, and Liu Jianwei
2012, 49(12):  2619-2622. 
A new key exchange protocol using linear encryption based on short group signatures is proposed in this paper. Typically, a short group signature system includes six steps: setup, join, sign, verify, open, and revocation. It can provide conditional privacy to group members. A key exchange phase is added to the short group signature system so that the system can offer confidentiality after the new phase. The proposed protocol implements key exchange between the TA (trust authority) and a group member. With the help of the TA, the seed of a session key can be exchanged between two group members according to this protocol. After the key exchange phase, subsequent communication can be encrypted with a symmetric encryption algorithm using the exchanged key. No additional parameters are introduced into the short group signature system by the proposed protocol, which reduces the difficulty of system management. X.509 certificates or a PKI are unnecessary in the proposed protocol, which preserves the conditional privacy of the short group signature. Keys are exchanged in only two communications, which reduces network delay and congestion. Security analysis proves that the proposed protocol resists tampering, impersonation, replay, and man-in-the-middle attacks. It provides confidentiality to the short group signature system, making it more suitable for VANETs, trusted computing, and cloud computing.
A Rule Learning Algorithm for Event Detection Based on Semantic Trajectory
Jiang Guang, Wang Xiaofeng, and Shi Zhongzhi
2012, 49(12):  2623-2630. 
Event detection from video is important for video retrieval and semantic understanding. Trajectories of moving objects in video not only record motion information, but also reflect the motivations of the moving objects, and are closely related to events. However, a raw trajectory is only the geographic information of an object without any domain knowledge. Meanwhile, a semantic gap exists between the extracted low-level features and the corresponding high-level concepts in content-based video analysis. Thus, it is critical to combine the raw trajectory with its semantic information. In this regard, extracting events using semantic trajectories analyzed from video is studied: domain knowledge is utilized to label interest areas in the video, and a new semantic trajectory representation is proposed which includes the interest areas where the object stops and passes by. Moreover, the original trajectory of an object of interest can be converted into a corresponding semantic trajectory, so a video event can be represented as a regular expression over the relationships between objects and interest areas. Inspired by FOIL (first order inductive learner), an induction-based event rule learning algorithm is proposed, and the regular expression is shown to be easier to learn than the traditional well-formed formula in first-order predicate logic. Finally, experimental results demonstrate the performance of the approach.
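The raw-to-semantic conversion and the regular-expression view of events can be sketched as follows. The rectangular interest areas, single-letter labels, and example pattern are hypothetical illustrations, not the paper's representation.

```python
import re

def to_semantic_trajectory(points, areas):
    """Map raw (x, y) positions to a string of interest-area labels.
    areas: label -> (x0, y0, x1, y1) axis-aligned rectangle.
    Consecutive duplicates are collapsed (staying in an area once)."""
    traj = []
    for x, y in points:
        for label, (x0, y0, x1, y1) in areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                if not traj or traj[-1] != label:
                    traj.append(label)
    return "".join(traj)

def detect_event(trajectory, pattern):
    """An event is a regular expression over interest-area labels."""
    return re.search(pattern, trajectory) is not None
```

Once trajectories are strings, an event such as "entered, visited the shop one or more times, then returned to the entrance" is just the pattern `ES+E`, and rule learning reduces to finding such patterns from labeled examples.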
Ensemble Learning of ELM Regressors Based on l1-regularization
Wang Quan and Chen Songcan
2012, 49(12):  2631-2637. 
Recently the ELM (extreme learning machine) has been proposed for single-hidden-layer feedforward neural networks (SLFNs); it not only provides good generalization performance, but also maintains extremely fast learning speed. However, choosing weights randomly inevitably leads to unstable generalization performance of ELM. So SERELM (sparse ensemble regressors of ELM), which sparsely ensembles several unstable ELM regressors, is proposed to remedy this deficiency. On one hand, the experimental results on standard time series datasets show that SERELM not only provides better generalization performance than a single ELM regressor, but also outperforms two related ensemble methods. On the other hand, it is generally accepted that measuring diversity is very important to ensemble learning. Many researchers focus on diversity, but how to define and measure it remains an open problem: many diversity measures have been proposed, and none is generally accepted. The proposed SERELM circumvents this dilemma by l1-norm regularization, simply abandoning the measurement of diversity. The experimental results show that: 1) l1-norm regularization causes relatively accurate ELM regressors to be automatically assigned relatively large weights; 2) negative correlation is largely ineffective for measuring diversity in regression applications.
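The l1-regularized combination step can be sketched independently of the base learners: given each regressor's predictions, fit ensemble weights minimizing squared error plus an l1 penalty, which drives inaccurate regressors to exactly zero weight. The proximal-gradient (ISTA) solver below is a generic lasso sketch, not SERELM's solver; `lam`, `lr`, and `iters` are illustrative settings.

```python
def sparse_ensemble_weights(preds, y, lam=0.1, lr=0.01, iters=2000):
    """ISTA for min_w 0.5*||P w - y||^2 + lam*||w||_1.
    preds: list of per-regressor prediction lists; y: targets."""
    m, n = len(preds), len(y)
    w = [0.0] * m
    for _ in range(iters):
        yhat = [sum(w[j] * preds[j][i] for j in range(m)) for i in range(n)]
        grad = [sum((yhat[i] - y[i]) * preds[j][i] for i in range(n))
                for j in range(m)]
        for j in range(m):
            v = w[j] - lr * grad[j]
            # soft-thresholding: the proximal step for the l1 penalty
            w[j] = max(abs(v) - lr * lam, 0.0) * (1.0 if v >= 0 else -1.0)
    return w
```

The soft-threshold zeroes any weight whose gradient pull is weaker than the penalty, which is exactly the sparsity mechanism that lets SERELM skip explicit diversity measurement.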
An AP Clustering Algorithm of Fine-Grain Parallelism Based on Improved Attribute Reduction
Zhu Hong, Ding Shifei, and Xu Xinzheng
2012, 49(12):  2638-2644. 
Affinity propagation (AP) clustering simultaneously considers all data points as potential exemplars. It takes similarities between pairs of data points as input measures, and clusters emerge gradually during the message-passing procedure. AP is an efficient and fast clustering algorithm for large datasets compared with existing clustering algorithms, but its cost still grows with the number of attributes; therefore attribute reduction is important for AP. Meanwhile, fine-grain parallelism is emphasized in the design of massively parallel computers to acquire higher performance. In this paper, an AP clustering algorithm based on improved attribute reduction and fine-grain parallelism (IRPAP) is proposed. Firstly, granularity is introduced into parallel computing and the granularity principle is applied. Secondly, the data set is preprocessed by the improved attribute reduction algorithm, in which the elements of the discernibility matrix are calculated and selected in parallel, in order to reduce time and space complexity. Finally, the data set is clustered by a parallel AP algorithm. The whole task can be divided into multiple threads to be processed simultaneously. Experimental results show that the IRPAP algorithm is more efficient than the AP algorithm for large data set clustering.
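The message-passing core of AP can be sketched serially as follows (the parallel version distributes exactly these per-pair updates across threads). This is the standard responsibility/availability recursion of Frey and Dueck with damping; the preference values and iteration count in the usage are assumptions.

```python
def affinity_propagation(S, damping=0.5, iters=200):
    """Serial AP sketch. S[i][k]: similarity of i to candidate exemplar k;
    S[k][k] is the preference. Returns each point's exemplar index."""
    n = len(S)
    R = [[0.0] * n for _ in range(n)]   # responsibilities r(i,k)
    A = [[0.0] * n for _ in range(n)]   # availabilities a(i,k)
    for _ in range(iters):
        for i in range(n):              # r(i,k) = s(i,k) - max_{k'!=k}(a+s)
            for k in range(n):
                best = max(A[i][kk] + S[i][kk] for kk in range(n) if kk != k)
                R[i][k] = damping * R[i][k] + (1 - damping) * (S[i][k] - best)
        for i in range(n):              # availability update
            for k in range(n):
                if i == k:
                    new = sum(max(0.0, R[ii][k]) for ii in range(n) if ii != k)
                else:
                    new = min(0.0, R[k][k] + sum(max(0.0, R[ii][k])
                              for ii in range(n) if ii not in (i, k)))
                A[i][k] = damping * A[i][k] + (1 - damping) * new
    return [max(range(n), key=lambda k: A[i][k] + R[i][k]) for i in range(n)]
```

Every (i, k) cell of R and A is updated independently within a sweep, which is what makes the fine-grain thread decomposition above natural.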
A New Algorithm on Lagged Correlation Analysis Between Time Series: TPFP
Lin Ziyu, Jiang Yi, Lai Yongxuan, and Lin Chen
2012, 49(12):  2645-2655. 
Lagged correlation analysis plays an important role in data mining on time series, and is used extensively in real life, such as in weather forecasting, stock market analysis, network analysis, moving object tracking, and sensor monitoring. Lagged correlation analysis finds time series that are correlated with a lag. Based on extensive experiments, three phenomena in lagged correlation analysis between time series are identified here: continuous distribution of lags, lag mutation, and the distribution feature of mutation amplitude. Existing research achieves desirable performance when the lag is small, but sometimes incurs large error when the lag is large. Furthermore, available methods cannot deal with lag mutation, in which the lag changes suddenly at a certain point on the lag correlation curve and differs greatly from before; this causes many difficulties for existing methods. Based on the above three phenomena, three-points forecast-based probing (TPFP) is proposed here to overcome the disadvantages of the existing methods: it achieves small error when the lag is large, and it also performs well in the presence of lag mutation. Extensive experiments show that TPFP achieves better performance than available methods.
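The quantity being estimated can be made concrete with a brute-force baseline: the Pearson correlation between one series and a lag-shifted copy of the other, maximized over candidate lags. TPFP's contribution is to probe a few forecast points instead of scanning every lag; the exhaustive scan below is only the reference definition (non-negative lags for simplicity).

```python
def lagged_correlation(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag], lag >= 0,
    computed over the overlapping window."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = min(len(xs), len(ys))
    xs, ys = xs[:n], ys[:n]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

def best_lag(x, y, max_lag):
    """Exhaustive baseline: the lag maximizing the correlation."""
    return max(range(max_lag + 1), key=lambda l: lagged_correlation(x, y, l))
```

The scan costs O(max_lag * n) per pair of series, which is exactly the expense that probing-based methods such as TPFP aim to avoid at large lags.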
A Categorization Approach Based on Adapted Decision Tree Algorithm for Web Databases Query Results
Meng Xiangfu, Ma Zongmin, Zhang Xiaoyan, and Wang Xing
2012, 49(12):  2656-2670. 
To deal with the problem that too many results are returned from a Web database in response to a user query, this paper proposes a novel approach based on an adapted decision tree algorithm for automatically categorizing Web database query results. The query history of all users in the system is analyzed offline and semantically similar queries are merged into the same cluster. Next, a set of tuple clusters over the original data is generated in accordance with the query clusters, each tuple cluster corresponding to one type of user preference. When a query arrives, based on the tuple clusters generated offline, a labeled and leveled categorization tree is constructed using the adapted decision tree algorithm, which enables the user to easily select and locate the information he or she needs. Experimental results demonstrate that the categorization approach has lower navigational cost and better categorization effectiveness, and can effectively meet the personalized query needs of different types of users.
A Model Driven Development Method for Pen-Based Form Interface Software
Fan Yinting, Teng Dongxing, Ma Cuixia, Yang Haiyan, Dai Guozhong, and Wang Hongan
2012, 49(12):  2671-2685. 
Most of the existing pen-based form applications are developed with traditional software development methods. Whether structured development, rapid prototyping, or extreme programming, these methods in essence emphasize the documents generated in each development phase. The documents are independent of each other, so the user requirements they capture tend to be inconsistent with the system implementation model. These methods also require a long life cycle and high cost, and they adapt poorly to requirement changes. Furthermore, pen-based interface software, with its characteristics of intelligence and ambiguity, is very sophisticated. To address these problems, a model driven development method for pen-based form interface software is presented. Firstly, a pen-based form user interface model is proposed. Secondly, the development framework for pen-based interface software is presented and its development model is described, with the structures of the models and their relationships introduced in detail. Finally, the modeling methods and the automatic system generation methods for pen-based interface software are put forward based on the framework. An example is described and evaluated by experiments. The results demonstrate that the method can help end users develop pen-based form interface software.
An Active Contour Model Based on Region Saliency for Image Segmentation
Bai Xuefei, Wang Wenjian, and Liang Jiye
2012, 49(12):  2686-2695. 
Image segmentation refers to the process of partitioning an image into non-overlapping meaningful regions, and it is vital for higher-level image processing such as image analysis and understanding. During the past few decades, there has been substantial progress in the field of image segmentation and its applications. Recently, segmentation algorithms based on active contours have received wide attention from researchers at home and abroad due to their variable forms, flexible structure, and excellent performance. However, most available active contour models lack an adaptive initial contour and prior information about the target region. In this paper, an active contour model for image segmentation based on a visual saliency detection mechanism is proposed. Firstly, prior shape information of the target objects in input images, used to describe the initial curve adaptively, is extracted with the visual saliency detection method in order to reduce the influence of the initial contour position. Furthermore, the proposed model can segment images adaptively and automatically, and the segmentation results accord with the properties of human visual perception. Experimental results demonstrate that the proposed model achieves better segmentation results than some traditional active contour models, while requiring fewer iterations and being much more computationally efficient.
A Self-Adapting Bilateral Total Variation Technology for Image Super-Resolution Reconstruction
Yang Xin, Zhou Dake, and Fei Shumin
2012, 49(12):  2696-2701. 
Super-resolution image reconstruction has recently drawn considerable attention within the research community. For some special-purpose imaging devices such as medical imaging, remote sensing, and video capture, the acquired images often cannot achieve a higher resolution because of the limitations of the imaging mechanism and imaging sensor. Super-resolution image reconstruction methods attempt to create a single high-resolution, high-quality image from multiple low-resolution observations (or a video sequence) degraded by warping, blurring, noise, and aliasing. So far, existing super-resolution methods are all confronted with problems of slow convergence and expensive computation. To satisfy the requirements of real-world applications, an effective super-resolution reconstruction algorithm is built upon the MAP framework. In the proposed algorithm, the concept of a self-adapting weight coefficient matrix (SWCM) in super-resolution is introduced, and a new super-resolution method based on self-adapting bilateral total variation is given. The method takes into account the individual characteristics of each LR image. It can not only sharpen edges but also help suppress noise in the estimated HR image. A super-resolution reconstruction model and iterative scheme are developed to obtain a more accurate image. Experimental results using both real and synthetic data show the effectiveness of the proposed algorithm.
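The bilateral total variation prior underlying such methods (in the style of Farsiu et al.) penalizes the l1 difference between an image and its shifted copies, with weights decaying geometrically in the shift distance. The sketch below evaluates that regularization cost on a plain 2D list; the window size `p` and decay `alpha` are illustrative defaults, and the self-adapting weighting of the paper is not modeled.

```python
def bilateral_tv(img, p=2, alpha=0.7):
    """Bilateral TV cost: sum over shifts (l, m), |l|+|m| > 0, up to p,
    of alpha^(|l|+|m|) * ||X - shift(X, l, m)||_1 (borders clipped)."""
    h, w = len(img), len(img[0])
    cost = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            weight = alpha ** (abs(l) + abs(m))
            for i in range(h):
                for j in range(w):
                    ii, jj = i + l, j + m
                    if 0 <= ii < h and 0 <= jj < w:
                        cost += weight * abs(img[i][j] - img[ii][jj])
    return cost
```

In a MAP iteration this term's (sub)gradient is added to the data-fidelity gradient; the l1 form preserves edges (a sharp step costs the same however steep), while random noise, which disagrees with many shifted copies, is smoothed away.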
OSOSM: Operating System Object Semantics Model and Formal Verification
Qian Zhenjiang, Liu Wei, and Huang Hao
2012, 49(12):  2702-2712. 
The complexity of the operating system makes its security problems increasingly prominent. Many research works verify the correctness of existing operating systems using formal methods, mainly focusing on verifying the implementation of functions at the code level with programming formal logic. In this paper, from the viewpoint of system design, we propose an operating system object semantics model (OSOSM) based on higher-order logic and type theory. OSOSM adopts a hierarchical structure, including an essential effectiveness layer, an implementation layer, and an optimization layer. OSOSM abstracts the execution subjects and resources as objects of the operating system, and establishes the domain of discourse for the operating system. We denote the state of the operating system by a mapping from the set of operating system object variables to the domain of discourse. OSOSM describes the semantics of system calls and the security properties with predicate formulae. This paper also elaborates the formal methods for verifying that the operating system maintains the security policies and properties during execution. Finally, we use the self-implemented verified trusted operating system (VTOS) as an example to illustrate the semantic correctness of OSOSM, verify the consistency between the design and safety requirements with the Isabelle theorem prover, and show that VTOS has the expected security properties.
An Algebraic-Mapping-Based Phase-Change Memory Matrix Wear-Leveling Method
Du Yuyang, Yu Hongliang, and Zheng Weimin
2012, 49(12):  2713-2720. 
Phase-change memory (PCM) is an emerging memory technology. PCM offers many advantages over traditional DRAM, and thus has the potential to be the next-generation main memory in computer systems. One of the hurdles to its use is the limited number of writes each storage cell endures, and the non-uniformity of memory accesses in typical workloads makes this situation worse. Wear-leveling is therefore employed to map logical addresses to physical addresses and remap them over time, distributing writes among all cells to prevent some cells from wearing out sooner than others. This paper proposes an algebraic-mapping-based matrix wear-leveling method. Our method views the storage cells as a matrix and levels the writes along rows and columns in a two-dimensional scheme. Every logical address is simultaneously remapped in a column and a row, and thus can be efficiently remapped to any physical address. The method can extend the lifetime of PCM both under normal applications and under malicious attacks, while incurring very little write overhead.
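The algebraic flavor of such a mapping can be sketched with modular shifts: view the address space as a rows-by-cols matrix and rotate the row and column coordinates independently, so the map is a cheap closed-form bijection needing no translation table. The scalar shifts `dr` and `dc` here are an illustrative simplification of the paper's two-dimensional scheme, not its exact algebra.

```python
def remap(addr, rows, cols, dr, dc):
    """Map a logical address to a physical one by rotating its matrix
    coordinates: row by dr, column by dc (both modular, so bijective)."""
    r, c = divmod(addr, cols)
    return ((r + dr) % rows) * cols + ((c + dc) % cols)
```

Because the map is a permutation for every (dr, dc), periodically advancing the shifts walks each hot logical address across many physical cells, and the inverse map (subtract the shifts) is equally cheap for reads.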