ISSN 1000-1239 CN 11-1777/TP

Table of Contents

01 September 2014, Volume 51 Issue 9
A Novel Regularization Method Based on Convolution Neural Network
Lü Guohao, Luo Siwei, Huang Yaping, Jiang Xinlan
2014, 51(9):  1891-1900.  doi:10.7544/issn1000-1239.2014.20140266
Regularization methods are widely used in solving inverse problems, and an accurate regularization model plays the most important part in solving them. The energy constraints should differ for different types of images and for different parts of the same image, but the traditional L1 and L2 models used in image restoration are both based on a single prior assumption. In this paper, to address the defects of the single prior assumption in traditional regularization models, a novel regularization method based on a convolutional neural network is proposed and applied to image restoration, so that image restoration can in part be regarded as a classification problem. In this method, the image is partitioned into several blocks, and the convolutional neural network is used to extract and classify the features of the sub-block images; then different forms of prior regularization constraints are adopted according to the different features of the sub-blocks, so the regularization method is no longer limited to a single prior assumption. Experiments show that the image restoration results of the regularization method based on a convolutional neural network are superior to those of traditional regularization models with a single prior assumption. The proposed method can thus better restore images, maintain the edge and texture characteristics of the image, and has lower computational cost.
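To make the block-wise idea concrete, here is a minimal sketch in Python: the image is split into fixed-size blocks, a classifier picks a prior per block, and the matching regularizer is applied. The variance-based classifier and the two toy restorers below are stand-ins for the paper's trained CNN and its L1/L2-regularized restoration; every name and threshold is hypothetical.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def classify_block(block):
        # Stand-in for the paper's CNN classifier: label a patch by its variance.
        return "texture" if block.var() > 0.01 else "smooth"

    def restore_block(block, kind):
        if kind == "smooth":
            return gaussian_filter(block, sigma=1.0)   # L2-style prior: mild smoothing
        med = np.median(block)                         # L1-style prior: soft-threshold
        d = block - med                                #   small deviations, keep edges
        return med + np.sign(d) * np.maximum(np.abs(d) - 0.02, 0.0)

    def blockwise_restore(img, b=8):
        out = np.empty_like(img)
        for i in range(0, img.shape[0], b):
            for j in range(0, img.shape[1], b):
                blk = img[i:i+b, j:j+b]
                out[i:i+b, j:j+b] = restore_block(blk, classify_block(blk))
        return out

    noisy = 0.5 + 0.1 * np.random.rand(64, 64)
    print(blockwise_restore(noisy).shape)              # (64, 64)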
A Sparse Stochastic Algorithm with O(1/T) Convergence Rate
Jiang Jiyuan, Xia Liang, Zhang Xian, Tao Qing
2014, 51(9):  1901-1910.  doi:10.7544/issn1000-1239.2014.20140161
Stochastic gradient descent (SGD) is a simple but efficient method for large-scale optimization problems. Recent research has shown that its convergence rate can be effectively improved by using the so-called α-suffix averaging technique when solving strongly convex problems. However, SGD is purely a black-box method, so it is difficult to obtain the expected effect on the learning structure when solving regularized optimization problems. On the other hand, composite objective mirror descent (COMID) in the stochastic setting is a scalable algorithm which can effectively preserve the sparsity imposed by L1 regularization, but it only attains an O(logT/T) convergence rate on strongly convex optimization problems. In this paper, we focus on general convex optimization problems of the “L1 + hinge” form. To make full use of the α-suffix averaging technique, we first turn the problem into a strongly convex one by adding an L2 strongly convex term. We then present an L1MD-α algorithm which combines the COMID algorithm with the α-suffix averaging technique, and prove that L1MD-α achieves an O(1/T) convergence rate. As a result, our L1MD-α algorithm obtains not only a faster convergence rate but also better sparsity than COMID. Extensive experiments on typical large-scale datasets verify the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
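The α-suffix averaging ingredient is easy to isolate. Below is a minimal sketch of subgradient SGD on a hinge + L1 objective made strongly convex by an added L2 term, where only the last αT iterates are averaged. It shows only this ingredient, not the composite mirror-descent (COMID) step of the paper's L1MD-α; all parameter values are illustrative.

    import numpy as np

    def sgd_alpha_suffix(X, y, lam1=1e-3, lam2=1e-2, T=2000, alpha=0.5, seed=0):
        """Subgradient SGD on f(w) = E[hinge] + lam1*||w||_1 + (lam2/2)*||w||^2,
        returning the average of the last alpha*T iterates (alpha-suffix averaging)."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        suffix_start = int((1 - alpha) * T)
        acc = np.zeros(d)
        for t in range(1, T + 1):
            i = rng.integers(n)
            g = lam1 * np.sign(w) + lam2 * w          # subgradient of the regularizers
            if y[i] * X[i].dot(w) < 1:                # hinge term is active
                g -= y[i] * X[i]
            w -= g / (lam2 * t)                       # step size for lam2-strong convexity
            if t > suffix_start:
                acc += w
        return acc / (T - suffix_start)

    X = np.random.randn(200, 10); y = np.sign(X[:, 0] + 0.1 * np.random.randn(200))
    print(sgd_alpha_suffix(X, y).round(2))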
Remote Sensing Image Classification Based on DBN Model
Lü Qi, Dou Yong, Niu Xin, Xu Jiaqing, Xia Fei
2014, 51(9):  1911-1918.  doi:10.7544/issn1000-1239.2014.20140199
Remote sensing image classification is one of the key technologies in geographic information systems (GIS), and it plays an important role in modern urban planning and management. In the field of machine learning, deep learning has been flourishing in recent years. By mimicking the hierarchical structure of the human brain, deep learning can extract features from lower to higher levels gradually and distill the spatio-temporal regularities of the input data, thus improving classification performance. The deep belief network (DBN) is a widely investigated and deployed deep learning model. It combines the advantages of unsupervised and supervised learning, and can achieve good classification performance on high-dimensional data. In this paper, a remote sensing image classification method based on the DBN model is proposed. This is one of the first attempts to apply a deep learning approach to detailed urban classification. Six days of high-resolution RADARSAT-2 polarimetric synthetic aperture radar (SAR) data were used for evaluation. Experimental results show that the proposed method outperforms SVM (support vector machine) and traditional neural network (NN) classifiers.
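A full DBN with joint fine-tuning is beyond a short example, but scikit-learn allows a rough analogue: greedy layer-wise pretraining with stacked BernoulliRBM transformers followed by a supervised classifier. This is a sketch under that substitution, with random data standing in for per-pixel polarimetric features; it illustrates the unsupervised-plus-supervised structure, not the paper's exact network.

    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    # Random stand-in for polarimetric SAR features scaled to [0, 1].
    X = np.random.rand(500, 64)
    y = np.random.randint(0, 4, 500)      # four land-cover classes

    model = Pipeline([
        # Each RBM is trained greedily on the output of the previous layer.
        ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),   # supervised top layer
    ])
    model.fit(X, y)
    print(model.score(X, y))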
Image Classification Using Hierarchical Feature Learning Method Combined with Image Saliency
Zhu Jun, Zhao Jieyu, Dong Zhenyu
2014, 51(9):  1919-1928.  doi:10.7544/issn1000-1239.2014.20140138
Efficient feature representations for images are essential in many computer vision tasks. In this paper, a hierarchical feature representation combined with image saliency is proposed based on the theory of visual saliency and deep learning; it builds a feature hierarchy layer by layer. Each feature learning layer is composed of three parts: sparse coding, saliency max pooling, and contrast normalization. To speed up the sparse coding process, we propose batch orthogonal matching pursuit, which differs from the traditional method. The salient information is introduced into the image sparse representation, which compresses the feature representation and strengthens its semantic information. Simultaneously, contrast normalization effectively reduces the impact of local variations in illumination and foreground-background contrast, and enhances the robustness of the feature representation. Instead of using hand-crafted descriptors, our model learns an effective image representation directly from images in an unsupervised, data-driven manner. The final image classification is implemented with a linear SVM classifier using the learned representation. We compare our method with many state-of-the-art algorithms, including convolutional deep belief networks, SIFT-based single-layer or multi-layer sparse coding methods, and some kernel-based feature learning approaches. Experimental results on two commonly used benchmark datasets, Caltech 101 and Caltech 256, show that our method consistently and significantly improves the performance.
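A single feature-learning layer of this kind can be sketched with scikit-learn's OMP-based SparseCoder: sparse-code a grid of patch vectors against a (here random) dictionary, weight the codes by a crude energy-based saliency stand-in, max-pool, and contrast-normalize. The dictionary, saliency measure, and sizes are all illustrative, not the paper's.

    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 100))               # 64 atoms of dimension 100
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # OMP expects unit-norm atoms

    patches = rng.standard_normal((16, 100))         # a 4x4 grid of patch vectors
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    codes = coder.transform(patches)                 # sparse codes, shape (16, 64)

    # Crude saliency stand-in: weight each patch by its energy before pooling.
    saliency = (patches ** 2).sum(axis=1)
    weighted = codes * (saliency / saliency.max())[:, None]

    # Saliency max pooling over the grid, then contrast normalization.
    pooled = np.abs(weighted).max(axis=0)
    feature = pooled / (np.linalg.norm(pooled) + 1e-8)
    print(feature.shape)                             # (64,)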
Sparseness Representation Model for Defect Detection and Its Application
Li Qingyong, Liang Zhengping, Huang Yaping, Shi Zhongzhi
2014, 51(9):  1929-1935.  doi:10.7544/issn1000-1239.2014.20140153
Defect detection is an important application of computer vision in industry, but it is challenging to inspect defects effectively in a vision system because of uneven illumination and variation in the reflection properties of objects. Sparseness is one of the most important characteristics of defect images, and therefore an approach named defect decomposition based on sparseness (DDBS) is proposed. In the DDBS framework, a defect image is treated as a linear combination of three components: background, defects, and noise. The background is coded by an over-complete sparse dictionary that cannot sparsely represent the defect component, while the defect is sparsely coded by its own dictionary, which is exclusive to the background. The defect component is then decomposed using DDBS based on the principles of blind source separation and block-coordinate relaxation. Furthermore, DDBS is applied to rail surface defect detection to improve the robustness of inspection systems. Experiments are carried out with different dictionary combinations on actual rail images, and the results demonstrate that DDBS can effectively detect defects that are missed by state-of-the-art methods. DDBS is a flexible framework for defect detection applications and has the potential to improve the robustness of traditional methods.
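The separation idea can be sketched in 1-D: treat the signal as a background component that is sparse in a DCT dictionary plus a defect component that is sparse in the spike (identity) dictionary, and alternate soft-thresholded updates of each against the residual, in the style of morphological component analysis. Thresholds and dictionaries are illustrative, not the paper's.

    import numpy as np
    from scipy.fftpack import dct, idct

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def decompose(sig, n_iter=50, t_bg=0.1, t_def=0.3):
        """Split sig into background (sparse in DCT) + defect (sparse spikes)
        by block-coordinate relaxation: update each part against the residual."""
        bg = np.zeros_like(sig)
        defect = np.zeros_like(sig)
        for _ in range(n_iter):
            r = sig - defect
            bg = idct(soft(dct(r, norm="ortho"), t_bg), norm="ortho")
            r = sig - bg
            defect = soft(r, t_def)
        return bg, defect

    x = np.cos(np.linspace(0, 8 * np.pi, 256))       # smooth "background"
    x[100] += 2.0; x[180] -= 1.5                     # two "defects"
    bg, defect = decompose(x)
    print(np.flatnonzero(np.abs(defect) > 0.5))      # expected near [100, 180]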
A Study of Speech Recognition Based on RNN-RBM Language Model
Li Yaxiong, Zhang Jianqiang, Pan Deng, Hu Dan
2014, 51(9):  1936-1944.  doi:10.7544/issn1000-1239.2014.20140211
In recent years, deep learning has emerged as a new way of training multilayer neural networks with back propagation. Its application to language modeling, such as the restricted Boltzmann machine language model, has achieved good results. A language model based on a neural network estimates the probability of the next word from the preceding word sequence, which is mapped into a continuous space, and can thereby alleviate the data sparsity problem. In addition, some researchers have constructed language models using recurrent neural networks in order to make full use of the preceding text when predicting the next word, which helps overcome the restriction of long-distance dependencies in language. This paper attempts to capture such long-distance information with an RNN-RBM model. Furthermore, the dynamic adaptation of the language model is analyzed and illustrated according to linguistic features. Experimental results show that the RNN-RBM language model brings a considerable improvement to large-vocabulary continuous speech recognition.
Audio Classical Composer Identification by Deep Neural Network
Hu Zhen, Fu Kun, Zhang Changshui
2014, 51(9):  1945-1954.  doi:10.7544/issn1000-1239.2014.20140189
Music is a kind of signal with hierarchical structure. In the music information retrieval (MIR) area, higher-level features, such as emotion and genre, are typically extracted based on lower-level features such as pitch and spectrum energy. Deep neural networks have a good capacity for hierarchical feature learning, which suggests that deep learning can potentially obtain good performance on music datasets. Audio classical composer identification (ACC) is an important problem in MIR which aims at identifying the composer of audio classical music clips. In this work, a hybrid model based on the deep belief network (DBN) and the stacked denoising autoencoder (SDA) is built to identify the composer from the audio signal. The model achieves an accuracy of 76.26% on the test set, which is better than pure single-architecture models and shallow models. After dimensionality reduction by linear discriminant analysis (LDA), it is also clear that samples from different classes move farther apart when transformed by more layers of our model. By comparing models of different sizes we give some empirical guidance for the ACC problem. Similar to images, music features are hierarchical too, and different parts of our brain handle signals differently, so we propose a hybrid model, and our results encourage us to believe that the proposed model makes sense in some applications. During the experiments, we also found some practical guidelines for choosing network parameters; for example, the number of neurons in the first hidden layer should be approximately 3 times the dimension of the input data.
Improving the Performance of Sparse Directories
Zhang Lunkai, Song Fenglong, Wang Da, Fan Dongrui, Sun Ninghui
2014, 51(9):  1955-1970.  doi:10.7544/issn1000-1239.2014.20131173
The sparse directory technique has proved to be an effective coherence scheme in ccNUMA systems. The downside, however, is that sparse directories may require costly directory evictions, which degrade overall system performance. This paper discusses and evaluates methods to improve the effectiveness of sparse directories, including replacement policies and victim directories. First, we qualitatively and quantitatively compare state-of-the-art replacement policies for sparse directories, and the results show that, contrary to common belief, the traditional least-recently-used (LRU) policy considerably outperforms several recently proposed directory replacement policies (e.g., the least-sharer-count (LSC) policy). We further propose victim directory schemes which add small fully associative directory buffers to the main sparse directory modules. Two different victim directory schemes are proposed: the simple victim directory and the selective victim directory. The selective victim directory scheme differentiates the usefulness of directory entries by sending probe messages to caches, and it preserves only the useful directory entries in the victim directory. Experiments show that, with minimal hardware requirements (in our experiments, around 1KB of additional storage per ccNUMA node), the selective victim directory scheme saves on average 35.7% of execution time compared with the baseline sparse directory scheme, and is therefore extremely effective in improving the performance of sparse directories.
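The victim-directory idea reduces to a small data-structure pattern, sketched below: a set-associative directory with per-set LRU replacement plus a tiny fully associative victim buffer that catches evicted entries and restores them on a hit. This models the simple variant, not the selective, probe-based one, and all sizes are arbitrary.

    from collections import OrderedDict

    class VictimDirectory:
        """Toy sparse directory: per-set LRU, plus a small fully associative
        victim buffer that catches evicted entries (simple, not selective)."""
        def __init__(self, sets=256, ways=4, victims=16):
            self.sets = [OrderedDict() for _ in range(sets)]
            self.ways, self.victims = ways, victims
            self.victim = OrderedDict()

        def lookup(self, block):
            s = self.sets[hash(block) % len(self.sets)]
            if block in s:
                s.move_to_end(block)               # refresh LRU position
                return True
            if block in self.victim:               # hit in the victim buffer:
                sharers = self.victim.pop(block)
                self.insert(block, sharers)        # promote back to main directory
                return True
            return False

        def insert(self, block, sharers=frozenset()):
            s = self.sets[hash(block) % len(self.sets)]
            if len(s) >= self.ways:
                old, old_sharers = s.popitem(last=False)   # evict the LRU entry
                self.victim[old] = old_sharers
                if len(self.victim) > self.victims:
                    self.victim.popitem(last=False)        # victim buffer is LRU too
            s[block] = sharers

    d = VictimDirectory()
    d.insert(0x1000); print(d.lookup(0x1000), d.lookup(0x2000))  # True False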
A Communication Feature-Oriented 3D NoC Architecture Design
Wang Di, Zhao Tianlei, Tang Yuxing, Dou Qiang
2014, 51(9):  1971-1979.  doi:10.7544/issn1000-1239.2014.20130131
Three-dimensional integrated circuits (3D IC) and networks on chip (NoC) are two trends in integrated circuit design. The three-dimensional network on chip (3D NoC), which combines 3D IC and NoC, is one of the hot spots of current research. Existing 3D NoC research has paid inadequate attention to the heterogeneity between inter-die and intra-die communication. This paper devises a single-hop inter-die (SHID) architecture, a communication-feature-aware architecture using a heterogeneous topology and an express inter-die router (EIDR). Analysis of the experimental data shows that, compared with the existing 3D-Mesh and NoC-Bus 3D NoC structures, the SHID architecture has the following characteristics: 1) its latency is smaller, 15.1% less than 3D-Mesh and 11.5% less than NoC-Bus when stacking 4 layers; 2) its power consumption equals that of NoC-Bus and is 10% less than that of 3D-Mesh; 3) its throughput decreases more slowly as the number of stacked layers increases, being 66.98% larger than that of 3D-Mesh and 314.49% larger than that of NoC-Bus when stacking 16 layers. The SHID architecture has advantages in both performance and scalability and is a good design choice for future 3D NoC architectures.
A Customized Coprocessor Acceleration of Genome Re-Sequencing
Tang Wen, Zhang Chunming, Tan Guangming, Zhang Peiheng, Sun Ninghui
2014, 51(9):  1980-1992.  doi:10.7544/issn1000-1239.2014.20130987
Since January 2008, when the next-generation DNA sequencing platforms were introduced, sequencing throughput has improved significantly. However, this technology is challenged by the large amount of sequencing data, which grows dramatically, even faster than Moore's Law. As an emerging data-intensive workload, high-throughput re-sequencing tools such as Hash-based programs show different characteristics from traditional computational applications. Both low arithmetic intensity and irregular memory access patterns are major sources of inefficiency on commodity multi-core platforms. In this paper, we propose a coprocessor architecture for accelerating a short-read mapping algorithm. The complete mapping flow in one processing element (PE) is integrated with an exclusive memory port to improve parallel performance. The proposed architecture is implemented on a Convey HC-1ex reconfigurable computer. The design comprises 64 parallel PEs on 4 Xilinx Virtex-6 LX760 FPGAs operating at 150MHz. Compared with an 8-core Intel Xeon CPU, the speedup reaches 28.5 times, and the average memory read bandwidth reaches 22.59GBps. The proposed design can therefore potentially supply a solution to the large-scale data challenge and be applied in high-throughput re-sequencing.
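The hash-based mapping flow that each PE implements follows the usual seed-and-verify pattern, sketched here in Python: build a k-mer index of the reference, look up a read's leading k-mer to get candidate positions, and verify each candidate by counting mismatches. Sizes and the mismatch bound are illustrative.

    from collections import defaultdict

    def build_index(ref, k=4):
        """Hash every k-mer of the reference to its positions (the seed table)."""
        idx = defaultdict(list)
        for i in range(len(ref) - k + 1):
            idx[ref[i:i + k]].append(i)
        return idx

    def map_read(read, ref, idx, k=4, max_mismatch=1):
        """Seed with the read's first k-mer, then verify candidates by mismatch count."""
        hits = []
        for pos in idx.get(read[:k], []):
            cand = ref[pos:pos + len(read)]
            if len(cand) == len(read):
                mm = sum(a != b for a, b in zip(read, cand))
                if mm <= max_mismatch:
                    hits.append((pos, mm))
        return hits

    ref = "ACGTACGTTTACGGACGT"
    idx = build_index(ref)
    print(map_read("ACGTTTAC", ref, idx))   # [(4, 0)]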
Design of Fault-Tolerant Router for 3D NoC Based on Virtual Channel Fault Granularity Partition
Ouyang Yiming, Zhang Yidong, Liang Huaguo, Huang Zhengfeng, Chang Hao
2014, 51(9):  1993-2002.  doi:10.7544/issn1000-1239.2014.20131161
Routers are subject to physical manufacturing defects and run-time vulnerabilities in deep submicron technology, which result in permanent virtual channel faults. These faults affect the performance and functionality of systems and result in communication malfunctions. In order to tolerate virtual channel faults effectively, and to ensure system performance and efficient usage of available resources, faults are subdivided into coarse-grained and fine-grained faults, and we propose the SVS router (single virtual channel sharing router) architecture, in which a single virtual channel is shared between the ports of the same group, each group containing two ports of the router. Coarse-grained faults are tolerated by using the shared virtual channel of adjacent ports in the same group. Based on the information in the slot state table, fine-grained faults are tolerated by configuring the read/write pointer values to skip faulty buffer slots. Moreover, in the absence of coarse-grained faults, the shared virtual channel can be used for load balancing and for fault tolerance of the computation module. Experimental results demonstrate a significant reduction in average packet latency and an improvement in throughput under three different fault conditions compared with other existing virtual channel architectures. This shows that the scheme effectively improves system reliability, ensures system performance, and makes full use of the available resources.
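The fine-grained mechanism amounts to a circular buffer whose pointers consult a slot state table, as in this toy sketch: read/write pointers simply skip slots marked faulty, so a virtual channel keeps working with reduced depth. Overflow handling and the shared-channel logic are omitted; names are hypothetical.

    class FaultTolerantVC:
        """Circular virtual-channel buffer whose pointers skip faulty slots
        according to a slot state table (True = usable)."""
        def __init__(self, slot_ok):
            self.buf = [None] * len(slot_ok)
            self.ok = slot_ok
            self.rd = self.wr = 0

        def _next(self, p):
            p = (p + 1) % len(self.buf)
            while not self.ok[p]:            # configure the pointer past faulty slots
                p = (p + 1) % len(self.buf)
            return p

        def push(self, flit):
            self.buf[self.wr] = flit
            self.wr = self._next(self.wr)

        def pop(self):
            flit, self.buf[self.rd] = self.buf[self.rd], None
            self.rd = self._next(self.rd)
            return flit

    vc = FaultTolerantVC([True, False, True, True])   # slot 1 is faulty
    vc.push("f0"); vc.push("f1")
    print(vc.pop(), vc.pop())                          # f0 f1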
Research and Design of Hash Indexing Mechanism for BTB
Wang Guopeng, Hu Xiangdong, Yin Fei, Zhu Ying
2014, 51(9):  2003-2011.  doi:10.7544/issn1000-1239.2014.20130132
It is well known that branch mispredictions have become a serious bottleneck for processor performance. The branch target buffer (BTB), which caches the most recently resolved targets, is the dedicated structure that provides branch target addresses in modern processors. However, because it inherits its organization from caches, the performance of the BTB is restricted by its hit ratio. Due to the nonuniform distribution of branch instructions, the conventional indexing method causes many unnecessary conflicts, which negatively affect BTB performance. Such conflict-induced mispredictions can be avoided by means of a properly chosen Hash function. Although Hash functions have been well studied for improving the utilization of memory systems, Hash-indexing methods designed specifically for the BTB have not been explored in the literature. In this paper, based on an analysis of the regular patterns of branch distribution and control flow in programs, the Hash indexing mechanism for the BTB is studied and two Hash-indexing methods are designed: one is an XOR-based Hash function, the other an optimized bit-select method. An evaluation framework for set-index functions is established so that the best-performing transformation matrix can be detected quickly. The maximal number of branches mapped to the same BTB set under the Hash-indexing mechanism is evaluated by probability theory. The experimental results show that our Hash-indexing methods are efficient at minimizing BTB mispredictions caused by conflict misses, and the XOR-based Hash function performs even better.
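The XOR-based indexing idea fits in a few lines. In this sketch, a conventional bit-select index takes the low-order PC bits, while the XOR variant folds a higher-order bit field into them, so branches that share low-order bits (a common conflict pattern) land in different sets. Field widths and addresses are illustrative.

    def btb_index_bitselect(pc, set_bits=9, align=2):
        # Conventional bit-select: drop alignment bits, take the low set_bits.
        return (pc >> align) & ((1 << set_bits) - 1)

    def btb_index_xor(pc, set_bits=9, align=2):
        # XOR-based hash: fold a higher-order bit field into the low field so
        # branches that share low-order bits map to different sets.
        low = (pc >> align) & ((1 << set_bits) - 1)
        high = (pc >> (align + set_bits)) & ((1 << set_bits) - 1)
        return low ^ high

    # Two branches 2^11 bytes apart conflict under bit-select (same low bits)
    # but get distinct sets under the XOR hash.
    a, b = 0x40001234, 0x40001234 + (1 << 11)
    print(btb_index_bitselect(a) == btb_index_bitselect(b))   # True (conflict)
    print(btb_index_xor(a) == btb_index_xor(b))               # False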
An Improved DFTL Algorithm Based on Sequential Cache and Second Level Cache
Yao Yingbiao, Shen Zuobing
2014, 51(9):  2012-2021.  doi:10.7544/issn1000-1239.2014.20130660
The flash translation layer (FTL) is one of the key techniques in solid state drive (SSD) design. Currently, demand-based FTL (DFTL) is a well-known FTL algorithm which dynamically loads map entries into a cache based on the characteristics of requests. However, it does not consider the spatial locality of workloads, and evicting one map entry from the cache may update one translation page; thus, frequent evictions cause extra erase operations. Addressing these drawbacks of DFTL, this paper proposes an FTL scheme called SDFTL (sequential/second cache DFTL), which adds a sequential cache and a second-level cache. The former improves the performance of the FTL on workloads with high spatial locality by prefetching map entries. The latter buffers the updated map entries evicted from the first-level cache so that they can be written back in batches, which reduces translation page write counts and erase counts. Experimental results on various realistic workloads show that, compared with DFTL, SDFTL improves the cache hit ratio by 41.57%, reduces erase counts by 23.08%, and reduces response time by 17.74% on average.
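The second-level-cache idea can be sketched as a two-level mapping cache: dirty entries evicted from the first-level LRU cache are parked in a second-level buffer, and a flush groups them by translation page so that one page write absorbs many updates. Flash bookkeeping and the sequential cache are omitted; all sizes are illustrative.

    from collections import OrderedDict, defaultdict

    ENTRIES_PER_TRANS_PAGE = 512
    page_writes = 0

    class SDFTLCache:
        def __init__(self, l1_size=1024, l2_size=256):
            self.l1 = OrderedDict()        # lpn -> ppn, LRU ordered
            self.l2 = {}                   # dirty entries evicted from l1
            self.l1_size, self.l2_size = l1_size, l2_size

        def update(self, lpn, ppn):
            self.l1[lpn] = ppn
            self.l1.move_to_end(lpn)
            if len(self.l1) > self.l1_size:
                old_lpn, old_ppn = self.l1.popitem(last=False)
                self.l2[old_lpn] = old_ppn             # defer the map-page update
                if len(self.l2) > self.l2_size:
                    self.flush_l2()

        def flush_l2(self):
            """Batch update: group deferred entries by translation page, so each
            dirty translation page is rewritten once instead of once per entry."""
            global page_writes
            by_page = defaultdict(list)
            for lpn, ppn in self.l2.items():
                by_page[lpn // ENTRIES_PER_TRANS_PAGE].append((lpn, ppn))
            page_writes += len(by_page)                # one write per touched page
            self.l2.clear()

    c = SDFTLCache(l1_size=4, l2_size=4)
    for lpn in range(16):
        c.update(lpn, 1000 + lpn)
    c.flush_l2()
    print("translation page writes:", page_writes)     # 3 writes for 12 deferred updates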
A New Method of Embedding Test Patterns into Test-per-Clock Bit Stream
Liu Tieqiao, Kuang Jishun, Cai Shuo, You Zhiqiang
2014, 51(9):  2022-2029.  doi:10.7544/issn1000-1239.2014.20130179
The key to IC testing lies in the design of the test pattern generator (TPG). Traditional testing methods do not make full use of the test data bit stream to construct test patterns during test generation and application, which results in high test cost due to the enormous test data. The test-per-scan scheme suffers from long test application times as test data volumes increase. In order to reduce test cost, a built-in self-test (BIST) scheme based on test-per-clock testing is proposed. Based on an analysis of the linear shift test structure, a corresponding forward-backward test pattern generation method is proposed, which efficiently embeds the test set into a test-per-clock bit stream. In this method, test patterns are determined by solving for an input stream with fault dropping, where the input stream is composed of the first bits of these patterns. The minimum input stream obtained after repeated reduction is stored directly in memory to control the linear shifter during test application, so as to generate the whole required test set. The experimental results demonstrate that the proposed method, while meeting the required fault coverage, can markedly shorten test application time and reduce storage area overhead compared with other approaches.
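The embedding idea rests on the fact that, in a test-per-clock linear shifter, consecutive patterns are overlapping windows of one input stream. The sketch below compresses a pattern set into a single bit string by greedy maximal-overlap merging, a simple stand-in for the paper's forward-backward solution.

    def merge(stream, pat):
        """Append pat to stream with the maximal overlap (greedy stand-in for the
        paper's forward-backward embedding)."""
        for k in range(min(len(stream), len(pat)), -1, -1):
            if stream.endswith(pat[:k]):
                return stream + pat[k:]
        return stream + pat

    def embed(patterns):
        stream = patterns[0]
        for p in patterns[1:]:
            if p not in stream:                 # already covered by some window?
                stream = merge(stream, p)
        return stream

    pats = ["1011", "0110", "1101", "0111"]
    s = embed(pats)
    print(s, all(p in s for p in pats))         # every pattern is a window of s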
A Colored C-net Model Based on RGPS and Its Application
Huang Yiwang, He Keqing, Feng Zaiwen, Huang Ying, Xie Fang
2014, 51(9):  2030-2045.  doi:10.7544/issn1000-1239.2014.20130595
A configurable business process model describes a family of similar process models for a particular domain; individual process models meeting particular users' requirements can be derived from it by configuration operations. This paper provides a configurable business process model with role and goal constraints, obtained by adding roles and goals to the core model element, the activity of the causal net (C-net). It then guides the execution sequence of the activities of the business process through the constraint rules and the role-goal-process-service relationship of the RGPS requirements meta-model framework, enabling the model to reflect the behavior of the business process efficiently. Finally, it attaches configuration operation labels to the input/output binding ports of activities, so that an individual process is formed by assigning concrete configuration operation labels to the ports. In this paper, formal definitions of the proposed model are given and its application to the configuration of business processes is analyzed, so that it can be used to guide the configuration management of business processes.
Self Stabilizing Distributed Transactional Memory Model and Algorithms
Lin Fei, Sun Yong, Ding Hong, Ren Yizhi
2014, 51(9):  2046-2057.  doi:10.7544/issn1000-1239.2014.20130058
Aiming at transient fault tolerance in distributed systems while taking into account robustness and scalability, a self-stabilizing distributed transactional memory model, called SSDTM, is proposed. Firstly, the model framework is constructed with layering techniques and collateral composition theory, comprising a spanning tree layer, an object location layer, a transaction proxy layer, and an application layer. Furthermore, the spanning tree algorithm is improved in a self-stabilizing way, which overcomes the limitation of existing methods of being adaptive only to stable environments. Then, data object manipulation algorithms are designed, which use the data-flow paradigm and self-stabilization theory to locate objects and enhance data access locality, and which ensure mutually exclusive access to objects in distributed systems with transient faults. Moreover, after building a transaction service model that defines the basic types of memory transaction states and operations, concurrency control algorithms based on an improved logical clock are given. Finally, combining theoretical derivation and instance verification, the performance of SSDTM is verified and analyzed through large-scale multi-angle virtual tests on 4 classical benchmarks in the SimJava environment. Experimental results show that the presented algorithms exhibit good robustness and applicability, and that SSDTM has higher system throughput and better fault tolerance than other models under the same conditions.
A Scalable and Self-Adjust Multi-Tenant Data Storage Strategy Under Different SLAs
Gu Lianchao, Cui Lizhen
2014, 51(9):  2058-2069.  doi:10.7544/issn1000-1239.2014.20131339
Multi-tenancy is one of the key characteristics of cloud applications. In the shared data storage model, how to achieve dynamically scalable data storage across multiple nodes according to different tenants' performance requirements is central to cloud data management. It faces many challenges, such as how to forecast the performance requirements of different tenants under shared data storage, and how to place and control the data layout in response to bursts in the workload, i.e., how data items are automatically partitioned, coalesced, or copied among the nodes of the system according to varying tenants' performance requirements. In this paper, we propose a scalable and self-adjusting multi-tenant data storage strategy to address these challenges. It mainly consists of a piecewise multiple performance boundary model, which is used to predict whether a data node can meet the performance requirements of different tenants, and a greedy data storage layout adjustment strategy, which is used to make decisions about data movement from overloaded nodes and data coalescence on underloaded nodes. Analysis of an experimental system shows that this method can accurately predict whether the system is overloaded, reduce the influence on system performance through less data movement, and make shared cloud data nodes satisfy the performance requirements of different tenants.
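The greedy adjustment step can be sketched as follows: whenever a node's total tenant load exceeds its capacity, the lightest tenants are moved to the currently least-loaded node. The performance-boundary model is reduced here to a single fixed capacity; all names and numbers are hypothetical.

    def rebalance(nodes, capacity):
        """nodes: {node: {tenant: load}}. Greedily move the lightest tenants off
        overloaded nodes onto the currently least-loaded other node."""
        moves = []
        for node, tenants in nodes.items():
            while sum(tenants.values()) > capacity and len(tenants) > 1:
                t = min(tenants, key=tenants.get)            # cheapest to move
                load = tenants.pop(t)
                dest = min((n for n in nodes if n != node),
                           key=lambda n: sum(nodes[n].values()))
                nodes[dest][t] = load
                moves.append((t, node, dest))
        return moves

    nodes = {"n1": {"a": 40, "b": 35, "c": 30}, "n2": {"d": 20}, "n3": {}}
    print(rebalance(nodes, capacity=80))   # [('c', 'n1', 'n3')]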
Binary Pure Phase Encoding Compressive Imaging in Frequency Domain
Zhang Cheng, Zhang Fen, Shen Chuan, Zhang Quanbing, Wei Sui, Wang Yue
2014, 51(9):  2070-2080.  doi:10.7544/issn1000-1239.2014.20130304
Super resolution (SR) is considered one of the “holy grails” of optical imaging and image processing. The introduction of compressive sensing theory enables a novel super-resolution reconstruction method from a single low-resolution image, which avoids the multiple sub-pixel-shifted images required by traditional super-resolution methods. After analyzing the similarities and differences between the requirements of compressed sensing measurement matrices and optical imaging systems, a binary phase encoding compressive imaging method based on the 4-f optical architecture is presented, in which the phase in the frequency domain is randomly modulated. It can achieve super-resolution reconstruction from a single low-resolution measurement image obtained under single-exposure conditions, with no additional information collected. A binary phase mask is much easier to implement than a random phase mask with uniform distribution, making it a more viable scheme for the physical realization of compressive imaging. Simulation experiments demonstrate that the proposed method can effectively capture compressive measurements and implement super-resolution reconstruction in a single shot. Furthermore, additional experiments show that this method is more applicable to large-scale image reconstruction than the random demodulation (RD) method proposed by Romberg in terms of reconstruction time, and more practical in its sampling scheme than the RecPC method proposed by Yin.
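The forward model of the 4-f scheme is compact enough to sketch directly: the scene's spectrum is multiplied by a random binary (0 or π) phase mask and transformed back, and a low-resolution detector records block-averaged intensity in a single shot. The compressive-sensing reconstruction step is omitted; sizes are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.zeros((64, 64)); x[28:36, 28:36] = 1.0      # toy high-resolution scene

    # Binary pure phase mask in the Fourier (frequency) plane: phase 0 or pi.
    mask = np.exp(1j * np.pi * rng.integers(0, 2, size=x.shape))

    field = np.fft.ifft2(np.fft.fft2(x) * mask)        # 4-f: FT, modulate, inverse FT
    intensity = np.abs(field) ** 2

    # Low-resolution detector: average 4x4 blocks of the intensity (single shot).
    lowres = intensity.reshape(16, 4, 16, 4).mean(axis=(1, 3))
    print(lowres.shape)                                 # (16, 16)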
A Sparsity Image Inpainting Algorithm Combining Color with Gradient Information
Li Zhidan, He Hongjie, Yin Zhongke, Chen Fan
2014, 51(9):  2081-2093.  doi:10.7544/issn1000-1239.2014.20130071
In existing sparsity-based image inpainting algorithms, only color information is used to measure the similarity between the exemplar to be filled and other exemplars, and the matching patches of the exemplar to be filled are searched in the whole image. This weakens the structural connectivity and the neighborhood consistency of texture regions, and increases the time complexity of these algorithms. To address these problems, the color-gradient distance between two exemplars is defined using their color and gradient-norm information. Using the color-gradient distance, a new patch structure sparsity is constructed to determine the filling order, and a new matching criterion is obtained to find the most similar patch. Furthermore, the size of the local search region is decided adaptively by the patch structure sparsity value to decrease the time complexity of the algorithm. The weighting coefficients of the color and gradient information differ among images, which is verified through experiments. Experimental results demonstrate that the proposed method better maintains the structural connectivity and the neighborhood consistency of texture areas. The PSNR of images repaired by the proposed scheme is 1dB higher than that of the existing algorithms, and its speed is about 4 to 11 times that of the existing algorithms.
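The combined distance is a weighted sum of a color term and a gradient-norm term, sketched below for two RGB patches. The paper notes the weights are image dependent; fixed values are used here, and all names are hypothetical.

    import numpy as np

    def gradient_norm(patch):
        # Per-pixel gradient magnitude of a grayscale version of the patch.
        gray = patch.mean(axis=2)
        gy, gx = np.gradient(gray)
        return np.hypot(gx, gy)

    def color_gradient_distance(p, q, w_color=0.7, w_grad=0.3):
        """Weighted color + gradient-norm SSD between two RGB patches."""
        d_color = ((p - q) ** 2).sum()
        d_grad = ((gradient_norm(p) - gradient_norm(q)) ** 2).sum()
        return w_color * d_color + w_grad * d_grad

    rng = np.random.default_rng(1)
    a, b = rng.random((9, 9, 3)), rng.random((9, 9, 3))
    print(round(color_gradient_distance(a, b), 3))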
Image Reconstruction Algorithm for ECT Based on Dual Particle Swarm Collaborative Optimization
Zhao Yulei, Guo Baolong, Wu Xianxiang, Wang Pai
2014, 51(9):  2094-2100.  doi:10.7544/issn1000-1239.2014.20131006
Since the sensitivity field of the capacitance sensor in an electrical capacitance tomography (ECT) system is a “soft field”, and this “soft field” nature is ignored by traditional image reconstruction algorithms, those algorithms face a bottleneck in improving imaging accuracy. To solve the problem, based on an analysis of the distribution of the sensitivity field and a discussion of the “soft field” effect and its impact on image reconstruction, a novel image reconstruction algorithm is proposed: dual particle swarm collaborative optimization. In the algorithm, to eliminate the impact of ignoring the “soft field” nature, a priori conditions are used to construct the fitness function of the particle swarm optimization; the a priori conditions under different flow patterns are obtained by a least squares support vector machine. Meanwhile, by introducing the Lotka-Volterra model, a new cooperative-competitive scheme is developed: the diversity of particles is increased by intraspecific and interspecific learning and competition, so the algorithm improves global convergence and the convergence rate. The experimental results illustrate that this algorithm not only has higher image precision and stronger convergence, but is also resistant to interference from noise in the measurement signal.
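The core of the fitness construction can be sketched with a single standard particle swarm: each particle's fitness combines the capacitance-measurement mismatch with a penalty for deviating from a prior image, the latter standing in for the LS-SVM prior. The dual-swarm Lotka-Volterra cooperation is omitted, and all matrices and constants are toy values.

    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.random((30, 100))                # toy sensitivity matrix
    g_true = (rng.random(100) > 0.7).astype(float)
    C = S @ g_true                           # measured capacitances
    g_prior = g_true + 0.1 * rng.standard_normal(100)   # stand-in for LS-SVM prior

    def fitness(g):
        # Data term plus prior term compensating for the ignored "soft field".
        return np.linalg.norm(S @ g - C) ** 2 + 0.5 * np.linalg.norm(g - g_prior) ** 2

    n, d = 20, 100
    pos = rng.random((n, d)); vel = np.zeros((n, d))
    pbest = pos.copy(); pbest_f = np.array([fitness(p) for p in pos])
    for it in range(200):
        gbest = pbest[pbest_f.argmin()]
        r1, r2 = rng.random((2, n, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
    print(round(pbest_f.min(), 3))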
Detouring Matching Pursuit Algorithm in Compressed Sensing
Pei Tingrui, Yang Shu, Li Zhetao, Xie Jingxiong
2014, 51(9):  2101-2107.  doi:10.7544/issn1000-1239.2014.20131148
Detouring matching pursuit (DMP) is a greedy algorithm for reconstructing sparse signals with low computational complexity, high accuracy, and low column-correlation requirements on the sensing matrix. Formulas for incrementally growing and shrinking the inverse of the submatrix's inner-product matrix and the coefficient matrix used in DMP are put forward and proved. By using these inverses, DMP reduces the amount of computation needed for the change in residual error and thus attains low overall computational complexity. In addition, by first decreasing and then optimally increasing the elements of the assumed support set one by one, DMP improves reconstruction accuracy and broadens the range of sparsity over which sparse signals can be reconstructed. The complexity analysis shows that the costs of obtaining, decreasing, and increasing the assumed support set are O(K²N), O(b(K-b)N), and O(b(K-b)N), respectively. An experiment on indirect reconstruction of weighted 0-1 sparse signals shows that the reconstruction accuracies of DMP, the greedy pursuit algorithm (GPA), subspace pursuit (SP), compressive sampling matching pursuit (CoSaMP), and orthogonal matching pursuit (OMP) are 99%, 65%, 0%, 0%, and 13%, respectively, for 0-1 sparse signals with sparsity M/2. Experiments on sparse signals whose non-zero values obey a normal distribution also show that the reconstruction accuracy of DMP is clearly superior.
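The decrease-then-increase move at the heart of DMP can be sketched with plain least squares in place of the paper's incremental inner-product-inverse updates: drop the atom whose removal hurts the fit least, then re-add the column most correlated with the new residual. A minimal one-step sketch, with illustrative data:

    import numpy as np

    def residual(A, y, S):
        x, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return y - A[:, S] @ x

    def refine_support(A, y, S):
        """One decrease-then-increase step: drop the least useful atom in S,
        then add back whichever column best reduces the residual."""
        S = list(S)
        # Decrease: remove the atom whose removal hurts the fit least.
        drop = min(S, key=lambda j: np.linalg.norm(
            residual(A, y, [i for i in S if i != j])))
        S.remove(drop)
        r = residual(A, y, S)
        # Increase: add the best correlated atom outside the current support.
        add = max((j for j in range(A.shape[1]) if j not in S),
                  key=lambda j: abs(A[:, j] @ r))
        S.append(add)
        return sorted(S)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 60)); A /= np.linalg.norm(A, axis=0)
    x = np.zeros(60); x[[3, 17, 42]] = [1.0, -2.0, 1.5]
    y = A @ x
    print(refine_support(A, y, [3, 17, 7]))    # typically [3, 17, 42]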
Two-Tiered Correlation Clustering Method for Entity Resolution in Big Data
Wang Ning, Li Jie
2014, 51(9):  2108-2116.  doi:10.7544/issn1000-1239.2014.20131345
Volume, velocity, variety, and veracity are four striking features of big data, and they bring new challenges to data integration. Entity resolution is one of the most important steps in data integration, but for big data, conventional entity resolution methods tend to be inefficient and ineffective in practice, especially with respect to noise immunity. In order to address the inconsistency of resolution results produced by these four features of big data, we introduce the concept of the common neighborhood into the correlation clustering problem. Our top tier for pre-partitioning is designed based on the neighborhood, and can quickly and effectively complete a preliminary partition into blocks. The introduction of the concept of a kernel gives a more precise definition of the correlation degree between a node and a cluster; as a consequence, our bottom tier for adjustment can accurately cluster nodes and improve the accuracy of the correlation clustering. Our two-tiered method for entity resolution is simple and efficient thanks to the use of a coarse similarity function, and it achieves good noise immunity through the introduction of the neighborhood. Extensive experiments demonstrate that the proposed two-tiered method achieves high accuracy and good noise immunity compared with traditional methods, and also scales to big data.
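The top tier can be sketched with a union-find over record neighborhoods: records whose neighborhoods (sets of similar records) overlap beyond a threshold are merged into one preliminary block. The Jaccard threshold and the tiny example are illustrative; the kernel-based bottom-tier adjustment is omitted.

    from itertools import combinations

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def pre_partition(neighborhoods, threshold=0.5):
        """Top tier: union records whose common neighborhoods overlap enough."""
        parent = {r: r for r in neighborhoods}
        def find(r):
            while parent[r] != r:
                parent[r] = parent[parent[r]]   # path halving
                r = parent[r]
            return r
        for a, b in combinations(neighborhoods, 2):
            if jaccard(neighborhoods[a], neighborhoods[b]) >= threshold:
                parent[find(a)] = find(b)
        blocks = {}
        for r in neighborhoods:
            blocks.setdefault(find(r), set()).add(r)
        return list(blocks.values())

    nbrs = {"r1": {"r1", "r2", "r3"}, "r2": {"r1", "r2", "r3"},
            "r3": {"r2", "r3"}, "r4": {"r4", "r5"}, "r5": {"r4", "r5"}}
    print(pre_partition(nbrs))   # two blocks: {r1, r2, r3} and {r4, r5}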
A New Fuzzy Clustering Algorithm with Entropy Index Constraint
Huang Chengquan, Wang Shitong, Jiang Yizhang
2014, 51(9):  2117-2129.  doi:10.7544/issn1000-1239.2014.20130305
The fuzziness index m plays an important role in the clustering results of fuzzy clustering algorithms. In order to avoid the fuzziness index m of the CA (competitive agglomeration) clustering algorithm, which is based on the FCM (fuzzy C-means) framework, being forced to the usual fixed value of 2, a more universal fuzzy clustering algorithm is proposed. Firstly, a fuzzy clustering algorithm named EIC-FCM (entropy index constraint FCM), which has clustering performance comparable to the classical FCM algorithm, is presented by introducing an entropy index r into the constraints with m=1. The successful introduction of the entropy index r effectively transforms the fuzziness index constraint m>1 into the entropy index constraint 0<r<1.
On Commutativity of Products of Two Types Fuzzy Finite State Machines
Xie Zhengwei, Zhai Ying, Huang Feidan, Yi Zhong, Deng Peimin
2014, 51(9):  2130-2136.  doi:10.7544/issn1000-1239.2014.20121184
Automata theory is one of the basic and important theories in computer science, and the use of algebraic techniques in determining the structure of automata has been significant. Malik et al. later applied algebraic techniques to study fuzzy automata, or fuzzy finite state machines (ffsm). In this article, the commutativity of two types of ffsm is investigated further using algebraic tools such as matrices and semigroups. Some equivalent characterizations of the commutativity of ffsm are given. It is proved that ffsm are commutative if and only if their state transition matrices commute under fuzzy matrix multiplication, or the semigroup of strings over the input alphabet modulo the congruence relation is commutative. The commutativity of the direct product, cascade product, wreath product, and sum of ffsm is discussed. Meanwhile, the concept of commutativity of Mealy-type fuzzy finite state machines (Mffsm) is defined, and the commutativity of several products, sums, and quotients of Mffsm is studied in detail. Furthermore, sufficient and necessary conditions for the commutativity of the direct product and sum of ffsm (Mffsm) are obtained, as well as sufficient conditions for the commutativity of the wreath product and cascade product of Mffsm. It is also proved that quotient Mffsm maintain the commutativity of Mffsm. Moreover, an algorithm for deciding the commutativity of ffsm is presented.
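The stated matrix characterization is directly checkable. The sketch below implements the fuzzy (max-min) matrix product and tests whether all transition matrices of an ffsm pairwise commute under it; the two 2-state matrices are a toy example.

    import numpy as np

    def maxmin(A, B):
        """Fuzzy (max-min) matrix product: (A o B)[i, j] = max_k min(A[i, k], B[k, j])."""
        return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

    def is_commutative(transition):
        """transition: dict mapping input symbol -> fuzzy state-transition matrix.
        Per the characterization above, the ffsm is commutative iff all the
        matrices pairwise commute under max-min multiplication."""
        mats = list(transition.values())
        return all(np.allclose(maxmin(A, B), maxmin(B, A))
                   for A in mats for B in mats)

    Ma = np.array([[0.2, 0.8], [0.8, 0.2]])
    Mb = np.array([[0.5, 0.3], [0.3, 0.5]])
    print(is_commutative({"a": Ma, "b": Mb}))   # True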