ISSN 1000-1239 CN 11-1777/TP

Most Downloaded Articles


    Most Downloaded in Recent Month
    Knowledge Graph Construction Techniques
    Liu Qiao, Li Yang, Duan Hong, Liu Yao, Qin Zhiguang
    Journal of Computer Research and Development    2016, 53 (3): 582-600.   DOI: 10.7544/issn1000-1239.2016.20148228
    Abstract views: 11121 | HTML views: 368 | PDF (2414KB) downloads: 18409
    Google’s knowledge graph technology has drawn a great deal of research attention in recent years. However, because few technical details have been publicly disclosed, people find it difficult to understand the connotation and value of this technology. In this paper, we introduce the key techniques involved in knowledge graph construction in a bottom-up way, starting from a clearly defined concept and a technical architecture of the knowledge graph. Firstly, we describe in detail the definition and connotation of the knowledge graph, and then we propose a technical framework for knowledge graph construction, in which the construction process is divided into three levels according to the abstraction level of the input knowledge materials: the information extraction layer, the knowledge integration layer, and the knowledge processing layer. Secondly, the research status of the key technologies at each level is surveyed comprehensively and examined critically, gradually revealing the inner workings of knowledge graph technology, its state-of-the-art progress, and its relationship with related disciplines. Finally, five major research challenges in this area are summarized, and the corresponding key research issues are highlighted.
    Adversarial Attacks and Defenses for Deep Learning Models
    Li Minghui, Jiang Peipei, Wang Qian, Shen Chao, Li Qi
    Journal of Computer Research and Development    2021, 58 (5): 909-926.   DOI: 10.7544/issn1000-1239.2021.20200920
    Abstract views: 529 | HTML views: 0 | PDF (1577KB) downloads: 674
    Deep learning is one of the main representatives of artificial intelligence technology, which is quietly enhancing our daily lives. However, the deployment of deep learning models has also brought potential security risks. Studying the basic theories and key technologies of attacks and defenses for deep learning models is of great significance for a deep understanding of the inherent vulnerability of the models, comprehensive protection of intelligent systems, and widespread deployment of artificial intelligence applications. This paper discusses the development and future challenges of adversarial attacks and defenses for deep learning models from the perspective of attack-defense confrontation. In this paper, we first introduce the potential threats faced by deep learning at different stages. Afterwards, we systematically summarize the progress of existing attack and defense technologies in artificial intelligence systems from the perspectives of the essential mechanism of adversarial attacks, the methods of adversarial attack generation, defensive strategies against the attacks, and the framework of the attacks and defenses. We also discuss the limitations of related research and propose an attack framework and a defense framework for guidance in building better adversarial attacks and defenses. Finally, we discuss several potential future research directions and challenges for adversarial attacks and defenses against deep learning models.
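    To make the attack side concrete, the sketch below shows the fast gradient sign method (FGSM), one canonical way of generating adversarial examples; it is a minimal illustration in PyTorch, not the specific attack or defense framework proposed in the paper.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a canonical
# adversarial attack; illustrative only, not the framework from the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x so that the model's loss on the true label y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, then clip to a valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```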
    Big Data Management: Concepts, Techniques and Challenges
    Meng Xiaofeng and Ci Xiang
    Journal of Computer Research and Development    2021, 58 (5): 905-908.   DOI: 10.7544/issn1000-1239.2021.qy0501
    Accepted: 15 January 2020
    Abstract views: 281 | HTML views: 2 | PDF (301KB) downloads: 440
    Deep Learning: Yesterday, Today, and Tomorrow
    Yu Kai, Jia Lei, Chen Yuqiang, and Xu Wei
    Journal of Computer Research and Development    2013, 50 (9): 1799-1804.
    Abstract views: 4956 | HTML views: 206 | PDF (873KB) downloads: 10748
    Machine learning is an important area of artificial intelligence. Since the 1980s, it has achieved huge success in terms of algorithms, theory, and applications. Since 2006, a new machine learning paradigm named deep learning has become popular in the research community and has grown into a major technology wave for big data and artificial intelligence. Deep learning simulates the hierarchical structure of the human brain, processing data from lower levels to higher levels and gradually composing more and more semantic concepts. In recent years, Google, Microsoft, IBM, and Baidu have invested substantial resources in the R&D of deep learning, making significant progress on speech recognition, image understanding, natural language processing, and online advertising. In terms of contribution to real-world applications, deep learning is perhaps the most successful advance made by the machine learning community in the last 10 years. In this article, we give a high-level overview of the past and current state of deep learning, discuss the main challenges, and share our views on its future development.
    Knowledge Representation Learning: A Review
    Liu Zhiyuan, Sun Maosong, Lin Yankai, Xie Ruobing
    Journal of Computer Research and Development    2016, 53 (2): 247-261.   DOI: 10.7544/issn1000-1239.2016.20160020
    Abstract views: 10124 | HTML views: 112 | PDF (3333KB) downloads: 15824
    Knowledge bases are usually represented as networks with entities as nodes and relations as edges. With the network representation of knowledge bases, specific algorithms have to be designed to store and utilize them, which are usually time consuming and suffer from the data sparsity issue. Recently, representation learning, represented by deep learning, has attracted much attention in natural language processing, computer vision and speech analysis. Representation learning aims to project the objects of interest into a dense, real-valued and low-dimensional semantic space, whereas knowledge representation learning focuses on representation learning of the entities and relations in knowledge bases. Representation learning can efficiently measure semantic correlations of entities and relations, alleviate sparsity issues, and significantly improve the performance of knowledge acquisition, fusion and inference. In this paper, we introduce the recent advances of representation learning, summarize the key challenges and possible solutions, and further give a future outlook on the research and application directions.
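    As one concrete illustration of such embeddings, the sketch below scores a triple with a TransE-style translation distance; TransE is one representative knowledge representation learning model and is used here only as an assumed example, not necessarily the focus of this survey.

```python
# Illustrative TransE-style scoring (one common knowledge representation
# learning model); entity/relation embeddings are assumed to be trained already.
import numpy as np

def transe_score(h, r, t):
    """Lower score = more plausible triple (h, r, t): ||h + r - t||."""
    return np.linalg.norm(h + r - t)

# Toy usage with random 50-dimensional embeddings.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=50), rng.normal(size=50), rng.normal(size=50)
print(transe_score(h, r, t))
```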
    Situation, Trends and Prospects of Deep Learning Applied to Cyberspace Security
    Zhang Yuqing, Dong Ying, Liu Caiyun, Lei Kenan, Sun Hongyu
    Journal of Computer Research and Development    2018, 55 (6): 1117-1142.   DOI: 10.7544/issn1000-1239.2018.20170649
    Abstract views: 3361 | HTML views: 54 | PDF (3633KB) downloads: 2825
    Recently, research on deep learning applied to cyberspace security has drawn increasing academic attention. This survey analyzes the current research situation and trends of deep learning applied to cyberspace security in terms of classification algorithms, feature extraction and learning performance. Currently deep learning is mainly applied to malware detection and intrusion detection, and this survey reveals the existing problems of these applications: feature selection, which could be achieved by extracting features from raw data; self-adaptability, achieved by an early-exit strategy to update the model in real time; and interpretability, achieved by influence functions to obtain the correspondence between features and classification labels. Then, the top 10 obstacles and opportunities in deep learning research are summarized. Based on this, the top 10 obstacles and opportunities of deep learning applied to cyberspace security are proposed for the first time, falling into three categories. The first category is intrinsic vulnerabilities of deep learning to adversarial attacks and privacy-theft attacks. The second category is sequence-model related problems, including program syntax analysis, program code generation and long-term dependencies in sequence modeling. The third category is learning performance problems, including poor interpretability and traceability, poor self-adaptability and self-learning ability, false positives and data imbalance. The main obstacles and their opportunities among the top 10 are analyzed, and we also point out that applications using classification models are vulnerable to adversarial attacks, for which the most effective solution is adversarial training; collaborative deep learning applications are vulnerable to privacy-theft attacks, and a prospective defense is the teacher-student model. Finally, future research trends of deep learning applied to cyberspace security are introduced.
    Research Progress of Neural Networks Watermarking Technology
    Zhang Yingjun, Chen Kai, Zhou Geng, Lü Peizhuo, Liu Yong, Huang Liang
    Journal of Computer Research and Development    2021, 58 (5): 964-976.   DOI: 10.7544/issn1000-1239.2021.20200978
    Abstract views: 177 | HTML views: 0 | PDF (1865KB) downloads: 340
    With the popularization and application of deep neural networks, trained neural network models have become important assets and are provided to users as machine learning services (MLaaS). However, as a special kind of user, attackers can extract the models when using the services. Considering the high value of the models and the risk of their being stolen, service providers are starting to pay more attention to the copyright protection of their models. The main technique is adapted from digital watermarking and applied to neural networks, called neural network watermarking. In this paper, we first analyze this kind of watermarking and show the basic requirements of the design. Then we introduce the related technologies involved in neural network watermarking. Typically, service providers embed watermarks in the neural networks. Once they suspect a model is stolen from them, they can verify the existence of the watermark in the model. Sometimes the providers can obtain the suspected model and check the existence of watermarks from the model parameters (white-box). But sometimes the providers cannot acquire the model; all they can do is check the input/output pairs of the suspected model (black-box). We discuss these watermarking methods and potential attacks against the watermarks from the viewpoints of robustness, stealthiness, and security. In the end, we discuss future directions and potential challenges.
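    A minimal sketch of the black-box setting described above, assuming a backdoor-style secret trigger set (the trigger inputs, labels, and model interface are hypothetical placeholders; concrete schemes in the surveyed literature differ in detail):

```python
# Sketch of black-box watermark verification with a secret trigger set.
# Assumes `model` maps an input to a predicted label; the trigger inputs and
# their target labels are hypothetical placeholders chosen by the owner.
def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    matches = sum(model(x) == y for x, y in zip(trigger_inputs, trigger_labels))
    agreement = matches / len(trigger_inputs)
    # High agreement on the secret triggers is evidence that the model was stolen.
    return agreement >= threshold
```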
    Edge Computing: State-of-the-Art and Future Directions
    Shi Weisong, Zhang Xingzhou, Wang Yifan, Zhang Qingyang
    Journal of Computer Research and Development    2019, 56 (1): 69-89.   DOI: 10.7544/issn1000-1239.2019.20180760
    Abstract views: 5767 | HTML views: 281 | PDF (3670KB) downloads: 4094
    With the burgeoning of the Internet of Everything, the amount of data generated by edge devices is increasing dramatically, resulting in higher network bandwidth requirements. In the meanwhile, the emergence of novel applications calls for lower network latency. It is an unprecedented challenge for cloud computing to guarantee quality of service while dealing with such a massive amount of data, which has pushed the horizon of edge computing. Edge computing calls for processing data at the edge of the network and has developed rapidly since 2014, as it has the potential to reduce latency and bandwidth charges, address the limited computing capability of cloud data centers, increase availability, and protect data privacy and security. This paper mainly discusses three questions about edge computing: where does it come from, what is its current status, and where is it going? This paper first sorts out the development process of edge computing and divides it into three periods: the technology preparation period, the rapid growth period and the steady development period. This paper then summarizes seven essential technologies that drive the rapid development of edge computing. After that, six typical applications that have been widely used in edge computing are illustrated. Finally, this paper proposes six open problems that need to be solved urgently in future development.
    Privacy-Preserving Network Attack Provenance Based on Graph Convolutional Neural Network
    Li Teng, Qiao Wei, Zhang Jiawei, Gao Yiyang, Wang Shenao, Shen Yulong, Ma Jianfeng
    Journal of Computer Research and Development    2021, 58 (5): 1006-1020.   DOI: 10.7544/issn1000-1239.2021.20200942
    Abstract views: 125 | HTML views: 0 | PDF (4206KB) downloads: 297
    APT (advanced persistent threat) attacks have a long incubation period and a clear purpose; employing Trojan variants, ransomware, and botnets, they can breach an enterprise's security defenses from the inside. However, the existing attack provenance methods only target a single log or traffic data source, making it impossible to trace the complete process of multi-stage attacks. Because of the complicated relationships among logs, serious state explosion problems occur in the log relationship graph, making it difficult to classify and identify attacks accurately. At the same time, data privacy protection is rarely considered in provenance approaches that use log and traffic data. We propose an attack provenance method based on a graph convolutional network (GCN) with user data privacy protection to solve these problems. Supervised learning solves the state explosion caused by multiple log relationship connections, and the Louvain community discovery algorithm is optimized to improve detection speed and accuracy. Moreover, graph neural networks are used to classify attacks effectively, and a privacy protection scheme leveraging CP-ABE (ciphertext-policy attribute-based encryption) properties realizes secure sharing of log data in the public cloud. In this paper, four APT attacks are reproduced to test the method's detection speed and efficiency. Experimental results show that the detection time of this method can be reduced by up to 90%, and the accuracy can reach 92%.
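    For readers unfamiliar with graph convolution, the sketch below shows a generic single GCN layer in NumPy; it illustrates the building block only and is not the paper's specific provenance architecture.

```python
# Generic graph convolutional layer (Kipf-Welling style):
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). Illustrative only; A is the adjacency
# matrix of a log relationship graph, H the node features, W a learned weight matrix.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```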
    Research and Challenge of Distributed Deep Learning Privacy and Security Attack
    Zhou Chunyi, Chen Dawei, Wang Shang, Fu Anmin, Gao Yansong
    Journal of Computer Research and Development    2021, 58 (5): 927-943.   DOI: 10.7544/issn1000-1239.2021.20200966
    Abstract views: 317 | HTML views: 2 | PDF (2954KB) downloads: 328
    Different from the centralized deep learning mode, distributed deep learning removes the requirement that data must be centralized during model training: data are processed locally, and all participants can collaborate without exchanging data. It significantly reduces the risk of user privacy leakage, breaks down data silos at the technical level, and improves the efficiency of deep learning. Distributed deep learning can be widely used in smart medical care, smart finance, smart retail and smart transportation. However, typical attacks such as generative adversarial network attacks, membership inference attacks and backdoor attacks have revealed that distributed deep learning still has serious privacy vulnerabilities and security threats. This paper first compares and analyzes the characteristics and core problems of the three distributed deep learning modes: collaborative learning, federated learning and split learning. Secondly, from the perspective of privacy attacks, it comprehensively expounds the various types of privacy attacks faced by distributed deep learning and summarizes the existing privacy attack defense methods. At the same time, from the perspective of security attacks, the paper analyzes the attack process and inherent security threats of three security attacks: data poisoning attacks, adversarial example attacks, and backdoor attacks, and analyzes the existing security attack defense technologies from the perspectives of defense principles, adversary capabilities, and defense effects. Finally, from the perspective of privacy and security attacks, the future research directions of distributed deep learning are discussed and prospected.
    Survey on Privacy-Preserving Machine Learning
    Liu Junxu, Meng Xiaofeng
    Journal of Computer Research and Development    2020, 57 (2): 346-362.   DOI: 10.7544/issn1000-1239.2020.20190455
    Abstract views: 3209 | HTML views: 141 | PDF (1684KB) downloads: 3310
    Large-scale data collection has vastly improved the performance of machine learning and achieved a win-win situation for both economic and social benefits, while personal privacy preservation is facing new and greater risks and crises. In this paper, we summarize the privacy issues in machine learning and the existing work on privacy-preserving machine learning. We discuss two settings of the model training process respectively: centralized learning and federated learning. The former needs to collect all the user data before training; although this setting is easy to deploy, it still entails enormous hidden privacy and security risks. The latter allows massive numbers of devices to collaboratively train a global model while keeping their data local; as it is currently in the early stage of study, it also has many problems to be solved. The existing work on privacy-preserving techniques falls into two main lines: encryption methods, including homomorphic encryption and secure multi-party computation, and perturbation methods represented by differential privacy, each having its advantages and disadvantages. In this paper, we first focus on the design of differentially-private machine learning algorithms, especially in the centralized setting, and discuss the differences between traditional machine learning models and deep learning models. Then, we summarize the problems existing in current federated learning studies. Finally, we propose the main challenges for future work and point out the connections among privacy protection, model interpretation and data transparency.
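    As a small worked example of the perturbation line of work, the sketch below shows the textbook Laplace mechanism of differential privacy; it is an assumed illustrative example, not an algorithm taken from the surveyed papers.

```python
# Textbook Laplace mechanism: add noise scaled to sensitivity/epsilon so that
# releasing f(data) satisfies epsilon-differential privacy. Illustrative sketch only.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```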
    An Image Quality-Aware Fast Blind Denoising Algorithm for Mixed Noise
    Xu Shaoping, Liu Tingyun, Luo Jie, Zhang Guizhen, Tang Yiling
    Journal of Computer Research and Development    2019, 56 (11): 2458-2468.   DOI: 10.7544/issn1000-1239.2019.20180617
    Abstract views: 534 | HTML views: 9 | PDF (10842KB) downloads: 427
    The existing Gaussian-impulse mixed noise removal algorithms usually restore noisy images via regularization techniques by solving an optimal objective function iteratively, which results in low execution efficiency and limits their practical applications. To this end, in this paper we propose an image quality-aware fast blind denoising algorithm (IQA-FBDA), which takes the convolutional neural network (CNN) as the core technique for the removal of Gaussian-impulse mixed noise. In the training phase, a shallow CNN-based image quality estimation model is first exploited to estimate the quality of the image to be denoised. Then, according to the statistical distribution of the image qualities of a large number of noisy images, we construct a mixed noise pattern classification dictionary (MNPCD). Based on the MNPCD, the training noisy images are classified into 16 sub-classes, and deep CNN-based denoisers for each class are trained. In the denoising phase, the image quality estimation model is first used to estimate the quality value of a given noisy image. After querying the quality value in the MNPCD, the corresponding pre-trained denoiser is exploited to achieve efficient blind image denoising. Experiments show that, compared with the state-of-the-art Gaussian-impulse mixed noise removal algorithms, the proposed one achieves comparable noise reduction with a great improvement in execution efficiency, which makes it more practical.
    Survey of Internet of Things Security
    Zhang Yuqing, Zhou Wei, Peng Anni
    Journal of Computer Research and Development    2017, 54 (10): 2130-2143.   DOI: 10.7544/issn1000-1239.2017.20170470
    Abstract views: 3853 | HTML views: 88 | PDF (1747KB) downloads: 3771
    With the development of smart homes, intelligent care and smart cars, the application fields of the IoT are becoming more and more widespread, and its security and privacy receive more attention from researchers. Currently, related research on the security of the IoT is still in its initial stage, and most research results cannot solve the major security problems in the development of the IoT well. In this paper, we first introduce the three-layer logic architecture of the IoT and outline the security problems and research priorities of each level. Then we discuss security issues such as privacy preservation and intrusion detection, which need special attention in the main IoT application scenarios (smart home, intelligent healthcare, Internet of Vehicles, smart grid, and other industrial infrastructure). Through synthesizing and analyzing the deficiencies of existing research and the causes of the security problems, we point out five major technical challenges in IoT security: privacy protection in data sharing, equipment security protection under limited resources, more effective intrusion detection and defense systems and methods, access control for automated device operations, and cross-domain authentication of mobile devices. We finally detail every technical challenge and point out the IoT security research hotspots in the future.
    Cited (Baidu Scholar): 13
    Shenwei-26010: A High-Performance Many-Core Processor
    Hu Xiangdong, Ke Ximing, Yin Fei, Zhao Xin, Ma Yongfei, Yan Shiyun, Ma Chao
    Journal of Computer Research and Development    2021, 58 (6): 1155-1165.   DOI: 10.7544/issn1000-1239.2021.20201041
    Abstract views: 78 | PDF (1621KB) downloads: 166
    Based on the multi-core processor Shenwei 1600, the high-performance many-core processor Shenwei 26010 adopts SoC (system on chip) technology and integrates 4 computing-control cores and 256 computing cores in a single chip. It adopts an originally designed 64-bit RISC (reduced instruction set computer) instruction set and supports 256-bit SIMD (single instruction multiple data) integer and floating-point vector-acceleration operations. Its peak performance for double-precision floating-point operations reaches 3.168 TFLOPS. The Shenwei 26010 processor is manufactured using a 28nm process technology. The die area of the chip is more than 500 mm², and the 260 cores of the chip can run stably at a frequency of 1.5 GHz. The Shenwei 26010 processor adopts a variety of low-power designs at the architecture level, the microarchitecture level, and the circuit level, leading to a peak energy efficiency ratio of 10.559 GFLOPS/W. Notably, both the operating frequency and the energy efficiency ratio of the chip are higher than those of contemporary processor products worldwide. Through technical innovations in high-frequency design, reliability design and yield design, Shenwei 26010 has effectively solved the issues of high frequency targets, the power consumption wall, stability and reliability, and yield, all of which are encountered when pursuing high-performance computing. It has been applied successfully, on a large scale, to the 100 PFLOPS supercomputer system “Sunway TaihuLight”, and therefore can adequately meet the computing requirements of both scientific and engineering applications.
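    The quoted peak figure is consistent with the stated core counts and frequency under an assumed per-core throughput (8 double-precision flops per cycle for each computing core via 256-bit fused multiply-add SIMD, and 16 for each computing-control core); the breakdown below is an inference for illustration, not taken from the paper.

```python
# Back-of-the-envelope check of the 3.168 TFLOPS peak (per-core rates are assumptions).
computing_cores, control_cores, freq_ghz = 256, 4, 1.5
flops_per_cycle = computing_cores * 8 + control_cores * 16   # assumed FMA SIMD widths
peak_tflops = flops_per_cycle * freq_ghz / 1000.0
print(peak_tflops)   # 3.168
```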
    Data Mining Based on Segmented Time Warping Distance in Time Series Database
    Xiao Hui and Hu Yunfa
    Abstract views: 719 | HTML views: 0 | PDF (472KB) downloads: 1257
    Data mining in time series databases is an important task; most research works are based on comparing time series with the Euclidean distance measure or its transformations. However, the Euclidean distance measure changes greatly when the compared time series move slightly along the time axis, so it is impossible to get satisfactory results with Euclidean distance in many cases. Dynamic time warping distance is a good way to deal with these cases, but it is very expensive to compute, which limits its application. In this paper, a novel method is proposed to avoid the drawback of the Euclidean distance measure. It first divides a time series into several line segments based on feature points chosen by a heuristic method. Each time series is converted into a segmented sequence, and then a new distance measure called the feature-point segmented time warping distance is defined based on this segmentation. Compared with the classical dynamic time warping distance, the new method is much faster with almost no degradation in accuracy. Finally, two complete and detailed experiments are implemented to demonstrate its superiority.
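    For reference, the classical dynamic time warping distance that the proposed segmented variant accelerates can be computed with the standard O(mn) dynamic program sketched below; this is the baseline, not the paper's feature-point segmented method.

```python
# Classical dynamic time warping (DTW) distance between two sequences;
# this is the baseline that the segmented variant in the paper speeds up.
import numpy as np

def dtw_distance(a, b):
    m, n = len(a), len(b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # A point may align with one or several points of the other series.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # 0.0: same shape, shifted in time
```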
    Quantum Annealing Algorithms: State of the Art
    Du Weilin, Li Bin, and Tian Yu
    Journal of Computer Research and Development    2008, 45 (9): 1501-1508.  
    Abstract views: 1727 | HTML views: 11 | PDF (1382KB) downloads: 4520
    In mathematics and applications, quantum annealing is a new method for finding solutions to combinatorial optimization problems and ground states of glassy systems using quantum fluctuations. Quantum fluctuations can be simulated in computers using various quantum Monte Carlo techniques, such as the path integral Monte Carlo method, and thus they can be used to obtain a new kind of heuristic algorithm for global optimization. It can be said that the idea of quantum annealing comes from the celebrated classical simulated thermal annealing invented by Kirkpatrick. However, unlike a simulated annealing algorithm, which utilizes thermal fluctuations to help the algorithm jump from local optimum to global optimum, quantum annealing algorithms utilize quantum fluctuations to help the algorithm tunnel through the barriers directly from local optimum to global optimum. According to the previous studies, although the quantum annealing algorithm is not capable, in general, of finding solutions to NP-complete problems in polynomial time, quantum annealing is still a promising optimization technique, which exhibits good performances on some typical optimization problems, such as the transverse Ising model and the traveling salesman problem. Provided in this paper is an overview of the principles and research progresses of quantum annealing algorithms in recent years; several different kinds of quantum annealing algorithms are presented in detail; both the advantages and disadvantages of each algorithm are analyzed; and prospects for the research orientation of the quantum annealing algorithm in future are given.
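    For contrast with the quantum variant, the classical simulated annealing baseline mentioned above can be sketched as follows (a generic Metropolis-style loop on an assumed toy objective; quantum annealing replaces the thermal acceptance step with quantum fluctuations and is not implemented here).

```python
# Generic simulated annealing (the classical, thermal counterpart of quantum
# annealing): worse moves are accepted with probability exp(-delta/T), T decreasing.
import math, random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    x, e = x0, energy(x0)
    t = t0
    for _ in range(steps):
        x_new = neighbor(x)
        e_new = energy(x_new)
        # Thermal fluctuation: occasionally jump uphill to escape local optima.
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
        t *= cooling
    return x, e

# Toy usage: minimize a 1-D function with several local minima.
best = simulated_annealing(lambda x: (x * x - 4) ** 2 + x,
                           lambda x: x + random.uniform(-0.5, 0.5), x0=5.0)
print(best)
```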
    A Review of Fuzzing Techniques
    Ren Zezhong, Zheng Han, Zhang Jiayuan, Wang Wenjie, Feng Tao, Wang He, Zhang Yuqing
    Journal of Computer Research and Development    2021, 58 (5): 944-963.   DOI: 10.7544/issn1000-1239.2021.20201018
    Abstract views: 256 | HTML views: 3 | PDF (1225KB) downloads: 282
    Fuzzing is a security testing technique which is playing an increasingly important role, especially in detecting vulnerabilities. Fuzzing has experienced rapid development in recent years, and a large number of new achievements have emerged, so it is necessary to summarize and analyze the relevant achievements to follow the research frontier of fuzzing. Based on 4 top security conferences (IEEE S&P, USENIX Security, CCS, NDSS) on network and system security, we summarized fuzzing's basic workflow, including preprocessing, input building, input selection, evaluation, and post-fuzzing, and discussed each step's tasks, challenges, and the corresponding research results. We emphatically analyzed coverage-guided fuzzing, represented by the American Fuzzy Lop tool and its improvements. Using fuzzing in different fields faces vastly different challenges; we summarized the unique requirements and corresponding solutions for fuzzing in specific areas by sorting and analyzing the related literature, focusing mostly on the Internet of Things and the kernel security field because of their rapid development and importance. In recent years, the progress of anti-fuzzing and machine learning technologies has brought both challenges and opportunities to the development of fuzzing; these opportunities and challenges provide a reference for further research directions.
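    The coverage-guided workflow analyzed in the survey reduces to a loop like the sketch below (AFL-style in spirit; the target-execution and coverage interfaces are hypothetical placeholders, not AFL's actual APIs).

```python
# Schematic coverage-guided fuzzing loop (AFL-style). `run_target`, returning
# (coverage_set, crashed), is a hypothetical placeholder for an instrumented target.
import random

def fuzz(run_target, seeds, iterations=100000):
    corpus = list(seeds)
    global_coverage, crashes = set(), []
    for _ in range(iterations):
        parent = random.choice(corpus)              # input selection
        child = mutate(parent)                      # input building
        coverage, crashed = run_target(child)       # evaluation
        if crashed:
            crashes.append(child)
        if not coverage <= global_coverage:         # new edges covered
            global_coverage |= coverage
            corpus.append(child)                    # keep interesting inputs
    return crashes

def mutate(data: bytes) -> bytes:
    # Simple bit-flip mutation as a stand-in for richer mutation strategies.
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]
```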
    Circuit Design of Convolutional Neural Network Based on Memristor Crossbar Arrays
    Hu Fei, You Zhiqiang, Liu Peng, Kuang Jishun
    Journal of Computer Research and Development    2018, 55 (5): 1097-1107.   DOI: 10.7544/issn1000-1239.2018.20170107
    Abstract views: 1106 | HTML views: 4 | PDF (3608KB) downloads: 589
    Memristor crossbar arrays have attracted wide attention due to their excellent performance in neuromorphic computing. In this paper, we design a circuit to realize a convolutional neural network (CNN) using memristors and CMOS devices. Firstly, we improve a memristor crossbar array so that it can store weights and biases accurately. A dot product between two vectors can be calculated after introducing an appropriate encoding scheme. The improved memristor crossbar array is employed for the convolution and pooling operations, and for the classifier in a CNN. Secondly, we design a memristive CNN architecture using the improved memristor crossbar array, based on the high fault tolerance of CNNs, to perform a basic CNN algorithm. In the designed architecture, the analog results of convolution operations are sampled and held before a pooling operation, rather than using analog-digital converters and digital-analog converters between the convolution and pooling operations as in a previous architecture. Experimental results show that the designed circuit, with an area of 0.8525 cm², can achieve a speedup of 1770× compared with a GPU platform. Compared with a previous memristor-based architecture with a similar area, our design is 7.7× faster. The average recognition errors of the designed circuit are only 0.039% and 0.012% higher than those of the software implementation in the cases of a memristor with 6-bit and 8-bit storage capacities, respectively.
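    The analog dot product at the heart of such a crossbar can be modeled in a few lines under an idealized Ohm's-law/Kirchhoff's-law view; the encoding of signed weights as a pair of non-negative conductance arrays is a common convention assumed here, not necessarily the paper's exact scheme.

```python
# Idealized memristor crossbar: each output current is the dot product of the
# input voltage vector with one column of conductances (Ohm + Kirchhoff laws).
# Signed weights are assumed to be encoded as the difference of two
# non-negative conductance arrays (a common convention, not the paper's exact scheme).
import numpy as np

def crossbar_dot(voltages, g_pos, g_neg):
    return voltages @ g_pos - voltages @ g_neg   # column-wise output currents

weights = np.array([[0.5, -0.2], [-1.0, 0.3], [0.1, 0.7]])
g_pos, g_neg = np.clip(weights, 0, None), np.clip(-weights, 0, None)
x = np.array([1.0, 2.0, 0.5])
print(crossbar_dot(x, g_pos, g_neg))   # equals x @ weights
```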
    Segmentation of Color Overlapping Cells Image Based on Sparse Contour Point Model
    Guan Tao, Zhou Dongxiang, Fan Weihong, Liu Yunhui
    Journal of Computer Research and Development    2015, 52 (7): 1682-1691.   DOI: 10.7544/issn1000-1239.2015.20140324
    Abstract views: 1096 | HTML views: 3 | PDF (3305KB) downloads: 908
    Based on an analysis of cell contour structure, a sparse contour point model, which can describe the characteristics of the cell contour, is proposed in this paper. In the sparse contour point model, the cell contour is divided into two parts, namely the light contour and the dark contour, and the cell contour is then approximately described as a set of sparse contour points. Based on this model, color and grayscale image segmentation techniques are combined to locate the basic contour, which lies between the cell and the background. Then, a circular dynamic contour searching method is proposed to search for the dark contour that lies in the overlapping cell region along the basic contour. The contour points located by the searching method are arranged to construct the initial contour of the gradient vector flow (GVF) Snake model. The GVF Snake model is then applied to obtain the final accurate segmentation result of the cell image. Various cell images containing single cells, overlapping cells of similar colors and overlapping cells of different colors have been tested to show the validity and effectiveness of the proposed method. The proposed techniques are useful for the development of automatic cervical cell image analysis systems.