• China Top-Quality Science and Technology Journal
  • CCF-Recommended Class A Chinese Journal
  • T1-Class High-Quality Science and Technology Journal in Computing
Pan Weimin and He Jun. Neuro-Fuzzy System Modeling with Density-Based Clustering[J]. Journal of Computer Research and Development, 2010, 47(11): 1986-1992.

Neuro-Fuzzy System Modeling with Density-Based Clustering

More Information
  • Published Date: November 14, 2010
  • Abstract: Neuro-fuzzy systems are widely used for nonlinear system modeling. How to partition the input space optimally is the core issue in fuzzy system modeling. Previous approaches suffer from two main drawbacks, the difficulty of determining the number of partitions and rule redundancy, which hinder the application of fuzzy systems. The authors present a new approach to neuro-fuzzy system modeling based on DENCLUE with a dynamic threshold and similar-rule merging (DDTSRM). They first introduce DDT, which uses a dynamic threshold rather than a global threshold when merging density attractors in DENCLUE. DDTSRM determines the number of rules well because DDT does not depend on input parameters; in addition, modeling performance improves because DDT can find clusters of arbitrary shape and arbitrary density. Rule redundancy is caused by similar fuzzy sets in the input and output data spaces, so after structure identification, similar rules are merged according to similarity measures between fuzzy sets. This also helps the model avoid overfitting the sample data. Finally, the backpropagation (BP) method is used to fine-tune the parameters of the fuzzy model. DDTSRM is applied to a nonlinear function and to the Box-Jenkins system; experimental results show that it overcomes both drawbacks and achieves good performance.
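The rule-merging step described in the abstract compares fuzzy sets pairwise and fuses those that are sufficiently similar. A minimal illustrative sketch of that idea, not the authors' implementation: it assumes Gaussian membership functions and a Jaccard-style similarity (overlap over union) computed on a discretized grid, with a hypothetical merge threshold of 0.8.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    # Gaussian membership function with center c and width sigma
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def fuzzy_similarity(c1, s1, c2, s2, grid):
    # Jaccard-style similarity |A ∩ B| / |A ∪ B|, approximated on a grid
    a = gaussian_mf(grid, c1, s1)
    b = gaussian_mf(grid, c2, s2)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def merge_similar_sets(sets, grid, threshold=0.8):
    # Greedily merge fuzzy sets whose similarity exceeds the threshold;
    # a merged set takes the mean center and mean width of the pair
    merged = []
    for c, s in sets:
        for i, (mc, ms) in enumerate(merged):
            if fuzzy_similarity(c, s, mc, ms, grid) >= threshold:
                merged[i] = ((c + mc) / 2.0, (s + ms) / 2.0)
                break
        else:
            merged.append((c, s))
    return merged

grid = np.linspace(-5.0, 5.0, 1001)
# Two nearly identical sets plus one distinct set: merging keeps two rules
rules = merge_similar_sets([(0.0, 1.0), (0.05, 1.0), (3.0, 0.5)], grid)
```

Merging near-duplicate antecedent sets in this way is what reduces rule redundancy; removing redundant rules also limits the model's capacity to overfit the training samples.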
