Bao Zhenkun, Zhang Weiming, Cheng Sen, Zhao Xianfeng. ±1 Steganographic Codes by Applying Syndrome-Trellis Codes to Dynamic Distortion Model in Pixel Chain[J]. Journal of Computer Research and Development, 2014, 51(8): 1739-1747. DOI: 10.7544/issn1000-1239.2014.20121213

±1 Steganographic Codes by Applying Syndrome-Trellis Codes to Dynamic Distortion Model in Pixel Chain

More Information
  • Published Date: August 14, 2014
  • Abstract: Double-layered STC (syndrome-trellis code) is the most popular method for minimizing the distortion of ±1 steganography. However, it is a probabilistic algorithm that may fail during embedding on some distortion profiles, and it is computationally expensive. Motivated by these two drawbacks, we propose a dynamic distortion model defined on a pixel chain. The model works on the principle that the SLSB (second least significant bit) of the current pixel controls the LSB (least significant bit) of the next pixel, so the distortion of some pixels can be adjusted to zero. Applying STC to this dynamic distortion model yields a novel ±1 steganography method. Experimental results show that, compared with double-layered STC, the proposed method offers comparable distortion minimization with significantly higher embedding speed, and it avoids failure during embedding. Together, these advantages make the method better suited to steganographic systems and software in practical environments.
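As background for the abstract, STC-based embedding is an instance of syndrome coding: the recipient extracts the message as the syndrome H·y (mod 2) of the stego LSB vector y, and the embedder chooses, among all y with that syndrome, the one closest to the cover LSBs under a distortion profile. The brute-force sketch below illustrates only this general principle; the parity-check matrix, cover bits, and distortion profile are made up for illustration, and the paper's actual algorithm replaces the exponential search with a linear-time Viterbi-style trellis search:

```python
import itertools

# Toy syndrome-coding embedder (the principle underlying STC embedding).
# The embedder finds the minimum-distortion stego bit vector y whose
# syndrome H*y mod 2 equals the message m; the recipient recovers m by
# recomputing the syndrome. Brute force is exponential in n -- syndrome-
# trellis codes find the same optimum with a trellis search instead.

def syndrome(H, y):
    """Compute H*y mod 2 for a list-of-lists H and a bit list y."""
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

def embed_bruteforce(H, x, m, rho):
    """Return (y, cost): min-distortion stego bits with syndrome(H, y) == m."""
    n = len(x)
    best, best_cost = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        if syndrome(H, list(bits)) == m:
            cost = sum(r for xi, yi, r in zip(x, bits, rho) if xi != yi)
            if cost < best_cost:
                best, best_cost = list(bits), cost
    return best, best_cost

# Illustrative parameters (not from the paper): a 2x4 parity-check
# matrix, four cover LSBs, a 2-bit message, unit distortion per pixel.
H = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
x = [1, 1, 0, 0]            # cover LSBs
m = [0, 1]                  # message bits
rho = [1.0, 1.0, 1.0, 1.0]  # per-pixel distortion profile

y, cost = embed_bruteforce(H, x, m, rho)
assert syndrome(H, y) == m  # recipient recovers the message
assert cost == 1.0          # here a single pixel change suffices
```

In a ±1 scheme the flipped LSB positions correspond to pixels changed by +1 or −1; the paper's dynamic distortion model exploits the freedom in that sign choice, via the SLSB, to drive some pixels' distortion to zero.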
