Citation: Ren Pengzhen, Liang Xiaodan, Chang Xiaojun, Zhao Ziying, Xiao Yun. Neural Architecture Search on Temporal Convolutions for Complex Action Recognition[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440048

Neural Architecture Search on Temporal Convolutions for Complex Action Recognition

Funds: This work was supported by the Major National Science and Technology Program (2020AAA0109704), the China Postdoctoral Science Foundation (2023M734009), the General Program of the National Natural Science Foundation of China (62372371), the Key Projects of the Shaanxi Province International Science and Technology Cooperation Plan (2022KWZ-14), the Guangdong Outstanding Youth Fund (2021B1515020061), the Shenzhen Science and Technology Program (GJHZ20220913142600001), the Nansha Key R&D Program (2022ZD014), the Pengcheng Laboratory Major Research Project (PCL2024AS101), and the CAAI-Huawei MindSpore Open Fund.
More Information
  • Author Bios:

    Ren Pengzhen, born in 1993. PhD, engineer. His main research interests include multi-modal representation learning, vision-language pre-training, and automated model design

    Liang Xiaodan, born in 1991. PhD, associate professor, PhD supervisor. Member of CCF. Her main research interests include computer vision, natural language understanding, and smart driving

    Chang Xiaojun, born in 1986. PhD, professor, PhD supervisor. His main research interests include multi-modal learning, computer vision, and green artificial intelligence

    Zhao Ziying, born in 1985. PhD, senior engineer, PhD supervisor. Her main research interests include artificial intelligence and spatial analysis, and applications of large-model technology

    Xiao Yun, born in 1978. PhD, professor, PhD supervisor. Senior member of CCF. Her main research interests include data mining, machine learning, and artificial intelligence algorithms and their applications

  • Received Date: January 28, 2024
  • Revised Date: February 24, 2025
  • Accepted Date: March 02, 2025
  • Available Online: March 02, 2025
  • Abstract: In the field of complex action recognition in videos, the structural design of a model plays a crucial role in its final performance. However, manually designed network structures rely heavily on the knowledge and experience of researchers. Neural architecture search (NAS) has therefore received widespread attention in the image-processing field for its ability to design network structures automatically. NAS has developed rapidly in the image domain: some methods reduce the number of graphics processing unit (GPU) days required for automated model design to single digits, and the model structures they search show strong competitive potential. This encourages us to extend automated model structure design to the video domain, which poses two serious challenges: 1) how to capture the long-range contextual temporal associations in a video as fully as possible, and 2) how to limit the computational surge caused by 3D convolution. To address these challenges, we propose a novel method, Neural Architecture Search on Temporal Convolutions (NAS-TC), for complex action recognition. NAS-TC is a two-stage framework. In the first stage, a classic convolutional neural network (CNN) serves as the backbone and completes the computationally intensive feature-extraction task. In the second stage, we propose a neural architecture search temporal convolutional (NAS-TC) layer to accomplish relatively lightweight long-range temporal model design and information extraction. This gives our method a more reasonable parameter allocation and lets it handle minute-long videos. On three complex action recognition benchmark datasets, the proposed method achieves an average performance gain of 2.3% mAP over comparable methods while reducing the number of parameters by 28.5%.
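    To make the two-stage design above concrete, below is a minimal PyTorch sketch that assumes a DARTS-style continuous relaxation in the second stage: per-frame features are presumed to come from a frozen CNN backbone (stage 1), and a searchable temporal layer mixes candidate depthwise dilated 1D convolutions with learned architecture weights, i.e. a softmax over logits alpha (stage 2). The names MixedTemporalOp and NASTCHead, the candidate kernel sizes and dilations, and the feature dimensions are all illustrative assumptions, not the paper's actual search space.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedTemporalOp(nn.Module):
        """DARTS-style weighted mixture of candidate dilated temporal convolutions.
        Hypothetical candidate set; depthwise (groups=channels) keeps it lightweight."""
        def __init__(self, channels, kernel_sizes=(3, 5), dilations=(1, 2, 4)):
            super().__init__()
            self.ops = nn.ModuleList()
            for k in kernel_sizes:
                for d in dilations:
                    pad = (k - 1) * d // 2  # "same" padding keeps the frame count
                    self.ops.append(nn.Conv1d(channels, channels, k,
                                              padding=pad, dilation=d,
                                              groups=channels))
            # architecture parameters: one logit per candidate operation
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

        def forward(self, x):  # x: (batch, channels, frames)
            weights = F.softmax(self.alpha, dim=0)
            return sum(w * op(x) for w, op in zip(weights, self.ops))

    class NASTCHead(nn.Module):
        """Stage 2: searchable temporal layers over pre-extracted backbone features."""
        def __init__(self, feat_dim, num_classes, num_layers=2):
            super().__init__()
            self.layers = nn.ModuleList(MixedTemporalOp(feat_dim)
                                        for _ in range(num_layers))
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, feats):  # feats: (batch, frames, feat_dim) from a frozen CNN
            x = feats.transpose(1, 2)      # -> (batch, feat_dim, frames) for Conv1d
            for layer in self.layers:
                x = F.relu(layer(x))
            x = x.mean(dim=2)              # global temporal average pooling
            return self.classifier(x)

    # Usage sketch: 64 sampled frames, 2048-D features (e.g. a ResNet-50 pool5
    # output, an assumption), 157 classes as in a Charades-style benchmark.
    head = NASTCHead(feat_dim=2048, num_classes=157)
    logits = head(torch.randn(4, 64, 2048))   # (batch=4, frames=64, feat_dim=2048)
    print(logits.shape)                       # torch.Size([4, 157])

    The depthwise candidate convolutions are one way to keep the searchable temporal head cheap relative to the backbone, in the spirit of the abstract's emphasis on reasonable parameter allocation; after search, the mixture would typically be discretized to the highest-weighted operation per layer.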

  • [1]
    Chang Xiaojun, Yu Yao-Liang, Yang Yi, et al. They are not equally reliable: Semantic event search using differentiated concept classifiers [C] //Proc of the 29th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 1884−1893
    [2]
    Ji Yuzhu, Zhang Haijun, Zhang Zhao, et al. CNN-based encoder-decoder networks for salient object detection: A comprehensive review and recent advances[J]. Information Sciences, 2021, 546: 835−857
    [3]
    Ren Jiahuan, Zhang Zhao, Hong Richang, et al. Robust low-rank convolution network for image denoising [C] //Proc of the 30th ACM Int Conf on Multimedia. Lisbon, New York: ACM, 2022: 6211−6219
    [4]
    Wu Zhihao, Zhao Zhang, and Fan Jicong. Graph convolutional kernel machine versus graph convolutional networks [C/OL] //Proc of the 37th Advances in Neural Information Processing Systems, 2023[2024-08-01]. https://proceedings.neurips.cc/paper_files/paper/2023/hash/3ec6c6fc9065aa57785eb05dffe7c3db-Abstract-Conference.html
    [5]
    Zoph B, Vasudevan V, Shlens J, et al. Learning transferable architectures for scalable image recognition [C] //Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 8697−8710
    [6]
    Real E, Aggarwal A, Huang Y, et al. Regularized evolution for image classifier architecture search [C] //Proc of the 23rd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 4780−4789
    [7]
    Ren Pengzhen, Xiao Yun, Chang Xiaojun, et al. A comprehensive survey of neural architecture search: Challenges and solutions[J]. ACM Computing Surveys (CSUR), 2021, 54(4): 1−34
    [8]
    Zhang Xingwu, Ma Rui, Zhao Yu, et al. Differentiable sampling based efficient architecture search for automatic fault diagnosis[J]. Engineering Applications of Artificial Intelligence, 2024, 127(1): 107−214
    [9]
    孟子尧,谷雪,梁艳春,许东,吴春国. 深度神经架构搜索综述[J]. 计算机研究与发展,2021,58(1):22−33 doi: 10.7544/issn1000-1239.2021.20190851

    Meng Ziyao, Gu Xue, Liang Yanchun, et al. Deep neural architecture search: A survey[J]. Journal of Computer Research and Development, 2021, 58(1): 22−33(in Chinese) doi: 10.7544/issn1000-1239.2021.20190851
    [10]
    Sigurdsson G A, Varol G, Wang Xiaolong, et al. Hollywood in homes: Crowdsourcing data collection for activity understanding [C] //Proc of the 14th European Conf on Computer Vision. Berlin: Springer, 2016: 510−526
    [11]
    Cai Han and Chen Tianyao, Zhang Weinan, et al. Efficient architecture search by network transformation [C] // Proc of the 22nd AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2018: 2787−2794
    [12]
    Negrinho R, Geoff G. Deeparchitect: Automatically designing and training deep architectures [J]. arXiv preprint, arXiv: 1704.08792, 2017
    [13]
    Liu Chenxi, Zoph B, Neumann M, et al. Progressive neural architecture search [C] // Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 19−34
    [14]
    Kandasamy K, Neiswanger W, Schneider J, et al. Neural architecture search with bayesian optimisation and optimal transport [C/OL] // Proc of the 32nd Advances in Neural Information Processing Systems, 2018[2023-10-01]. https://proceedings.neurips.cc/paper_files/paper/2018/hash/f33ba15effa5c10e873bf3842afb46a6-Abstract.html
    [15]
    Liu Hanxiao, Simonyan K, Yang Yiming. Darts: Differentiable architecture search[J]. arXiv preprint, arXiv: 1806.09055, 2018
    [16]
    Poliakov E, Hung Weijie, Huang Chingchun. Efficient constraint-aware neural architecture search for object detection [C] // Proc of the 15th Asia Pacific Signal and Information Processing Association Annual Summit and Conf. Piscataway, NJ: IEEE, 2023: 733−740
    [17]
    Ozaeta M A A, Fajardo A C, Brazas F P, et al. Seagrass classification using differentiable architecture search [C] // Proc of the 26th Int Joint Conf on Computer Science and Software Engineering (JCSSE). Piscataway, NJ: IEEE, 2023: 123−128
    [18]
    Howard Andrew G, Zhu Menglong, Chen Bo, et al. Mobilenets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv preprint, arXiv: 1704.04861, 2017
    [19]
    Zhang Xiangyu, Zhou Xinyu, Lin Mengxiao, et al. Shufflenet: An extremely efficient convolutional neural network for mobile devices [C] // Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 6848−6856
    [20]
    Zoph B, Vasudevan V, Shlens J, et al. Learning transferable architectures for scalable image recognition[C]// Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 8697−8710
    [21]
    Tong Zhan, Song Yibing, Wang Jue, et al. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training [C/OL] //Proc of the 36th Advances in Neural Information Processing Systems, 2022[2023-08-17]. https://proceedings.neurips.cc/paper_files/paper/2022/hash/416f9cb3276121c42eebb86352a4354a-Abstract-Conference.html
    [22]
    Zisserman A, Carreira J, Simonyan K, et al. The kinetics human action video dataset[J]. arXiv preprint, arXiv: 1705.06950, 2017
    [23]
    Goyal R, Ebrahimi Kahou S, Michalski V, et al. The "something something" video database for learning and evaluating visual common sense [C] // Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE. 2017: 5842−5850
    [24]
    Soomro K, Zamair A R, Shah M. UCF101: A dataset of 101 human actions classes from videos in the wild[J]. arXiv preprint, arXiv: 1212.0402, 2012
    [25]
    Kuehne H, Jhuang H, Garrote E, et al. HMDB: A large video database for human motion recognition [C] // Proc of the 24th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2011: 2556−2563
    [26]
    Schindler K, Van Gool L. Action snippets: How many frames does human action recognition require? [C/OL] //Proc of the 21st IEEE/CVF Conf on Computer Vision and Pattern Recognition, 2008[2023-10-15]. https://ieeexplore.ieee.org/document/4587730
    [27]
    Hussein N, Gavves E, Smeulders A W. Timeception for complex action recognition [C] // Proc of the 32nd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 254−263
    [28]
    Hilde K, Ali A, Thomas S. The language of actions: Recovering the syntax and semantics of goal-directed human activities [C] // Proc of the 27th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2014: 780−787
    [29]
    Yeung S, Russakovsky O, Jin N, et al. Every moment counts: Dense detailed labeling of actions in complex videos[J]. International Journal of Computer Vision, 2018, 126: 375−389
    [30]
    Zhou Jiaming, Li Hanjun, Lin Kunyu, et al. Adafocus: Towards end-to-end weakly supervised learning for long-video action understanding[J]. arXiv preprint, arXiv: 2311.17118, 2023
    [31]
    Vaswani A, Shazeer N, Parmar N. Attention is all you need [C/OL] //Proc of the 31st Advances in Neural Information Processing Systems, 2017[2023-10-15]. https://www.semanticscholar.org/reader/204e3073870fae3d05bcbc2f6a8e263d9b72e776
    [32]
    Bertasius G, Wang H, Torresani L. Is space-time attention all you need for video understanding? [C/OL] //Proc of the 38th Int Conf on Machine Learning, 2021[2024-01-01]. https://proceedings.mlr.press/v139/bertasius21a/bertasius21a-supp.pdf
    [33]
    Tong Zhan, Song Yibing, Wang Jue, et al. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training [C] // Proc of the 36th Advances in Neural Information Processing Systems, 2022[2023-12-10]. https://proceedings.neurips.cc/paper_files/paper/2022/hash/416f9cb3276121c42eebb86352a4354a-Abstract-Conference.html
    [34]
    He Kaiming, Chen Xinlei, Xie Saining, et al. Masked autoencoders are scalable vision learners [C] // Proc of the 35th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 16000−16009
    [35]
    Ji Shuiwang, Xu Wei, Yang Ming, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(1): 221−231
    [36]
    Li Chao, Zhong Qiaoyong, Xie Di, et al. Collaborative spatiotemporal feature learning for video action recognition [C]//Proc of the 32nd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 7872−7881
    [37]
    Xie Saining, Sun Chen, Huang Jonathan, et al. Rethinking spatiotemporal feature learning: speed-accuracy trade-offs in video classification [C] //Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 305−321
    [38]
    Wu Chaoyuan, Feichtenhofer C, Fan Haoqi, et al. Long-term feature banks for detailed video understanding [C]// Proc of the 32nd IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 284−293
    [39]
    Wu Chaoyuan, Li Yanghao, Mangalam K, et al. Memvit: Memory-augmented multiscale vision transformer for efficient long-term video Recognition [C] // Proc of the 35th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 13587−13597
    [40]
    Zoph B. Neural architecture search with reinforcement learning[J]. arXiv preprint, arXiv: 1611.01578, 2016
    [41]
    Cai Han, Yang Jiacheng, Zhang Weinan, et al. Path-level network transformation for efficient architecture search [C/OL] //Proc of the 35th Int Conf on Machine Learning, 2018[2023-11-14]. https://proceedings.mlr.press/v80/cai18a.html
    [42]
    Real E, Moore S, Selle A, et al. Large-scale evolution of image classifiers [C/OL] //Proc of the 34th Int Conf on Machine Learning, 2017[2023-10-08]. https://proceedings.mlr.press/v70/real17a.html
    [43]
    Hussein N, Gavves E, Smeulders A W. Unified embedding and metric learning for zero-exemplar event detection [C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 1096−1105
    [44]
    Habibian A, Mensink T, Snoek C GM. Video2vec embeddings recognize events when examples are scarce[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(10): 2089−2103
    [45]
    Girdhar R, Ramanan D. Attentional pooling for action recognition [C/OL] // Proc of the 31st Advances in Neural Information Processing Systems, 2017[2023-10-13]. https://proceedings.neurips.cc/paper/2017/hash/67c6a1e7ce56d3d6fa748ab6d9af3fd7-Abstract.html
    [46]
    Fernando B, Gavves E, Oramas J, et al. Rank pooling for action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(4): 773−787
    [47]
    Oneata D, Verbeek J, Schmid C. Action and event recognition with fisher vectors on a compact feature set [C] //Proc of the 26th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2013: 1817−1824
    [48]
    Cosmin DI, Ionescu B, Aizawa K, et al. Spatio-temporal vector of locally max pooled features for action recognition in videos [C] //Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 3097−3106
    [49]
    Donahue J, Anne HL, Guadarrama S, et al. Long-term recurrent convolutional networks for visual recognition and description [C] //Proc of the 28th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2015: 2625−2634
    [50]
    Ghodrati A, Gavves E, Snoek C G M. Video time: Properties, encoders and evaluation[J]. arXiv preprint, arXiv: 1807.06980, 2018
    [51]
    Huang Gao, Liu Zhuang, Van DML, et al. Densely connected convolutional networks [C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 4700−4708
    [52]
    Shi Wuzhen, Liu Shaohui, Jiang Feng, et al. Video compressed sensing using a convolutional neural network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(2): 425−438
    [53]
    Song Xue, Xu Baohan, Jiang Yugang, et al. Predicting content similarity via multimodal modeling for video-in-video advertising[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(2): 569−581
    [54]
    Simonyan K, Zisserman A. Two-stream convolutional networks for action recognition in videos [C/OL] //Proc of the 28th Advances in Neural Information Processing Systems. 2014[2024-01-19]. https:// proceedings.neurips.cc/paper_files/paper/2014/hash/00ec53c4682d36f5c4359f4ae7bd7ba1-Abstract.html
    [55]
    Bilen H, Fernando B, Gavves E, et al. Action recognition with dynamic image networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(12): 2799−2813
    [56]
    Ji Shuiwang, Xu Wei, Yang Ming, et al. 3D convolutional neural networks for human action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(1): 221−231
    [57]
    Tran D, Bourdev L, Fergus R, et al. Learning spatiotemporal features with 3d convolutional networks [C] //Proc of the 28th IEEE/CVF Conf on Computer Vision and Pattern Recognition, Piscataway, NJ: IEEE. 2015: 4489−4497
    [58]
    Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the kinetics dataset [C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 6299−6308
    [59]
    Zhou Bolei, Andonian A, Oliva A, et al. Temporal relational reasoning in videos [C] //Proc of the 15th European Conf on Computer Vision. Berlin: Springer, 2018: 803−818
    [60]
    Wang Limin, Xiong Yuanjun, Wang, Zhe, et al. Temporal segment networks: Towards good practices for deep action recognition [C] //Proc of the 14th European Conf on Computer Vision. Berlin: Springer, 2016: 20−36
    [61]
    Wang Limin, Xiong Yuanjun, Wang Zhe, et al. Temporal segment networks for action recognition in videos[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(11): 2740−2755
    [62]
    Wang Xiaolong, Girshick R, Gupta A, et al. Non-local neural networks [C] //Proc of the 31st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 7794−7803
    [63]
    Sigurdsson G A, Divvala S, Farhadi A, et al. Asynchronous temporal fields for action recognition[C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 585−594
    [64]
    Varol G, Ivan L, Cordelia S, et al. Long-term temporal convolutions for action recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(6): 1510−1517
    [65]
    Zoph B, Vasudevan V, Shlens J, et al. Learning transferable architectures for scalable image recognition [C] //Proc of the 31st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 8697−8710
    [66]
    Zhang Xiangyu, Zhou Xinyu, Lin Mengxiao, et al. Shufflenet: An extremely efficient convolutional neural network for mobile devices [C] //Proc of the 31st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 6848−6856
    [67]
    Li Guihong, Duc H, Kartikeya B, et al. Zero-Shot neural architecture search: Challenges, solutions, and opportunities[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(12): 7618−7635
    [68]
    Haroon I, Amir R Z, Jiang Yugang, et al. The thumos challenge on action recognition for videos “in the wild”[J]. Computer Vision and Image Understanding, 2017, 155(1): 1−23
    [69]
    Girdhar R, Ramanan D, Gupta A, et al. Actionvlad: Learning spatio-temporal aggregation for action classification [C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 971−980
    [70]
    Hussein N, Gavves E, Smeulders A W M. Videograph: Recognizing minutes-long human activities in videos[J]. arXiv preprint, arXiv: 1905.05143, 2019
    [71]
    Piergiovanni A, Angelova A, Toshev A, et al. Evolving space-time neural architectures for videos [C] //Proc of the 17th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2019: 1793−1802
    [72]
    Xu Huijuan, Das A, Saenko K. R-c3d: Region convolutional 3d network for temporal activity detection [C] //Proc of the 30th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 5783−5792
    [73]
    Dai Xiyang, Singh B, Ng J Y H, et al. Tan: temporal aggregation network for dense multi-label action recognition [C] //Proc of the 6th IEEE Winter Conf on Applications of Computer Vision. Piscataway, NJ: IEEE, 2019: 151−160
    [74]
    Piergiovanni A, Ryoo M S. Learning latent super-events to detect multiple activities in videos [C] //Proc of the 31st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 5304−5313
    [75]
    Piergiovanni A, Ryoo M. Temporal gaussian mixture layer for videos [C/OL] //Proc of the 36th Int Conf on Machine Learning, 2019[2023-12-16]. https://proceedings.mlr.press/v97/piergiovanni19a.html
    [76]
    Tirupattur P, Duarte K, Rawat Y S, et al. Modeling multi-label action dependencies for temporal action localization [C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 1460−1470
    [77]
    Dai R, Das S, Minciullo L, et al. Pdan: pyramid dilated attention network for action detection [C] //Proc of the 8th IEEE/CVF Winter Conf on Applications of Computer Vision. Piscataway, NJ: IEEE, 2021: 2970−2979
    [78]
    Dai Rui, Das S, Bremond F. Ctrn: Class-temporal relational network for action detection[J]. arXiv preprint, arXiv: 2110.13473, 2021
    [79]
    Wu Yuankai, Su Xin, Salihu D, et al. Modeling action spatiotemporal relationships using graph-based class-level attention network for long-term action detection [C] //Proc of the 36th IEEE/RSJ Int Conf on Intelligent Robots and Systems (IROS). Piscataway, NJ: IEEE, 2023: 6719−6726
    [80]
    Zhou Jiaming, Lin Kunyu, Li Haoxin, et al. Graph-based high-order relation modeling for long-term action recognition [C] //Proc of the 34th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 8984−8993
    [81]
    Guo Hongji, Wang Hanjing, Ji Qiang. Uncertainty-guided probabilistic transformer for complex action recognition[C]// Proc of the 35th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 20052−20061
