Citation: Yan Xinkai, Huo Yuchi, Bao Hujun. Survey on Neural Rendering and Its Hardware Acceleration[J]. Journal of Computer Research and Development, 2024, 61(11): 2846-2869. DOI: 10.7544/issn1000-1239.202330483
Neural rendering is a new class of deep-learning-based image and video generation methods. It combines deep learning models with the physical knowledge of computer graphics to obtain controllable, realistic scene models, enabling control over scene attributes such as lighting, camera parameters, and pose. On the one hand, neural rendering can both exploit the strengths of deep learning to accelerate the traditional forward rendering process and provide new solutions for specific tasks such as inverse rendering and 3D reconstruction. On the other hand, innovative hardware architectures tailored to the neural rendering pipeline can break through the parallel-computing and power-consumption bottlenecks of existing graphics processors, and are expected to provide important support for key future areas such as virtual and augmented reality, film and television creation, digital entertainment, artificial intelligence, and the metaverse. In this paper, we review the technical substance, main challenges, and research progress of neural rendering. On this basis, we analyze the common hardware-acceleration requirements of the neural rendering pipeline and the characteristics of current hardware-acceleration architectures, and then discuss the design challenges of neural rendering processor architectures. Finally, we outline future development trends for neural rendering processor architecture.
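The core rendering operation shared by the NeRF-family methods surveyed here (reference [3] and its descendants) is alpha-compositing network-predicted densities and colors along each camera ray. The following is a minimal NumPy sketch of that compositing step, not any paper's actual implementation; the function name, array shapes, and the small epsilon used for numerical stability are illustrative assumptions.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering for one camera ray.

    sigmas: (N,) volume densities predicted at each sample point
    colors: (N, 3) RGB values predicted at each sample point
    deltas: (N,) distances between adjacent samples along the ray
    Returns the composited (3,) pixel color.
    """
    # Opacity of each ray segment from its density and length
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: fraction of light surviving past the preceding samples
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so sample i sees samples < i
    # Per-sample contribution weights, then weighted sum of colors
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

With a nearly opaque first sample the returned color is dominated by that sample; with all densities zero the ray composites to black, which is the sanity check most implementations of this integral use.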
[1] Eslami S, Rezende D, Besse F, et al. Neural scene representation and rendering[J]. Science, 2018, 360(6394): 1204−1210
[2] Tewari A, Fried O, Thies J, et al. State of the art on neural rendering[J]. Computer Graphics Forum, 2020, 39(2): 701−727
[3] Mildenhall B, Srinivasan P, Tancik M, et al. NeRF: Representing scenes as neural radiance fields for view synthesis[J]. Communications of the ACM, 2022, 65(1): 99−106 doi: 10.1145/3503250
[4] Tewari A, Thies J, Mildenhall B, et al. Advances in neural rendering[J]. Computer Graphics Forum, 2022, 41(2): 41−74
[5] Wang Qi, Zhong Zhihua, Huo Yuchi, et al. State of the art on deep learning-enhanced rendering methods[J]. Machine Intelligence Research, 2023, 20(6): 799−821 doi: 10.1007/s11633-022-1400-x
[6] Kajiya J. The rendering equation[C]//Proc of the 13th Annual Conf on Computer Graphics and Interactive Techniques. New York: ACM, 1986: 143−150
[7] NVIDIA. RTX technology [EB/OL]. 2023 [2023-04-22]. https://developer.nvidia.com/rtx/ray-tracing
[8] Trina W. Truly global illumination: Ray tracing for the masses [EB/OL]. 2023 [2023-04-22]. https://blog.imaginationtech.com/truly-global-illumination-ray-tracing-for-the-masses
[9] Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators[J]. Neural Networks, 1989, 2(5): 359−366 doi: 10.1016/0893-6080(89)90020-8
[10] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proc of the 27th Neural Information Processing Systems. New York: ACM, 2014: 2672−2680
[11] Zhang R, Isola P, Efros A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proc of the 36th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2018: 586−595
[12] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. arXiv preprint, arXiv: 1706.03762, 2017
[13] Curless B, Levoy M. A volumetric method for building complex models from range images[C]//Proc of the 23rd SIGGRAPH. New York: ACM, 1996: 303−312
[14] Greger G, Shirley P, Hubbard P, et al. The irradiance volume[J]. IEEE Computer Graphics and Applications, 1998, 18(2): 32−43 doi: 10.1109/38.656788
[15] Aliev K, Ulyanov D, Lempitsky V, et al. Neural point-based graphics[J]. arXiv preprint, arXiv: 1906.08240, 2019
[16] Wiles O, Gkioxari G, Szeliski R, et al. SynSin: End-to-end view synthesis from a single image[C]//Proc of the 38th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 7465−7475
[17] Lassner C, Zollhofer M. Pulsar: Efficient sphere-based neural rendering[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 1440−1449
[18] Neff T, Stadlbauer P, Parger M, et al. Point-based neural rendering with per-view optimization[J]. Computer Graphics Forum, 2021, 40(4): 40−54
[19] Park J, Florence P, Straub J, et al. DeepSDF: Learning continuous signed distance functions for shape representation[C]//Proc of the 37th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 165−174
[20] Chen Zhiqin, Zhang Hao. Learning implicit fields for generative shape modeling[C]//Proc of the 37th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 5932−5941
[21] Zhu Jingsen, Huo Yuchi, Qi Ye, et al. I2-SDF: Intrinsic indoor scene reconstruction and editing via raytracing in neural SDFs[C]//Proc of the 41st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 12489−12498
[22] Martin-Brualla R, Radwan N, Sajjadi M, et al. NeRF in the wild: Neural radiance fields for unconstrained photo collections[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 7206−7215
[23] Niemeyer M, Geiger A. GIRAFFE: Representing scenes as compositional generative neural feature fields[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 11448−11459
[24] Pumarola A, Corona E, Pons-Moll G, et al. D-NeRF: Neural radiance fields for dynamic scenes[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 10313−10322
[25] Srinivasan P, Deng Boyang, Zhang Xiuming, et al. NeRV: Neural reflectance and visibility fields for relighting and view synthesis[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 7491−7500
[26] Chen Jianchuan, Yi Wentao, Ma Liqian, et al. GM-NeRF: Learning generalizable model-based neural radiance fields from multi-view images[C]//Proc of the 41st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 20648−20658
[27] Bao Chong, Zhang Yinda, Yang Bangbang, et al. SINE: Semantic-driven image-based NeRF editing with prior-guided editing field[C]//Proc of the 41st IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 20919−20929
[28] Nguyen-Phuoc T, Li Chuan, Balaban S, et al. RenderNet: A deep convolutional network for differentiable rendering from 3D shapes[C]//Proc of the 36th Neural Information Processing Systems. New York: ACM, 2018: 7902−7912
[29] Rematas K, Ferrari V. Neural voxel renderer: Learning an accurate and controllable rendering tool[C]//Proc of the 38th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 5416−5426
[30] Aliev K, Ulyanov D, Lempitsky V. Neural point-based graphics[J]. arXiv preprint, arXiv: 1906.08240, 2019
[31] Dai Peng, Zhang Yinda, Li Zhuwen, et al. Neural point cloud rendering via multi-plane projection[C]//Proc of the 37th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 7830−7839
[32] Sanzenbacher P, Mescheder L, Geiger A. Learning neural light transport[J]. arXiv preprint, arXiv: 2006.03427, 2020
[33] Niemeyer M, Mescheder L, Oechsle M, et al. Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision[C]//Proc of the 38th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 3501−3512
[34] Yariv L, Gu Jiatao, Kasten Y, et al. Volume rendering of neural implicit surfaces[J]. arXiv preprint, arXiv: 2106.12052, 2021
[35] Thomas M, Forbes A. Deep illumination: Approximating dynamic global illumination with generative adversarial network[J]. arXiv preprint, arXiv: 1710.09834, 2017
[36] Müller T, Rousselle F, Novák J, et al. Neural control variates[J]. ACM Transactions on Graphics, 2020, 39(6): 1−19
[37] Suppan C, Chalmers A, Zhao Junhong, et al. Neural screen space rendering of direct illumination[J/OL]. Journal of the European Association for Computer Graphics, 2021 [2023-04-29]. https://diglib.eg.org/bitstream/handle/10.2312/pg20211385/037-042.pdf
[38] Nalbach O, Arabadzhiyska E, Mehta D, et al. Deep shading: Convolutional neural networks for screen space shading[J]. Computer Graphics Forum, 2017, 36(4): 65−78 doi: 10.1111/cgf.13225
[39] Ren Lei, Song Ying. AOGAN: A generative adversarial network for screen space ambient occlusion[J]. Computational Visual Media, 2022, 8(3): 483−494 doi: 10.1007/s41095-021-0248-2
[40] Oechsle M, Mescheder L, Niemeyer M, et al. Texture fields: Learning texture representations in function space[C]//Proc of the 37th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 4530−4539
[41] Sitzmann V, Thies J, Heide F, et al. DeepVoxels: Learning persistent 3D feature embeddings[C]//Proc of the 37th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 2432−2441
[42] Lombardi S, Simon T, Saragih J, et al. Neural volumes: Learning dynamic renderable volumes from images[J]. ACM Transactions on Graphics, 2019, 38(4): 1−14
[43] Lindell D, Martel J, Wetzstein G. AutoInt: Automatic integration for fast neural volume rendering[C]//Proc of the 39th IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 14551−14560
[44] Liu Lingjie, Gu Jiatao, Lin K, et al. Neural sparse voxel fields[C]//Proc of the 38th Neural Information Processing Systems. New York: ACM, 2020: 15651−15663
[45] Reiser C, Peng Songyou, Liao Yiyi, et al. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs[C]//Proc of the 18th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 14315−14325
[46] Hedman P, Srinivasan P, Mildenhall B, et al. Baking neural radiance fields for real-time view synthesis[C]//Proc of the 18th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 5855−5864
[47] Garbin S, Kowalski M, Johnson M, et al. FastNeRF: High-fidelity neural rendering at 200FPS[C]//Proc of the 18th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 14326−14335
[48] Johari M, Lepoittevin Y, Fleuret F. GeoNeRF: Generalizing NeRF with geometry priors[C]//Proc of the 40th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 18344−18347
[49] Yu A, Li Ruilong, Tancik M, et al. PlenOctrees for real-time rendering of neural radiance fields[C]//Proc of the 18th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 5732−5741
[50] Sitzmann V, Rezchikov S, Freeman W, et al. Light field networks: Neural scene representations with single-evaluation rendering[J]. arXiv preprint, arXiv: 2106.02634, 2021
[51] Müller T, Evans A, Schied C, et al. Instant neural graphics primitives with a multiresolution hash encoding[J]. ACM Transactions on Graphics, 2022, 41(4): 1−15
[52] Li Zhengqi, Niklaus S, Snavely N, et al. Neural scene flow fields for space-time view synthesis of dynamic scenes[C]//Proc of the 39th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 6498−6508
[53] Yu A, Ye V, Tancik M, et al. pixelNeRF: Neural radiance fields from one or few images[C]//Proc of the 39th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 4576−4585
[54] Wang Qianqian, Wang Zhicheng, Genova K, et al. IBRNet: Learning multi-view image-based rendering[C]//Proc of the 39th IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 4688−4697
[55] Reizenstein J, Shapovalov R, Henzler P, et al. Common objects in 3D: Large-scale learning and evaluation of real-life 3D category reconstruction[C]//Proc of the 18th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2021: 10881−10891
[56] Sitzmann V, Zollhöfer M, Wetzstein G. Scene representation networks: Continuous 3D-structure-aware neural scene representations[J]. arXiv preprint, arXiv: 1906.01618, 2019
[57] Bi Sai, Xu Zexiang, Srinivasan P, et al. Neural reflectance fields for appearance acquisition[J]. arXiv preprint, arXiv: 2008.03824, 2020
[58] Zhang Xiuming, Fanello S, Tsai Y, et al. Neural light transport for relighting and view synthesis[J]. arXiv preprint, arXiv: 2008.03806, 2020
[59] Sun Tiancheng, Lin K, Bi Sai, et al. NeLF: Neural light-transport field for portrait view synthesis and relighting[J]. arXiv preprint, arXiv: 2107.12351, 2021
[60] Meijering E, Zuiderveld K. Image reconstruction by convolution with symmetrical piecewise nth-order polynomial kernels[J]. IEEE Transactions on Image Processing, 1999, 8(2): 192−201 doi: 10.1109/83.743854
[61] Keys R. Cubic convolution interpolation for digital image processing[J]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981, 29(6): 1153−1160
[62] NVIDIA. NVIDIA DLSS 3 [EB/OL]. 2023 [2023-03-25]. https://www.nvidia.cn/geforce/news/dlss3-ai-powered-neural-graphics-innovations
[63] Zhang Guozhen, Zhu Yuhan, Wang Haonan, et al. Extracting motion and appearance via inter-frame attention for efficient video frame interpolation[C]//Proc of the 41st IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 5682−5692
[64] Bi Sai, Sunkavalli K, Perazzi F, et al. Deep CG2Real: Synthetic-to-real translation via image disentanglement[C]//Proc of the 17th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2019: 2730−2739
[65] Zhu Junyan, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proc of the 16th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2242−2251
[66] Hadadan S, Chen S, Zwicker M. Neural radiosity[J]. ACM Transactions on Graphics, 2021, 40(6): 1−11
[67] Li Sixu, Li Chaojian, Zhu Wenbo, et al. Instant-3D: Instant neural radiance field training towards on-device AR/VR 3D reconstruction[J]. arXiv preprint, arXiv: 2304.12467, 2023
[68] Fu Yonggan, Ye Zhifan, Yuan Jiayi, et al. GEN-NeRF: Efficient and generalizable neural radiance fields via algorithm-hardware co-design[J]. arXiv preprint, arXiv: 2304.11842, 2023
[69] Wang Peng, Liu Yuan, Chen Zhaoxi, et al. F2-NeRF: Fast neural radiance field training with free camera trajectories[J]. arXiv preprint, arXiv: 2303.15951, 2023
[70] Bai Jiayang, Huang Letian, Gong Wen, et al. Self-NeRF: A self-training pipeline for few-shot neural radiance fields[J]. arXiv preprint, arXiv: 2303.05775, 2023
[71] 周飞燕,金林鹏,董军. 卷积神经网络研究综述[J]. 计算机学报,2017,40(6):1229−1251
Zhou Feiyan, Jin Linpeng, Dong Jun. Review of convolutional neural network[J]. Chinese Journal of Computers, 2017, 40(6): 1229−1251 (in Chinese)
[72] 王恩东,闫瑞栋,郭振华,等. 分布式训练系统及其优化算法综述[J]. 计算机学报,2024,47(1):1−28
Wang Endong, Yan Ruidong, Guo Zhenhua, et al. A survey of distributed training system and its optimization algorithms[J]. Chinese Journal of Computers, 2024, 47(1): 1−28 (in Chinese)
[73] 张浩宇,王天保,李孟择,等. 视觉语言多模态预训练综述[J]. 中国图象图形学报,2022,27(9):2652−2682
Zhang Haoyu, Wang Tianbao, Li Mengze, et al. Comprehensive review of visual-language-oriented multimodal pre-training methods[J]. Journal of Image and Graphics, 2022, 27(9): 2652−2682 (in Chinese)
[74] 殷炯,张哲东,高宇涵,等. 视觉语言预训练综述[J]. 软件学报,2023,34(5):2000−2023
Yin Jiong, Zhang Zedong, Gao Yuhan, et al. Survey on visual language pre-training[J]. Journal of Software, 2023, 34(5): 2000−2023 (in Chinese)
[75] Chen Zhiqin, Funkhouser T, Hedman P, et al. MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures[J]. arXiv preprint, arXiv: 2208.00277, 2023
[76] Wang Huan, Ren Jian, Huang Zeng, et al. R2L: Distilling neural radiance field to neural light field for efficient novel view synthesis[J]. arXiv preprint, arXiv: 2203.17261, 2022
[77] Cao Junli, Wang Huan, Chemerys P, et al. Real-time neural light field on mobile devices[C]//Proc of the 19th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2023: 8328−8337
[78] Mubarik M, Kanungo R, Zirr T, et al. Hardware acceleration of neural graphics[C/OL]//Proc of the 50th Int Symp on Computer Architecture. New York: ACM, 2023 [2023-04-28]. https://dl.acm.org/doi/10.1145/3579371.3589085
[79] Hennessy J, Patterson D. A new golden age for computer architecture[J]. Communications of the ACM, 2019, 62(2): 48−60 doi: 10.1145/3282307
[80] Brian W, Dung Q, Moreira E, et al. Energy efficiency boost in the AI-infused POWER10 processor[C]//Proc of the 48th Int Symp on Computer Architecture. New York: ACM, 2021: 29−42
[81] Talpes E, Williams D, Sarma D. DOJO: The microarchitecture of Tesla’s exa-scale computer[C/OL]//Proc of the 34th IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2022 [2023-04-21]. https://ieeexplore.ieee.org/document/9895534
[82] Choquette J. NVIDIA Hopper GPU: Scaling performance[C/OL]//Proc of the 34th IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2022 [2023-04-21]. https://ieeexplore.ieee.org/document/9895592
[83] Smith A, James N. AMD Instinct™ MI200 series accelerator and node architectures[C/OL]//Proc of the 34th IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2022 [2023-04-21]. https://ieeexplore.ieee.org/document/9895477
[84] Jiang Hong. Intel’s Ponte Vecchio GPU: Architecture, systems & software[C/OL]//Proc of the 34th IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2022 [2023-04-21]. https://ieeexplore.ieee.org/document/9895631
[85] Hong M, Xu Lingjie. Biren BR100 GPGPU: Accelerating datacenter scale AI computing[C/OL]//Proc of the 34th IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2022 [2023-04-21]. https://ieeexplore.ieee.org/document/9895604
[86] NVIDIA. NVIDIA Ada GPU architecture [EB/OL]. 2023 [2023-04-24]. http://images.nvidia.cn/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf
[87] AMD. AMD compare graphics specifications [EB/OL]. 2023 [2023-04-24]. https://www.amd.com/zh-hans/products/specifications/compare/graphics/11836
[88] Reuther A, Michaleas P, Jones M, et al. AI and ML accelerator survey and trends[J]. arXiv preprint, arXiv: 2210.04055, 2022
[89] Jouppi N, Kurian G, Li S, et al. TPU v4: An optically reconfigurable supercomputer for machine learning with hardware support for embeddings[J]. arXiv preprint, arXiv: 2304.01433, 2023
[90] Jouppi N, Yoon D, Ashcraft M, et al. Ten lessons from three generations shaped Google’s TPUv4i: Industrial product[C/OL]//Proc of the 48th Int Symp on Computer Architecture. New York: ACM, 2021 [2023-04-28]. https://dl.acm.org/doi/10.1109/ISCA52012.2021.00010
[91] Cambricon. Cambricon MLU370 chip [EB/OL]. 2023 [2023-04-21]. https://www.cambricon.com/index.php?m=content&c=index&a=lists&catid=360
[92] Ouyang Jian, Du Xueliang, Ma Yin, et al. Kunlun: A 14nm high-performance AI processor for diversified workloads[C]//Proc of the 68th IEEE Int Solid-State Circuits Conf. Piscataway, NJ: IEEE, 2021: 50−51
[93] Skillman A, Edso T. A technical overview of Cortex-M55 and Ethos-U55: Arm’s most capable processors for endpoint AI[C/OL]//Proc of the 32nd IEEE Hot Chips Symp. Piscataway, NJ: IEEE, 2020 [2023-04-22]. https://ieeexplore.ieee.org/document/9220415
[94] Rao Chaolin, Yu Huangjie, Wan Haochuan, et al. ICARUS: A specialized architecture for neural radiance fields rendering[J]. arXiv preprint, arXiv: 2203.01414, 2022
[95] Tancik M, Casser V, Yan Xinchen, et al. Block-NeRF: Scalable large scene neural view synthesis[C]//Proc of the 19th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2023: 8238−8248
[96] Lee Dogyoon, Lee Minhyeok, Shin Chajin, et al. DP-NeRF: Deblurred neural radiance field with physical scene priors[C]//Proc of the 19th IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2023: 12386−12396