• 中国精品科技期刊
  • CCF推荐A类中文期刊
  • 计算领域高质量科技期刊T1类

抽象解释及其应用研究进展

陈立前, 范广生, 尹帮虎, 王戟

陈立前, 范广生, 尹帮虎, 王戟. 抽象解释及其应用研究进展[J]. 计算机研究与发展, 2023, 60(2): 227-247. DOI: 10.7544/issn1000-1239.202220925. CSTR: 32373.14.issn1000-1239.202220925
Chen Liqian, Fan Guangsheng, Yin Banghu, Wang Ji. Research Progress on Abstract Interpretation and Its Application[J]. Journal of Computer Research and Development, 2023, 60(2): 227-247. DOI: 10.7544/issn1000-1239.202220925. CSTR: 32373.14.issn1000-1239.202220925

抽象解释及其应用研究进展

基金项目: 国家重点研发计划项目(2022YFA1005101);国家自然科学基金项目(61872445, 62032024, 62102432);湖南省自然科学基金项目(2021JJ40697)
详细信息
    作者简介:

    陈立前: 1982年生. 博士,副教授,硕士生导师. CCF高级会员. 主要研究方向为程序分析与验证、程序自动修复、可信人工智能

    范广生: 1997年生.博士研究生.主要研究方向为程序分析与验证

    尹帮虎: 1989年生.博士,讲师.主要研究方向为程序分析与验证、可信人工智能、系统建模与仿真

    王戟: 1969年生.博士,教授,博士生导师.CCF会士.主要研究方向为形式化方法、软件工程

    通讯作者:

    尹帮虎(bhyin@nudt.edu.cn)

  • 中图分类号: TP391

Research Progress on Abstract Interpretation and Its Application

Funds: This work was supported by the National Key Research and Development Program of China (2022YFA1005101), the National Natural Science Foundation of China (61872445, 62032024, 62102432), and the Natural Science Foundation of Hunan Province (2021JJ40697).
  • 摘要:

    抽象解释是一种对用于形式描述复杂系统行为的数学结构进行抽象和近似并推导或验证其性质的理论. 抽象解释自20世纪70年代提出以来,在语义模型、程序分析验证、混成系统验证、程序转换、系统生物学模型分析等领域取得了广泛应用. 近年来,抽象解释在程序分析、神经网络验证、完备性推理、抽象域改进等方面取得较大进展. 基于此,系统综述了抽象解释及其应用的研究进展. 首先概述了抽象解释理论的基本概念,介绍了抽象解释理论、抽象域的研究进展;然后概述了基于抽象解释的程序分析方面的研究进展; 之后概述了基于抽象解释的神经网络模型验证、神经网络模型鲁棒训练、深度学习程序的分析等方面的研究进展;又对抽象解释在智能合约可信保证、信息安全保证、量子计算可信保证等方面的应用进展进行了介绍;最后指明了抽象解释未来可能的研究方向.

    Abstract:

    Abstract interpretation is a theory of abstraction and approximation of the mathematical structures used in the formal description of complex systems and the inference or verification of their properties. Since it was proposed in the 1970s, abstract interpretation has been widely applied in many fields, including semantic models, program analysis and verification, verification of hybrid systems, program transformation, analysis of systems biology models, etc. In recent years, abstract interpretation has made great progress in program analysis, neural network verification, completeness reasoning, improvement of abstract domains, etc. Based on this, we systematically review the research progress of abstract interpretation and its applications. Firstly, we outline the basic concepts of abstract interpretation theory and review the recent research progress of abstract interpretation theory and abstract domains. Then, we review the recent research progress in abstract interpretation-based program analysis, verification and robust training of neural networks, and analysis of deep learning programs. After that, we review the progress of other applications of abstract interpretation, including trustworthiness assurance of smart contracts, information security assurance, and trustworthiness assurance of quantum computing. Finally, potential future directions in the field of abstract interpretation are pointed out.
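
    为便于理解抽象解释“在抽象域上对程序语义做可靠过近似”的基本思想,下面给出一个示意性的区间抽象域小例子(仅为概念演示,并非文中任何工具的实现,类名与变量取值均为假设):

    # 区间抽象域的极简示意:用区间 [lo, hi] 过近似变量的取值集合
    class Interval:
        def __init__(self, lo, hi):            # lo/hi 可为 float('-inf')/float('inf')
            self.lo, self.hi = lo, hi

        def join(self, other):                 # 抽象并:对集合并集的过近似
            return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

        def widen(self, other):                # 加宽算子:保证不动点迭代有限步收敛
            lo = self.lo if other.lo >= self.lo else float('-inf')
            hi = self.hi if other.hi <= self.hi else float('inf')
            return Interval(lo, hi)

        def __repr__(self):
            return f'[{self.lo}, {self.hi}]'

    def add_const(itv, c):                     # 语句 x = x + c 的抽象语义(抽象转换器)
        return Interval(itv.lo + c, itv.hi + c)

    # 分析循环 "x = 0; while x < 100: x = x + 1" 中循环头处 x 的不变式
    x = Interval(0, 0)
    while True:
        body = add_const(x, 1)                 # 循环体执行一次的抽象效果
        new_x = x.widen(x.join(body))          # 并入新状态并加宽
        if (new_x.lo, new_x.hi) == (x.lo, x.hi):
            break                              # 达到不动点
        x = new_x
    print('循环头处 x 的区间不变式:', x)        # 输出 [0, inf]:结论可靠,但因未用收窄算子而有精度损失

    该例中加宽算子保证了迭代在有限步内收敛,但也带来精度损失,这正对应正文所述抽象域设计与加宽/收窄策略改进等研究方向.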

  • 场景流(scene flow, SF)是定义在2个连续场景中表面点的3D运动场,是一个感知动态场景的基本工具.随着自动驾驶、人机交互等应用的大规模商业落地,感知系统需要精确感知环境中的动态运动物体[1-2],因此精确估计场景流成为近年来的研究热点.由LiDAR等3D传感器直接获得的点云数据可以得到场景中点的精确位置信息,因此点云数据被广泛应用于场景流估计任务中.点云数据仅包含3D点坐标,因此在稀疏点、边缘点处会出现特征信息不足的现象,在这些点上的匹配会更加困难,这些匹配困难点严重影响场景流估计的整体准确度.

    近几年的方法都关注2个连续点云间的匹配关系来优化场景流的估计精度:FlowNet3D[3]在单尺度上获取匹配关系;HPLFlowNet[4]使用双边卷积层(bilateral convolutional layer, BCL),在多个尺度上联合计算匹配关系[5];PointPWC-Net[6]在多个尺度上建立用于匹配的代价体(cost volume, CV)和重投影(warping)模块.但这些方法仅考虑了点云间的匹配关系,缺少优化匹配困难点的方式. 如图1(a)所示,图片中的点为场景的一部分,其中红色代表该点的端点误差(end point error, EPE)小于0.05 m;绿色代表该点的端点误差大于等于0.05 m且小于0.3 m;蓝色代表该点的端点误差大于等于0.3 m.在图1(a)虚线框中,PointPWC-Net在一个局部邻域内(一个汽车整体)同时有匹配准确的红色点和匹配困难的蓝色点.本文提出的基于邻域一致性的点云场景流传播更新方法(neighborhood consistency propagation update method,NCPUM)根据点云相邻点的相关性(即属于源点云的场景流在足够小的邻域内很大程度上存在一致性),将局部邻域中的准确场景流传播到匹配困难点上,可以有效地减少匹配困难点场景流的误差,提升整体准确度.图1(b)为经过NCPUM方法优化后的效果,可以看到虚线框内汽车上匹配困难的蓝色点消失,匹配较差的绿色点明显减少,匹配准确的红色点明显增多.

    图  1  2种方法的可视化对比
    Figure  1.  Visual comparison of the two methods

    具体来说,NCPUM基于点云内相邻点的相关性,假设场景流估计具有邻域一致性,并通过置信度图的传播更新提升整体场景流估计的准确度.基于该假设,NCPUM设计了置信度预测模块和场景流传播模块,对骨干网络输出的初始场景流预测置信度图,经过场景流传播模块在具有一致性的邻域内将场景流从高置信度点集向低置信度点集传播,改善邻域内匹配困难点的准确度.本文的贡献有2方面:

    1)根据场景流的邻域一致性设计了场景流传播优化方法NCPUM.该方法使用场景流在局部邻域内传播的方式,改善估计效果.NCPUM的评估结果优于之前的工作,证明了该方法的有效性.

    2)NCPUM在Flyingthings3D和KITTI数据集上的测试结果在准确度上达到国际先进水平,并更换不同的骨干网络进行了测试,验证了NCPUM对于不同的骨干网络都能明显提升其估计准确度.

    Vedula等人[7]的工作定义并介绍了场景流的概念,之后许多工作[8-12]在多种类型的数据集上进行场景流的估计;随着最近几年基于点云的深度学习方法[13-15]的发展,可以在点云上直接估计场景流.FlowNet3D[3]是其中一种使用可学习深度网络估计点云场景流的方法,它将下采样的特征进行嵌入,得到点云间的运动信息,再通过上采样方法回归得到对应点的场景流;但FlowNet3D只在单一尺度上进行了特征嵌入,单一尺度的感受野无法在大尺度和小尺度运动上同时获得精确的估计结果.HPLFlowNet[4]使用双边卷积在多个尺度上联合计算匹配度,但受内存限制无法在上采样过程中细化场景流.而PointPWC-Net[6]遵循了光流估计的“由粗到细”(coarse to fine, CTF)范式,在多个尺度的局部范围内使用PointConv[13]建立用于匹配的代价体和重投影模块.FLOT[16]通过最优传输(optimal transport)优化源点云和目标点云的匹配关系.这些关注于匹配关系的方法得到了很好的场景流估计结果.HALFlow[17]使用注意力机制,嵌入更多的位置信息,获得更准确的场景流估计结果.

    文献[3-4,6,13,16-17]都是通过匹配连续点云间的特征回归出对应源点云的场景流,在匹配困难点处没有额外的优化措施.本文方法在源点云中根据相邻点的相关性,在邻域内改善匹配困难点的场景流,获得优于匹配方法的估计准确度.

    之前的场景流估计工作中都会注重在邻域内提取特征,根据提取到的特征来进行连续点云间的匹配[3-4,6,17-19],回归出点云间的场景流.但这只是在提取的特征上包含了邻域信息,在邻域特征信息不足的点上会出现匹配困难的情况.在同样使用邻域信息进行匹配的任务中[20-21],LiteFlowNet3[20]根据局部光流一致性,在代价体上对邻域内的点进行优化,获得了相对于匹配方法更好的光流估计精度.受该想法的启发,我们合理假设在2个连续场景中,一个合适的邻域内的点共享相同的运动模式,因此在邻域内的场景流具有一致性.NCPUM根据初始场景流显式的估计置信度,在邻域内的高置信度点向低置信度点进行传播更新.与现有方法不同的是,NCPUM的更新操作是在场景流上而非在特征上,所依赖的也不是特征上的相关或者相似,而是点云邻域内场景流的一致性.

    NCPUM从场景流邻域一致性假设出发,构建了一种对场景流在邻域内传播更新的优化方法.具体网络框架如图2所示,分别构建置信度预测模块和场景流传播模块实现NCPUM优化方法.首先是估计初始场景流的骨干网络,在得到初始场景流以及对应特征之后,送入置信度预测模块;然后在置信度预测模块中使用编码器-解码器(encoder-decoder)的网络结构,对输入的场景流进行置信度图的预测,置信度图用来表示各个场景流估计是否准确;最后在场景流传播模块中,根据预测得到的置信度图将场景流从高置信度点集向低置信度点集传播,更新低置信度点的场景流,降低匹配困难点对准确度的影响.

    图  2  网络结构图
    Figure  2.  Network structure

    场景流估计任务的最终目标是估计2个连续点云之间的运动矢量,因此定义2个连续的3D点云场景:源点云$P=\{{\boldsymbol{x}}_{i} \mid i=1,2,…,{n}_{1}\}$和目标点云$Q=\{{\boldsymbol{y}}_{j} \mid j=1,2,…,{n}_{2}\}$,其中${\boldsymbol{x}}_{i},{\boldsymbol{y}}_{j}\in {\mathbb{R}}^{3}$,并且${n}_{1}$与${n}_{2}$不必相等.源点云$P$中的点运动到目标点云$Q$中对应点的运动矢量场为$F=({\boldsymbol{f}}_{1},…,{\boldsymbol{f}}_{{n}_{1}})$,该运动矢量场即为最终估计的场景流.估计的场景流是基于源点云$P$的,因此场景流与源点云中的点一一对应.
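
    按上述定义,点云与场景流在实现中通常表示为如下形状的张量(基于PyTorch的示意,张量名与点数均为假设):

    import torch

    n1, n2 = 8192, 8192            # 源/目标点云点数,二者不必相等
    P = torch.randn(n1, 3)         # 源点云:每行为一个3D点 x_i
    Q = torch.randn(n2, 3)         # 目标点云:每行为一个3D点 y_j
    F = torch.zeros(n1, 3)         # 场景流:与源点云逐点对应的3D运动矢量 f_i
    assert F.shape == P.shape      # 场景流与源点云中的点一一对应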

    在估计初始场景流时,使用的是PointPWC-Net作为骨干网络,该方法使用2个连续的点云作为输入,使用特征金字塔的结构,在每个分辨率尺度上都进行一次源点云$ P $到目标点云$ Q $的重投影,之后进行匹配度代价体的计算,代价体定义了逐点的匹配程度,PointPWC-Net对代价体进行回归得到逐点的场景流.

    在PointPWC-Net中,构建了4个尺度的特征金字塔.在得到4个尺度的点特征后,场景流的估计从较粗的尺度开始,遵循由粗到细的范式:估计出当前尺度的场景流后,将其上采样到更精细的尺度,利用上采样后的场景流对源点云进行重投影,再在该尺度上根据重投影后的点云与目标点云估计一个相对于上一尺度场景流的残差,以完成对场景流估计的精细化.重投影过程的公式化定义为:

    $$P_{\mathrm{w}}=\{{\boldsymbol{p}}_{\mathrm{w},i}={\boldsymbol{p}}_{i}+{\boldsymbol{f}}_{i} \mid {\boldsymbol{p}}_{i}\in P,{\boldsymbol{f}}_{i}\in {F}^{\mathrm{up}}\}_{i=1}^{{n}_{1}},$$ (1)

    其中$P$为源点云,$P_{\mathrm{w}}$为重投影后的点云,${F}^{\mathrm{up}}$为从上一个尺度上采样得到的场景流.
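
    式(1)的重投影在实现上只是源点云与上采样场景流的逐点相加,可示意如下(基于PyTorch,函数与变量名为假设):

    import torch

    def warp(P, F_up):
        """按式(1)将上一尺度上采样得到的场景流 F_up 加到源点云 P 上,得到重投影点云 P_w."""
        assert P.shape == F_up.shape       # 均为 (n1, 3)
        return P + F_up                    # p_w,i = p_i + f_i

    P = torch.randn(4096, 3)
    F_up = torch.randn(4096, 3) * 0.1
    P_w = warp(P, F_up)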

    在PointPWC-Net中,对2个点云以及对应的特征进行了代价体的构建. 假设${\boldsymbol{g}}_{i}\in {\mathbb{R}}^{C}$是对应源点云点${\boldsymbol{p}}_{i}\in P$的特征,${\boldsymbol{h}}_{j}\in {\mathbb{R}}^{C}$是对应目标点云点${\boldsymbol{q}}_{j}\in Q$的特征,那么2个点之间的匹配度定义为:

    $$Cost({\boldsymbol{p}}_{i},{\boldsymbol{q}}_{j})=M\left(concat({\boldsymbol{g}}_{i},{\boldsymbol{h}}_{j},{\boldsymbol{q}}_{j}-{\boldsymbol{p}}_{i})\right),$$ (2)

    即使用多层感知机(multilayer perceptron)$M$将2点的特征与两点间的相对位置串联后进行学习.在有了点对点的匹配度之后,将其组成当前尺度的代价体.PointPWC-Net根据源点云点到目标点云邻域点的距离对代价体加权:对于1个源点云的点${\boldsymbol{p}}_{i}\in P$,得到它在目标点云$Q$上的1个邻域${N}_{Q}({\boldsymbol{p}}_{i})$,再根据邻域中每个点到${\boldsymbol{p}}_{i}$的距离计算权重${W}_{Q}$,加权求和得到聚合代价$C$:

    $$C=\displaystyle \sum _{{\boldsymbol{q}}_{j}\in {N}_{Q}\left({\boldsymbol{p}}_{i}\right)}{W}_{Q}({\boldsymbol{q}}_{j},{\boldsymbol{p}}_{i})Cost({\boldsymbol{q}}_{j},{\boldsymbol{p}}_{i}), $$ (3)
    $$ {W}_{Q}({\boldsymbol{q}}_{j},{\boldsymbol{p}}_{i})=M({\boldsymbol{q}}_{j}-{\boldsymbol{p}}_{i}). $$ (4)
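
    式(2)~(4)的匹配度计算与加权聚合可用如下简化代码示意(基于PyTorch;MLP结构、特征维数与邻域大小均为假设,并非PointPWC-Net的原始实现):

    import torch
    import torch.nn as nn

    C_feat, K = 64, 16                                   # 特征维数与邻域大小(假设值)
    mlp_cost = nn.Sequential(nn.Linear(2 * C_feat + 3, 64), nn.ReLU(), nn.Linear(64, 1))
    mlp_weight = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))

    def aggregated_cost(p, g, Q, H):
        """对源点云中的一个点 p(特征 g),按式(2)~(4)在目标点云 Q(特征 H)的邻域内聚合匹配代价."""
        d = torch.cdist(p.unsqueeze(0), Q).squeeze(0)    # p 到 Q 中各点的距离
        idx = d.topk(K, largest=False).indices           # 目标点云中的 K 近邻 N_Q(p)
        q_nbr, h_nbr = Q[idx], H[idx]                    # 邻域点坐标与特征
        diff = q_nbr - p                                 # 相对位置项 q_j - p_i
        cost = mlp_cost(torch.cat([g.expand(K, -1), h_nbr, diff], dim=-1))   # 式(2)
        weight = mlp_weight(diff)                        # 式(4):由相对位置得到权重 W_Q
        return (weight * cost).sum()                     # 式(3):加权求和得到聚合代价 C

    P, Q = torch.randn(1024, 3), torch.randn(1024, 3)
    G, H = torch.randn(1024, C_feat), torch.randn(1024, C_feat)
    c0 = aggregated_cost(P[0], G[0], Q, H)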

    使用PointPWC-Net估计初始场景流时,沿用了多尺度监督损失,对应估计得到4个尺度的场景流,对场景流真实值采样到同样的尺度,在对应的尺度上做不同权重$ \alpha $的2范数监督.

    $$Loss_{\mathrm{sf}}=\sum\limits_{l={l}_{0}}^{L}{\alpha}_{l}\sum\limits_{\boldsymbol{p}\in P}\left\|{\boldsymbol{F}}^{l}(\boldsymbol{p})-{\boldsymbol{F}}_{GT}^{l}(\boldsymbol{p})\right\|_{2}.$$ (5)
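
    式(5)的多尺度监督损失可示意如下(基于PyTorch的简化实现;各尺度点数与变量名为假设,权重取正文给出的$\alpha$值):

    import torch

    def multiscale_loss(flows_pred, flows_gt, alphas=(0.02, 0.04, 0.08, 0.16)):
        """式(5):对各尺度的预测场景流与下采样后的真实值求逐点2范数,再按权重 alpha 加权求和."""
        loss = 0.0
        for alpha, f_pred, f_gt in zip(alphas, flows_pred, flows_gt):
            loss = loss + alpha * torch.norm(f_pred - f_gt, dim=-1).sum()   # 每个点的 ||·||_2
        return loss

    flows_pred = [torch.randn(n, 3, requires_grad=True) for n in (8192, 2048, 512, 256)]
    flows_gt = [torch.randn(n, 3) for n in (8192, 2048, 512, 256)]
    loss = multiscale_loss(flows_pred, flows_gt)
    loss.backward()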

    在骨干网络输出初始场景流后,会经过置信度预测模块对初始场景流预测置信度图.该置信度由初始场景流相对于真实值的误差决定:预测的误差值越小,表示该点在骨干网络中估计的初始场景流越准确,置信度越高.置信度预测模块首先使用“编码器−解码器”结构,以初始场景流的3D矢量信息作为输入,在编码过程中进行点的下采样,以扩大置信度预测模块的感受野,参考更多相邻点的场景流推断置信度;然后在解码器的上采样过程中使用跳跃链接,串联编码过程中对应尺度的特征信息,为上采样提供更多精细尺度的特征,获得更精细的上采样结果,并且考虑骨干网络输出的场景流特征;最后使用sigmoid函数输出取值在0~1之间的置信度图,并将该置信度图用于之后的场景流传播模块中.

    置信度预测模块使用的是有监督的训练方式,监督信息是初始场景流与场景流真实值的2范数二值化得到的先验分布图,该分布图为初始场景流相对于真实值的先验误差分布.设定阈值$ \theta $,当初始场景流与真实值的2范数小于$ \theta $时,设定为0,否则设定为1.由此得到的分布图为场景流先验的二分类分布图,用来监督置信度预测模块的输出.

    $$ {\boldsymbol{GT}}_{\mathrm{conf}}=\begin{cases}0, & \left\|{\boldsymbol{F}}-{{\boldsymbol{GT}}}_{\mathrm{sf}}\right\|_{2}<\theta ,\\ 1, & \left\|{\boldsymbol{F}}-{{\boldsymbol{GT}}}_{\mathrm{sf}}\right\|_{2}\geqslant \theta , \end{cases}$$ (6)
    $$ Loss_{\mathrm{conf}}=-({{\boldsymbol{GT}}}_{\mathrm{conf}}\ln\; confmap+(1-{{\boldsymbol{GT}}}_{\mathrm{conf}})\ln(1-confmap)), $$ (7)

    其中$confmap$是置信度预测模块得到的置信度图,${{\boldsymbol{GT}}}_{\mathrm{conf}}$是场景流先验分布图,在式(6)中由初始场景流$\boldsymbol{F}$和真实值${{\boldsymbol{GT}}}_{\mathrm{sf}}$处理得到.估计置信度图的最后一层使用sigmoid函数将输出转换为0~1之间的分布,因此可以使用二分类交叉熵(binary cross entropy, BCE)进行监督.
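
    式(6)(7)对应的置信度监督可示意如下(基于PyTorch的简化实现;阈值$\theta$的取值与变量名均为假设):

    import torch
    from torch.nn.functional import binary_cross_entropy

    def confidence_loss(flow_init, flow_gt, confmap, theta=0.1):
        """式(6):按逐点误差的2范数与阈值 theta 二值化得到 GT_conf;式(7):对预测置信度图做二分类交叉熵."""
        err = torch.norm(flow_init - flow_gt, dim=-1)        # 逐点误差
        gt_conf = (err >= theta).float()                     # 误差小于 theta 记 0,否则记 1
        return binary_cross_entropy(confmap, gt_conf)        # 要求 confmap 已经过 sigmoid,取值在 0~1

    flow_init = torch.randn(8192, 3)
    flow_gt = torch.randn(8192, 3)
    confmap = torch.sigmoid(torch.randn(8192))               # 置信度预测模块的输出(示意)
    loss_conf = confidence_loss(flow_init, flow_gt, confmap)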

    经过场景流置信度图的预测,根据场景流置信度将源点云分为高置信度点集和低置信度点集,由高置信度点集向低置信度点集进行限制半径的传播.根据邻域一致性的假设,如果高置信度点与低置信度点之间的距离不大于传播半径阈值,则可以认为两点的场景流具有一致性,可以使用高置信度点的场景流更新低置信度点的场景流.

    $$ {\boldsymbol{p}}_{2}=KNN\left({\boldsymbol{p}}_{1}\right),\quad {\boldsymbol{p}}_{1},{\boldsymbol{p}}_{2}\in P,$$ (8)

    ${\boldsymbol{p}}_{1}$和${\boldsymbol{p}}_{2}$都属于源点云$P$.因为邻域一致性依赖于点云内相邻点的相关性,所以距离最近的点之间最有可能存在场景流的邻域一致性;式(8)中$KNN$为K最近邻(K-nearest neighbor)方法,用于采样低置信度点${\boldsymbol{p}}_{1}$在源点云高置信度点集中距离最近的点${\boldsymbol{p}}_{2}$.

    $$ f({\boldsymbol{p}}_{1})=f({\boldsymbol{p}}_{2}), \quad \mathrm{if}\ \left\|{\boldsymbol{p}}_{1}-{\boldsymbol{p}}_{2}\right\|_{2} < \beta ,$$ (9)

    其中${\boldsymbol{p}}_{1}$和${\boldsymbol{p}}_{2}$分别为低置信度点和高置信度点,$\beta$为传播半径阈值,当两点的距离小于传播半径阈值时传播更新低置信度点的场景流. 这里传播半径阈值非常重要:点云中的相邻点只有空间距离在一定阈值内才具有相关性;在点密度足够的情况下,小邻域内各点的场景流具有一致性,因此该半径阈值取不同的数值会影响优化结果.
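
    场景流传播模块的核心步骤可用如下简化代码示意(基于PyTorch;置信度二值化阈值为假设,半径阈值仅以表4中Flyingthings3D的最优值0.4作示例,并非原文代码):

    import torch

    def propagate_flow(P, flow, confmap, beta=0.4, conf_thr=0.5):
        """按式(8)(9):为每个低置信度点在高置信度点集中找最近点,距离小于 beta 时用其场景流覆盖."""
        flow = flow.clone()
        low_mask = confmap >= conf_thr            # 依式(6)的约定,置信度图取值大表示误差大,即低置信度点(假设的二值化方式)
        high_mask = ~low_mask
        if low_mask.any() and high_mask.any():
            d = torch.cdist(P[low_mask], P[high_mask])        # 低置信度点到各高置信度点的距离
            nn_dist, nn_idx = d.min(dim=1)                    # 式(8):最近的高置信度点
            ok = nn_dist < beta                               # 式(9):限制传播半径
            low_idx = torch.nonzero(low_mask, as_tuple=True)[0]
            high_idx = torch.nonzero(high_mask, as_tuple=True)[0]
            flow[low_idx[ok]] = flow[high_idx[nn_idx[ok]]]    # 传播更新低置信度点的场景流
        return flow

    P = torch.randn(8192, 3)
    flow_init = torch.randn(8192, 3)
    confmap = torch.rand(8192)
    flow_refined = propagate_flow(P, flow_init, confmap, beta=0.4)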

    NCPUM在优化初始场景流时,会将反传的梯度在初始场景流处截断,即训练置信度预测模块时不会影响到骨干网络PointPWC-Net.
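
    这一梯度截断在PyTorch中通常通过detach实现,可示意如下(网络结构与变量名均为假设):

    import torch
    import torch.nn as nn

    backbone_out = torch.randn(8192, 3, requires_grad=True)      # 骨干网络输出的初始场景流(示意)
    conf_head = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    confmap = conf_head(backbone_out.detach())                   # detach:置信度损失的梯度不会回传到骨干网络
    confmap.sum().backward()
    assert backbone_out.grad is None                             # 骨干网络侧未收到梯度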

    与之前工作的训练过程[4,6,16]类似,NCPUM在合成数据集Flyingthings3D[22]上训练模型,然后在Flyingthings3D和真实场景数据集KITTI[23]这2个数据集上进行测试,并将测试结果与其他方法的结果在表1中进行对比.之所以使用这样的训练策略,是因为很难在真实场景中获取场景流真实值:这里使用的KITTI数据集只有142个场景,而合成数据集有更大的数据量可供训练,如Flyingthings3D数据集有19640对点云可以进行训练.在训练之前,遵循HPLFlowNet和PointPWC-Net的方式对数据进行了预处理,处理后的点云场景中不包含被遮挡的点.

    表  1  NCPUM与其他方法的对比
    Table  1.  Comparison of NCPUM and Other Methods
    数据集 | 方法 | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/%
    Flyingthings3D | FlowNet3D[3] | 0.114 | 41.3 | 77.0 | 60.2
    Flyingthings3D | HPLFlowNet[4] | 0.080 | 61.4 | 85.5 | 42.9
    Flyingthings3D | PointPWC-Net[6] | 0.059 | 73.8 | 92.8 | 34.2
    Flyingthings3D | FLOT[16] | 0.052 | 73.2 | 92.7 | 35.7
    Flyingthings3D | HALFlow[17] | 0.049 | 78.5 | 94.7 | 30.8
    Flyingthings3D | NCPUM | 0.060 | 76.1 | 93.9 | 30.7
    KITTI | FlowNet3D[3] | 0.177 | 37.4 | 66.8 | 52.7
    KITTI | HPLFlowNet[4] | 0.117 | 47.8 | 77.8 | 41.0
    KITTI | PointPWC-Net[6] | 0.069 | 72.8 | 88.8 | 26.5
    KITTI | FLOT[16] | 0.056 | 75.5 | 90.8 | 24.2
    KITTI | HALFlow[17] | 0.062 | 76.5 | 90.3 | 24.9
    KITTI | NCPUM | 0.070 | 78.1 | 91.5 | 22.3
    注:黑体数字表示最优结果.

    在接下来的内容中,将首先介绍NCPUM的实现细节,然后将测试结果与之前的方法进行对比,以证明NCPUM的有效性.考虑到Flyingthings3D数据集与KITTI数据集差异较大,本文还将NCPUM在KITTI数据集的前100对数据上进行微调,在后42对数据上进行测试;更换不同骨干网络微调后的测试结果在表3中展示,以证明NCPUM基于邻域一致性假设的传播更新方法更适用于真实场景.此外还进行了消融实验,对传播半径阈值进行更多的对比实验.

    表  3  在KITTI数据集上微调测试
    Table  3.  Fine Tuning and Testing on KITTI Dataset
    骨干网络 | 方法 | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/%
    FlowNet3D[3] | w/o ft | 0.173 | 27.6 | 60.9 | 64.9
    FlowNet3D[3] | w ft | 0.102 | 33.6 | 70.3 | 44.9
    FlowNet3D[3] | NCPUM | 0.094 | 38.6 | 74.1 | 37.1
    PointPWC-Net[6] | w/o ft | 0.069 | 72.8 | 88.8 | 26.5
    PointPWC-Net[6] | w ft | 0.045 | 82.7 | 95.1 | 25.3
    PointPWC-Net[6] | NCPUM | 0.043 | 87.5 | 96.9 | 24.3
    FLOT[16] | w/o ft | 0.056 | 75.5 | 90.8 | 24.2
    FLOT[16] | w ft | 0.029 | 89.4 | 96.8 | 18.0
    FLOT[16] | NCPUM | 0.028 | 89.9 | 97.0 | 17.5
    注:黑体数字表示最优结果;w/o表示without,w表示with;ft表示fine-tuning.

    NCPUM的训练设置与骨干网络PointPWC-Net相同,对输入的点云采样到8192个点,为其构建4层的特征金字塔,每一层的损失权重α设置为α0 = 0.02,α1 = 0.04,α2 = 0.08,α3 = 0.16,训练时的初始学习率设置为0.001,训练800个周期,每80个周期学习率减少一半.在对KITTI进行微调时,参数设置与NCPUM设置一致,只是将训练周期减少为400个,并且学习率每40个周期减少一半.
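
    上述“初始学习率0.001、每80个周期减半、共训练800个周期”的设置可用如下方式示意(基于PyTorch;优化器种类为假设,此处省略了前向与反向计算):

    import torch

    params = [torch.nn.Parameter(torch.randn(8, 3))]             # 以一个占位参数代替网络参数
    optimizer = torch.optim.Adam(params, lr=0.001)               # 初始学习率 0.001
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.5)  # 每 80 个周期学习率减半

    for epoch in range(800):                                     # 共训练 800 个周期
        optimizer.step()                                         # 省略损失计算与反向传播,仅示意训练流程
        scheduler.step()
    # 微调KITTI时按正文改为400个周期、每40个周期学习率减半,其余设置不变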

    置信度预测模块设置了3个尺度,分别为2048,512,256个点,特征通道数都为64.在经过3次下采样的特征提取之后得到了3个尺度的场景流特征,再经过3次上采样将特征传播到原尺度.前2个尺度的上采样都同时考虑了前一尺度的特征和下采样时对应的特征;后1个尺度的上采样考虑原尺度的场景流,在上采样到原尺度之后,与骨干网络输出的场景流特征串联在一起,经过2个1D卷积,再经过sigmoid得到0~1分布的置信度.
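
    置信度预测模块的数据流向可用如下简化代码示意(基于PyTorch;其中用随机采样近似下采样、用最近邻插值近似上采样,各网络层的具体形式均为假设,并非原文实现):

    import torch
    import torch.nn as nn

    class ConfidenceNet(nn.Module):
        """简化示意:以初始场景流为输入,按 2048/512/256 点 3 个尺度(64 通道)下采样编码,
        再逐级上采样并通过跳跃链接串联对应尺度特征;最后与骨干网络输出的场景流特征串联,
        经 2 个 1D 卷积和 sigmoid 输出 0~1 之间的置信度."""

        def __init__(self, flow_feat_dim=64):
            super().__init__()
            self.enc1 = nn.Linear(3, 64)            # 8192 -> 2048
            self.enc2 = nn.Linear(64, 64)           # 2048 -> 512
            self.enc3 = nn.Linear(64, 64)           # 512  -> 256
            self.dec2 = nn.Linear(64 + 64, 64)      # 上采样到 512,串联跳跃特征
            self.dec1 = nn.Linear(64 + 64, 64)      # 上采样到 2048,串联跳跃特征
            self.dec0 = nn.Linear(64 + 3, 64)       # 上采样回原尺度,串联原尺度场景流
            self.head = nn.Sequential(nn.Conv1d(64 + flow_feat_dim, 64, 1), nn.ReLU(),
                                      nn.Conv1d(64, 1, 1), nn.Sigmoid())

        @staticmethod
        def up(xyz_dst, xyz_src, feat_src):         # 最近邻上采样
            return feat_src[torch.cdist(xyz_dst, xyz_src).argmin(dim=1)]

        def forward(self, xyz, flow, flow_feat):
            s1 = torch.randperm(xyz.shape[0])[:2048]
            p1, f1 = xyz[s1], torch.relu(self.enc1(flow[s1]))
            s2 = torch.randperm(2048)[:512]
            p2, f2 = p1[s2], torch.relu(self.enc2(f1[s2]))
            s3 = torch.randperm(512)[:256]
            p3, f3 = p2[s3], torch.relu(self.enc3(f2[s3]))
            d2 = torch.relu(self.dec2(torch.cat([self.up(p2, p3, f3), f2], dim=-1)))
            d1 = torch.relu(self.dec1(torch.cat([self.up(p1, p2, d2), f1], dim=-1)))
            d0 = torch.relu(self.dec0(torch.cat([self.up(xyz, p1, d1), flow], dim=-1)))
            x = torch.cat([d0, flow_feat], dim=-1).t().unsqueeze(0)   # (1, C, N) 以适配 Conv1d
            return self.head(x).squeeze()                             # 每个点一个 0~1 的置信度

    net = ConfidenceNet()
    xyz, flow, flow_feat = torch.randn(8192, 3), torch.randn(8192, 3), torch.randn(8192, 64)
    confmap = net(xyz, flow, flow_feat)             # 形状为 (8192,)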

    本文沿用了场景流估计任务一直以来的评价标准[3-4,6,18],假设$\boldsymbol{F}$代表场景流估计值,${\boldsymbol{F}}_{GT}$代表场景流真实值,各个评价标准的定义为(其计算方式的示意代码见定义列表之后):

    1)EPE3D. $\left\|{\boldsymbol{F}}-{\boldsymbol{F}}_{GT}\right\|_{2}$在点云上的逐点平均值,即平均端点误差.

    2)Acc3DS. $ EPE3D < 0.05\;\mathrm{m} $或者相对误差$ < 5\% $的点占比.

    3)Acc3DR. $ EPE3D < 0.1\;\mathrm{m} $或者相对误差$ < 10\% $的点占比.

    4)Outlier3D. $ EPE3D > 0.3\;\mathrm{m} $或者相对误差$ > 10\% $的点占比.
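
    上述评价指标的计算方式可示意如下(基于PyTorch,变量名为假设):

    import torch

    def evaluate(flow_pred, flow_gt):
        """计算 EPE3D / Acc3DS / Acc3DR / Outlier3D 四个场景流评价指标."""
        epe = torch.norm(flow_pred - flow_gt, dim=-1)             # 逐点端点误差
        rel = epe / (torch.norm(flow_gt, dim=-1) + 1e-8)          # 相对误差
        return {
            'EPE3D': epe.mean().item(),
            'Acc3DS': ((epe < 0.05) | (rel < 0.05)).float().mean().item(),
            'Acc3DR': ((epe < 0.1) | (rel < 0.1)).float().mean().item(),
            'Outlier3D': ((epe > 0.3) | (rel > 0.1)).float().mean().item(),
        }

    metrics = evaluate(torch.randn(8192, 3), torch.randn(8192, 3))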

    Flyingthings3D可用于训练的数据集共有19640对点云,可用于测试的数据集有3824对点云.与FlowNet3D、 PointPWC-Net模型方法的设置相同,本文只使用深度小于35 m的点,大部分的前景物体都能包含在这个范围内.对每个点云采样得到8192个点用于训练和测试,训练设置参数在3.1节网络实现细节中进行了介绍.

    表1展示了在Flyingthings3D数据集上NCPUM在2个准确度指标和离群值占比指标上均优于骨干网络PointPWC-Net.尤其是Acc3DS指标,对比PointPWC-Net有2.3%的提升.但在EPE指标上NCPUM略逊于PointPWC-Net,其原因是场景流传播模块会使低置信度点的场景流与高置信度点的场景流相等:对于估计偏差大的点,会传播更新一个准确的场景流;而对于估计误差本来就小的低置信度点,传播更新操作反而带来了精度上的扰动.但是准确度是统计EPE3D < 0.05 m或者相对误差< 5%的点占比,所以我们的方法能够优化估计偏差大的匹配困难点,提升准确度.本文在表2中整理出更新点的统计信息(即图2中$\hat{{\boldsymbol{F}}}$与$\boldsymbol{F}$的比较),其中包含更新点占全部点的比例(更新点占比)、更新点中精度提升的占比(改进点占比)、更新点中精度下降的占比(扰动点占比)、精度提升的平均值(改进均值)、精度下降的平均值(扰动均值).可以看到,有一半以上的点其实是产生了扰动,并且产生扰动的均值大于改进的均值,因此在精度上NCPUM确实会产生一定的扰动,但是在准确度指标和离群值占比指标上有大幅提升.

    表  2  Flyingthings3D数据集更新点统计信息
    Table  2.  Statistical Information of Update Points on Flyingthings3D Dataset
    方法 | 更新点占比/% | 改进点占比/% | 扰动点占比/% | 改进均值/m | 扰动均值/m | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/%
    PointPWC-Net | — | — | — | — | — | 0.059 | 73.8 | 92.8 | 34.2
    NCPUM | 9.73 | 46.98 | 53.02 | 0.003 | 0.004 | 0.060 | 76.1 | 93.9 | 30.7
    注:黑体数字表示最优结果.

    KITTI Scene Flow 2015数据集包括200个训练场景和200个测试场景,遵循之前工作的方法,在具有真实值并且可用的142对点云数据集上进行测试,并且与之前的方法保持一致[4,6],删除了点云中高度小于0.3 m的地面点.在KITTI数据集上,先使用在Flyingthings3D数据集上训练的模型直接对KITTI数据集进行测试而不经过任何微调,以评估NCPUM的泛化能力.从表1中可以看到NCPUM在Acc3DS和Acc3DR评价指标上优于PointPWC-Net,在Acc3DS指标上有5.3%的提升,在Acc3DR指标上有2.7%的提升,提升幅度大于Flyingthings3D数据集,表现出优秀的泛化性能.不经过微调而在真实数据集上表现更加优秀的原因在于真实数据集KITTI的点云数据物体之间的距离大于Flyingthings3D数据集中物体之间的距离,NCPUM的邻域一致性假设更适用于这种数据特性,所以NCPUM在真实场景上会有更加优秀的表现.本文统计了表1中FLOT和NCPUM的场景流估计时间,FLOT在一秒钟内可以处理2.15对连续点云,而NCPUM在一秒钟内可以处理5.09对连续点云,NCPUM处理速度约是FLOT的2.37倍.在真实使用场景中,准确场景流在总体场景流中的占比比场景流的绝对平均误差值更有意义,拥有更多的准确场景流代表该方法为真实应用带来更好的稳定性.NCPUM在Acc3DS和Acc3DR准确度指标上都有可观的提升,尤其在真实数据集上的场景流Acc3DS指标超过PointPWC-Net的Acc3DS指标7.28%,超过HALFlow的Acc3DS最佳结果2.09%,对比之前的方法,NCPUM的处理速度和准确度表现出更大的应用潜力.

    因为Flyingthings3D数据集和KITTI数据集存在较大的差异,直接使用在Flyingthings3D数据集上预训练的模型在KITTI数据集上测试并不能完全展示NCPUM在KITTI数据集上的性能.所以本文将KITTI数据集拆分,使用前100对场景微调NCPUM,并在剩余的42对场景上进行测试.分别将FlowNet3D,PointPWC-Net, FLOT 3种骨干网络在KITTI数据集上进行微调,然后进行NCPUM的微调,将微调后的3种骨干网络做对比.在微调之后,3种骨干网络的NCPUM可以获得更好的效果,如表3所示,微调后的NCPUM对比微调后对应的骨干网络,在4个评价标准上都有提升,与泛化能力测试结果不同的是,NCPUM在EPE指标上也有了一定的提升,我们认为,Flyingthings3D是虚拟数据集,场景中物体间的距离较小,对某一物体边缘的低置信度点传播更新,可能采样到距离较近的其他物体上的准确点,而不是同一物体中距离较远的准确点,例如图3所示,绿色点为更新场景流的低置信度点,红色的点是传播场景流的高置信度点,黄色的连线为传播更新关系.在图3(a)和图3(b)中都出现了采样到其他物体的现象;KITTI是真实数据集,物体之间的距离较大,如图3(c)所示,不容易出现采样错误的情况,只有在如图3(d)中的远端离群点上可能出现采样不准确的情况,因此KITTI相较于Flyingthings3D数据集更容易符合邻域一致性的假设.

    图  3  Flyingthings3D与KITTI替换细节对比
    Figure  3.  Comparison of Flyingthings3D and KITTI on replacement details

    因为NCPUM是基于邻域一致性假设进行构建的,因此传播半径阈值设置十分重要,不同的半径阈值设置带来的效果是不一样的,甚至不合适的半径阈值会降低NCPUM优化效果.当半径阈值设置过大时,高置信度点的场景流会传播到不具有一致性的低置信度点上,出现扰动;当半径设置过小时,只会有少部分低置信度点会被更新.数据集的不同也会影响到传播半径的设置,对比虚拟数据集,真实数据集因为物体间的距离更大,更容易设置一个合适的传播半径进行传播,这也是NCPUM泛化到真实数据集表现更好的原因.表4对2个数据集上设置的不同传播半径进行对比,NCPUM在Flyingthings3D数据集上的半径设置为0.4时达到最好效果,而在KITTI数据集上的半径设置为3.0时达到最好效果.这个数据的差异表现出在真实场景上对传播的约束更加小,传播更新可以影响到更多的点,从而带来更好的改进效果.

    表  4  NCPUM在不同半径阈值下的测试
    Table  4.  NCPUM Tested with Different Radius Threshold
    数据集 | 半径阈值 | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/%
    Flyingthings3D | 1.0 | 0.062 | 75.2 | 93.5 | 31.7
    Flyingthings3D | 0.4 | 0.060 | 76.1 | 93.9 | 30.7
    Flyingthings3D | 0.2 | 0.060 | 76.0 | 93.8 | 30.7
    KITTI | 5.0 | 0.054 | 85.8 | 95.9 | 24.1
    KITTI | 3.0 | 0.043 | 87.5 | 96.9 | 24.3
    KITTI | 1.0 | 0.043 | 85.0 | 95.4 | 25.5
    注:黑体数字表示最优结果.

    为了证明NCPUM方法的泛化性能,本文尝试在不同的骨干网络上进行优化:分别以FlowNet3D,PointPWC-Net,FLOT为骨干网络,构建置信度预测模块和场景流传播模块,在KITTI数据集上先进行微调,再使用NCPUM优化方法进行改进.测试结果如表3所示,在对FlowNet3D,PointPWC-Net,FLOT这3种方法使用NCPUM优化方法后,4个指标上都有明显的提升,展示了NCPUM优化方法对不同骨干网络的泛化性.

    图4中可视化了NCPUM的传播更新过程,绿色点为更新场景流的低置信度点,红色点是传播场景流的高置信度点,黄色的连线为传播更新关系.可以看到,KITTI数据集中本应具有一致性的汽车表面会出现估计不准确的绿色低置信度点,这些点更多位于距离激光雷达更远的低密度点和邻域信息单一的边缘点上,只关注连续点云间匹配的方法容易在这些点上出现较大的误差;NCPUM为匹配困难点寻找一个匹配准确点进行更新,使其与相邻的准确点保持一致,从而提高整体的估计准确度;同时传播过程要限制传播半径阈值,以避免引入扰动.

    图  4  替换细节可视化
    Figure  4.  Visualization of replacemental details

    本文提出了一种基于邻域一致性的传播更新方法NCPUM来优化场景流的估计精度.该方法通过置信度预测模块对骨干网络估计出的初始场景流预测置信度图,以判断各点场景流的准确程度;然后通过场景流传播模块,在限制的传播半径内为匹配困难点寻找匹配准确点,将匹配准确点的场景流传播到匹配困难点上,以提高场景流的估计精度.NCPUM在Flyingthings3D和KITTI这2个数据集上都体现出了优于之前工作的性能,并且通过在真实数据集上的微调实验和不同传播半径的实验,展现出NCPUM在真实场景数据上有更加优秀的表现.

    作者贡献声明:郑晗负责提出模型、编写代码、实施实验过程和论文撰写;王宁负责修订和完善论文;马新柱负责数据分析和数据解释;张宏负责理论和实验设计;王智慧负责理论设计;李豪杰提供论文写作指导.

  • 表  1   基于抽象解释的程序分析工具

    Table  1   Program Analyzers Based on Abstract Interpretation

    工具 | 编写语言 | 目标程序 | 开源情况 | 主要特点
    Astrée[5] | OCaml | C/C++ | 商业化 | 组合多种抽象域、分析精度高
    CGS[80] | C | C | 内部使用 | 基于分布式实现
    Polyspace[3] | — | C/C++, Ada | 商业化 | 工具成熟、功能丰富
    Sparrow[81] | OCaml | C | 商业化&开源 | 稀疏分析、可选择过程间敏感分析
    IKOS[82] | C++ | C/C++ | 开源 | 并行不动点迭代、Gauge抽象域
    Frama-C[83] | OCaml | C | 开源 | 值范围分析
    Infer[84] | OCaml | Java, C/C++, Objective-C | 开源 | 组合分析、增量分析
    Goblint[85] | OCaml | C | 开源 | 中断、并发程序缺陷检测
    Dai[86] | OCaml | C | 开源 | 需求驱动抽象解释
    Clam[87-88] | C++ | LLVM字节码 | 开源 | 不变式生成
    SPARTA[89] | C++ | — | 开源 | 高性能抽象解释库
    Mopsa[90] | OCaml | C, Python | 开源 | 模块化、多语言支持
    MemCAD[91] | OCaml | C | 开源 | 形态分析
    Interproc[92] | OCaml | SPL | 开源 | 不变式生成、上下文不敏感过程间分析、支持Apron抽象域库
    CodeHawk[93-94] | OCaml | C, Java字节码, 二进制程序 | 开源 | 多语言支持
  • [1]

    Cousot P, Cousot R. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints[C] //Proc of the 4th ACM SIGACT-SIGPLAN Symp on Principles of Programming Languages. New York: ACM, 1977: 238−252

    [2]

    Cousot P. Principles of Abstract Interpretation[M]. Cambridge, MA: MIT Press, 2021

    [3]

    MathWorks. Polyspace[EB/OL]. [2022-12-18]. https://ww2.mathworks.cn/products/polyspace.html

    [4]

    Blanchet B, Cousot P, Cousot R, et al. A static analyzer for large safety-critical software[C] //Proc of the 24th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2003: 196−207

    [5]

    AbsInt. Astrée[EB/OL]. [2022-12-18]. https://www.absint.com/astree/index.htm

    [6]

    Miné A. AstréeA: A static analyzer for large embedded multi-task software[C] //Proc of the 16th Int Conf on Verification, Model Checking, and Abstract Interpretation (VMCAI'15). Berlin: Springer, 2015, 8931: 3

    [7]

    Cousot P, Giacobazzi R, Ranzato F. A2I: Abstract2 interpretation[J]. Proceedings of the ACM on Programming Languages, 2019, 3(POPL): 42:1−42:31

    [8]

    Bruni R, Giacobazzi R, Gori R, et al. A logic for locally complete abstract interpretations[C] //Proc of the 36th Annual ACM/IEEE Symp on Logic in Computer Science (LICS). Piscataway, NJ: IEEE, 2021: 1−13

    [9]

    Bruni R, Giacobazzi R, Gori R, et al. Abstract interpretation repair[C] //Proc of the 43rd ACM SIGPLAN Int Conf on Programming Language Design and Implementation. New York: ACM, 2022: 426−441

    [10]

    Campion M, Dalla Preda M, Giacobazzi R. Partial (In) completeness in abstract interpretation: Limiting the imprecision in program analysis[J]. Proceedings of the ACM on Programming Languages, 2022, 6(POPL): 59:1−59:31

    [11]

    Bonchi F, Ganty P, Giacobazzi R, et al. Sound up-to techniques and complete abstract domains[C] //Proc of the 33rd Annual ACM/IEEE Symp on Logic in Computer Science. Piscataway, NJ: IEEE, 2018: 175−184

    [12]

    Bruni R, Giacobazzi R, Gori R, et al. Abstract extensionality: On the properties of incomplete abstract interpretations[J]. Proceedings of the ACM on Programming Languages, 2019, 4(POPL): 28:1−28:28

    [13]

    Stein B, Chang B Y E, Sridharan M. Demanded abstract interpretation[C] //Proc of the 42nd ACM SIGPLAN Int Conf on Programming Language Design and Implementation. New York: ACM, 2021: 282−295

    [14]

    Wei Guannan, Chen Yuxuan, Rompf T. Staged abstract interpreters: Fast and modular whole-program analysis via meta-programming[J]. Proceedings of the ACM on Programming Languages, 2019, 3(OOPSLA): 1−32

    [15]

    Futamura Y. Partial evaluation of computation process—an approach to a compiler-compiler[J]. Systems, Computers, Controls, 1971, 2(5): 45−50

    [16]

    Feldman Y M Y, Sagiv M, Shoham S, et al. Property-directed reachability as abstract interpretation in the monotone theory[J]. Proceedings of the ACM on Programming Languages, 2022, 6(POPL): 15:1−31

    [17]

    Jeannet B, Miné A. APRON: A library of numerical abstract domains for static analysis[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2009: 661−667

    [18]

    Singh G, Püschel M, Vechev M. A practical construction for decomposing numerical abstract domains[J]. Proceedings of the ACM on Programming Languages, 2017, 2(POPL): 55:1−55:28

    [19]

    Bagnara R, Hill P M, Zaffanella E. The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems[J]. Science of Computer Programming, 2008, 72(1-2): 3−21 doi: 10.1016/j.scico.2007.08.001

    [20]

    Becchi A, Zaffanella E. PPLite: Zero-overhead encoding of NNC polyhedra[J]. Information and Computation, 2020, 275: 104620.

    [21]

    Boulmé S, Marechaly A, Monniaux D, et al. The verified polyhedron library: An overview[C] //Proc of the 20th Int Symp on Symbolic and Numeric Algorithms for Scientific Computing. Piscataway, NJ: IEEE, 2018: 9−17

    [22]

    Singh G, Püschel M, Vechev M. Making numerical program analysis fast[J]. ACM SIGPLAN Notices, 2015, 50(6): 303−313 doi: 10.1145/2813885.2738000

    [23]

    Singh G, Püschel M, Vechev M. Fast polyhedra abstract domain[C] //Proc of the 44th ACM SIGPLAN Symp on Principles of Programming Languages. New York: ACM, 2017: 46−59

    [24]

    Gange G, Ma Z, Navas J A, et al. A fresh look at zones and octagons[J]. ACM Transactions on Programming Languages and Systems, 2021, 43(3): 1−51

    [25]

    Chawdhary A, Robbins E, King A. Incrementally closing octagons[J]. Formal Methods in System Design, 2019, 54(2): 232−277 doi: 10.1007/s10703-017-0314-7

    [26]

    Chen Junjie, Cousot P. A binary decision tree abstract domain functor[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2015: 36−53

    [27]

    Chen Liqian, Yin Banghu, Wei Dengping, et al. An abstract domain to infer linear absolute value equalities[C] //Proc of the 2021 Int Symp on Theoretical Aspects of Software Engineering (TASE). Piscataway, NJ: IEEE, 2021: 47−54

    [28]

    Chen Liqian, Wei Dengping, Yin Banghu, et al. Static analysis of linear absolute value equalities among variables of a program[J]. Science of Computer Programming, 2023, 225: 102906

    [29]

    Gange G, Navas J A, Schachte P, et al. Disjunctive interval analysis[C]//Proc of the Int Static Analysis Symp. Berlin: Springer, 2021: 144−165

    [30]

    Gurfinkel A, Chaki S. Boxes: A symbolic abstract domain of boxes[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2010: 287−303

    [31]

    Maréchal A, Monniaux D, Périn M. Scalable minimizing-operators on polyhedra via parametric linear programming[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2017: 212−231

    [32]

    Becchi A, Zaffanella E. A direct encoding for NNC polyhedra[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2018: 230−248

    [33]

    Bagnara R, Hill P M, Zaffanella E. Not necessarily closed convex polyhedra and the double description method[J]. Formal Aspects of Computing, 2005, 17(2): 222−257 doi: 10.1007/s00165-005-0061-1

    [34]

    Amato G, Scozzari F, Seidl H, et al. Efficiently intertwining widening and narrowing[J]. Science of Computer Programming, 2016, 120: 1−24 doi: 10.1016/j.scico.2015.12.005

    [35]

    Boutonnet R, Halbwachs N. Improving the results of program analysis by abstract interpretation beyond the decreasing sequence[J]. Formal Methods in System Design, 2018, 53(3): 384−406 doi: 10.1007/s10703-017-0310-y

    [36]

    Cha S, Jeong S, Oh H. Learning a strategy for choosing widening thresholds from a large codebase[C] //Proc of the Asian Symp on Programming Languages and Systems. Berlin: Springer, 2016: 25−41

    [37]

    Reps T, Sagiv M, Yorsh G. Symbolic implementation of the best transformer[C] //Proc of the Int Workshop on Verification, Model Checking, and Abstract Interpretation. Berlin: Springer, 2004: 252−266

    [38]

    Bjørner N, Phan A D, Fleckenstein L. νZ - An optimizing SMT solver[C] //Proc of the Int Conf on Tools and Algorithms for the Construction and Analysis of Systems. Berlin: Springer, 2015: 194−199

    [39]

    Li Yi, Albarghouthi A, Kincaid Z, et al. Symbolic optimization with SMT solvers[J]. ACM SIGPLAN Notices, 2014, 49(1): 607−618 doi: 10.1145/2578855.2535857

    [40]

    Jiang Jiahong, Chen Liqian, Wu Xueguang, et al. Block-wise abstract interpretation by combining abstract domains with SMT[C] //Proc of the Int Conf on Verification, Model Checking, and Abstract Interpretation. Berlin: Springer, 2017: 310−329

    [41]

    Yao Peisen, Shi Qingkai, Huang Heqing, et al. Program analysis via efficient symbolic abstraction[J]. Proceedings of the ACM on Programming Languages, 2021, 5(OOPSLA): 118:1−118:32

    [42]

    Farzan A, Kincaid Z. Compositional recurrence analysis[C] //Proc of the 2015 Formal Methods in Computer-Aided Design (FMCAD). Piscataway, NJ: IEEE, 2015: 57−64

    [43]

    Kincaid Z, Breck J, Boroujeni A F, et al. Compositional recurrence analysis revisited[J]. ACM SIGPLAN Notices, 2017, 52(6): 248−262 doi: 10.1145/3140587.3062373

    [44]

    Kincaid Z, Cyphert J, Breck J, et al. Non-linear reasoning for invariant synthesis[J]. Proceedings of the ACM on Programming Languages, 2017, 2(POPL): 54:1−54:33

    [45]

    Allamigeon X, Gaubert S, Goubault E, et al. A fast method to compute disjunctive quadratic invariants of numerical programs[J]. ACM Transactions on Embedded Computing Systems, 2017, 16(5s): 1−19

    [46]

    Yin Banghu, Chen Liqian, Liu Jiangchao, et al. Verifying numerical programs via iterative abstract testing[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2019: 247−267

    [47]

    Toman J, Grossman D. Concerto: A framework for combined concrete and abstract interpretation[J]. Proceedings of the ACM on Programming Languages, 2019, 3(POPL): 43:1−43:29

    [48]

    Chen Tianyi, Heo K, Raghothaman M. Boosting static analysis accuracy with instrumented test executions[C] //Proc of the 29th ACM Joint Meeting on European Software Engineering Conf and Symp on the Foundations of Software Engineering. New York: ACM, 2021: 1154−1165

    [49]

    O'Hearn P W. Incorrectness logic[J]. Proceedings of the ACM on Programming Languages, 2019, 4(POPL): 10:1−10:32

    [50]

    Raad A, Berdine J, Dang H H, et al. Local reasoning about the presence of bugs: Incorrectness separation logic[C] //Proc of Int Conf on Computer Aided Verification. Berlin: Springer, 2020: 225−252

    [51]

    Raad A, Berdine J, Dreyer D, et al. Concurrent incorrectness separation logic[J]. Proceedings of the ACM on Programming Languages, 2022, 6(POPL): 34:1−34:29

    [52]

    Le Q L, Raad A, Villard J, et al. Finding real bugs in big programs with incorrectness logic[J]. Proceedings of the ACM on Programming Languages, 2022, 6(OOPSLA1): 81:1−81:27

    [53]

    Ascari F, Bruni R, Gori R. Limits and difficulties in the design of under-approximation abstract domains[C] //Proc of the Int Conf on Foundations of Software Science and Computation Structures. Berlin: Springer, 2022: 21−39

    [54]

    Kim S K, Venet A J, Thakur A V. Deterministic parallel fixpoint computation[J]. Proceedings of the ACM on Programming Languages, 2019, 4(POPL): 14:1−14:33

    [55]

    Brat G, Navas J A, Shi N, et al. IKOS: A framework for static analysis based on abstract interpretation[C] //Proc of the Int Conf on Software Engineering and Formal Methods. Berlin: Springer, 2014: 271−277

    [56]

    Ramalingam G. Identifying loops in almost linear time[J]. ACM Transactions on Programming Languages and Systems, 1999, 21(2): 175−188 doi: 10.1145/316686.316687

    [57]

    Kim S K, Venet A J, Thakur A V. Memory-efficient fixpoint computation[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2020: 35−64

    [58]

    Park J, Lee H, Ryu S. A survey of parametric static analysis[J]. ACM Computing Surveys, 2021, 54(7): 1−37

    [59]

    Heo K, Oh H, Yi K. Machine-learning-guided selectively unsound static analysis[C] //Proc of the IEEE/ACM 39th Int Conf on Software Engineering (ICSE). Piscataway, NJ: IEEE, 2017: 519−529

    [60]

    Heo K, Oh H, Yang H. Resource-aware program analysis via online abstraction coarsening[C] //Proc of the IEEE/ACM 41st Int Conf on Software Engineering (ICSE). Piscataway, NJ: IEEE, 2019: 94−104

    [61]

    Singh G, Püschel M, Vechev M. Fast numerical program analysis with reinforcement learning[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2018: 211−229

    [62]

    He Jingxuan, Singh G, Püschel M, et al. Learning fast and precise numerical analysis[C] //Proc of the 41st ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2020: 1112−1127

    [63]

    Liu Jiangchao, Chen Liqian, Rival X. Automatic verification of embedded system code manipulating dynamic structures stored in contiguous regions[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018, 37(11): 2311−2322 doi: 10.1109/TCAD.2018.2858462

    [64]

    Journault M, Miné A, Ouadjaout A. Modular static analysis of string manipulations in C programs[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2018: 243−262

    [65]

    Illous H, Lemerre M, Rival X. A relational shape abstract domain[J]. Formal Methods in System Design, 2021, 57(3): 343−400 doi: 10.1007/s10703-021-00366-4

    [66]

    Illous H, Lemerre M, Rival X. Interprocedural shape analysis using separation logic-based transformer summaries[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2020: 248−273

    [67]

    Li Huisong, Berenger F, Chang B Y E, et al. Semantic-directed clumping of disjunctive abstract states[J]. ACM SIGPLAN Notices, 2017, 52(1): 32−45 doi: 10.1145/3093333.3009881

    [68]

    Wang Di, Hoffmann J, Reps T. PMAF: An algebraic framework for static analysis of probabilistic programs[J]. ACM SIGPLAN Notices, 2018, 53(4): 513−528 doi: 10.1145/3296979.3192408

    [69]

    Chakarov A, Sankaranarayanan S. Expectation invariants for probabilistic program loops as fixed points[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2014: 85−100

    [70]

    Cousot P, Monerau M. Probabilistic abstract interpretation[C] //Proc of the European Symp on Programming. Berlin: Springer, 2012: 169−193

    [71]

    Monniaux D. Abstract interpretation of probabilistic semantics[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2000: 322−339

    [72]

    Gershuni E, Amit N, Gurfinkel A, et al. Simple and precise static analysis of untrusted Linux kernel extensions[C] //Proc of the 40th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2019: 1069−1084

    [73]

    Ouadjaout A, Miné A, Lasla N, et al. Static analysis by abstract interpretation of functional properties of device drivers in TinyOS[J]. Journal of Systems and Software, 2016, 120: 114−132 doi: 10.1016/j.jss.2016.07.030

    [74]

    Carbonneaux Q, Hoffmann J, Shao Z. Compositional certified resource bounds[C] //Proc of the 36th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2015: 467−478

    [75]

    Fan Guangsheng, Chen Taoqing, Yin Banghu, et al. Static bound analysis of dynamically allocated resources for C programs[C] //Proc of the 2021 IEEE 32nd Int Symp on Software Reliability Engineering (ISSRE). Piscataway, NJ: IEEE, 2021: 390−400

    [76]

    Çiçek E, Bouaziz M, Cho S, et al. Static resource analysis at scale[C] //Proc of Int Static Analysis Symp. Berlin: Springer, 2020: 3−6

    [77]

    Rivera J, Franchetti F, Püschel M. An interval compiler for sound floating-point computations[C] //Proc of the 2021 IEEE/ACM Int Symp on Code Generation and Optimization (CGO). Piscataway, NJ: IEEE, 2021: 52−64

    [78]

    Rivera J, Franchetti F, Püschel M. A compiler for sound floating-point computations using affine arithmetic[C] //Proc of the 2022 IEEE/ACM Int Symp on Code Generation and Optimization (CGO). Piscataway, NJ: IEEE, 2022: 66−78

    [79]

    Ye Yapeng, Zhang Zhuo, Shi Qingkai, et al. D-ARM: Disassembling ARM Binaries by Lightweight Superset Instruction Interpretation and Graph Modeling[C/OL] //Proc of the 2023 IEEE Symp on Security and Privacy (SP). [2022-12-18]. https://qingkaishi.github.io/public_pdfs/SP23DARM.pdf

    [80]

    Venet A, Brat G. Precise and efficient static array bound checking for large embedded C programs[J]. ACM SIGPLAN Notices, 2004, 39(6): 231−242 doi: 10.1145/996893.996869

    [81]

    Oh H, Heo K, Lee W, et al. Sparrow[EB/OL]. [2022-12-18]. https://github.com/ropas/sparrow

    [82]

    Brat G, Navas J A, Shi N, et al. IKOS[EB/OL]. [2022-12-18]. https://github.com/NASA-SW-VnV/ikos

    [83]

    Alberti M, Antignac T, Barany G, et al. Frama-C[EB/OL]. [2022-12-18]. https://frama-c.com/

    [84]

    Villard J, Berdine J, Blackshear S, et al. Infer[EB/OL]. [2022-12-18]. https://github.com/facebook/infer

    [85]

    Seidl H, Erhard J, Vojdani V, et al. Goblint[EB/OL]. [2022-12-18]. https://goblint.in.tum.de/home

    [86]

    Stein B, Chang B Y E, Sridharan M, et al. Dai[EB/OL]. [2022-12-18]. https://hub.nuaa.cf/cuplv/dai

    [87]

    Navas J A, Gurfinkel A, Kahsai T, et al. Clam[EB/OL]. [2022-12-18]. https://hub.nuaa.cf/seahorn/clam

    [88]

    Navas J A, Su Yusen, Kahsai T, et al. Crab[EB/OL]. [2022-12-18].https://hub.nuaa.cf/seahorn/crab

    [89]

    Chen Yuxuan, Gorski N, Arthaud M, et al. SPARTA[EB/OL]. [2022-12-18].https://github.com/facebook/SPARTA

    [90]

    Miné A, Milanese M, Bau G, et al. Mopsa[EB/OL]. [2022-12-18].https://gitlab.com/mopsa/mopsa-analyzer

    [91]

    Rival X, Illous H, Lemerre M, et al. MemCAD[EB/OL]. [2022-12-18].https://gitlab.inria.fr/memcad/memcad

    [92]

    Jeannet B. Interproc[EB/OL]. [2022-12-18]. http://pop-art.inrialpes.fr/interproc/interprocweb.cgi

    [93]

    Sipma H, Schuffelen A C, et al. CodeHawk[EB/OL]. [2022-12-18]. https://github.com/static-analysis-engineering/codehawk

    [94]

    Kestrel Technology LLC. Kestrel Technology Products[EB/OL]. [2022-12-18]. http://kestreltechnology.com/technology.html

    [95]

    Cousot P, Cousot R, Feret J, et al. Why does Astrée scale up?[J]. Formal Methods in System Design, 2009, 35(3): 229−264 doi: 10.1007/s10703-009-0089-6

    [96]

    Cousot P, Cousot R, Feret J, et al. The ASTRÉE analyzer[C] //Proc of the European Symp on Programming. Berlin: Springer, 2005: 21−30

    [97]

    Oh H, Heo K, Lee W, et al. Design and implementation of sparse global analyses for C-like languages[C] //Proc of the 33rd ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2012: 229−238

    [98]

    Oh H, Heo K, Lee W, et al. Global sparse analysis framework[J]. ACM Transactions on Programming Languages and Systems, 2014, 36(3): 1−44

    [99]

    Oh H, Lee W, Heo K, et al. Selective context-sensitivity guided by impact pre-analysis[C] //Proc of the 35th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2014: 475−484

    [100]

    Lee W, Lee W, Yi K. Sound non-statistical clustering of static analysis alarms[C] //Proc of Int Workshop on Verification, Model Checking, and Abstract Interpretation. Berlin: Springer, 2012: 299−314

    [101]

    Venet A J. The gauge domain: scalable analysis of linear inequality invariants[C] //Proc of Int Conf on Computer Aided Verification. Berlin: Springer, 2012: 139−154

    [102]

    Baudin P, Bobot F, Bühler D, et al. The dogged pursuit of bug-free C programs: The Frama-C software analysis platform[J]. Communications of the ACM, 2021, 64(8): 56−68 doi: 10.1145/3470569

    [103]

    Kirchner F, Kosmatov N, Prevosto V, et al. Frama-C: A software analysis perspective[J]. Formal Aspects of Computing, 2015, 27(3): 573−609 doi: 10.1007/s00165-014-0326-7

    [104]

    Distefano D, Fähndrich M, Logozzo F, et al. Scaling static analyses at Facebook[J]. Communications of the ACM, 2019, 62(8): 62−70 doi: 10.1145/3338112

    [105]

    Calcagno C, Distefano D. Infer: An automatic program verifier for memory safety of C programs[C] //Proc of NASA Formal Methods Symp. Berlin: Springer, 2011: 459−465

    [106]

    Calcagno C, Distefano D, Dubreil J, et al. Moving fast with software verification[C] //Proc of NASA Formal Methods Symp. Berlin: Springer, 2015: 3−11

    [107]

    Harman M, O'Hearn P. From start-ups to scale-ups: Opportunities and open problems for static and dynamic program analysis[C] //Proc of the 2018 IEEE 18th Int Working Conf on Source Code Analysis and Manipulation (SCAM). Piscataway, NJ: IEEE, 2018: 1−23

    [108]

    Calcagno C, Distefano D, O’hearn P W, et al. Compositional shape analysis by means of bi-abduction[J]. Journal of the ACM, 2011, 58(6): 1−66

    [109]

    Vojdani V, Apinis K, Rõtov V, et al. Static race detection for device drivers: The Goblint approach[C] //Proc of the 2016 31st IEEE/ACM Int Conf on Automated Software Engineering (ASE). Piscataway, NJ: IEEE, 2016: 391−402

    [110]

    Schwarz M, Saan S, Seidl H, et al. Improving thread-modular abstract interpretation[C] //Proc of Int Static Analysis Symp. Berlin: Springer, 2021: 359−383

    [111]

    Seidl H, Vogler R. Three improvements to the top-down solver[C] //Proc of the 20th Int Symp on Principles and Practice of Declarative Programming. New York: ACM, 2018: 1−14

    [112]

    Seidl H, Vogler R. Three improvements to the top-down solver[J]. Mathematical Structures in Computer Science, 2022, 1: 45

    [113]

    Vojdani V, Vene V. Goblint: Path-sensitive data race analysis[J]. ANNALES Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae Sectio Computatorica, 2009, 30: 141−155

    [114]

    Seidl H, Erhard J, Vogler R. Incremental abstract interpretation[M] //From Lambda Calculus to Cybersecurity Through Program Analysis. Berlin: Springer, 2020: 132−148

    [115]

    Erhard J, Saan S, Tilscher S, et al. Interactive abstract interpretation: Reanalyzing whole programs for cheap[J]. arXiv preprint, arXiv: 2209.10445, 2022

    [116]

    Monat R, Ouadjaout A, Miné A. A multilanguage static analysis of python programs with native C extensions[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2021: 323−345

    [117]

    Monat R, Ouadjaout A, Miné A. Static type analysis by abstract interpretation of Python programs[C] //Proc of the 34th European Conf on Object-Oriented Programming (ECOOP 2020). Dagstuhl: Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020(17): 1−29

    [118]

    Fromherz A, Ouadjaout A, Miné A. Static value analysis of Python programs by abstract interpretation[C] //Proc of NASA Formal Methods Symp. Berlin: Springer, 2018: 185−202

    [119]

    Jeannet B. Interproc analyzer for recursive programs with numerical variables[J/OL]. [2022-12-20]. https://pop-art.inrialpes.fr/~bjeannet/bjeannet-forge/interproc/manual.pdf

    [120]

    Gehr T, Mirman M, Drachsler-Cohen D, et al. AI2: Safety and robustness certification of neural networks with abstract interpretation[C] //Proc of the 2018 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2018: 3−18

    [121]

    Liu Changliu, Arnon T, Lazarus C, et al. Algorithms for verifying deep neural networks [J]. arXiv preprint, arXiv: 1903.06758, 2019

    [122]

    Huang Xiaowei, Kroening D, Kwiatkowska M, et al. Safety and trustworthiness of deep neural networks: A survey [J]. arXiv preprint, arXiv: 1812.08342, 2018

    [123] 卜磊, 陈立前, 董云卫, 等. 人工智能系统的形式化验证技术研究进展与趋势[R]. CCF 2019−2020中国计算机科学技术发展报告. 北京: 机械工业出版社, 2020: 91−539

    Bu Lei, Chen Liqian, Dong Yunwei, et al. Research progress and trends on formal verification of artificial intelligence systems[R]. CCF 2019−2020 Progress Report on Chinese Computer Science and Technology. Beijing: China Machine Press, 2020, 491−539 (in Chinese)

    [124]

    Albarghouthi A. Introduction to neural network verification[J]. Foundations and Trends® in Programming Languages, 2021, 7(1–2): 1−157

    [125]

    Huang P S, Stanforth R, Welbl J, et al. Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation[C] //Proc of the 2019 Conf on Empirical Methods in Natural Language Processing and the 9th Int Joint Conf on Natural Language Processing (EMNLP-IJCNLP 19). Stroudsburg, PA: Association for Computational Linguistics, 2019: 4081−4091.

    [126]

    Mirman M, Baader M, Vechev M. The fundamental limits of interval arithmetic for neural networks[J]. arXiv preprint, arXiv: 2112.05235, 2021

    [127]

    Wang Shiqi, Pei Kexin, Whitehouse J, et al. Formal security analysis of neural networks using symbolic intervals[C] //Proc of the 27th USENIX Security Symp (USENIX Security’18). Berkeley, CA: USENIX Association, 2018: 1599−1614

    [128]

    Wang Shiqi, Pei Kexin, Whitehouse J, et al. Efficient formal safety analysis of neural networks[C] //Proc of the 32nd Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2018: 6369−6379

    [129]

    Yin Banghu, Chen Liqian, Liu Jiangchao, et al. Efficient complete verification of neural networks via layer-wised splitting and refinement[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022, 41(11): 3898−3909 doi: 10.1109/TCAD.2022.3197534

    [130]

    Baader M, Mirman M, Vechev M. Universal approximation with certified networks[J]. arXiv preprint, arXiv: 1909.13846, 2019

    [131]

    Wang Zi, Albarghouthi A, Prakriya G, et al. Interval universal approximation for neural networks[J]. Proceedings of the ACM on Programming Languages, 2022, 6(POPL): 14:1−14:29

    [132]

    Singh G, Gehr T, Mirman M, et al. Fast and effective robustness certification[C] //Proc of the 32nd Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2018: 10825−10836

    [133]

    Singh G, Gehr T, Püschel M, et al. Boosting robustness certification of neural networks[C/OL] //Proc of the Int Conf on Learning Representations. 2018 [2022-12-20]. https://openreview.net/pdf?id=HJgeEh09KQ

    [134]

    Jordan M, Hayase J, Dimakis A G, et al. Zonotope domains for Lagrangian neural network verification[J]. arXiv preprint, arXiv: 2210.08069, 2022

    [135]

    Singh G, Gehr T, Püschel M, et al. An abstract domain for certifying neural networks[J]. Proceedings of the ACM on Programming Languages, 2019, 3(POPL): 41:1−41:30

    [136]

    Tran H D, Bak S, Xiang Weiming, et al. Verification of deep convolutional neural networks using imagestars[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2020: 18−42

    [137]

    Goubault E, Palumby S, Putot S, et al. Static analysis of ReLU neural networks with tropical polyhedra[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2021: 166−190

    [138]

    Müller C, Serre F, Singh G, et al. Scaling polyhedral neural network verification on GPUS[J]. Proceedings of Machine Learning and Systems, 2021, 3: 733−746

    [139]

    Salman H, Yang G, Zhang Huan, et al. A convex relaxation barrier to tight robustness verification of neural networks[C] //Proc of the 33rd Int Conf on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2019: 9835−9846

    [140]

    Singh G, Ganvir R, Püschel M, et al. Beyond the single neuron convex barrier for neural network certification[C] //Proc of the Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2019: 15098−15109

    [141]

    Clarisó R, Cortadella J. The octahedron abstract domain[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2004: 312−327

    [142]

    Tjandraatmadja C, Anderson R, Huchette J, et al. The convex relaxation barrier, revisited: Tightened single-neuron relaxations for neural network verification[C] //Proc of the Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2020, 33: 21675−21686

    [143]

    De Palma A, Behl H S, Bunel R, et al. Scaling the convex barrier with active sets[C/OL] //Proc of the ICLR Conf. 2021. [2022-12-20].https://arxiv.org/pdf/2101.05844v1.pdf

    [144]

    Müller M N, Makarchuk G, Singh G, et al. PRIMA: General and precise neural network certification via scalable convex hull approximations[J]. Proceedings of the ACM on Programming Languages, 2022, 6(POPL): 43:1−43:33

    [145]

    Li Jianlin, Liu Jiangchao, Yang Pengfei, et al. Analyzing deep neural networks with symbolic propagation: Towards higher precision and faster verification[C] //Proc of the Int Static Analysis Symp. Berlin: Springer, 2019: 296−319

    [146]

    Yang Pengfei, Li Jianlin, Liu Jiangchao, et al. Enhancing robustness verification for deep neural networks via symbolic propagation[J]. Formal Aspects of Computing, 2021, 33(3): 407−435 doi: 10.1007/s00165-021-00548-1

    [147]

    Elboher Y Y, Gottschlich J, Katz G. An abstraction-based framework for neural network verification[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2020: 43−65

    [148]

    Weng Lily, Zhang Huan, Chen Hongge, et al. Towards fast computation of certified robustness for ReLU networks[C/OL] //Proc of the Int Conf on Machine Learning, 2018 [2022-12-20]. https://arxiv.org/abs/1804.09699

    [149]

    Zhang Huan, Weng Tsuiwei, Chen Pinyu, et al. Efficient neural network robustness certification with general activation functions[C/OL]//Proc of Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2018 [2022-12-20].https://arxiv.org/abs/1811.00866

    [150]

    Bastani O, Ioannou Y, Lampropoulos L, et al. Measuring neural net robustness with constraints[C/OL] //Proc of Advances in Neural Information Processing Systems, 2016 [2022-12-20].https://arxiv.org/abs/1605.07262

    [151]

    Anderson G, Pailoor S, Dillig I, et al. Optimization and abstraction: A synergistic approach for analyzing neural network robustness[C] //Proc of the 40th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2019: 731−744

    [152]

    Bunel R R, Turkaslan I, Torr P, et al. A unified view of piecewise linear neural network verification [C/OL] //Proc of Advances in Neural Information Processing Systems, 2018 [2022-12-20]. https://arxiv.org/abs/1711.00455

    [153]

    Bunel R, Lu J, Turkaslan I, et al. Branch and bound for piecewise linear neural network verification[J]. Journal of Machine Learning Research, 2020, 21(42): 1−39

    [154]

    Ehlers R. Formal verification of piecewise linear feedforward neural networks [C] //Proc of the Int Symp on Automated Technology for Verification and Analysis. Berlin: Springer, 2017: 269–286

    [155]

    Katz G, Huang D A, Ibeling D, et al. The Marabou framework for verification and analysis of deep neural networks [C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2019: 443–452

    [156]

    Katz G, Barrett C, Dill D L, et al. Reluplex: An efficient SMT solver for verifying deep neural networks [C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2017: 97–117

    [157]

    Wang Shiqi. Efficient neural network verification using branch and bound[D]. New York: Columbia University, 2022

    [158]

    Zhang Hongce, Shinn M, Gupta A, et al. Verification of recurrent neural networks for cognitive tasks via reachability analysis[C/OL] //Proc of the 24th European Conf on Artificial Intelligence, 2020 [2022-12-20].https://ecai2020.eu/papers/1492_paper.pdf

    [159]

    Akintunde M E, Kevorchian A, Lomuscio A, et al. Verification of RNN-based neural agent-environment systems[C] //Proc of the AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2019: 6006−6013

    [160]

    Jacoby Y, Barrett C, Katz G. Verifying recurrent neural networks using invariant inference[C] //Proc of the Int Symp on Automated Technology for Verification and Analysis. Berlin: Springer, 2020: 57−74

    [161]

    Ko C Y, Lyu Zhaoyang, Weng L, et al. POPQORN: Quantifying robustness of recurrent neural networks[C] //Proc of the Int Conf on Machine Learning. 2019: 3468−3477.

    [162]

    Ryou W, Chen Jiayu, Balunovic M, et al. Scalable polyhedral verification of recurrent neural networks[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2021: 225−248

    [163]

    Weiss G, Goldberg Y, Yahav E. Extracting automata from recurrent neural networks using queries and counterexamples[C] //Proc of the Int Conf on Machine Learning. New York: ACM, 2018: 5247−5256

    [164]

    Khmelnitsky I, Neider D, Roy R, et al. Property-directed verification and robustness certification of recurrent neural networks[C] //Proc of the Int Symp on Automated Technology for Verification and Analysis. Berlin: Springer, 2021: 364−380

    [165]

    Mirman M, Gehr T, Vechev M. Differentiable abstract interpretation for provably robust neural networks[C] //Proc of the Int Conf on Machine Learning. New York: ACM, 2018: 3578−3586

    [166]

    Gowal S, Dvijotham K, Stanforth R, et al. On the effectiveness of interval bound propagation for training verifiably robust models[J]. arXiv preprint, arXiv: 1810.12715, 2018

    [167]

    Fischer M, Balunovic M, Drachsler-Cohen D, et al. Dl2: Training and querying neural networks with logic[C] //Proc of the Int Conf on Machine Learning. New York: ACM, 2019: 1931−1941

    [168]

    Zhang Yuhao, Albarghouthi A, D ’Antoni L. Robustness to programmable string transformations via augmented abstract training[C] //Proc of the Int Conf on Machine Learning. New York: ACM, 2020: 11023−11032

    [169]

    Zhang Yuhao, Albarghouthi A, D’Antoni L. Certified robustness to programmable transformations in LSTMs[J]. arXiv preprint, arXiv: 2102.07818, 2021

    [170]

    Jia Robin, Raghunathan A, Göksel K, et al. Certified robustness to adversarial word substitutions[J]. arXiv preprint, arXiv: 1909.00986, 2019

    [171]

    Xu Kaidi, Shi Zhouxing, Zhang Huan, et al. Automatic perturbation analysis for scalable certified robustness and beyond[J]. Advances in Neural Information Processing Systems, 2020: 1129−1141

    [172]

    Huang P S, Stanforth R, Welbl J, et al. Achieving verified robustness to symbol substitutions via interval bound propagation[J]. arXiv preprint, arXiv: 1909.01492, 2019

    [173]

    Zhang Yuhao, Ren Luyao, Chen Liqian, et al. Detecting numerical bugs in neural network architectures[C] //Proc of the 28th ACM Joint Meeting on European Software Engineering Conf and Symp on the Foundations of Software Engineering. New York: ACM, 2020: 826−837

    [174]

    Pérez V, Klemen M, López-García P, et al. Cost analysis of smart contracts via parametric resource analysis[C] //Proc of Int Static Analysis Symp. Berlin: Springer, 2020: 7−31

    [175]

    Grech N, Kong M, Jurisevic A, et al. Madmax: Surviving out-of-gas conditions in ethereum smart contracts[J]. Proceedings of the ACM on Programming Languages, 2018, 2(OOPSLA): 116:1−116:27

    [176]

    Kalra S, Goel S, Dhawan M, et al. Zeus: Analyzing safety of smart contracts[C] //Proc of NDSS 2018. San Diego, CA: University of California, 2018: 1−12

    [177]

    Giacobazzi R, Mastroeni I. Abstract non-interference: Parameterizing non-interference by abstract interpretation[J]. ACM SIGPLAN Notices, 2004, 39(1): 186−197 doi: 10.1145/982962.964017

    [178]

    Giacobazzi R, Mastroeni I. Abstract non-interference: A unifying framework for weakening information-flow[J]. ACM Transactions on Privacy and Security, 2018, 21(2): 1−31

    [179]

    Mastroeni I, Pasqua M. Statically analyzing information flows: An abstract interpretation-based hyperanalysis for non-interference[C] //Proc of the 34th ACM/SIGAPP Symp on Applied Computing. New York: ACM, 2019: 2215−2223

    [180]

    Köpf B, Mauborgne L, Ochoa M. Automatic quantification of cache side-channels[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2012: 564−580

    [181]

    Doychev G, Köpf B, Mauborgne L, et al. CacheAudit: A tool for the static analysis of cache side channels[J]. ACM Transactions on Information and System Security, 2015, 18(1): 1−32

    [182]

    Barthe G, Köpf B, Mauborgne L, et al. Leakage resilience against concurrent cache attacks[C] //Proc of the Int Conf on Principles of Security and Trust. Berlin: Springer, 2014: 140−158

    [183]

    Doychev G, Köpf B. Rigorous analysis of software countermeasures against cache attacks[C] //Proc of the 38th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2017: 406−421

    [184]

    Wang Shuai, Bao Yuyan, Liu Xiao, et al. Identifying cache-based side channels through secret-augmented abstract interpretation[C] //Proc of the 28th USENIX Security Symp (USENIX Security’19). Berkeley, CA: USENIX Association, 2019: 657−674

    [185]

    Touzeau V, Maïza C, Monniaux D, et al. Ascertaining uncertainty for efficient exact cache analysis[C] //Proc of the Int Conf on Computer Aided Verification. Berlin: Springer, 2017: 22−40

    [186]

    Wu Meng, Guo Shengjian, Schaumont P, et al. Eliminating timing side-channel leaks using program repair[C] //Proc of the 27th ACM SIGSOFT Int Symp on Software Testing and Analysis. New York: ACM, 2018: 15−26

    [187]

    Blazy S, Pichardie D, Trieu A. Verifying constant-time implementations by abstract interpretation[J]. Journal of Computer Security, 2019, 27(1): 137−163 doi: 10.3233/JCS-181136

    [188]

    Wu Meng, Wang Chao. Abstract interpretation under speculative execution[C] //Proc of the 40th ACM SIGPLAN Conf on Programming Language Design and Implementation. New York: ACM, 2019: 802−815

    [189]

    Fang Zhiyong, Darais D, Near J P, et al. Zero knowledge static program analysis[C] //Proc of the 2021 ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2021: 2951−2967

    [190]

    Yu Nengkun, Palsberg J. Quantum abstract interpretation[C] //Proc of the 42nd ACM SIGPLAN Int Conf on Programming Language Design and Implementation. New York: ACM, 2021: 542−558


出版历程
  • 收稿日期:  2022-11-06
  • 修回日期:  2022-12-24
  • 网络出版日期:  2023-02-10
  • 刊出日期:  2023-01-31
