Abstract: Remote sensing images are currently the data resource capable of acquiring information about the ocean, the atmosphere, and the Earth's surface over large areas, and they play an important role in fields such as agriculture, the military, and urban planning. However, observation is inevitably affected by contamination such as clouds and haze, which causes information loss in the imagery and, in practical applications, a huge loss and waste of data resources. How to detect the cloud-covered regions of remote sensing images and then correct and restore them is therefore a challenging problem that has drawn wide attention from experts worldwide. We comprehensively review the research progress and summarize the current challenges of cloud detection and removal in remote sensing images. Cloud detection methods are divided into two categories according to whether deep learning is used, and cloud removal methods into three categories according to whether auxiliary images are used; following the characteristics of the different methods, their basic principles, advantages, and disadvantages are analyzed and compared systematically. On this basis, four cloud detection, four thin cloud removal, and four thick cloud removal methods are evaluated on two public remote sensing datasets. Finally, we discuss the problems that remain in this field and predict future research directions, hoping to provide valuable guidance for researchers working on remote sensing image processing.
Keywords:
- remote sensing images
- cloud detection
- cloud removal
- thin cloud
- thick cloud
- deep learning
Scene flow (SF) is the 3D motion field of surface points across two consecutive scenes and a fundamental tool for perceiving dynamic scenes. With the large-scale commercial deployment of applications such as autonomous driving and human-computer interaction, perception systems need to perceive dynamic moving objects in the environment accurately [1-2], so accurate scene flow estimation has become a research hotspot in recent years. Point clouds acquired directly by 3D sensors such as LiDAR provide the precise positions of points in a scene, so point cloud data are widely used in scene flow estimation. However, point clouds contain only 3D coordinates, so sparse points and edge points suffer from insufficient feature information; matching at such points is harder, and these hard-to-match points severely degrade the overall accuracy of scene flow estimation.
Recent methods focus on the correspondence between two consecutive point clouds to improve estimation accuracy: FlowNet3D [3] obtains correspondences at a single scale; HPLFlowNet [4] uses bilateral convolutional layers (BCL) to jointly compute correspondences at multiple scales [5]; PointPWC-Net [6] builds cost volume (CV) and warping modules for matching at multiple scales. But these methods consider only inter-cloud matching and lack a mechanism for refining hard-to-match points. As shown in Fig. 1(a), the points shown are part of a scene: red indicates an end point error (EPE) below 0.05 m, green indicates an EPE of at least 0.05 m and below 0.3 m, and blue indicates an EPE of at least 0.3 m. In the dashed box of Fig. 1(a), PointPWC-Net produces, within one local neighborhood (a whole car), both accurately matched red points and hard-to-match blue points. Our neighborhood consistency propagation update method (NCPUM) exploits the correlation between neighboring points in the point cloud, namely that the scene flow of the source point cloud is largely consistent within a sufficiently small neighborhood, to propagate accurate scene flow within a local neighborhood to the hard-to-match points. This effectively reduces the flow error at hard-to-match points and improves overall accuracy. Fig. 1(b) shows the result after NCPUM refinement: within the dashed box, the hard-to-match blue points on the car disappear, the poorly matched green points are clearly reduced, and the accurately matched red points clearly increase.
Specifically, NCPUM assumes that the correlation between neighboring points makes scene flow estimation neighborhood-consistent, and improves overall estimation accuracy through propagation updates guided by a confidence map. Based on this assumption, NCPUM comprises a confidence prediction module and a scene flow propagation module: the confidence prediction module predicts a confidence map for the initial scene flow output by the backbone network, and the propagation module then propagates scene flow from the high-confidence point set to the low-confidence point set within consistent neighborhoods, improving the accuracy of hard-to-match points. Our contributions are twofold:
1) Based on the neighborhood consistency of scene flow, we design the propagation refinement method NCPUM, which improves estimation by propagating scene flow within local neighborhoods. NCPUM's evaluation results surpass previous work, demonstrating its effectiveness.
2) NCPUM achieves state-of-the-art accuracy on the Flyingthings3D and KITTI datasets, and tests with different backbone networks verify that NCPUM clearly improves the estimation accuracy of each backbone.
1. Related Work
1.1 Scene Flow Estimation
The concept of scene flow was defined and introduced by Vedula et al. [7], and many later works [8-12] estimated scene flow on various types of data. With the recent development of point-cloud-based deep learning [13-15], scene flow can now be estimated directly on point clouds. FlowNet3D [3] was among the first learnable deep networks for point cloud scene flow: it embeds downsampled features to capture the motion between the clouds and regresses per-point flow through upsampling. FlowNet3D embeds features at a single scale, whose receptive field cannot yield accurate estimates for both large-scale and small-scale motion. HPLFlowNet [4] uses bilateral convolutions to jointly compute matching at multiple scales but, constrained by memory usage, cannot refine the flow during upsampling. PointPWC-Net [6] follows the coarse-to-fine (CTF) paradigm of optical flow estimation, using PointConv [13] to build cost volume and warping modules within local regions at multiple scales. FLOT [16] refines the correspondence between source and target clouds via optimal transport. These matching-focused methods achieve strong scene flow results. HALFlow [17] uses an attention mechanism to embed more positional information and obtains more accurate estimates.
The methods of [3-4, 6, 13, 16-17] all regress the scene flow of the source cloud by matching features between consecutive point clouds, with no extra refinement at hard-to-match points. Our method exploits the correlation between neighboring points in the source cloud to improve the flow of hard-to-match points within their neighborhoods, achieving better accuracy than pure matching methods.
1.2 Neighborhood Consistency
Previous scene flow estimation work extracts features within neighborhoods and matches consecutive clouds based on the extracted features [3-4, 6, 17-19] to regress the flow between them. But the neighborhood information is only embedded in the features, and matching still fails at points with insufficient neighborhood features. Among tasks that likewise use neighborhood information for matching [20-21], LiteFlowNet3 [20] refines neighboring points on the cost volume according to local optical flow consistency and obtains better optical flow accuracy than pure matching. Inspired by this idea, we reasonably assume that, across two consecutive scenes, points within a suitably small neighborhood share the same motion pattern, so the scene flow within such a neighborhood is consistent. NCPUM explicitly estimates a confidence map for the initial scene flow and propagates updates from high-confidence points to low-confidence points within each neighborhood. Unlike existing methods, NCPUM's update operates on the scene flow itself rather than on features, and it relies not on feature correlation or similarity but on the consistency of scene flow within point cloud neighborhoods.
2. Propagation Refinement Method
Starting from the neighborhood-consistency assumption of scene flow, NCPUM is a refinement method that propagates and updates scene flow within neighborhoods. The network framework is shown in Fig. 2 and consists of a confidence prediction module and a scene flow propagation module. First, a backbone network estimates the initial scene flow; the initial flow and its corresponding features are fed into the confidence prediction module. The confidence prediction module then uses an encoder-decoder structure to predict a confidence map for the input flow, indicating whether each flow estimate is accurate. Finally, guided by the predicted confidence map, the propagation module propagates flow from the high-confidence point set to the low-confidence point set, updating the flow of low-confidence points and reducing the impact of hard-to-match points on overall accuracy.
2.1 Problem Definition
The ultimate goal of scene flow estimation is to estimate the motion vectors between two consecutive point clouds. We therefore define two consecutive 3D point cloud scenes: a source point cloud P=\{\boldsymbol{x}_{i} \mid i=1,2,\dots,n_{1}\} and a target point cloud Q=\{\boldsymbol{y}_{j} \mid j=1,2,\dots,n_{2}\}, where \boldsymbol{x}_{i},\boldsymbol{y}_{j}\in\mathbb{R}^{3} and n_{1} need not equal n_{2}. The motion vector field that moves each point of the source cloud P to its corresponding point in the target cloud Q is F=(\boldsymbol{f}_{1},\dots,\boldsymbol{f}_{n_{1}}); this field is the scene flow to be estimated. The scene flow is defined on the source cloud P, so its vectors correspond one-to-one with the points of the source cloud.

2.2 Estimating the Initial Scene Flow
PointPWC-Net serves as the backbone for estimating the initial scene flow. It takes the two consecutive point clouds as input and builds a feature pyramid; at every resolution scale it warps the source cloud P toward the target cloud Q and then computes a matching cost volume, which defines the per-point matching quality and from which PointPWC-Net regresses the per-point scene flow. PointPWC-Net builds a 4-level feature pyramid; once point features are available at all 4 scales, flow estimation starts from the coarsest scale, following the coarse-to-fine paradigm. The flow estimated at the current scale is upsampled to the next finer scale and used to warp the source cloud; at that scale, a residual relative to the upsampled flow is estimated between the warped cloud and the target cloud, refining the estimate. The warping step is formalized as

P_{\mathrm{w}}=\{\boldsymbol{p}_{\mathrm{w},i}=\boldsymbol{p}_{i}+\boldsymbol{f}_{i} \mid \boldsymbol{p}_{i}\in P,\ \boldsymbol{f}_{i}\in F^{\mathrm{up}}\}_{i=1}^{n_{1}}, (1)

where P is the source cloud, P_{\mathrm{w}} the warped cloud, and F^{\mathrm{up}} the scene flow upsampled from the previous scale.

PointPWC-Net builds the cost volume from the two point clouds and their features. Let \boldsymbol{g}_{i}\in\mathbb{R}^{C} be the feature of source point \boldsymbol{p}_{i}\in P and \boldsymbol{h}_{j}\in\mathbb{R}^{C} the feature of target point \boldsymbol{q}_{j}\in Q; the matching cost between the two points is defined as

Cost(\boldsymbol{p}_{i},\boldsymbol{q}_{j})=M(concat(\boldsymbol{g}_{i},\boldsymbol{h}_{j},\boldsymbol{q}_{j}-\boldsymbol{p}_{i})), (2)

where a multilayer perceptron M learns from the concatenation of the two points' latent relation and their offset. Given the point-to-point costs, PointPWC-Net assembles the cost volume of the current scale, weighting the costs by the distances from the source point to points in a target neighborhood: for a source point \boldsymbol{p}_{i}\in P, its neighborhood N_{Q}(\boldsymbol{p}_{i}) in the target cloud Q is found, and the weighted cost C is obtained from the distance of each neighbor to the source point:

C=\sum_{\boldsymbol{q}_{j}\in N_{Q}(\boldsymbol{p}_{i})}W_{Q}(\boldsymbol{q}_{j},\boldsymbol{p}_{i})\,Cost(\boldsymbol{q}_{j},\boldsymbol{p}_{i}), (3)

W_{Q}(\boldsymbol{q}_{j},\boldsymbol{p}_{i})=M(\boldsymbol{q}_{j}-\boldsymbol{p}_{i}). (4)

When estimating the initial flow with PointPWC-Net, we keep its multi-scale supervision: flow is estimated at 4 scales, the ground truth flow is sampled to the same scales, and an L2 loss with per-scale weights \alpha_{l} is applied:

Loss_{\mathrm{sf}}=\sum_{l=l_{0}}^{L}\alpha_{l}\sum_{\boldsymbol{p}\in P}\big\|\boldsymbol{F}^{l}(\boldsymbol{p})-\boldsymbol{F}_{GT}^{l}(\boldsymbol{p})\big\|_{2}. (5)
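To make the cost-volume construction of Eqs. (2)-(4) concrete, the following is a minimal PyTorch-style sketch, not the released code of PointPWC-Net or NCPUM: the neighborhood size `k` and the MLPs `mlp_cost` and `mlp_weight` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the authors' code) of the distance-weighted cost
# volume of Eqs. (2)-(4). Both MLPs act on the last tensor dimension.

def cost_volume(p, q, g, h, mlp_cost, mlp_weight, k=16):
    """p: (n1, 3) source points, q: (n2, 3) target points,
    g: (n1, C) source features, h: (n2, C) target features."""
    dist = torch.cdist(p, q)                           # (n1, n2) pairwise distances
    idx = dist.topk(k, dim=1, largest=False).indices   # neighborhood N_Q(p_i) in Q
    q_j = q[idx]                                       # (n1, k, 3) neighbor coordinates
    h_j = h[idx]                                       # (n1, k, C) neighbor features
    offset = q_j - p[:, None, :]                       # q_j - p_i
    g_i = g[:, None, :].expand(-1, k, -1)              # broadcast source features
    cost = mlp_cost(torch.cat([g_i, h_j, offset], dim=-1))  # Eq. (2)
    w = mlp_weight(offset)                             # Eq. (4), distance-based weights
    return (w * cost).sum(dim=1)                       # Eq. (3), aggregated cost (n1, C_out)

# Example MLPs for C = 64 (illustrative shapes only):
# mlp_cost = nn.Sequential(nn.Linear(2 * 64 + 3, 64), nn.ReLU(), nn.Linear(64, 64))
# mlp_weight = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
```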
2.3 Scene Flow Confidence Prediction Module

After the backbone outputs the initial scene flow, the confidence prediction module predicts a confidence map for it. Confidence is defined through the error of the initial flow with respect to the ground truth: the smaller the predicted error, the more accurate the backbone's initial flow at that point and the higher its confidence. The module uses an encoder-decoder structure with the 3D vectors of the initial flow as input. Downsampling the points in the encoder enlarges the module's receptive field, so that confidence is inferred from more neighboring flows. The decoder's upsampling then uses skip connections that concatenate the encoder features of the corresponding scales, supplying finer-scale features for more precise upsampling results, and the flow features output by the backbone are also taken into account. Finally, a sigmoid outputs a confidence map with values in (0, 1), which is used by the subsequent scene flow propagation module.
The confidence prediction module is trained with supervision. The supervision signal is a prior distribution map obtained by binarizing the L2 norm between the initial scene flow and the ground truth flow, i.e., the prior error distribution of the initial flow with respect to the ground truth. Given a threshold \theta, a point is set to 0 when the L2 error of its initial flow is below \theta and to 1 otherwise, yielding a binary prior distribution map of the flow that supervises the module's output:

GT_{\mathrm{conf}}=\begin{cases} 0, & \|\boldsymbol{F}-GT_{\mathrm{sf}}\|_{2}<\theta, \\ 1, & \|\boldsymbol{F}-GT_{\mathrm{sf}}\|_{2}\geqslant\theta, \end{cases} (6)

Loss_{\mathrm{conf}}=-\big(GT_{\mathrm{conf}}\ln(confmap)+(1-GT_{\mathrm{conf}})\ln(1-confmap)\big), (7)

where confmap is the confidence map predicted by the module and GT_{\mathrm{conf}} is the prior distribution map computed in Eq. (6) from the initial flow \boldsymbol{F} and the ground truth GT_{\mathrm{sf}}. Because the last layer of the confidence head applies a sigmoid mapping to (0, 1), binary cross entropy (BCE) can be used for supervision.
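As a minimal sketch of this supervision (Eqs. (6)-(7)), the snippet below binarizes the per-point L2 error and applies BCE; the concrete threshold value `theta` is an assumption for illustration, since the text does not state it.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the confidence supervision of Eqs. (6)-(7): the target
# map binarizes the per-point L2 error of the initial flow, and the
# sigmoid-activated prediction is trained with binary cross entropy.

def confidence_loss(flow_init, flow_gt, confmap, theta=0.3):
    """flow_init, flow_gt: (n, 3); confmap: (n,) values in (0, 1)."""
    err = torch.linalg.norm(flow_init - flow_gt, dim=-1)  # per-point L2 error
    gt_conf = (err >= theta).float()      # Eq. (6): 0 if accurate, 1 otherwise
    return F.binary_cross_entropy(confmap, gt_conf)       # Eq. (7)
```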
2.4 Scene Flow Propagation Module

Given the predicted confidence map, the source cloud is divided into a high-confidence point set and a low-confidence point set, and flow is propagated from the former to the latter within a bounded radius. Under the neighborhood-consistency assumption, if the distance between a high-confidence point and a low-confidence point does not exceed the propagation radius threshold, their scene flows can be regarded as consistent, and the high-confidence point's flow can be used to update that of the low-confidence point:

\boldsymbol{p}_{2}=KNN(\boldsymbol{p}_{1}),\quad \boldsymbol{p}_{1},\boldsymbol{p}_{2}\in P, (8)

where \boldsymbol{p}_{1} and \boldsymbol{p}_{2} both belong to the source cloud P. Since neighborhood consistency relies on the correlation between neighboring points, the nearest point is the most likely to share consistent flow; KNN in Eq. (8) denotes using the K-nearest-neighbor method to sample the point of the source cloud nearest to the low-confidence point \boldsymbol{p}_{1}.

f(\boldsymbol{p}_{1})=f(\boldsymbol{p}_{2}),\quad \text{if}\ \|\boldsymbol{p}_{1}-\boldsymbol{p}_{2}\|_{2}<\beta, (9)

where \boldsymbol{p}_{1} and \boldsymbol{p}_{2} are the low-confidence and high-confidence points, respectively, and \beta is the propagation radius threshold: when the distance between the two points is within the radius, the low-confidence point's flow is propagated and updated. This radius threshold is critical: neighboring points in a point cloud are correlated only within a certain spatial distance, and with sufficient point density the flow within a small neighborhood is consistent, so different radius values affect the refinement result. When refining the initial flow, NCPUM truncates the back-propagated gradients at the initial flow, i.e., training the confidence prediction module does not affect the backbone PointPWC-Net.
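A minimal sketch of this propagation step (Eqs. (8)-(9)) follows; it is not the authors' code. The confidence cut-off `theta` used to split the two point sets is an illustrative assumption, and `beta = 3.0` is the best KITTI radius reported later in Table 4.

```python
import torch

# Minimal sketch of the propagation update of Eqs. (8)-(9): each
# low-confidence point copies the flow of its nearest high-confidence
# neighbor when that neighbor lies within the radius threshold beta.

def propagate(points, flow, confmap, theta=0.5, beta=3.0):
    """points: (n, 3), flow: (n, 3), confmap: (n,) predicted error probability."""
    low = confmap >= theta               # hard-to-match (low-confidence) points
    high = ~low                          # reliable (high-confidence) points
    if not low.any() or not high.any():
        return flow
    dist = torch.cdist(points[low], points[high])   # (n_low, n_high)
    nn_dist, nn_idx = dist.min(dim=1)    # Eq. (8): nearest high-confidence point
    updated = flow.clone()
    hit = nn_dist < beta                 # Eq. (9): radius threshold check
    low_idx = torch.nonzero(low, as_tuple=True)[0]
    high_idx = torch.nonzero(high, as_tuple=True)[0]
    updated[low_idx[hit]] = flow[high_idx[nn_idx[hit]]]
    return updated
```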
3. Experiments and Analysis
Like the training protocols of previous work [4, 6, 16], NCPUM is trained on the synthetic Flyingthings3D dataset [22] and then tested on both Flyingthings3D and the real-world KITTI dataset [23]; Table 1 compares our test results with those of other methods. This training strategy is used because scene flow ground truth is hard to determine in real scenes: the KITTI set used here has only 142 scenes, whereas synthetic data offer far more training data, e.g., Flyingthings3D provides 19640 point cloud pairs for training. Before training, the data are preprocessed following HPLFlowNet and PointPWC-Net, so the point cloud scenes contain no occluded points.
Table 1. Comparison of NCPUM and Other Methods

| Dataset | Method | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/% |
|---|---|---|---|---|---|
| Flyingthings3D | FlowNet3D [3] | 0.114 | 41.3 | 77.0 | 60.2 |
| | HPLFlowNet [4] | 0.080 | 61.4 | 85.5 | 42.9 |
| | PointPWC-Net [6] | 0.059 | 73.8 | 92.8 | 34.2 |
| | FLOT [16] | 0.052 | 73.2 | 92.7 | 35.7 |
| | HALFlow [17] | 0.049 | 78.5 | 94.7 | 30.8 |
| | NCPUM | 0.060 | 76.1 | 93.9 | 30.7 |
| KITTI | FlowNet3D [3] | 0.177 | 37.4 | 66.8 | 52.7 |
| | HPLFlowNet [4] | 0.117 | 47.8 | 77.8 | 41.0 |
| | PointPWC-Net [6] | 0.069 | 72.8 | 88.8 | 26.5 |
| | FLOT [16] | 0.056 | 75.5 | 90.8 | 24.2 |
| | HALFlow [17] | 0.062 | 76.5 | 90.3 | 24.9 |
| | NCPUM | 0.070 | 78.1 | 91.5 | 22.3 |

Note: bold numbers indicate the best results.

In what follows, we describe the implementation details of NCPUM and compare its test results against previous methods, demonstrating its effectiveness. Because Flyingthings3D and KITTI differ considerably, we also fine-tune NCPUM on the first 100 pairs of KITTI and test on the remaining 42 pairs; the fine-tuning results with different backbones are shown in Table 3 and demonstrate that NCPUM's neighborhood-consistency propagation update is better suited to real scenes. We further run ablation experiments comparing different propagation radius thresholds.
Table 3. Fine-tuning and Testing on the KITTI Dataset

| Backbone | Method | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/% |
|---|---|---|---|---|---|
| FlowNet3D [3] | w/o ft | 0.173 | 27.6 | 60.9 | 64.9 |
| | w/ ft | 0.102 | 33.6 | 70.3 | 44.9 |
| | NCPUM | 0.094 | 38.6 | 74.1 | 37.1 |
| PointPWC-Net [6] | w/o ft | 0.069 | 72.8 | 88.8 | 26.5 |
| | w/ ft | 0.045 | 82.7 | 95.1 | 25.3 |
| | NCPUM | 0.043 | 87.5 | 96.9 | 24.3 |
| FLOT [16] | w/o ft | 0.056 | 75.5 | 90.8 | 24.2 |
| | w/ ft | 0.029 | 89.4 | 96.8 | 18.0 |
| | NCPUM | 0.028 | 89.9 | 97.0 | 17.5 |

Note: bold numbers indicate the best results; "w/o ft" and "w/ ft" denote without and with fine-tuning.
3.1 Implementation Details

NCPUM uses the same training settings as the backbone PointPWC-Net. The input point clouds are sampled to 8192 points and a 4-level feature pyramid is built, with per-level loss weights α0 = 0.02, α1 = 0.04, α2 = 0.08, and α3 = 0.16. The initial learning rate is 0.001; training runs for 800 epochs, and the learning rate is halved every 80 epochs. When fine-tuning on KITTI, the settings remain the same except that training is shortened to 400 epochs with the learning rate halved every 40 epochs.
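A minimal sketch of this schedule follows; the optimizer choice (Adam) and the placeholder model are assumptions for illustration.

```python
import torch

# Minimal sketch of the training schedule described above: initial learning
# rate 0.001, halved every 80 epochs, for 800 epochs in total.

model = torch.nn.Linear(3, 3)  # placeholder for the scene flow network
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.5)

for epoch in range(800):
    # ... one training pass over the 8192-point clouds would go here ...
    optimizer.step()
    scheduler.step()  # halves the learning rate every 80 epochs
```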
The confidence prediction module has 3 scales of 2048, 512, and 256 points, each with 64 feature channels. After three downsampling feature-extraction steps yield flow features at the 3 scales, three upsampling steps propagate the features back to the original scale. The first two upsampling steps consider both the previous scale's features and the corresponding downsampling features; the last step considers the original-scale flow. After upsampling to the original scale, the features are concatenated with the flow features output by the backbone, passed through two 1D convolutions, and then through a sigmoid to yield confidences distributed in (0, 1).
3.2 Evaluation Metrics

We adopt the evaluation metrics customary for scene flow estimation [3-4, 6, 18]. Let \boldsymbol{F} denote the estimated scene flow and \boldsymbol{F}_{GT} the ground truth; the metrics are defined as:

1) EPE3D. \|\boldsymbol{F}-\boldsymbol{F}_{GT}\|_{2} averaged over all points of the cloud.

2) Acc3DS. The fraction of points with EPE3D < 0.05 m or relative error < 5%.

3) Acc3DR. The fraction of points with EPE3D < 0.1 m or relative error < 10%.

4) Outlier3D. The fraction of points with EPE3D > 0.3 m or relative error > 10%.
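A minimal sketch computing these four metrics from estimated and ground truth flow, using the thresholds above:

```python
import torch

# Minimal sketch of the four scene flow metrics defined above;
# `flow` and `flow_gt` are (n, 3) tensors in meters.

def scene_flow_metrics(flow, flow_gt):
    err = torch.linalg.norm(flow - flow_gt, dim=-1)           # per-point EPE
    rel = err / (torch.linalg.norm(flow_gt, dim=-1) + 1e-9)   # relative error
    return {
        "EPE3D": err.mean().item(),
        "Acc3DS": ((err < 0.05) | (rel < 0.05)).float().mean().item(),
        "Acc3DR": ((err < 0.1) | (rel < 0.1)).float().mean().item(),
        "Outlier3D": ((err > 0.3) | (rel > 0.1)).float().mean().item(),
    }
```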
3.3 Training and Testing on Flyingthings3D

Flyingthings3D provides 19640 point cloud pairs for training and 3824 pairs for testing. Following the setups of the FlowNet3D and PointPWC-Net models, we use only points with depth below 35 m, a range that contains most foreground objects. Each cloud is sampled to 8192 points for training and testing; the training parameters are given in Section 3.1.
Table 1 shows that, on Flyingthings3D, NCPUM outperforms the backbone PointPWC-Net on both accuracy metrics and on the outlier ratio, most notably with a 2.3% gain on Acc3DS, but it is slightly worse on EPE. The reason is that the propagation module sets a low-confidence point's flow equal to that of a high-confidence point: for points with large estimation errors this propagates an accurate flow, but for low-confidence points whose error was actually small, the update instead perturbs the precision. Since accuracy counts the points with EPE3D < 0.05 m or relative error < 5%, our method still refines the hard-to-match points with large errors and improves accuracy. Table 2 summarizes the statistics of the updated points (comparing \hat{\boldsymbol{F}} with \boldsymbol{F} in Fig. 2): the fraction of updated points among all points (updated points), the fraction of updated points whose precision improves (improved points), the fraction whose precision drops (perturbed points), the mean improvement, and the mean perturbation. More than half of the updated points are in fact perturbed, and the mean perturbation exceeds the mean improvement, so NCPUM does introduce some perturbation in precision, while the accuracy and outlier-ratio metrics improve substantially.

Table 2. Statistics of Updated Points on the Flyingthings3D Dataset

| Method | Updated points/% | Improved points/% | Perturbed points/% | Mean improvement/m | Mean perturbation/m | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/% |
|---|---|---|---|---|---|---|---|---|---|
| PointPWC-Net | | | | | | 0.059 | 73.8 | 92.8 | 34.2 |
| NCPUM | 9.73 | 46.98 | 53.02 | 0.003 | 0.004 | 0.060 | 76.1 | 93.9 | 30.7 |

Note: bold numbers indicate the best results.
3.4 Testing and Fine-tuning on KITTI

The KITTI Scene Flow 2015 dataset contains 200 training scenes and 200 test scenes. Following previous work, we test on the 142 available point cloud pairs that have ground truth and, consistent with [4, 6], remove ground points with height below 0.3 m. On KITTI, we first test the model trained on Flyingthings3D directly, without any fine-tuning, to evaluate NCPUM's generalization. Table 1 shows that NCPUM outperforms PointPWC-Net on the Acc3DS and Acc3DR metrics, by 5.3% and 2.7% respectively, larger margins than on Flyingthings3D, indicating excellent generalization. NCPUM performs better on the real data without fine-tuning because the objects in the KITTI point clouds are farther apart than those in Flyingthings3D, a characteristic that suits NCPUM's neighborhood-consistency assumption, so NCPUM performs better in real scenes. We also measured the scene flow estimation time of FLOT and NCPUM in Table 1: FLOT processes 2.15 consecutive point cloud pairs per second, whereas NCPUM processes 5.09 pairs per second, about 2.37 times faster. In real application scenarios, the fraction of accurate flow among all flow is more meaningful than the absolute mean error, since more accurate flow brings better stability to real applications. NCPUM improves both Acc3DS and Acc3DR considerably; on the real dataset its Acc3DS exceeds that of PointPWC-Net by 7.28% and the best Acc3DS of HALFlow by 2.09% in relative terms. Compared with previous methods, NCPUM's speed and accuracy show greater application potential.
Because Flyingthings3D and KITTI differ considerably, directly testing the Flyingthings3D-pretrained model on KITTI does not fully reveal NCPUM's performance there. We therefore split KITTI, fine-tuning NCPUM on the first 100 pairs and testing on the remaining 42 pairs. We fine-tune the three backbones FlowNet3D, PointPWC-Net, and FLOT on KITTI, then fine-tune NCPUM, and compare against the fine-tuned backbones. After fine-tuning, NCPUM improves all three backbones, as shown in Table 3: on all four metrics, the fine-tuned NCPUM beats its fine-tuned backbone, and, unlike in the generalization test, NCPUM now also improves EPE. We attribute this to the following: Flyingthings3D is a synthetic dataset with small inter-object distances, so propagating to a low-confidence point on an object boundary may sample an accurate point on another nearby object rather than a farther accurate point on the same object. As illustrated in Fig. 3, where green points are low-confidence points whose flow is updated, red points are high-confidence points propagating flow, and yellow lines denote the propagation relations, such cross-object sampling appears in both Fig. 3(a) and Fig. 3(b). KITTI is a real dataset with larger inter-object distances, as in Fig. 3(c), where sampling errors rarely occur; they may appear only at far-away outliers, as in Fig. 3(d). KITTI therefore satisfies the neighborhood-consistency assumption more readily than Flyingthings3D.
3.5 Effect of the Propagation Radius Threshold
Because NCPUM is built on the neighborhood-consistency assumption, the propagation radius threshold is crucial: different radius settings yield different results, and an unsuitable radius can even degrade NCPUM's refinement. With too large a radius, high-confidence flow propagates to low-confidence points that are not actually consistent, introducing perturbation; with too small a radius, only a few low-confidence points are updated. The dataset also affects the radius choice: compared with synthetic data, real data have larger inter-object distances, making it easier to pick a suitable propagation radius, which is also why NCPUM generalizes better to real data. Table 4 compares different propagation radii on the two datasets: NCPUM performs best with radius 0.4 on Flyingthings3D and with radius 3.0 on KITTI. This difference shows that real scenes impose weaker constraints on propagation, so the updates can reach more points and bring larger improvements.
Table 4. NCPUM Tested with Different Radius Thresholds

| Dataset | Radius threshold | EPE/m | Acc3DS/% | Acc3DR/% | Outlier3D/% |
|---|---|---|---|---|---|
| Flyingthings3D | 1.0 | 0.062 | 75.2 | 93.5 | 31.7 |
| | 0.4 | 0.060 | 76.1 | 93.9 | 30.7 |
| | 0.2 | 0.060 | 76.0 | 93.8 | 30.7 |
| KITTI | 5.0 | 0.054 | 85.8 | 95.9 | 24.1 |
| | 3.0 | 0.043 | 87.5 | 96.9 | 24.3 |
| | 1.0 | 0.043 | 85.0 | 95.4 | 25.5 |

Note: bold numbers indicate the best results.
3.6 Testing on Different Backbones

To demonstrate the generality of NCPUM, we apply the refinement to different backbone networks: we build the confidence prediction and scene flow propagation modules on FlowNet3D, PointPWC-Net, and FLOT, fine-tune them on the KITTI dataset, and then apply the NCPUM refinement. As Table 3 shows, applying NCPUM to FlowNet3D, PointPWC-Net, and FLOT yields clear improvements on all four metrics, demonstrating that the refinement generalizes across backbones.
Fig. 4 visualizes NCPUM's propagation update process: green points are low-confidence points whose flow is updated, red points are high-confidence points propagating flow, and yellow lines denote the propagation relations. On KITTI, inaccurately estimated green low-confidence points appear even on consistent car surfaces; they mostly lie on low-density points far from the LiDAR and on edge points with uniform neighborhood information, where methods that focus only on matching consecutive clouds incur large errors. NCPUM finds an accurately matched point for each hard-to-match point, updates it, and keeps it consistent with its accurate neighbors, improving overall estimation accuracy; meanwhile the propagation is bounded by the radius threshold to avoid introducing perturbation.
4. Conclusion
We propose NCPUM, a propagation update method based on neighborhood consistency that refines estimated scene flow. A confidence prediction module predicts a confidence map for the initial scene flow estimated by the backbone network to judge its accuracy; a scene flow propagation module then, within a bounded propagation radius, finds an accurately matched point for each hard-to-match point and propagates the accurate flow to it, improving estimation accuracy. NCPUM outperforms previous work on both the Flyingthings3D and KITTI datasets, and the fine-tuning experiments on real data together with the propagation radius experiments show that NCPUM performs especially well on real-scene data.
Author contributions: Zheng Han proposed the model, implemented the code, ran the experiments, and wrote the paper; Wang Ning revised and polished the paper; Ma Xinzhu performed data analysis and interpretation; Zhang Hong designed the theory and experiments; Wang Zhihui contributed to the theoretical design; Li Haojie provided guidance on paper writing.
Table 1 Comparison of Cloud Detection Methods Based on Traditional Methods

| Category | Source | Satellite/Features | Method description | Advantages | Disadvantages/Challenges |
|---|---|---|---|---|---|
| Threshold methods | Ref. [5] | Gaofen-5 | Multi-information collaborative cloud identification algorithm | Eliminates misidentification of ice/snow pixels as cloud | Only applicable to specific satellite data |
| | Ref. [6] | Carbon dioxide observation satellite (TanSat) | Different threshold tests over near-ultraviolet to near-infrared bands | Solves cloud identification effective over the detector's visible to thermal-infrared bands | Only applicable to specific satellite data |
| | Ref. [7] | Gaofen-6 | Adaptive thresholding via the maximum between-class variance method (OTSU) | Applicable to Gaofen-6 wide-camera data | Only applicable to specific satellite data |
| | Ref. [8] | GK-2A | Combines filtering techniques with dynamic thresholding | Accurately detects cloud-covered regions | Misclassification over bright land cover and desert areas |
| | Ref. [9] | PlanetScope | Adaptively combines the cloud index of a single image with reflectance-anomaly information from the time series | More accurate and effective cloud detection in tropical areas | Still misses much low-density cloud/haze/cloud shadow |
| | Ref. [10] | Multiple sensors | Random-forest-based cloud detection | Works across multiple sensors, avoiding complex threshold setting | Inaccurate when the surface is largely snow-covered and texture is weak |
| | Ref. [11] | Multiple sensors | Clustering via Mahalanobis distance | Not limited by spectral range, widening applicability | Cannot respond to real-time changes |
| Spatial-feature methods | Ref. [12] | Spectral information | Partitions the image into blocks and adaptively segments cloud regions from the differing characteristics of clouds/haze and ground objects | Needs no predetermined thresholds or prior information; efficient | Easily affected by cloud-like objects such as mountains and snow |
| | Ref. [13] | Spectral, texture, and frequency information | Trains an SVM classifier to extract cloud regions | Improves detection accuracy | Only a preliminary qualitative distinction between thick and thin clouds, without quantitative analysis |
| | Ref. [14] | Imaging and physical characteristics | Builds a classifier via multi-feature embedded learning SVM | Performs well on dense, thin, and cirrus cloud images | Land-cover types of the study area are rather homogeneous |
| | Ref. [15] | Spectral and temporal information | Builds a low-rank matrix decomposition model on multi-temporal images | Better than single-image methods while preserving cloud details and boundaries | Poor on images with large-area cloud contamination |

Table 2 Comparison of Cloud Detection Methods Based on Deep Learning
| Source | Network type | Method description | Advantages | Disadvantages/Challenges |
|---|---|---|---|---|
| Ref. [36] | Lightweight CNN | Quickly captures multi-scale feature information and efficiently segments clouds and ground objects | Highlights cloud details and improves detection accuracy | Cannot adapt to the structural information of clouds |
| Ref. [37] | Lightweight CNN | Dual-path architecture extracting spatial and semantic information simultaneously | Parameters and computation greatly reduced through feature reuse | Spectral features underexploited; spectral details easily lost |
| Ref. [38] | Lightweight CNN | Fuses multi-scale spectral and spatial features | Very little computation and no extra parameters | Cannot adapt to the structural information of clouds |
| Refs. [39-40] | Adaptive CNN | Deformable contextual feature pyramid module and deformable convolution blocks | Improves adaptive modeling of multi-scale features | Requires massive pixel-level annotations |
| Ref. [41] | CNN | CNN based on cascaded feature attention and channel attention | Extracts color and texture features of cloud regions and removes redundancy | Single sensor only; poor generalization |
| Ref. [42] | CNN | Novel cloud detection method driven by a geographic-information network | Integrates geographic information; outperforms other cloud/snow detectors | Tends to miss thin cloud regions |
| Ref. [43] | CNN | Automatic cloud detection neural network combining remote sensing imagery with geospatial data | Improves cloud detection accuracy for high-resolution imagery with coexisting cloud and snow | Tends to miss thin cloud regions |
| Ref. [44] | CNN | Global context dense block network | Addresses missed detection of thin clouds | Single sensor only; poor generalization |
| Ref. [45] | CNN and Transformer | Novel deep network fusing the spectral and spatial information of remote sensing imagery | Avoids empirical linear combinations for spectral feature extraction and reduces the loss of spatial position information | Multiple branches make the model parameter-heavy |
| Ref. [46] | CNN and Transformer | Dual-path decoder structure accurately classifying similar targets | More accurate detection of cloud/snow and thin clouds | Slow computation |

Table 3 Comparison of Cloud Removal Methods Based on a Single Image
| Category | Source | Method description | Advantages | Disadvantages/Challenges |
|---|---|---|---|---|
| Imaging model | Ref. [49] | Splits the image into blocks and removes atmospheric scattering through a new color channel | Restores sharp images with good quality | Computation becomes inaccurate when the image has no clear-sky region |
| | Refs. [50-52] | Estimates atmospheric light and the transmission map via the atmospheric scattering model | Recovers texture details and works even on non-uniform haze | Poor results when applied directly to hyperspectral dehazing |
| | Refs. [53-55] | Improved thin cloud removal based on the radiative transfer model | Removes thin clouds while preserving ground-object information | Mixing of thin cloud and ground reflectance affects the results |
| | Ref. [56] | Estimates cloud contamination from the relation between visible/infrared bands and the cirrus band | Solves automatic identification of homogeneous background in cirrus-contaminated data | Poor for complex scenes |
| | Ref. [57] | Combines the scattering law with the high correlation between two adjacent blue bands | Removes cirrus while fully recovering ground information | Cannot resolve mixing of cirrus and ground information |
| Image inpainting | Ref. [58] | Improved bilateral-filter Retinex algorithm | Enhances whole-image contrast and restores color information | Degradation causes not fully modeled; limited dehazing |
| | Refs. [59-60] | Multi-resolution wavelet decomposition of the image | Better recovery of ground objects while preserving image details | Degradation causes not fully modeled; limited dehazing |
| | Ref. [61] | Adaptive image enhancement via image filtering | Preserves structural similarity of the restored image and reduces color distortion | High time complexity and computation |
| | Refs. [62-65] | Retinex dehazing based on the dark channel prior with multi-scale correction | Small color distortion, closer to real images | Performance depends on prior and constraint information |
| | Refs. [66-67] | Neighborhood similar pixel interpolator | Predicts missing information by weighting and recombining similar pixels | No auxiliary images; only suitable for simple land-cover scenes |
| | Refs. [68-69] | Learns feature dictionaries and infers missing patches via sparse representation of neighboring information | Structure and texture consistent with the surrounding ground information | Results not applicable to cropped Landsat-7 scenes |
| Deep learning | Refs. [70-73] | End-to-end CNN-based thin cloud correction | Extracts ample useful features and handles complex scenes | Discards some useful priors; poor when paired data are inaccurate |
| | Ref. [74] | Semi-supervised method based on a GAN and a physical model of cloud distortion | Accepts unpaired cloudy and cloud-free images as input | Insufficient training data for removing thick cloud regions |
| | Refs. [75-76] | GAN-based thin cloud removal | Preserves the texture of the original image | Inaccurate cloudy/cloud-free pairing severely degrades removal |
| | Ref. [77] | Transfer learning pretrained on many simulated image pairs | Better performance with scarce training data | Accuracy drops in scenes with thick clouds |
| | Ref. [78] | Distortion coding network with compound loss functions | Good restoration of semantic coherence; cloud-free regions do not affect removal quality | Insufficient training data for removing thick cloud regions |
| | Refs. [79-80] | Thick cloud removal combining GAN and CNN | Better handles locality, needs no paired images, good on complex textures | Not transferable to deraining/dehazing; large region damage; may produce unrealistic results |

Table 4 Comparison of Cloud Removal Methods Based on Reference Images
| Category | Source | Method description | Advantages | Disadvantages/Challenges |
|---|---|---|---|---|
| Explicit construction | Ref. [103] | Applies statistical similarity, considering spectral features and seasonal effects during cloning | Maintains spectral and structural consistency | Results depend directly on the accuracy of cloud-detection preprocessing |
| | Ref. [104] | Poisson blending on multi-temporal images | Very effective at reconstructing pixels | Strongly depends on the quality of Landsat cloud masks |
| | Ref. [105] | Spatially and temporally weighted regression integrating complementary information from invariant similar pixels | Achieves the desired cloud removal | Ignores differences caused by solar illumination and atmospheric conditions |
| | Ref. [106] | Enhanced time-series model with a continuous change detection algorithm | Higher prediction accuracy and able to handle abrupt land-cover changes | No significant accuracy gain for cover types without phenological characteristics |
| | Ref. [107] | Combines an autoencoder with LSTM-based similar-pixel clustering | Forward reconstruction outperforms forward-and-backward models | An integrated network combining similar-pixel clustering and reconstruction should be designed |
| Implicit construction | Ref. [108] | Temporally smoothed spatial-spectral total-variation regularized low-rank group-sparse decomposition | Improves sparsity, ensures smoothness along different directions, and reconstructs details | May destroy the inherent high-dimensional structure; ignores inter-band correlation |
| | Ref. [109] | Box-constrained group sparsity function along the spectral dimension | Characterizes clouds more accurately than sparse functions and needs no cloud mask prior | Not suitable for fast real-time applications; parameter-sensitive |
| | Ref. [110] | Applies the expectation-maximization Tucker method to satellite remote sensing imagery | More accurate completion of missing data than plain PCA | Poor when large land-cover changes occur in the region to repair |
| | Ref. [111] | Combines deep priors with low-rank tensor completion | Maintains spatial consistency with clear texture details | Handles only a single image |
| | Ref. [112] | Data-driven combination of deep spatio-temporal priors with low-rank tensor SVD | Practical; better removal than low-rank tensor SVD alone | Not suitable for fast real-time processing |
| | Ref. [113] | Missing-observation prediction based on spectral-temporal metrics | Excels at restoring heterogeneous vegetation areas | Requires pre-interpolating missing values in the target image to compute spectral and temporal weights |
| | Ref. [114] | Thick cloud removal based on coupled tensor factorization and the Lagrange multiplier method | Resolves the problem of inaccurate cloud masks | Not suitable for fast real-time processing |

Table 5 Comparison of Cloud Removal Methods Based on Multi-Sensor Images
| Category | Source | Method description | Advantages | Disadvantages/Challenges |
|---|---|---|---|---|
| Optical imagery assisted | Ref. [120] | Uses MODIS imagery as auxiliary data and a Poisson-adjusted spatiotemporal fusion method to remove clouds from Landsat imagery | Handles time series with significant land-cover change well | Strongly dependent on cloud detection results |
| | Ref. [121] | Improved spatiotemporal data fusion model combining MODIS and Landsat time series | Good on scenes with significant land-cover change | Hard to handle scenes with multiple types of cloud cover |
| | Ref. [122] | MNSPI time-series method removing cloud cover from Sentinel-2 and Landsat-8 imagery | Generates time series with high spatiotemporal resolution | Low computational efficiency |
| | Ref. [123] | SSRF-based thick cloud removal for hyperspectral imagery using Landsat-8 | Higher-accuracy hyperspectral reconstruction | Spatial information of the imagery underexploited |
| SAR imagery assisted | Ref. [124] | Pre-convolution converts SAR and optical imagery into hyper-feature maps fed into a G-FAN network | Achieves cloud removal, image deblurring, and image denoising simultaneously | Model lacks interpretability |
| | Refs. [125-127] | GAN backbone with SAR as auxiliary data for optical cloud removal | Fully learns the mapping between SAR and optical imagery | Limited available datasets; registration must be solved |
| | Ref. [128] | Combines CNN and GAN to reconstruct cloud-covered regions | Can exploit the spectral information of cloud-contaminated regions | Different models must be pretrained at different sites |
| | Ref. [129] | Deep-learning-based heterogeneous spatio-temporal-spectral fusion | End-to-end cycle-consistent GAN learns the mapping between generated results and real imagery | Strongly affected by SAR speckle noise |
| | Ref. [130] | Cloud removal algorithm based on global-local fusion | Enhances the utilization of SAR data | Inaccurate edge extraction from SAR and optical imagery |
| | Ref. [131] | GAN architecture combined with an autoencoder | Optical features and edge maps guide the inpainting model to generate cloud-free imagery | Inaccurate edge extraction from SAR and optical imagery |
| | Ref. [132] | Trains a CNN to relate multi-temporal SAR and optical imagery | Cloud removal reflecting ground changes without massive training datasets | Low computational efficiency |
| | Ref. [133] | cGAN and ConvLSTM extract spatiotemporal features from SAR data and optical time series | Validated on large datasets; robust | Features of SAR time series need fuller exploitation |
| | Ref. [134] | Multimodal multi-temporal 3D convolutional neural network | Fully preserves the temporal information of multi-temporal imagery; released a large-scale dataset | Ignores texture and structure differences of cloud-free regions caused by long-term change |

Table 6 Public Datasets for Cloud Detection and Cloud Removal
| Dataset | Scale | Pixel size | Spatial resolution/m | Bands | Cloud thickness |
|---|---|---|---|---|---|
| 38-Cloud [136] | 38 scenes, 2000+ images | 384×384 | 30 | 4 | thin, thick |
| GF1-WHU [137] | 19 scenes, 950 images | 600×600 | 16 | 4 | thin, thick |
| Sentinel-2 Cloud [138] | 513 images | 1022×1022 | 20 | 13 | thin, thick |
| RICE [139] | RICE1: 500 pairs; RICE2: 450 pairs | 512×512 | 30 | 11 | thin |
| NWPU-RESISC45 [140] | 45 scenes, 700 images each | 256×256 | 0.2~30 | 3 | thin, thick |
| SPARCS [141] | 80 scenes, 720 images in total | 1000×1000 | 30 | 10 | thin, thick |

Table 7 Quantitative Comparison of Four Cloud Detection Methods
| Method | A | P | R | F1 | V |
|---|---|---|---|---|---|
| Fmask | 0.904 | 0.935 | 0.793 | 0.842 | 0.226 |
| CNN | 0.903 | 0.913 | 0.801 | 0.837 | 0.225 |
| U-Net | 0.907 | 0.957 | 0.777 | 0.851 | 0.232 |
| Cloud-Net | 0.934 | 0.839 | 0.945 | 0.875 | 0.222 |

Note: values are averages of the results; bold numbers indicate the best results.

Table 8 Quantitative Comparison of Four Cloud Removal Methods
| Method | PSNR/dB | SSIM | RMSE | V |
|---|---|---|---|---|
| HazeRemoval | 21.504 | 0.899 | 0.110 | 0.011 |
| CycleGAN | 26.993 | 0.935 | 0.052 | 0.011 |
| SpA-GAN | 27.885 | 0.937 | 0.043 | 0.115 |
| pix2pix | 31.883 | 0.950 | 0.032 | 0.007 |

Note: bold numbers indicate the best results.

Table 9 Quantitative Comparison of Four Thick Cloud Removal Methods
| Method | PSNR/dB | SSIM | RMSE | V |
|---|---|---|---|---|
| HALRTC | 19.710 | 0.980 | 0.070 | 0.009 |
| STDC | 7.529 | 0.962 | 0.285 | 0.014 |
| NLLRTC | 15.245 | 0.975 | 0.117 | 0.010 |
| TRLRF | 21.429 | 0.986 | 0.058 | 0.010 |

Note: bold numbers indicate the best results.
[1] 陈善静,向朝参,康青,等. 基于多源遥感时空谱特征融合的滑坡灾害检测方法[J]. 计算机研究与发展,2020,57(9):1877−1887 Chen Shanjing, Xiang Chaocan, Kang Qing, et al. Multi-source remote sensing based accurate landslide detection leveraging spatial-temporal-spectral feature fusion[J]. Journal of Computer Research and Development, 2020, 57(9): 1877−1887 (in Chinese)
[2] Liu Huizeng, Zhou Qiming, Li Qingquan, et al. Determining switching threshold for NIR-SWIR combined atmospheric correction algorithm of ocean color remote sensing[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 153: 59−73 doi: 10.1016/j.isprsjprs.2019.04.013
[3] Wang Lin, Bi Jianzhao, Meng Xia, et al. Satellite-based assessment of the long-term efficacy of PM2.5 pollution control policies across the Taiwan Strait[J]. Remote Sensing of Environment, 2020, 251: 112067
[4] Liu Qi, Gao Xinbo, He Lihuo, et al. Haze removal for a single visible remote sensing image[J]. Signal Processing, 2017, 137: 33−43 doi: 10.1016/j.sigpro.2017.01.036
[5] Li Jinghan, Ma Jinji, Li Chao, et al. Multi-information collaborative cloud identification algorithm in Gaofen-5 directional polarimetric camera imagery[J]. Journal of Quantitative Spectroscopy and Radiative Transfer, 2021, 261: 107439
[6] Ding Ning, Shao Jianbing, Yan Changxiang, et al. Near-ultraviolet to near-infrared band thresholds cloud detection algorithm for TANSAT-CAPI[J]. Remote Sensing, 2021, 13(10): 1906 doi: 10.3390/rs13101906
[7] Ke Shiyun, Wang Mi, Cao Jinshan, et al. Research on cloud detection method of GaoFen-6 wide camera data[C] //Proc of the 7th China High Resolution Earth Observation Conf (CHREOC). Berlin: Springer, 2022: 321−340
[8] Lee S, Choi J. Daytime cloud detection algorithm based on a multitemporal dataset for GK-2A imagery[J]. Remote Sensing, 2021, 13(16): 3215 doi: 10.3390/rs13163215
[9] Wang Jing, Yang Dedi, Chen Shuli, et al. Automatic cloud and cloud shadow detection in tropical areas for PlanetScope satellite images[J]. Remote Sensing of Environment, 2021, 264: 112604 doi: 10.1016/j.rse.2021.112604
[10] Yao Xudong, Guo Qing, Li An, et al. Optical remote sensing cloud detection based on random forest only using the visible light and near-infrared image bands[J]. European Journal of Remote Sensing, 2022, 55(1): 150−167 doi: 10.1080/22797254.2021.2025433
[11] 郭玲,韩迎春,蔡浩宇,等. 基于马氏距离和 SLIC 算法的云检测模型[J]. 计算机科学与应用,2022,12(1):17−25 doi: 10.12677/CSA.2022.121003 Guo Ling, Han Yingchun, Cai Haoyu, et al. Cloud detection model based on Mahalanobis distance and SLIC algorithm[J]. Computer Science and Application, 2022, 12(1): 17−25 (in Chinese) doi: 10.12677/CSA.2022.121003
[12] 李俊杰,傅俏燕. “高分七号”卫星遥感影像自动云检测[J]. 航天返回与遥感,2020,41(2):108−115 Li Junjie, Fu Qiaoyan. Automatic cloud detection of GF-7 satellite imagery[J]. Spacecraft Recovery & Remote Sensing, 2020, 41(2): 108−115 (in Chinese)
[13] 张波,胡亚东,洪津. 基于多特征融合的层次支持向量机遥感图像云检测[J]. 大气与环境光学学报,2021,16(1):58−66 Zhang Bo, Hu Yadong, Hong Jin. Cloud detection of remote sensing images based on H-SVM with multi-feature fusion[J]. Journal of Atmospheric and Environmental Optics, 2021, 16(1): 58−66 (in Chinese)
[14] Zhang Weidong, Jin Songlin, Zhou Ling, et al. Multi-feature embedded learning SVM for cloud detection in remote sensing images[J]. Computers and Electrical Engineering, 2022, 102: 108177 doi: 10.1016/j.compeleceng.2022.108177
[15] Zhang Hongyan, Huang Qi, Zhai Han, et al. Multi-temporal cloud detection based on robust PCA for optical remote sensing imagery[J]. Computers and Electronics in Agriculture, 2021, 188: 106342 doi: 10.1016/j.compag.2021.106342
[16] Zhu Zhe, Woodcock C E. Object-based cloud and cloud shadow detection in Landsat imagery[J]. Remote Sensing of Environment, 2012, 118: 83−94 doi: 10.1016/j.rse.2011.10.028
[17] Shi Qiu, He Binbin, Zhu Zhe, et al. Improving Fmask cloud and cloud shadow detection in mountainous area for Landsats 4–8 images[J]. Remote Sensing of Environment, 2017, 199: 107−119 doi: 10.1016/j.rse.2017.07.002
[18] Zhu Zhe, Woodcock C E. Automated cloud, cloud shadow, and snow detection in multitemporal Landsat data: An algorithm designed specifically for monitoring land cover change[J]. Remote Sensing of Environment, 2014, 152: 217−234 doi: 10.1016/j.rse.2014.06.012
[19] 刘心燕, 孙林, 杨以坤, 等. 高分四号卫星数据云和云阴影检测算法[J]. 光学学报, 2019, 39(1): 0128001 Liu Xinyan, Sun Lin, Yang Yikun, et al. Cloud and cloud shadow detection algorithm for Gaofen-4 satellite data[J]. Acta Optica Sinica, 2019, 39(1): 0128001 (in Chinese)
[20] 陈曦东,张肖,刘良云,等. 增强型多时相云检测[J]. 遥感学报,2019,23(2):280−290 Chen Xidong, Zhang Xiao, Liu Liangyun, et al. Enhanced multi-temporal cloud detection algorithm for optical remote sensing images[J]. Journal of Remote Sensing, 2019, 23(2): 280−290 (in Chinese)
[21] 谭凯,张永军,童心,等. 国产高分辨率遥感卫星影像自动云检测[J]. 测绘学报,2016,45(5):581−591 Tan Kai, Zhang Yongjun, Tong Xin, et al. Automatic cloud detection for Chinese high resolution remote sensing satellite imagery[J]. Acta Geodaetica et Cartographica Sinica, 2016, 45(5): 581−591 (in Chinese)
[22] Hu Xiangyun, Wang Yan, Shan Jie. Automatic recognition of cloud images by using visual saliency features[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(8): 1760−1764 doi: 10.1109/LGRS.2015.2424531
[23] Li Pengfei, Dong Linmin, Xiao Huachao, et al. A cloud image detection method based on SVM vector machine[J]. Neurocomputing, 2015, 169: 34−42 doi: 10.1016/j.neucom.2014.09.102
[24] 徐冬宇,历小梅,赵辽英,等. 基于光谱分析和动态分形维数的高光谱遥感图像云检测[J]. 激光与光电子学进展,2019,56(10):101003 Xu Dongyu, Li Xiaomei, Zhao Liaoying, et al. Hyperspectral remote sensing image cloud detection based on spectral analysis and dynamic fractal dimension[J]. Laser & Optoelectronics Progress, 2019, 56(10): 101003 (in Chinese)
[25] 冯书谊,张宁,沈霁,等. 基于反射率特性的高光谱遥感图像云检测方法研究[J]. 中国光学,2015,8(2):198−204 Feng Shuyi, Zhang Ning, Shen Ji, et al. Method of cloud detection with hyperspectral remote sensing image based on the reflective characteristics[J]. Chinese Optics, 2015, 8(2): 198−204 (in Chinese)
[26] Xia Meng, Wang Zhijie, Han Fang, et al. Enhanced multi-dimensional and multi-grained cascade forest for cloud/snow recognition using multispectral satellite remote sensing imagery[J]. IEEE Access, 2021, 9: 131072−131086 doi: 10.1109/ACCESS.2021.3114185
[27] Goodwin N R, Collett L J, Denham R J, et al. Cloud and cloud shadow screening across Queensland, Australia: An automated method for Landsat TM/ETM+ time series[J]. Remote Sensing of Environment, 2013, 134: 50−65 doi: 10.1016/j.rse.2013.02.019
[28] Mateo-García G, Gómez-Chova L, Amorós-López J, et al. Multitemporal cloud masking in the Google earth engine[J]. Remote Sensing, 2018, 10(7): 1079 doi: 10.3390/rs10071079
[29] Chai Dengfeng, Newsam S, Zhang Hankui, et al. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks[J]. Remote Sensing of Environment, 2019, 225: 307−316 doi: 10.1016/j.rse.2019.03.007
[30] Zhan Yongjie, Wang Jian, Shi Jianping, et al. Distinguishing cloud and snow in satellite images via deep convolutional network[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(10): 1785−1789 doi: 10.1109/LGRS.2017.2735801
[31] Li Zhiwei, Shen Huanfeng, Cheng Qing, et al. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 150: 197−212 doi: 10.1016/j.isprsjprs.2019.02.017
[32] Li Yansheng, Chen Wei, Zhang Yongjun, et al. Accurate cloud detection in high-resolution remote sensing imagery by weakly supervised deep learning[J]. Remote Sensing of Environment, 2020, 250: 112045 doi: 10.1016/j.rse.2020.112045
[33] Xie Fengying, Shi Mengyun, Shi Zhenwei, et al. Multilevel cloud detection in remote sensing images based on deep learning[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017, 10(8): 3631−3640 doi: 10.1109/JSTARS.2017.2686488
[34] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C] //Proc of the 18th Medical Image Computing and Computer-Assisted Intervention (MICCAI). Berlin: Springer, 2015: 234−241
[35] Mohajerani S, Saeedi P. Cloud-Net: An end-to-end cloud detection algorithm for Landsat 8 imagery[C] //Proc of the 39th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2019: 1029−1032
[36] Hu Kai, Zhang Dongsheng, Xia Min, et al. LCDNet: Light-weighted cloud detection network for high-resolution remote sensing images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 4809−4823 doi: 10.1109/JSTARS.2022.3181303
[37] Luo Chen, Feng Shanshan, Li Xutao, et al. ECDNet: A bilateral lightweight cloud detection network for remote sensing images[J]. Pattern Recognition, 2022, 129: 108713 doi: 10.1016/j.patcog.2022.108713
[38] Li Jun, Wu Zhaocong, Hu Zhongwen, et al. A lightweight deep learning-based cloud detection method for Sentinel-2A imagery fusing multiscale spectral and spatial features[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 3069641
[39] He Qibin, Sun Xiao, Yan Zhiyuan, et al. DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 5601216
[40] Liu Yang, Wang Wen, Li Qingyong, et al. DCNet: A deformable convolutional cloud detection network for remote sensing imagery[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 19: 8013305
[41] Zhang Jing, Wu Jun, Wang Hui, et al. Cloud detection method using CNN based on cascaded feature attention and channel attention[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 4104717
[42] Wu Xi, Shi Zhenwei, Zou Zhengxia. A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 174: 87−104 doi: 10.1016/j.isprsjprs.2021.01.023
[43] Chen Yang, Weng Qihao, Tang Luliang, et al. An automatic cloud detection neural network for high-resolution remote sensing imagery with cloud–snow coexistence[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 19: 1−5
[44] Li Xian, Yang Xiaofei, Li Xutao, et al. GCDB-UNet: A novel robust cloud detection approach for remote sensing images[J]. Knowledge-Based Systems, 2022, 238: 107890 doi: 10.1016/j.knosys.2021.107890
[45] 陈思亚,计璐艳,张鹏,等. 融合遥感图像光谱和空间信息的云检测深度网络[J]. 中国科学院大学学报,2021,40(3):371−379 Chen Siya, Ji Luyan, Zhang Peng, et al. Spectral-spatial feature fusion deep network for cloud detection in remote sensing images[J]. Journal of University of Chinese Academy of Sciences, 2021, 40(3): 371−379 (in Chinese)
[46] Zhang Zheng, Xu Zhiwei, Liu Chang’an, et al. Cloudformer: Supplementary aggregation feature and mask-classification network for cloud detection[J]. Applied Sciences, 2022, 12(7): 3221 doi: 10.3390/app12073221
[47] 赵敏钧,赵亚伟,赵雅捷,等. 一种新的基于深度学习的重叠关系联合抽取模型(英文)[J]. 中国科学院大学学报,2022,39(2):240−251 Zhao Minjun, Zhao Yawei, Zhao Yajie, et al. A new joint model for extracting overlapping relations based on deep learning[J]. Journal of University of Chinese Academy of Sciences, 2022, 39(2): 240−251
[48] Sha Youyang, Zhang Yonghong, Ji Xuquan, et al. Transformer-Unet: Raw image processing with Unet[J]. arXiv preprint, arXiv: 2109.08417, 2021
[49] Sahu G, Seal A, Krejcar O, et al. Single image dehazing using a new color channel[J]. Journal of Visual Communication and Image Representation, 2021, 74: 103008 doi: 10.1016/j.jvcir.2020.103008
[50] Song Chengfang, Xiao Chunxia, Zhang Yeting, et al. Thin cloud removal for single RGB aerial image[J]. Computer Graphics Forum, 2021, 40(1): 398−409 doi: 10.1111/cgf.14196
[51] Ganguly B, Bhattacharya A, Srivastava A, et al. Single image haze removal with haze map optimization for various haze concentrations[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(1): 286−301
[52] Shi Zhenghao, Shao Shuai, Zhou Zhaorun. A saliency guided remote sensing image dehazing network model[J]. IET Image Processing, 2022, 16(9): 2483−2494 doi: 10.1049/ipr2.12502
[53] Lv Haitao, Wang Yong, Shen Yang. An empirical and radiative transfer model based algorithm to remove thin clouds in visible bands[J]. Remote Sensing of Environment, 2016, 179: 183−195 doi: 10.1016/j.rse.2016.03.034
[54] Zhou Binxing, Wang Yong. A thin-cloud removal approach combining the cirrus band and RTM-based algorithm for Landsat-8 OLI data[C] //Proc of the 39th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2019: 1434−1437
[55] Shan Shuai, Wang Yong. An algorithm to remove thin clouds but to preserve ground features in visible bands[C] //Proc of the 40th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2020: 5321−5324
[56] Xu Meng, Jia Xiuping, Pickering M. Automatic cloud removal for Landsat 8 OLI images using cirrus band[C] //Proc of the 34th IEEE Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2014: 2511−2514
[57] Zhang Chi, Li Huifang, Shen Huanfeng. A scattering law based cirrus correction method for Landsat 8 OLI visible and near-infrared images[J]. Remote Sensing of Environment, 2021, 253: 112202 doi: 10.1016/j.rse.2020.112202
[58] 张驰,谭南林,李响,等. 基于改进型Retinex算法的雾天图像增强技术[J]. 北京航空航天大学学报,2019,45(2):309−316 Zhang Chi, Tan Nanlin, Li Xiang, et al. Foggy image enhancement technology based on improved Retinex algorithm[J]. Journal of Beijing University of Aeronautics and Astronautics, 2019, 45(2): 309−316 (in Chinese)
[59] 胡根生,周文利,梁栋,等. 融合引导滤波和迁移学习的薄云图像中地物信息恢复算法[J]. 测绘学报,2018,47(3):348−358 Hu Gensheng, Zhou Wenli, Liang Dong, et al. Information recovery algorithm for ground objects in thin cloud images by fusing guide filter and transfer learning[J]. Acta Geodaetica et Cartographica Sinica, 2018, 47(3): 348−358 (in Chinese)
[60] Hsu W Y, Chen Y. Single image dehazing using wavelet-based haze-lines and denoising[J]. IEEE Access, 2021, 9: 104547−104559 doi: 10.1109/ACCESS.2021.3099224
[61] Zhou Guangbin, He Lifeng, Qi Yong, et al. An improved algorithm using weighted guided coefficient and union self-adaptive image enhancement for single image haze removal[J]. IET Image Processing, 2021, 15(11): 2680−2692 doi: 10.1049/ipr2.12255
[62] Shi Shaoqi, Zhang Ye, Zhou Xinyu, et al. Cloud removal for single visible image based on modified dark channel prior with multiple scale[C] //Proc of the 41st IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2021: 4127−4130
[63] Liu Xinggang, Liu Changjiang, Lan Hengyou, et al. Dehaze enhancement algorithm based on retinex theory for aerial images combined with dark channel[J]. Open Access Library Journal, 2020, 7(4): 1106280
[64] Tang Qunfang, Yang Jie, He Xiangjian, et al. Nighttime image dehazing based on retinex and dark channel prior using Taylor series expansion[J]. Computer Vision and Image Understanding, 2021, 202: 103086 doi: 10.1016/j.cviu.2020.103086
[65] Xia Fei, Song Hu, Dou Hao. Fog removal and enhancement method for UAV aerial images based on dark channel prior[J]. Journal of Control and Decision, 2022, 10(2): 188−197
[66] Chen Jin, Zhu Xiaolin, Vogelmann J E, et al. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images[J]. Remote Sensing of Environment, 2011, 115(4): 1053−1064 doi: 10.1016/j.rse.2010.12.010
[67] Zhu Xiaolin, Gao Feng, Liu Desheng, et al. A modified neighborhood similar pixel interpolator approach for removing thick clouds in Landsat images[J]. IEEE Geoscience and Remote Sensing Letters, 2011, 9(3): 521−525
[68] Meng Fan, Yang Xiaomei, Zhou Chenghu, et al. A sparse dictionary learning-based adaptive patch inpainting method for thick clouds removal from high-spatial resolution remote sensing imagery[J]. Sensors, 2017, 17(9): 2130 doi: 10.3390/s17092130
[69] Meng Fan, Yang Xiaomei, Zhou Chenghu, et al. Multiscale adaptive reconstruction of missing information for remotely sensed data using sparse representation[J]. Remote Sensing Letters, 2018, 9(5): 457−466 doi: 10.1080/2150704X.2018.1439198
[70] Li Wenbo, Li Ying, Chen Di, et al. Thin cloud removal with residual symmetrical concatenation network[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 153: 137−150 doi: 10.1016/j.isprsjprs.2019.05.003
[71] Zi Yue, Xie Fengying, Zhang Ning, et al. Thin cloud removal for multispectral remote sensing images using convolutional neural networks combined with an imaging model[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 3811−3823 doi: 10.1109/JSTARS.2021.3068166
[72] Ma Xiaofeng, Wang Qunming, Tong Xiaohua. A spectral grouping-based deep learning model for haze removal of hyperspectral images[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 188: 177−189 doi: 10.1016/j.isprsjprs.2022.04.007
[73] Wen Xue, Pan Zongxu, Hu Yuxin, et al. An effective network integrating residual learning and channel attention mechanism for thin cloud removal[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 6507605
[74] Li Jun, Wu Zhaocong, Hu Zhongwen, et al. Thin cloud removal in optical remote sensing images based on generative adversarial networks and physical model of cloud distortion[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 166: 373−389 doi: 10.1016/j.isprsjprs.2020.06.021
[75] Zhao Yunpu, Shen Shikun, Hu Jiarui, et al. Cloud removal using multimodal GAN with adversarial consistency loss[J]. IEEE Geoscience and Remote Sensing Letters, 2021, 19: 8015605
[76] Pan Heng. Cloud removal for remote sensing imagery via spatial attention generative adversarial network[J]. arXiv preprint, arXiv: 2009.13015, 2020
[77] Wen Xue, Pan Zongxu, Hu Yuxin, et al. Generative adversarial learning in YUV color space for thin cloud removal on satellite imagery[J]. Remote Sensing, 2021, 13(6): 1079 doi: 10.3390/rs13061079
[78] Zhou Jianjun, Luo Xiaobo, Rong Wentao, et al. Cloud removal for optical remote sensing imagery using distortion coding network combined with compound loss functions[J]. Remote Sensing, 2022, 14(14): 3452 doi: 10.3390/rs14143452
[79] Ran Xinyu, Ge Liang, Zhang Xiaofeng. RGAN: Rethinking generative adversarial networks for cloud removal[J]. International Journal of Intelligent Systems, 2021, 36(11): 6731−6747 doi: 10.1002/int.22566
[80] Tao Chao, Fu Siyang, Qi Ji, et al. Thick cloud removal in optical remote sensing images using a texture complexity guided self-paced learning method[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5619612
[81] Chen Shuli, Chen Xuehong, Chen Jin, et al. An iterative haze optimized transformation for automatic cloud/haze detection of Landsat imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 54(5): 2682−2694
[82] He Kaiming, Sun Jian, Tang Xiaoou. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 33(12): 2341−2353
[83] Xu Meng, Pickering M, Plaza A J, et al. Thin cloud removal based on signal transmission principles and spectral mixture analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2015, 54(3): 1659−1669
[84] Land E H. The retinex theory of color vision[J]. Scientific American, 1977, 237(6): 108−129 doi: 10.1038/scientificamerican1277-108
[85] 杨晓倩,贾振红,杨杰,等. 基于小波变换和 Retinex 结合的遥感图像的薄云去除[J]. 激光杂志,2019,40(10):77−80 Yang Xiaoqian, Jia Zhenhong, Yang Jie, et al. Thin cloud removal of remote sensing images based onwavelet transform and Retinex[J]. Laser Journal, 2019, 40(10): 77−80 (in Chinese)
[86] Shen Huanfeng, Li Huifang, Qian Yan, et al. An effective thin cloud removal procedure for visible remote sensing images[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2014, 96: 224−235 doi: 10.1016/j.isprsjprs.2014.06.011
[87] Xu Meng, Jia Xiuping, Pickering M, et al. Thin cloud removal from optical remote sensing images using the noise-adjusted principal components transform[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 149: 215−225 doi: 10.1016/j.isprsjprs.2019.01.025
[88] Pan Xiaoxi, Xie Fengying, Jiang Zhiguo, et al. Haze removal for a single remote sensing image based on deformed haze imaging model[J]. IEEE Signal Processing Letters, 2015, 22(10): 1806−1810 doi: 10.1109/LSP.2015.2432466
[89] Telea A. An image inpainting technique based on the fast marching method[J]. Journal of Graphics Tools, 2004, 9(1): 23−34 doi: 10.1080/10867651.2004.10487596
[90] Li Xinghua, Shen Huanfeng, Zhang Liangpei, et al. Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2015, 106: 1−15 doi: 10.1016/j.isprsjprs.2015.03.009
[91] Liu Na, Li Wei, Tao Ran, et al. Multigraph-based low-rank tensor approximation for hyperspectral image restoration[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5530314
[92] Wang Lanxing, Wang Qunming. Fast spatial-spectral random forests for thick cloud removal of hyperspectral images[J]. International Journal of Applied Earth Observation and Geoinformation, 2022, 112: 102916 doi: 10.1016/j.jag.2022.102916
[93] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139−144 doi: 10.1145/3422622
[94] Chen Hui, Chen Rong, Li Nannan. Attentive generative adversarial network for removing thin cloud from a single remote sensing image[J]. IET Image Processing, 2021, 15(4): 856−867 doi: 10.1049/ipr2.12067
[95] Xu Meng, Deng Furong, Jia Sen, et al. Attention mechanism-based generative adversarial networks for cloud removal in Landsat images[J]. Remote Sensing of Environment, 2022, 271: 112902 doi: 10.1016/j.rse.2022.112902
[96] Singh P, Komodakis N. Cloud-GAN: Cloud removal for Sentinel-2 imagery using a cyclic consistent generative adversarial networks[C] //Proc of the 38th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2018: 1772−1775
[97] Toizumi T, Zini S, Sagi K, et al. Artifact-free thin cloud removal using GANs[C] //Proc of the 26th IEEE Int Conf on Image Processing (ICIP). Piscataway, NJ: IEEE, 2019: 3596−3600
[98] Enomoto K, Sakurada K, Wang Weimin, et al. Filmy cloud removal on satellite imagery with multispectral conditional generative adversarial nets[C] //Proc of the IEEE Conf on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ: IEEE, 2017: 48−56
[99] Liu Yang, Pan Jinshan, Ren J, et al. Learning deep priors for image dehazing[C] //Proc of the IEEE/CVF Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2019: 2492−2500
[100] Isola P, Zhu Junyan, Zhou Tinghui, et al. Image-to-image translation with conditional adversarial networks[C] //Proc of the IEEE Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2017: 1125−1134
[101] Zhu Junyan, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C] //Proc of the IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2017: 2223−2232
[102] Sun Linjian, Zhang Ye, Chang Xuling, et al. Cloud-aware generative network: Removing cloud from optical remote sensing images[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 17(4): 691−695
[103] Kalkan K, Maktav M D. A cloud removal algorithm to generate cloud and cloud shadow free images using information cloning[J]. Journal of the Indian Society of Remote Sensing, 2018, 46: 1255−1264 doi: 10.1007/s12524-018-0806-y
[104] Hu Changmiao, Huo Lianzhi, Zhang Zheng, et al. Multi-temporal Landsat data automatic cloud removal using Poisson blending[J]. IEEE Access, 2020, 8: 46151−46161 doi: 10.1109/ACCESS.2020.2979291
[105] Chen Bin, Huang Bo, Chen Lifan, et al. Spatially and temporally weighted regression: A novel method to produce continuous cloud-free Landsat imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 55(1): 27−37
[106] Xie Shuai, Liu Liangyun, Yang Jiangning. Enhanced Landsat surface reflectance prediction considering land cover change by using an ensemble of spectro-temporal and spectro-spatial predictions[J]. Advances in Space Research, 2022, 69(7): 2697−2710 doi: 10.1016/j.asr.2022.01.009
[107] Zhou Yanan, Wang Shunying, Wu Tianjun, et al. For-backward LSTM-based missing data reconstruction for time-series Landsat images[J]. GIScience & Remote Sensing, 2022, 59(1): 410−430
[108] Duan Chenxi, Pan Jun, Li Rui. Thick cloud removal of remote sensing images using temporal smoothness and sparsity regularized tensor optimization[J]. Remote Sensing, 2020, 12(20): 3446 doi: 10.3390/rs12203446
[109] Ji Tengyu, Chu Delin, Zhao Xile, et al. A unified framework of cloud detection and removal based on low-rank and group sparse regularizations for multitemporal multispectral images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5303015
[110] Þórðarson A F, Baum A, García M, et al. Gap-filling of NDVI satellite data using Tucker decomposition: Exploiting spatio-temporal patterns[J]. Remote Sensing, 2021, 13(19): 4007 doi: 10.3390/rs13194007
[111] Zhang Qiang, Sun Fujun, Yuan Qiangqiang, et al. Thick cloud removal for Sentinel-2 time-series images via combining deep prior and low-rank tensor completion[C] //Proc of the 41st IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2021: 2675−2678
[112] Zhang Qiang, Yuan Qiangqiang, Li Zhiwei, et al. Combined deep prior with low-rank tensor SVD for thick cloud removal in multitemporal images[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 177: 161−173 doi: 10.1016/j.isprsjprs.2021.04.021
[113] Tang Zhipeng, Amatulli G, Pellikka P K E, et al. Spectral temporal information for missing data reconstruction (STIMDR) of Landsat reflectance time series[J]. Remote Sensing, 2021, 14(1): 172 doi: 10.3390/rs14010172
[114] Lin Jie, Huang Tingzhu, Zhao Xile, et al. Robust thick cloud removal for multitemporal remote sensing images using coupled tensor factorization[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5406916
[115] Lin Chaohuang, Tsai P H, Lai Kanghua, et al. Cloud removal from multitemporal satellite images using information cloning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 51(1): 232−241
[116] Lin Chaohung, Lai Kanghua, Chen Zhibin, et al. Patch-based information reconstruction of cloud-contaminated multitemporal images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2013, 52(1): 163−174
[117] Surya S R, Simon P. Automatic cloud removal from multitemporal satellite images[J]. Journal of the Indian Society of Remote Sensing, 2015, 43: 57−68 doi: 10.1007/s12524-014-0396-2
[118] Xu Meng, Jia Xiuping, Pickering M, et al. Cloud removal based on sparse representation via multitemporal dictionary learning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(5): 2998−3006 doi: 10.1109/TGRS.2015.2509860
[119] Li Zhiwei, Shen Huanfeng, Weng Qihao, et al. Cloud and cloud shadow detection for optical satellite imagery: Features, algorithms, validation, and prospects[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 188: 89−108 doi: 10.1016/j.isprsjprs.2022.03.020
[120] Zhang Chengyue, Li Zhiwei, Cheng Qing, et al. Cloud removal by fusing multi-source and multi-temporal images[C] //Proc of the 37th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway: IEEE, 2017: 2577−2580
[121] Shen Huanfeng, Wu Jingan, Cheng Qing, et al. A spatiotemporal fusion based cloud removal method for remote sensing images with land cover changes[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(3): 862−874 doi: 10.1109/JSTARS.2019.2898348
[122] Zhao Rongkun, Li Yuechen, Chen Jin, et al. Mapping a paddy rice area in a cloudy and rainy region using spatiotemporal data fusion and a phenology-based algorithm[J]. Remote Sensing, 2021, 13(21): 4400 doi: 10.3390/rs13214400
[123] 王蓝星,王群明,童小华. 融合多光谱影像的高光谱影像厚云去除方法[J]. 测绘学报,2022,51(4):612−621 Wang Lanxing, Wang Qunming, Tong Xiaohua. Thick cloud removal of hyperspectral images by fusing with multispectral images[J]. Acta Geodaetica et Cartographica Sinica, 2022, 51(4): 612−621 (in Chinese)
[124] Chen Shanjing, Zhang Wenjuan, Li Zhen, et al. Cloud removal with SAR-optical data fusion and graph-based feature aggregation network[J]. Remote Sensing, 2022, 14(14): 3374 doi: 10.3390/rs14143374
[125] Grohnfeldt C, Schmitt M, Zhu Xiaoxiang. A conditional generative adversarial network to fuse SAR and multispectral optical data for cloud removal from Sentinel-2 images[C] //Proc of the 38th IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2018: 1726−1729
[126] Bermudez J D, Happ P N, Oliveira D A B, et al. Sar to optical image synthesis for cloud removal with generative adversarial networks[J]. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences, 2018, 4(1): 5−11
[127] Darbaghshahi F N, Mohammadi M R, Soryani M. Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60: 4105309
[128] Gao Jianhao, Yuan Qiangqiang, Li Jie, et al. Cloud removal with fusion of high resolution optical and SAR images using generative adversarial networks[J]. Remote Sensing, 2020, 12(1): 191 doi: 10.3390/rs12010191
[129] Jiang Menghui, Li Jie, Shen Huanfeng. A deep learning-based heterogeneous spatio-temporal-spectral fusion: SAR and optical images[C] //Proc of the 41st IEEE Int Geoscience and Remote Sensing Symp (IGARSS). Piscataway, NJ: IEEE, 2021: 1252−1255
[130] Xu Fang, Shi Yilei, Ebel P, et al. Exploring the potential of SAR data for cloud removal in optical satellite imagery[J]. arXiv preprint, arXiv: 2206.02850, 2022
[131] Xiao Xiao, Lu Yilong. Cloud removal of optical remote sensing imageries using SAR data and deep learning[C/OL] //Proc of the 7th Asia-Pacific Conf on Synthetic Aperture Radar (APSAR). Piscataway, NJ: IEEE, 2021 [2023-06-30]. https://ieeexplore.ieee.org/abstract/document/9688535
[132] Gao Jianhao, Yi Yang, Wei Tang, et al. Sentinel-2 cloud removal considering ground changes by fusing multitemporal SAR and optical images[J]. Remote Sensing, 2021, 13(19): 3998 doi: 10.3390/rs13193998
[133] Sebastianelli A, Nowakowski A, Puglisi E, et al. Spatio-Temporal SAR-optical data fusion for cloud removal via a deep hierarchical model[J]. arXiv preprint, arXiv: 2106.12226, 2021
[134] Ebel P, Xu Yajin, Schmitt M, et al. SEN12MS-CR-TS: A remote-sensing data set for multimodal multitemporal cloud removal[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5222414
[135] Czerkawski M, Upadhyay P, Davison C, et al. Deep internal learning for inpainting of cloud-affected regions in satellite imagery[J]. Remote Sensing, 2022, 14(6): 1342 doi: 10.3390/rs14061342
[136] Mohajerani S, Krammer T A, Saeedi P. Cloud detection algorithm for remote sensing images using fully convolutional neural networks[J]. arXiv preprint, arXiv: 1810.05782, 2018
[137] Li Zhiwei, Shen Huanfeng, Li Huifang, et al. Multi-feature combined cloud and cloud shadow detection in GaoFen-1 wide field of view imagery[J]. Remote Sensing of Environment, 2017, 191: 342−358 doi: 10.1016/j.rse.2017.01.026
[138] Baetens L, Desjardins C, Hagolle O. Validation of copernicus Sentinel-2 Cloud masks obtained from MAJA, Sen2Cor, and Fmask processors using reference cloud masks generated with a supervised active learning procedure[J]. Remote Sensing, 2019, 11(4): 433 doi: 10.3390/rs11040433
[139] Lin Daoyu, Xu Guangluan, Wang Xiaoke, et al. A remote sensing image dataset for cloud removal[J]. arXiv preprint, arXiv: 1901.00600, 2019
[140] Gong Cheng, Han Junwei, Lu Xiaoqiang. Remote sensing image scene classification: Benchmark and state of the art[J]. Proceedings of the IEEE, 2017, 105(10): 1865−1883 doi: 10.1109/JPROC.2017.2675998
[141] Hughes M J, Kennedy R. High-quality cloud masking of Landsat 8 imagery using convolutional neural networks[J]. Remote Sensing, 2019, 11(21): 2591 doi: 10.3390/rs11212591
[142] Liu Ji, Musialski P, Wonka P, et al. Tensor completion for estimating missing values in visual data[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(1): 208−220
[143] Chen Yilei, Hsu C T, Liao Hongyuan. Simultaneous tensor decomposition and completion using factor priors[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 36(3): 577−591
[144] Ji Tengyu, Yokoya N, Zhu Xiaoxiang, et al. Nonlocal tensor completion for multitemporal remotely sensed images’ inpainting[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(6): 3047−3061 doi: 10.1109/TGRS.2018.2790262
[145] Yuan Longhao, Li Chao, Mandic D, et al. Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion[C] //Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2019: 9151−9158