Task-Adaptive End-to-End Networks for Stereo Matching
Abstract: Estimating depth/disparity from stereo image pairs via stereo matching is a classical problem in computer vision. With the recent development of deep learning, many end-to-end deep networks have been proposed for stereo matching. These networks generally borrow convolutional neural network (CNN) structures originally designed for other tasks to extract features, and such structures are redundant for the task of stereo matching. Moreover, the 3D convolutions used for disparity computation in these networks are too costly to be extended to the large receptive fields that benefit disparity estimation. To overcome these problems, we propose an improved end-to-end deep network built on the properties of stereo matching. The proposed network contains a concise and effective feature extraction module, and introduces a separated 3D convolution that realizes large-kernel 3D convolutions to enlarge the receptive field while avoiding parameter explosion. We evaluate the network on the SceneFlow dataset in terms of both matching accuracy and computational cost. Results show that the proposed network achieves state-of-the-art accuracy. Compared with existing modules of the same type, the proposed feature extraction module reduces the number of parameters by 90% and training time by about 25% while maintaining comparable accuracy. Compared with standard 3D convolution, the proposed separated 3D convolution enlarges the kernel to cover the entire disparity dimension; combined with group normalization (GN), it reduces the end-point error (EPE) by 12% relative to the baseline.