NVS-MonoDepth

Improving Monocular Depth Prediction with Novel View Synthesis

December 04, 2021

Authors: Zuria Bauer, Zuoyue Li, Sergio Orts-Escolano, Miguel Cazorla, Marc Pollefeys and Martin R. Oswald

Conference: 2021 International Conference on 3D Vision (3DV)

Abstract

Building upon the recent progress in novel view synthesis, we propose its application to improve monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction results of a monocular depth network are warped to an additional viewpoint. Second, we apply an additional image synthesis network, which corrects and improves the quality of the warped RGB image. The output of this network is required to look as similar as possible to the ground-truth view by minimizing the pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation network to the synthesized second viewpoint and ensure that the depth predictions are consistent with the associated ground-truth depth. Experimental results show that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
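
To make the warping in step one concrete, here is a minimal sketch of the underlying pixel reprojection, assuming known camera intrinsics K and a known relative pose (R, t) between the two views; the function name and tensor layout are ours for illustration, not taken from the paper's code.

    import torch

    def reproject_pixels(depth, K, K_inv, R, t):
        # depth: (H, W) predicted depth for the source view
        # K, K_inv: (3, 3) camera intrinsics and their inverse
        # R, t: relative rotation (3, 3) and translation (3,) from source to target view
        H, W = depth.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()  # homogeneous pixel coords (H, W, 3)
        rays = pix @ K_inv.T                    # back-project pixels to camera rays
        pts = rays * depth.unsqueeze(-1)        # lift to 3D points in the source camera frame
        pts = pts @ R.T + t                     # move points into the target camera frame
        proj = pts @ K.T                        # project with the intrinsics
        return proj[..., :2] / proj[..., 2:].clamp(min=1e-6)  # (H, W, 2) target pixel coords

Splatting the source RGB values to these target coordinates yields the warped image that the synthesis network then refines in step two.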

Contributions

Overall, our contributions are two-fold:
  • We propose to use novel view synthesis as an additional supervisory signal to improve the training of a monocular depth estimation network. To this end, we propose two loss functions that augment the traditional depth supervision (sketched after this list).
  • We present comprehensive experiments on both indoor and outdoor datasets that demonstrate the benefits of our approach, as well as an ablation study which empirically justifies our design choices.
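
As a rough illustration of how the two auxiliary losses can combine with standard depth supervision, here is a schematic training objective. The names depth_net, synth_net, the L1 distances, and the weights w_rgb / w_cons are our assumptions, and warp_to_view2 is a hypothetical helper built on the reprojection sketch above; the paper's exact loss terms and weights may differ.

    import torch.nn.functional as F

    def nvs_training_loss(depth_net, synth_net, batch, w_rgb=1.0, w_cons=1.0):
        d1 = depth_net(batch["img1"])                      # step 1: predict depth in view 1
        warped = warp_to_view2(batch["img1"], d1, batch)   # hypothetical helper: warp view 1 into view 2
        img2_hat = synth_net(warped)                       # step 2: refine the warped RGB image
        d2_hat = depth_net(img2_hat)                       # step 3: re-predict depth on the synthesized view

        loss_depth = F.l1_loss(d1, batch["gt_depth1"])     # standard depth supervision
        loss_rgb = F.l1_loss(img2_hat, batch["img2"])      # pixel-wise RGB reconstruction in view 2
        loss_cons = F.l1_loss(d2_hat, batch["gt_depth2"])  # depth consistency on the synthesized view
        return loss_depth + w_rgb * loss_rgb + w_cons * loss_cons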

Results

State-of-the-art comparison on the KITTI dataset. For reference, the second-to-last row shows our U-Net baseline, which is the same network trained without the proposed NVS losses. All reported numbers are taken from the corresponding original papers.

Model Backbone #Params(M)↓ REL↓ RMSE↓ RMSElog↓ Sq.Rel↓ δ1↑ δ2↑ δ3↑
Saxena - - 0.280 8.734 0.361 3.012 0.601 0.820 0.926
Eigen - - 0.190 7.156 0.270 1.515 0.692 0.899 0.967
Liu - 40 0.217 6.986 0.287 1.841 0.647 0.882 0.961
Godard ResNet-50 31 0.085 3.938 0.135 0.427 0.916 0.980 0.994
Kuznietsov ResNet-50 - 0.138 3.610 0.138 0.121 0.906 0.989 0.995
Gan ResNet-50 - 0.098 3.933 0.173 0.666 0.890 0.964 0.985
Fu ResNet-101 110 0.072 2.727 0.120 0.307 0.932 0.984 0.994
Yin ResNeXt-101 114 0.072 3.258 0.117 - 0.938 0.990 0.998
BTS ResNeXt-101 113 0.064 2.540 0.100 0.254 0.950 0.993 0.999
Song ResNet-50 - 0.059 2.446 0.091 0.212 0.962 0.994 0.999
AdaBins EfficientNet-B5 78 0.058 2.360 0.088 0.190 0.964 0.995 0.999
DepNet U-Net 54 0.057 3.023 0.104 0.441 0.936 0.975 0.991
NVS-MonoDepth (Ours) U-Net 54 0.031 2.702 0.089 0.292 0.963 0.989 0.997
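
The table columns follow the usual monocular-depth evaluation protocol, with δi denoting the fraction of pixels whose depth ratio to the ground truth stays below 1.25^i. For reference, a straightforward NumPy sketch of these standard definitions over the valid (masked) depth pixels:

    import numpy as np

    def depth_metrics(pred, gt):
        # pred, gt: 1-D arrays of valid depth values in the same units
        thresh = np.maximum(gt / pred, pred / gt)
        d1 = (thresh < 1.25).mean()                      # δ1
        d2 = (thresh < 1.25 ** 2).mean()                 # δ2
        d3 = (thresh < 1.25 ** 3).mean()                 # δ3
        rel = np.mean(np.abs(gt - pred) / gt)            # REL: absolute relative error
        sq_rel = np.mean((gt - pred) ** 2 / gt)          # Sq.Rel: squared relative error
        rmse = np.sqrt(np.mean((gt - pred) ** 2))        # RMSE
        rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))  # RMSElog
        return dict(REL=rel, SqRel=sq_rel, RMSE=rmse, RMSElog=rmse_log, d1=d1, d2=d2, d3=d3)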

State-of-the-art comparison on the NYU-Depth-v2 dataset. Note the substantial reduction in relative error achieved by our approach. All reported numbers are taken from the corresponding original papers.

Model Backbone #Params(M)↓ REL↓ RMSE↓ δ1↑ δ2↑ δ3↑
Eigen - 141 0.158 0.641 0.769 0.950 0.988
Laina ResNet-50 64 0.127 0.573 0.811 0.953 0.989
Hao ResNet-101 60 0.127 0.555 0.841 0.966 0.991
Lee - 119 0.131 0.538 0.837 0.971 0.994
Fu ResNet-101 110 0.115 0.509 0.828 0.965 0.992
SharpNet ResNet-50 80 0.139 0.502 0.836 0.966 0.993
Hu SENet-154 157 0.115 0.530 0.866 0.975 0.993
Chen SENet-154 210 0.111 0.514 0.878 0.977 0.994
Yin ResNeXt-101 110 0.108 0.416 0.875 0.976 0.994
BTS DenseNet-161 47 0.110 0.392 0.885 0.978 0.994
DAV DRN-D-22 25 0.108 0.412 0.882 0.980 0.996
AdaBins EfficientNet-B5 78 0.103 0.364 0.903 0.984 0.997
DepNet U-Net 54 0.132 0.571 0.815 0.839 0.854
NVS-MonoDepth (Ours) U-Net 54 0.058 0.331 0.989 0.995 0.997


BibTeX
    @inproceedings{bauer2021nvs,
      title        = {NVS-MonoDepth: Improving Monocular Depth Prediction with Novel View Synthesis},
      author       = {Bauer, Zuria and Li, Zuoyue and Orts-Escolano, Sergio and Cazorla, Miguel and Pollefeys, Marc and Oswald, Martin R.},
      booktitle    = {2021 International Conference on 3D Vision (3DV)},
      pages        = {848--858},
      year         = {2021},
      organization = {IEEE}
    }
PDF | Poster | Long Video | Short Video

Code