Zuria Bauer

Postdoctoral Researcher at the Computer Vision and Geometry Group, ETH Zurich


UASOL

A Large-scale High-resolution Outdoor Stereo Dataset

August 29, 2019

Authors: Zuria Bauer, Francisco Gomez-Donoso, Edmanuel Cruz, Sergio Orts-Escolano and Miguel Cazorla

Journal: Scientific Data, Nature Publishing Group (2019)

Abstract

We propose a new dataset for outdoor depth estimation from single and stereo RGB images, acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep learning-based techniques, which have proven to outperform traditional state-of-the-art computer vision methods. Nonetheless, these methods require large amounts of reliable ground-truth data. Although several datasets already exist that could be used for depth estimation, almost none of them are outdoor-oriented from an egocentric point of view. Our dataset introduces a large number of high-definition pairs of color frames and corresponding depth maps captured from a human perspective. In addition, the proposed dataset features human interaction and great variability of data, as shown in this work.


General overview of the dataset.

Below, we present some samples of the data provided by UASOL: a visualization of the GPS tags, RGB-D samples and, lastly, the object-counting procedure and the semantic variability used for the technical validation of the data.


BibTex
    @article{bauer2019uasol,
      title={UASOL, a large-scale high-resolution outdoor stereo dataset},
      author={Bauer, Zuria and Gomez-Donoso, Francisco and Cruz, Edmanuel and Orts-Escolano, Sergio and Cazorla, Miguel},
      journal={Scientific Data},
      volume={6},
      number={1},
      pages={1--14},
      year={2019},
      publisher={Nature Publishing Group}
    }

Project Page


PDF | Webpage

News

The Hoi! dataset will be presented as a Highlight paper during CVPR 2026 in Denver!

Excited to share that our papers Hoi! A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation and FunFact: Building Probabilistic Functional 3D Scene Graphs via Factor-Graph Reasoning have been accepted to #CVPR2026! 🎉

I will be serving as a Poster Chair for ECCV 2026 in Malmö—looking forward to an exciting conference!

Our paper Video Perception Models for 3D Scene Synthesis has been accepted to #NeurIPS2025 🎉

Heading to #BMVC? Check out our paper MonoTracker: Monocular RGB-Only 6D Tracking of Unknown Objects!

Attending #ICCV? Come by our poster on 3D-MOOD: Lifting 2D to 3D for Monocular Open-Set Object Detection!

Going to #IROS2025 in China? Stop by our poster Lost and Found: Updating Dynamic 3D Scene Graphs from Egocentric Observations!

SpotLight: Robotic Scene Understanding through Interaction and Affordance Detection accepted to #Humanoids2025 in Korea—see you there!

Our work CroCoDL: Cross-device Collaborative Dataset for Localization will be presented at #CVPR2025 in Nashville—come say hi at our poster!