Dataset Card

Number of samples: 20668

Columns / Features:

  • order_id: Value(dtype='string', id=None)
  • image_ids: Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)
  • ade: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • depth: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • gestalt: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • K: Sequence(feature=Array2D(shape=(3, 3), dtype='float32', id=None), length=-1, id=None)
  • R: Sequence(feature=Array2D(shape=(3, 3), dtype='float32', id=None), length=-1, id=None)
  • t: Sequence(feature=Array2D(shape=(3, 1), dtype='float32', id=None), length=-1, id=None)
  • pose_only_in_colmap: Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None)
  • wf_vertices: Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=3, id=None), length=-1, id=None)
  • wf_edges: Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=2, id=None), length=-1, id=None)
  • wf_classifications: Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
  • colmap: Value(dtype='binary', id=None)

These data were gathered over several years throughout the United States from a variety of smartphone and camera platforms. Each training sample/scene consists of a set of posed image features (segmentation, depth, etc.) and a sparse point cloud as input, and a sparse wireframe (a 3D embedded graph) with semantically tagged edges as the target. To preserve privacy, the original images are not provided.
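The wireframe target can be assembled directly from the wf_* fields listed above; below is a minimal plain-NumPy sketch (the sample dict `entry` and its two-vertex contents are hypothetical, for illustration only):

```python
import numpy as np

def wireframe_as_arrays(entry):
    """Pack the wireframe target of one dataset row into NumPy arrays."""
    vertices = np.asarray(entry['wf_vertices'], dtype=np.float32)      # (V, 3) xyz
    edges = np.asarray(entry['wf_edges'], dtype=np.int64)              # (E, 2) vertex indices
    classes = np.asarray(entry['wf_classifications'], dtype=np.int64)  # (E,) semantic edge class ids
    assert edges.max() < len(vertices)
    assert len(classes) == len(edges)
    return vertices, edges, classes

# Hypothetical toy sample: two vertices connected by one tagged edge.
entry = {'wf_vertices': [[0., 0., 0.], [0., 0., 3.]],
         'wf_edges': [[0, 1]],
         'wf_classifications': [2]}
v, e, c = wireframe_as_arrays(entry)
# Indexing vertices by the edge list gives per-edge endpoint coordinates:
print(v[e].shape)  # (1, 2, 3)
```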

Note: the test distribution is not guaranteed to match the training set.

Important

This dataset requires webdataset >= 0.2.111; earlier versions will not work, unfortunately. You can check all required dependencies in requirements.txt.
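A quick sanity check is to compare dotted numeric version strings; a minimal sketch (for general version strings, prefer `packaging.version.parse`):

```python
def version_ok(installed, minimum="0.2.111"):
    """True iff a dotted numeric version string is >= minimum.

    Handles purely numeric versions like webdataset's "0.2.111";
    lexicographic string comparison would get this wrong ("0.2.99" > "0.2.111").
    """
    def as_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(version_ok("0.2.111"))  # True
print(version_ok("0.2.99"))   # False
```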

Usage example

Related package: hoho2025

pip install hoho2025

You can recreate the visualizations below with

from datasets import load_dataset
from hoho2025.vis import plot_all_modalities
from hoho2025.viz3d import *

def read_colmap_rec(colmap_data):
    """Unzip a packed COLMAP reconstruction (binary blob) and parse it with pycolmap."""
    import io
    import tempfile
    import zipfile

    import pycolmap

    with tempfile.TemporaryDirectory() as tmpdir:
        with zipfile.ZipFile(io.BytesIO(colmap_data), "r") as zf:
            zf.extractall(tmpdir)  # unpacks cameras.txt, images.txt, etc. to tmpdir
        # Now parse with pycolmap
        return pycolmap.Reconstruction(tmpdir)

ds = load_dataset("usm3d/hoho22k_2026_trainval", streaming=True, trust_remote_code=True)
# Available splits: ds['train'], ds['validation']
a = next(iter(ds['train']))  # take the first streamed sample

fig, ax = plot_all_modalities(a)

## Now in 3D

fig3d = init_figure()
plot_reconstruction(fig3d, read_colmap_rec(a['colmap']))
plot_wireframe(fig3d, a['wf_vertices'], a['wf_edges'], a['wf_classifications'])
plot_bpo_cameras_from_entry(fig3d, a)
fig3d

Sample Preview

Sample 10 (row index 10)

2D Visualization: All modalities

3D Visualization: Point cloud, cameras, and wireframe

What changed compared to hoho2025

  1. Smaller but cleaner dataset. More reconstructions have been reviewed and cleaned up; those that could not be adequately verified have been removed. This reduced the size from ~25 k to ~22 k scenes — quality over quantity.
  2. Same train/test split. The split is identical to the one used in 2025, with the only difference being that removed reconstructions are no longer present in either subset.
  3. Refined BPO camera poses. Both the extrinsic pose (position and orientation) and the intrinsics (field of view) of the human-authored BPO cameras have been refined, yielding more accurate ground-truth camera parameters. That said, some inaccuracies remain — BPO poses should not be treated as perfect ground truth.
  4. Higher-quality COLMAP reconstructions. The SfM pipeline now produces more complete and more precise point clouds. Sky and background points are intentionally not filtered — removing them is left to the user, allowing maximum flexibility. Reconstructions are still not perfect, and some scenes may contain noise or missing regions.
  5. Better monocular depth. Depth maps are now computed with MoGe v2 ViT-L (Wang et al., CVPR 2025) instead of Metric3D v2, providing improved metric accuracy and sharper geometry details.
  6. More images per scene. Significantly more per-scene images — including segmentation maps (ADE20k, Gestalt) and monocular depth — are now included, giving models richer multi-view context for each reconstruction.
  7. Images downscaled to 768 px on the longest side. All images (and derived maps) have been resized so that the longest dimension is at most 768 pixels. Camera intrinsics are adjusted accordingly. This reduces storage and compute requirements while retaining sufficient detail for the task.
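The intrinsics adjustment in item 7 amounts to a uniform scale of the K matrix; a minimal sketch (not the actual pipeline code; the example numbers are made up):

```python
import numpy as np

def rescale_intrinsics(K, orig_size, max_side=768):
    """Scale a pinhole intrinsics matrix for a resize of the image's
    longest side to at most max_side pixels (aspect ratio preserved)."""
    w, h = orig_size
    s = min(1.0, max_side / max(w, h))  # uniform scale factor, never upscales
    K = np.asarray(K, dtype=np.float64).copy()
    K[0, :] *= s  # scales fx, skew, cx
    K[1, :] *= s  # scales fy, cy
    return K, (round(w * s), round(h * s))

# Hypothetical 2048x1536 image with fx = fy = 1536, principal point at center.
K = np.array([[1536., 0., 1024.],
              [0., 1536., 768.],
              [0., 0., 1.]])
K2, new_size = rescale_intrinsics(K, (2048, 1536))
print(new_size)             # (768, 576)
print(K2[0, 0], K2[0, 2])   # 576.0 384.0
```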

Additional notes on data

Camera poses

Every scene provides two complementary sets of camera poses:

  • BPO poses (K, R, t): camera intrinsics and extrinsics derived from the human-authored BPO (Building-Perspective-Ortho) model, i.e. the SketchUp/DAE reconstruction that also produced the wireframe ground truth. These are the most geometrically consistent with the wireframe. However, not all images have a BPO pose — cameras that exist in the COLMAP reconstruction but were not part of the human-authored model will have K, R, t set to all-zeros and pose_only_in_colmap=True.

  • COLMAP poses (inside colmap): all cameras, including those without a BPO pose, are present in the packed COLMAP reconstruction. These poses come from a VGGT-based SfM pipeline and are aligned to the BPO frame via a rigid RANSAC fit.

Always check pose_only_in_colmap before using K/R/t as ground-truth camera parameters.
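A minimal sketch of that check in plain NumPy (the 3-image `entry` is hypothetical):

```python
import numpy as np

def valid_bpo_indices(entry):
    """Indices of images in a scene that have a real BPO pose.

    K/R/t are all-zeros placeholders when pose_only_in_colmap is True,
    so those entries must be skipped when using BPO ground-truth poses.
    """
    flags = np.asarray(entry['pose_only_in_colmap'], dtype=bool)
    return np.flatnonzero(~flags)

# Hypothetical 3-image scene; image 1 exists only in the COLMAP reconstruction.
entry = {'pose_only_in_colmap': [False, True, False]}
print(valid_bpo_indices(entry))  # [0 2]
```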

Depth

The depth is the output of the monocular geometry model MoGe v2 ViT-L (Wang et al., CVPR 2025) and is not ground truth by any means. Depth is stored in millimeters, so to get metric (meter) values for, e.g., the first depth image of a sample, use

np.array(entry['depth'][0]).astype(np.float32) / 1000.

If you need GT depth, the semi-sparse depth from the COLMAP reconstructions (with dense features, available in points3D) is quite accurate.
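For illustration, here is a plain-NumPy sketch of projecting (already aligned) reconstruction points into one camera to obtain such a semi-sparse depth map, assuming the COLMAP world-to-camera convention x_cam = R @ X + t (the toy point and intrinsics are made up; occlusion handling is omitted):

```python
import numpy as np

def sparse_depth(points_xyz, K, R, t, width, height):
    """Project 3D points into a camera and return a semi-sparse depth map.

    Assumes the world-to-camera convention x_cam = R @ X + t; points behind
    the camera or outside the image bounds are dropped. Pixels with no
    projected point stay 0.
    """
    X = np.asarray(points_xyz, dtype=np.float64).T   # (3, N) world points
    x_cam = R @ X + np.reshape(t, (3, 1))            # camera-frame coordinates
    z = x_cam[2]
    front = z > 0                                    # keep points in front of the camera
    uv = K @ x_cam[:, front]                         # homogeneous pixel coordinates
    uv = (uv[:2] / uv[2]).round().astype(int)
    z = z[front]
    inside = (uv[0] >= 0) & (uv[0] < width) & (uv[1] >= 0) & (uv[1] < height)
    depth = np.zeros((height, width), dtype=np.float32)
    depth[uv[1, inside], uv[0, inside]] = z[inside]
    return depth

# Toy example: one point 2 units in front of an identity-pose camera.
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
d = sparse_depth([[0., 0., 2.]], K, np.eye(3), np.zeros(3), 64, 64)
print(d[32, 32])  # 2.0
```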

Segmentation

Two segmentations are available. gestalt is a domain-specific model that "sees through occlusions" and provides detailed information about house parts. See the list of classes in the "Dataset" section in the navigation bar.

ade20k is a standard ADE20K segmentation model (specifically, shi-labs/oneformer_ade20k_swin_large).

Organizers

Jack Langerman (Apple Inc), Dmytro Mishkin (Hover Inc / CTU in Prague), Anastasiia Mishchuk (Hover Inc), Yuzhong Huang (Hover Inc).

Sponsors

The organizers would like to thank Hover Inc. for their sponsorship of this challenge and dataset.

@misc{S23DR_2026,
    title = {S23DR Competition at 3rd Workshop on Urban Scene Modeling @ CVPR 2026},
    url = {https://usm3d.github.io},
    howpublished = {\url{https://huggingface.co/usm3d}},
    year = {2026},
    author = {Langerman, Jack and Mishkin, Dmytro and Mishchuk, Anastasiia and Huang, Yuzhong}
}