# SketchMultiview Dataset
Multi-view image dataset for sketch-conditioned novel view synthesis.
Each sample contains 33 views of a scene at 480×480 resolution (4 elevation rings × 8 azimuths + original).
## Dataset Structure

```
{split}/
├── generated_gt/              # Multi-view images (480×480 PNG)
│   ├── {group}/{coco_id}/     # Train: grouped into subfolders
│   │   ├── 00_original.png
│   │   ├── 00_azim000_elev+00.png
│   │   ├── 01_azim045_elev+00.png
│   │   ├── ...
│   │   ├── 31_azim315_elev-30.png
│   │   └── metadata.json
│   └── {coco_id}/             # Test/Val: flat structure
├── sketches/                  # Sketch conditioning images (512×512 grayscale PNG)
├── captions/                  # Text captions (.txt)
└── correspondences/           # COLMAP correspondences (JSON, 1024px coordinates)
```
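Given this layout, one sample can be assembled by joining the four per-sample files. A minimal sketch, assuming the sketch/caption/correspondence files are named after the `coco_id` (the exact filenames beyond the tree above are an assumption):

```python
from pathlib import Path

def load_sample_paths(root, split, coco_id, group=None):
    """Collect the file paths for one sample.

    Assumes sketches/captions/correspondences are keyed by coco_id;
    `group` is only needed for the train split's grouped subfolders.
    """
    base = Path(root) / split
    gt = base / "generated_gt"
    gt = gt / group / coco_id if group else gt / coco_id
    views = sorted(gt.glob("*.png"))           # up to 33 view images
    sketch = base / "sketches" / f"{coco_id}.png"
    caption = base / "captions" / f"{coco_id}.txt"
    corr = base / "correspondences" / f"{coco_id}.json"
    return views, sketch, caption, corr
```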
## Splits
| Split | Samples | Size |
|---|---|---|
| Train | 8,304 | ~102 GB |
| Test | 477 | ~5.9 GB |
| Val | 441 | ~5.5 GB |
## View Layout (33 views per sample)
- Elevation 0°: 8 views at 45° azimuth intervals + original
- Elevation +30°: 8 views at 45° azimuth intervals
- Elevation +60°: 8 views at 45° azimuth intervals
- Elevation -30°: 8 views at 45° azimuth intervals
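The 33 filenames follow directly from this layout and the examples in the tree above. A small sketch that enumerates them (the ring ordering 0° → +30° → +60° → −30° is inferred from the list above and the `31_azim315_elev-30.png` example):

```python
def view_filenames():
    """Enumerate the 33 per-sample view filenames:
    the original view plus 4 elevation rings x 8 azimuths."""
    names = ["00_original.png"]
    elevations = [0, 30, 60, -30]  # assumed ring order, matching the list above
    idx = 0
    for elev in elevations:
        for azim in range(0, 360, 45):
            # e.g. "01_azim045_elev+00.png"; {:+03d} keeps the explicit sign
            names.append(f"{idx:02d}_azim{azim:03d}_elev{elev:+03d}.png")
            idx += 1
    return names
```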
## Download & Extract

```bash
# Download all files
huggingface-cli download ahmedbrs/SketchMultiview_dataset --repo-type=dataset --local-dir .

# Extract test and val
tar xf test.tar
tar xf val.tar

# Reassemble and extract train (split into 45 GB parts)
cat train.tar.part_aa train.tar.part_ab train.tar.part_ac | tar xf -
```
## Correspondences
COLMAP-extracted correspondences are stored per sample in JSON format with pixel coordinates at 1024×1024 resolution. The training pipeline maps these to latent patch indices via:
```python
patch = int(pixel_coord * latent_dim / 1024)
```
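Applying this per axis and flattening gives a single patch index. A minimal sketch, assuming row-major flattening of the latent grid (the flattening order is an assumption, not stated by the dataset):

```python
def pixel_to_patch(x, y, latent_dim):
    """Map a correspondence pixel (in 1024x1024 coordinates) to a
    flat latent patch index, assuming a row-major latent grid."""
    col = int(x * latent_dim / 1024)  # per-axis mapping from the formula above
    row = int(y * latent_dim / 1024)
    return row * latent_dim + col
```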
## Usage with Training Pipeline

```bash
# Point to the extracted dataset directory
./train_ucpe.sh --fal-multiview  # set FAL_MULTIVIEW_DIR to extracted path
```