---
pipeline_tag: other
library_name: diffusers
---
# SounDiT: Geo-Contextual Soundscape-to-Landscape Generation
SounDiT is a diffusion transformer (DiT)-based model for the Geo-Contextual Soundscape-to-Landscape (GeoS2L) generation task. It synthesizes geographically realistic landscape images from environmental soundscapes by incorporating geo-contextual scene conditioning.

- Paper: [SounDiT: Geo-Contextual Soundscape-to-Landscape Generation](https://arxiv.org/abs/2505.12734)
- Project Page: https://gisense.github.io/SounDiT-Page/
- Repository: https://github.com/GISense/SounDiT
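The model card lists `diffusers` as the library, so loading the checkpoint along the following lines should be close; everything beyond `from_pretrained` is a sketch under assumptions, since GeoS2L conditions on an environmental soundscape rather than a text prompt, and the authoritative entry point is the repository's `scripts/inference.sh`.

```python
# Hypothetical loading sketch, not the official interface.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "BBO66/SounDiT",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # assumption: the repo ships a custom pipeline class
)
pipe.to("cuda")  # or "mps" on Apple silicon

# GeoS2L conditions on a soundscape; the keyword name "audio" and the
# sampling rate below are hypothetical placeholders.
# image = pipe(audio=waveform, sampling_rate=16_000).images[0]
```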
## Overview
Recent audio-to-image models often struggle to reconstruct real-world landscapes from environmental soundscapes. SounDiT addresses this gap with a DiT architecture that leverages diverse environmental soundscapes and scene conditioning to ensure geographical coherence. To evaluate the GeoS2L task, the authors introduce the Place Similarity Score (PSS) framework, which measures generation consistency at three levels: element, scene, and human perception.
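PSS itself is defined in the paper; purely as an illustration of what a multi-level consistency score looks like, the sketch below compares a generated and a reference landscape image via CLIP embeddings. The CLIP checkpoint and the use of a single global embedding as a stand-in for each level are assumptions, not the authors' metric.

```python
# Illustration only: a consistency score in the spirit of PSS.
# The real Place Similarity Score is defined in the SounDiT paper;
# the CLIP backbone below is an assumed stand-in.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def image_embedding(image: Image.Image) -> torch.Tensor:
    inputs = processor(images=image, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

@torch.no_grad()
def naive_place_similarity(generated: Image.Image, reference: Image.Image) -> float:
    """Cosine similarity between global image embeddings, used here as a
    crude stand-in for PSS's element-, scene-, and perception-level
    comparisons; the actual framework scores those levels separately."""
    g = image_embedding(generated)
    r = image_embedding(reference)
    return float((g @ r.T).squeeze())
```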
## Code Usage
### Environment Setup

```bash
conda env create -f environment.yml
conda activate SounDiT
```
### Inference

```bash
bash ./scripts/inference.sh
```
## Citation
If you use SounDiT in your research, please cite the following paper:
```bibtex
@misc{wang2025sounditgeocontextualsoundscapetolandscapegeneration,
  title={SounDiT: Geo-Contextual Soundscape-to-Landscape Generation},
  author={Junbo Wang and Haofeng Tan and Bowen Liao and Albert Jiang and Teng Fei and Qixing Huang and Zhengzhong Tu and Shan Ye and Yuhao Kang},
  year={2025},
  eprint={2505.12734},
  archivePrefix={arXiv},
  primaryClass={cs.SD},
  url={https://arxiv.org/abs/2505.12734}
}
```