BiliSakura/DDBM-ckpt

Packaged DDBM (Denoising Diffusion Bridge Models) checkpoints collected from alexzhou907/DDBM.

These checkpoints use the OpenAI improved_diffusion architecture, so load them with the community DDBM pipeline from pytorch-image-translation-models rather than the standard DDBMPipeline.from_pretrained.

Variants

Model variant        Domain
edges2handbags-vp    Edges -> Handbags
diode-vp             DIODE image translation

Repository layout

DDBM-ckpt/
  edges2handbags-vp/
    unet/
      config.json
      diffusion_pytorch_model.safetensors
  diode-vp/
    unet/
      config.json
      diffusion_pytorch_model.safetensors
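A local copy of the repository can be sanity-checked against the layout above. The helper below is a hypothetical convenience (not part of any package); the variant folder and file names come from the tree shown above.

```python
import os

# Variant folders and unet/ file names taken from the repository layout above.
VARIANTS = ["edges2handbags-vp", "diode-vp"]
UNET_FILES = ["config.json", "diffusion_pytorch_model.safetensors"]

def missing_checkpoint_files(root):
    """Return every path the expected layout requires under `root`
    that does not exist on disk; an empty list means both variants
    are complete."""
    missing = []
    for variant in VARIANTS:
        for name in UNET_FILES:
            path = os.path.join(root, variant, "unet", name)
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

Run it on the checkout root, e.g. `missing_checkpoint_files("/path/to/DDBM-ckpt")`, before pointing the pipeline at a variant directory.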

Usage

from examples.community.ddbm import load_ddbm_community_pipeline

pipe = load_ddbm_community_pipeline(
    "/path/to/DDBM-ckpt/edges2handbags-vp",
    device="cuda",
)

source = ...  # PIL Image or torch.Tensor
out = pipe(source_image=source, num_inference_steps=40, output_type="pil")
out.images[0].save("ddbm_output.png")

Requires pytorch-image-translation-models with the community DDBM package.

Converting from raw .pt

If you only have raw .pt checkpoints, convert them into the unet/ layout first:

python -m examples.community.ddbm.convert_pt_to_unet \
    /path/to/DDBM-ckpt/edges2handbags-vp \
    --checkpoint e2h_ema_0.9999_420000.pt
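To convert several raw checkpoints at once, a small dry-run helper can generate one command line per .pt file it finds. The helper below is hypothetical; only the module path is taken from the command above, and the checkpoint filenames are discovered on disk rather than assumed.

```python
import os

def convert_commands(root):
    """Build one convert_pt_to_unet command line per raw .pt checkpoint
    found directly inside each variant folder under `root`."""
    cmds = []
    for variant in sorted(os.listdir(root)):
        vdir = os.path.join(root, variant)
        if not os.path.isdir(vdir):
            continue
        for name in sorted(os.listdir(vdir)):
            if name.endswith(".pt"):
                cmds.append(
                    "python -m examples.community.ddbm.convert_pt_to_unet "
                    f"{vdir} --checkpoint {name}"
                )
    return cmds
```

Printing the returned list lets you inspect the commands before running them, e.g. `print("\n".join(convert_commands("/path/to/DDBM-ckpt")))`.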

Citation

@inproceedings{zhou2024ddbm,
  title={Denoising Diffusion Bridge Models},
  author={Zhou, Linqi and Lou, Aaron and Khanna, Samar and Ermon, Stefano},
  booktitle={ICLR},
  year={2024}
}