EUPE-ViT-B (converted)

This repository contains an EUPE checkpoint converted from the original Facebook release to safetensors format, prepared under the BiliSakura namespace for downstream upload and reuse.

Metadata

  • Architecture: EupeViTModel
  • Backbone family: EUPE ViT
  • Embedding dim: 768
  • Intended use: image feature extraction / encoder backbone
  • License: FAIR Research License (non-commercial)

Source

Files

  • model.safetensors: converted checkpoint weights
  • config.json: architecture/config parameters
  • preprocessor_config.json: image preprocessing setup
  • transformers_eupe.py: local EUPE Transformers registration wrapper
  • eupe/: vendored EUPE model implementation used by transformers_eupe.py

Preprocessing

preprocessor_config.json uses:

  • Resize to 256 x 256
  • RGB conversion
  • Rescale by 1/255
  • Normalize with ImageNet mean/std:
    • mean: [0.485, 0.456, 0.406]
    • std: [0.229, 0.224, 0.225]

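The same pipeline can be reproduced outside Transformers for debugging or lightweight deployments. A minimal numpy sketch of the steps above, assuming nearest-neighbor resizing as a stand-in for the processor's actual interpolation (which may differ):

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_u8: np.ndarray, size: int = 256) -> np.ndarray:
    """Resize an HxWx3 uint8 RGB image, rescale by 1/255, normalize, return CHW."""
    h, w, _ = image_u8.shape
    rows = np.arange(size) * h // size          # nearest-neighbor row indices
    cols = np.arange(size) * w // size          # nearest-neighbor column indices
    resized = image_u8[rows][:, cols]           # (size, size, 3)
    scaled = resized.astype(np.float32) / 255.0 # rescale by 1/255
    normalized = (scaled - IMAGENET_MEAN) / IMAGENET_STD
    return normalized.transpose(2, 0, 1)        # (3, size, size), ready to batch
```

The checkpoint's actual behavior is defined by preprocessor_config.json; prefer AutoImageProcessor (below) when exact parity matters.
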
Quick Transformers Inference

import sys

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_dir = "./EUPE-ViT-B"
sys.path.insert(0, model_dir)  # make the local transformers_eupe module importable
from transformers_eupe import register_eupe_transformers

register_eupe_transformers()  # register EupeViTModel with the Auto* classes
processor = AutoImageProcessor.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir).eval()

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# (batch, num_tokens, 768) and (batch, 768)
print(outputs.last_hidden_state.shape, outputs.pooler_output.shape)
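Since the pooled output is a single 768-dim vector per image, retrieval-style use typically L2-normalizes embeddings and compares them with a dot product. A minimal sketch of that step (the random vectors here are placeholders, not real model outputs):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

# Placeholder 768-dim embeddings standing in for pooler_output rows.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal(768)
emb_b = rng.standard_normal(768)
score = cosine_similarity(emb_a, emb_b)  # in [-1, 1]
```

In practice you would pass detached pooler_output tensors (e.g. outputs.pooler_output[0].numpy()) in place of the random vectors.
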

Citation

If you use this model, please cite EUPE:

@misc{zhu2026eupe,
  title={Efficient Universal Perception Encoder},
  author={Zhu, Chenchen and Suri, Saksham and Jose, Cijo and Oquab, Maxime and Szafraniec, Marc and Wen, Wei and Xiong, Yunyang and Labatut, Patrick and Bojanowski, Piotr and Krishnamoorthi, Raghuraman and Chandra, Vikas},
  year={2026},
  eprint={2603.22387},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.22387},
}