nmndeep/CLIC-ViT-B-16-224-CogVLM

OpenCLIP · Safetensors

Instructions for using nmndeep/CLIC-ViT-B-16-224-CogVLM with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • OpenCLIP

    How to use nmndeep/CLIC-ViT-B-16-224-CogVLM with OpenCLIP:

    import open_clip

    # Load the model and its train/eval image transforms from the Hugging Face Hub
    model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:nmndeep/CLIC-ViT-B-16-224-CogVLM')
    # Matching tokenizer for text inputs
    tokenizer = open_clip.get_tokenizer('hf-hub:nmndeep/CLIC-ViT-B-16-224-CogVLM')
  • Notebooks
  • Google Colab
  • Kaggle

Community
  • #1: "Improve model card: Add metadata, links, abstract, citation, and fix usage snippet", opened 7 months ago by nielsr