Text Classification
Transformers
Safetensors
bert
feature-extraction
text-embedding
tinybert
text-embeddings-inference
Instructions for using rohitkumarai/my_tinybert_encoder with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use rohitkumarai/my_tinybert_encoder with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="rohitkumarai/my_tinybert_encoder")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("rohitkumarai/my_tinybert_encoder")
model = AutoModel.from_pretrained("rohitkumarai/my_tinybert_encoder")
```

- Notebooks
- Google Colab
- Kaggle
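Given the feature-extraction and text-embedding tags, a common way to turn this encoder's output into sentence embeddings is mean pooling over the token states, masking out padding. A minimal sketch of that pooling step, using a dummy tensor in place of real model output (the hidden size of 312 is an assumption typical of TinyBERT checkpoints, not confirmed for this model):

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average over the real tokens only
    mask = attention_mask.unsqueeze(-1).float()
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Dummy batch: 2 sequences, 4 tokens, hidden size 312 (assumed TinyBERT width)
hidden = torch.randn(2, 4, 312)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 1, 1]])

embeddings = mean_pool(hidden, mask)
print(embeddings.shape)  # torch.Size([2, 312])
```

With the real model, `hidden` would be `model(**tokenizer(texts, padding=True, return_tensors="pt")).last_hidden_state` and `mask` the tokenizer's `attention_mask`.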