Text Classification
Transformers
PyTorch
ONNX
English
roberta
text-classification
int8
Intel® Neural Compressor
neural-compressor
PostTrainingStatic
Eval Results (legacy)
text-embeddings-inference
Instructions to use INC4AI/roberta-base-mrpc-int8-static-inc with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use INC4AI/roberta-base-mrpc-int8-static-inc with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="INC4AI/roberta-base-mrpc-int8-static-inc")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("INC4AI/roberta-base-mrpc-int8-static-inc")
model = AutoModelForSequenceClassification.from_pretrained("INC4AI/roberta-base-mrpc-int8-static-inc")
```

- Notebooks
- Google Colab
- Kaggle
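Since this checkpoint is fine-tuned on MRPC (a paraphrase-detection task), the pipeline expects a *pair* of sentences rather than a single string. A minimal sketch, assuming network access to download the model; the example sentences are illustrative, and the exact label names depend on the checkpoint's config:

```python
# Sketch: MRPC inputs are sentence pairs. The Transformers text-classification
# pipeline accepts a pair as a dict with "text" and "text_pair" keys.
from transformers import pipeline

pair = {
    "text": "The company said quarterly profit rose 10 percent.",
    "text_pair": "Quarterly profit at the company increased by 10 percent.",
}

if __name__ == "__main__":  # downloading the checkpoint requires network access
    pipe = pipeline("text-classification", model="INC4AI/roberta-base-mrpc-int8-static-inc")
    print(pipe(pair))  # a list with one {"label": ..., "score": ...} dict
```

Passing the pair as a single dict (rather than two separate strings) ensures both sentences are encoded together with the separator token, which is how the model was trained on MRPC.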