How to use DDSC/roberta-base-danish with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="DDSC/roberta-base-danish")
```
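Each prediction returned by a fill-mask pipeline is a dict with `score`, `token`, `token_str`, and `sequence` keys. As an illustrative sketch (the Danish example sentence and the `best_fill` helper are assumptions, not part of the model card; RoBERTa-style tokenizers typically use `<mask>` as the mask token):

```python
def best_fill(predictions):
    # predictions: list of fill-mask dicts with "score" and "token_str" keys.
    # Return the highest-scoring candidate fill string.
    return max(predictions, key=lambda p: p["score"])["token_str"]

# Illustrative use (requires the pipeline created above):
# print(best_fill(pipe("Danmark er et <mask> land.")))
```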
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("DDSC/roberta-base-danish")
model = AutoModelForMaskedLM.from_pretrained("DDSC/roberta-base-danish")
```
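When loading the model directly, you can reproduce what the pipeline does by running the model on a masked sentence and reading off the most likely tokens at the mask position. A minimal sketch (the `top_k_at_mask` helper and the commented Danish example are assumptions for illustration, not from the model card):

```python
import torch

def top_k_at_mask(logits, mask_index, k=5):
    # logits: [seq_len, vocab_size] tensor of masked-LM scores
    # for one sequence. Returns the k most likely vocab ids
    # at the masked position, highest score first.
    return torch.topk(logits[mask_index], k).indices.tolist()

# Illustrative end-to-end use (requires the model loaded above):
# inputs = tokenizer("Danmark er et <mask> land.", return_tensors="pt")
# mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
# with torch.no_grad():
#     logits = model(**inputs).logits[0]
# for token_id in top_k_at_mask(logits, mask_index):
#     print(tokenizer.decode(token_id))
```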
RøBÆRTa - Danish RoBERTa Base
Description
RøBÆRTa is a pretrained Danish RoBERTa base model. It was pretrained on the Danish portion of the mC4 dataset during the Hugging Face Flax/JAX community week. The project was organized by the Dansk Data Science Community (DDSC) 👇
https://www.linkedin.com/groups/9017904/
Team RøBÆRTa:
- Dan Saattrup Nielsen (saattrupdan)
- Malte Højmark-Bertelsen (Maltehb)
- Morten Kloster Pedersen (MortenKP)
- Kasper Junge (Juunge)
- Per Egil Kummervold (pere)
- Birger Moëll (birgermoell)