Fine-tuned openai/whisper-small on 58,000 Swahili training audio samples from mozilla-foundation/common_voice_17_0.
This model was created from the Mozilla.ai Blueprint: speech-to-text-finetune.
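To transcribe Swahili audio with this checkpoint, a minimal sketch using the Transformers `pipeline` API is shown below. The file path `sample.wav` is a placeholder; the exact loading code used by the Blueprint may differ.

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hugging Face Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="Mollel/ASR-Swahili-Small",
)

# "sample.wav" is a placeholder path to a Swahili recording.
result = asr("sample.wav")
print(result["text"])
```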
Evaluation results on 12,253 Swahili audio samples:
Baseline model (before fine-tuning) on Swahili:
- Word Error Rate: 133.795
- Loss: 2.459
Fine-tuned model (after fine-tuning) on Swahili:
- Word Error Rate: 43.876
- Loss: 0.653
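The Word Error Rate figures above can be computed with the Hugging Face `evaluate` library (which requires `jiwer`); the sketch below shows the metric call on a couple of hypothetical transcript pairs, not the actual evaluation script used by the Blueprint.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical example: references are ground-truth Common Voice transcripts,
# predictions are the model's transcriptions of the same clips.
references = ["habari ya asubuhi", "karibu sana"]
predictions = ["habari za asubuhi", "karibu sana"]

# WER is reported as a percentage, matching the figures above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.3f}")
```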