
tFINE-900m-e16-d32-instruct

Model description

This model is a fine-tuned version of BEE-spoke-data/tFINE-900m-e16-d32-flan on the pszemraj/infinity-instruct-7m-T2T_en dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3588
  • Num Input Tokens Seen: 810173896
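Assuming the reported loss is the mean per-token cross-entropy in nats (the transformers Trainer default), the corresponding eval perplexity is simply its exponential:

```python
import math

eval_loss = 1.3588  # eval cross-entropy in nats/token, from above
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")  # ≈ 3.89
```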

Usage Example

For better performance, you can also run inference with turboT5 on Ampere or newer GPUs. See example on Colab.

from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="BEE-spoke-data/tFINE-900m-e16-d32-instruct",
    # device_map="auto",  # uncomment if you have a GPU and accelerate installed
)
prompt = "Write me a python script that demonstrates an advanced sorting algorithm"
res = pipe(
    prompt,
    max_new_tokens=384,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=6,
)
print(res[0]["generated_text"])
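The no_repeat_ngram_size=6 argument blocks the decoder from ever emitting a 6-gram that already appears in the generated output, which suppresses repetitive loops during beam search. A minimal illustration of the condition it enforces (a sketch for intuition only, not the actual transformers implementation):

```python
def has_repeated_ngram(tokens, n):
    """Return True if any n-gram of size n occurs more than once in tokens."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if gram in seen:
            return True
        seen.add(gram)
    return False

# the trigram (1, 2, 3) appears twice -> such an output would be blocked
print(has_repeated_ngram([1, 2, 3, 1, 2, 3, 1, 2], 3))  # True
# all trigrams unique -> allowed
print(has_repeated_ngram([1, 2, 3, 4, 5, 6], 3))        # False
```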

Evals

Open LLM Leaderboard 2

| Model | Average ⬆️ | IFEval | BBH | MATH Lvl 5 | GPQA | MUSR | MMLU-PRO |
|---|---|---|---|---|---|---|---|
| 🔶 BEE-spoke-data/tFINE-900m-e16-d32-instruct | 5.82 | 13.21 | 4.74 | 0 | 0.56 | 13.81 | 2.63 |
| 🔶 BEE-spoke-data/tFINE-900m-e16-d32-flan | 4.43 | 15.06 | 4.41 | 0 | 0 | 3.72 | 3.41 |
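The Average column is the unweighted mean of the six benchmark scores, which the table's own numbers confirm:

```python
instruct = [13.21, 4.74, 0.0, 0.56, 13.81, 2.63]
flan = [15.06, 4.41, 0.0, 0.0, 3.72, 3.41]

avg = lambda scores: sum(scores) / len(scores)
print(f"{avg(instruct):.2f}")  # ≈ 5.82
print(f"{avg(flan):.2f}")      # ≈ 4.43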
