How to use keras/qwen2.5_math_1.5b_en with KerasHub:
```python
import keras_hub

# Load a CausalLM model (optional: use half precision for inference).
causal_lm = keras_hub.models.CausalLM.from_preset(
    "hf://keras/qwen2.5_math_1.5b_en", dtype="bfloat16"
)
causal_lm.compile(sampler="greedy")  # (optional) specify a sampler

# Generate text.
causal_lm.generate("Keras: deep learning for", max_length=64)
```

```python
import keras_hub

# Create a Backbone model unspecialized for any task.
backbone = keras_hub.models.Backbone.from_preset("hf://keras/qwen2.5_math_1.5b_en")
```
How to use keras/qwen2.5_math_1.5b_en with Keras:
```python
# Available backend options are: "jax", "torch", "tensorflow".
import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

model = keras.saving.load_model("hf://keras/qwen2.5_math_1.5b_en")
```
Model Overview
Model Summary
Qwen is the series of large language models and large multimodal models from the Qwen Team at Alibaba Group. Both the language models and the multimodal models are pretrained on large-scale multilingual and multimodal data and post-trained on high-quality data to align them with human preferences. Qwen is capable of natural language understanding, text generation, vision understanding, audio understanding, tool use, role play, acting as an AI agent, and more.
Unlike the Qwen2-Math series, which only supports Chain-of-Thought (CoT) reasoning for English math problems, the Qwen2.5-Math series supports both CoT and Tool-Integrated Reasoning (TIR) for math problems in both Chinese and English. With CoT, the Qwen2.5-Math models achieve significant performance improvements over the Qwen2-Math models on Chinese and English mathematics benchmarks.
While CoT plays a vital role in enhancing the reasoning capabilities of LLMs, it struggles with computational accuracy and with complex mathematical or algorithmic tasks, such as finding the roots of a quadratic equation or computing the eigenvalues of a matrix. TIR further improves the model's proficiency in precise computation, symbolic manipulation, and algorithmic reasoning.
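In practice, the two reasoning modes are selected through the system prompt. The prompt strings below follow the patterns published in the upstream Qwen2.5-Math model cards, but treat them as illustrative rather than authoritative; check the Qwen documentation for the current wording. Since both modes conventionally wrap the final answer in `\boxed{}`, a small regex suffices to extract it from a completion:

```python
import re

# System prompts for the two reasoning modes (illustrative; taken from the
# patterns described in the upstream Qwen2.5-Math documentation).
COT_PROMPT = "Please reason step by step, and put your final answer within \\boxed{}."
TIR_PROMPT = (
    "Please integrate natural language reasoning with programs to solve "
    "the problem above, and put your final answer within \\boxed{}."
)

def extract_boxed_answer(text):
    """Return the contents of the last \\boxed{...} in a completion, or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

completion = "So 2x = -2, which gives x = -1. The answer is \\boxed{-1}."
print(extract_boxed_answer(completion))  # -1
```

The regex above only handles a single level of braces inside `\boxed{...}`; nested expressions would need a small parser instead.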
For more details, please refer to Qwen Blog, GitHub, and Documentation.
Weights are released under the Apache 2.0 License. Keras model code is released under the Apache 2.0 License.
Links
- Qwen 2.5 Math Quickstart Notebook
- Qwen 2.5 Math API Documentation
- Qwen 2.5 Math Model Card
- KerasHub Beginner Guide
- KerasHub Model Publishing Guide
Installation
Keras and KerasHub can be installed with:
```shell
pip install -U -q keras-hub
pip install -U -q keras
```
JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the Keras Getting Started page.
Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|---|---|---|
| `qwen2.5_math_1.5b_en` | 1.5B | 28-layer Qwen model with 1.5 billion parameters. |
| `qwen2.5_math_instruct_1.5b_en` | 1.5B | 28-layer Qwen model with 1.5 billion parameters. Instruction tuned. |
| `qwen2.5_math_7b_en` | 7B | 28-layer Qwen model with 7 billion parameters. |
| `qwen2.5_math_instruct_7b_en` | 7B | 28-layer Qwen model with 7 billion parameters. Instruction tuned. |
Example Usage
```python
import keras
import keras_hub
import numpy as np

# Use generate() to solve a math problem.
qwen_lm = keras_hub.models.QwenCausalLM.from_preset("qwen2.5_math_1.5b_en")
qwen_lm.generate("Find the value of x that satisfies the equation 4x+5 = 6x+7.", max_length=300)
```
Example Usage with Hugging Face URI
```python
import keras
import keras_hub
import numpy as np

# Use generate() to solve a math problem.
qwen_lm = keras_hub.models.QwenCausalLM.from_preset("hf://keras/qwen2.5_math_1.5b_en")
qwen_lm.generate("Find the value of x that satisfies the equation 4x+5 = 6x+7.", max_length=300)
```
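Because pure text generation can slip on arithmetic (the motivation for TIR described above), it is worth verifying a model's answer independently. For the example equation, 4x + 5 = 6x + 7 rearranges to 2x = -2, i.e. x = -1; a minimal sanity check:

```python
# Solve 4x + 5 = 6x + 7 by collecting terms:
# 4x - 6x = 7 - 5  =>  -2x = 2  =>  x = -1.
x = (7 - 5) / (4 - 6)

# Confirm the solution satisfies the original equation.
assert 4 * x + 5 == 6 * x + 7
print(x)  # -1.0
```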