Tags: Sentence Similarity, sentence-transformers, Safetensors, Transformers, qwen2, feature-extraction, text-embeddings-inference
Instructions to use vec-ai/lychee-embed with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use vec-ai/lychee-embed with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("vec-ai/lychee-embed")

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)

# Pairwise similarity scores between all sentences
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```

- Transformers
How to use vec-ai/lychee-embed with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("vec-ai/lychee-embed")
model = AutoModel.from_pretrained("vec-ai/lychee-embed")
```

- Notebooks
- Google Colab
- Kaggle
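The Transformers snippet above only loads the tokenizer and model; turning the model's hidden states into a single sentence embedding still requires a pooling step, and this card does not say which strategy `vec-ai/lychee-embed` uses. Qwen2-based embedding models commonly pool the last non-padding token, so here is a minimal sketch of that approach on dummy tensors (an assumption to verify against the model card; it also assumes right-padded inputs):

```python
import torch


def last_token_pool(last_hidden_states: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    # Index of the last non-padding token in each sequence
    seq_lengths = attention_mask.sum(dim=1) - 1
    batch = torch.arange(last_hidden_states.size(0))
    return last_hidden_states[batch, seq_lengths]


# Dummy batch standing in for model(**inputs).last_hidden_state:
# 2 sequences, 5 tokens, hidden size 8
hidden = torch.randn(2, 5, 8)
mask = torch.tensor([[1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1]])

emb = last_token_pool(hidden, mask)
# L2-normalize so dot products are cosine similarities
emb = torch.nn.functional.normalize(emb, p=2, dim=1)
print(emb.shape)  # torch.Size([2, 8])
```

With the real model, `hidden` would come from `model(**tokenizer(sentences, padding=True, return_tensors="pt")).last_hidden_state`.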
Update README.md

README.md CHANGED

```diff
@@ -14,7 +14,7 @@ tags:
 
 [Github](https://github.com/vec-ai/lychee-embed) | [Paper](https://openreview.net/pdf?id=NC6G1KCxlt)
 
-`Lychee-embed` is the latest generalist text embedding model
+`Lychee-embed` is the latest generalist text embedding model based on the `Qwen2.5` model. It is suitable for text retrieval (semantic relevance), text similarity, and other downstream tasks, and supports the multiple languages of `Qwen2.5`.
 `Lychee-embed` is jointly developed by the NLP Team of Harbin Institute of Technology, Shenzhen, and is built on an innovative multi-stage training framework (warm-up, task learning, model merging, annealing).
 The first open-source release is the 1.5B-parameter version.
```
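The README change above notes that the model targets text retrieval. A hedged sketch of the retrieval step itself, with random arrays standing in for real embeddings (in practice they would come from `SentenceTransformer("vec-ai/lychee-embed").encode(...)`; whether queries need an instruction prefix is not stated on this card):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoded texts: 1 query, 5 candidate documents
query_emb = rng.normal(size=(1, 64))
doc_embs = rng.normal(size=(5, 64))


def normalize(x: np.ndarray) -> np.ndarray:
    # L2-normalize rows so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=1, keepdims=True)


scores = (normalize(query_emb) @ normalize(doc_embs).T)[0]
ranking = np.argsort(-scores)  # document indices, best match first
print(ranking)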