Instructions to use jinaai/reader-lm-1.5b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jinaai/reader-lm-1.5b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="jinaai/reader-lm-1.5b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jinaai/reader-lm-1.5b")
model = AutoModelForCausalLM.from_pretrained("jinaai/reader-lm-1.5b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use jinaai/reader-lm-1.5b with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "jinaai/reader-lm-1.5b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jinaai/reader-lm-1.5b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/jinaai/reader-lm-1.5b
```
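Once the server is running, any OpenAI-compatible client can talk to it. Below is a minimal sketch using the `openai` Python package; the `base_url`, the placeholder API key, and the idea of sending raw HTML as the user message are assumptions based on the serve commands above and the model's purpose, not something this page specifies. The same client works against the SGLang server further down if you swap the port to 30000.

```python
# Sketch: call the local OpenAI-compatible server started above.
# Assumes `pip install openai`; port 8000 is the vLLM default from the
# `vllm serve` command (use 30000 for the SGLang server below).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Illustrative input only: reader-lm is meant to turn raw HTML into Markdown.
raw_html = "<html><body><h1>Hello</h1><p>Some <b>bold</b> text.</p></body></html>"

response = client.chat.completions.create(
    model="jinaai/reader-lm-1.5b",
    messages=[{"role": "user", "content": raw_html}],
    temperature=0,  # deterministic output is usually what you want for conversion
)
print(response.choices[0].message.content)  # expected: a Markdown version of the HTML
```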
- SGLang
How to use jinaai/reader-lm-1.5b with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "jinaai/reader-lm-1.5b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jinaai/reader-lm-1.5b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "jinaai/reader-lm-1.5b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "jinaai/reader-lm-1.5b",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use jinaai/reader-lm-1.5b with Docker Model Runner:
```bash
docker model run hf.co/jinaai/reader-lm-1.5b
```
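Tying the Transformers snippet at the top to what reader-lm is actually for: the model converts raw HTML into Markdown, and the usual pattern is to pass the HTML straight in as the user message. The sketch below assumes that prompt format (check the model card for the exact recommendation); the sample HTML and generation settings are illustrative only.

```python
# Sketch: HTML -> Markdown with the Transformers pipeline shown earlier.
# Assumption: the raw HTML goes in as the user message with no extra
# instruction; max_new_tokens and the sample HTML are illustrative only.
from transformers import pipeline

pipe = pipeline("text-generation", model="jinaai/reader-lm-1.5b")

raw_html = """
<html><body>
  <h1>Release notes</h1>
  <ul><li>Faster parsing</li><li>Bug fixes</li></ul>
</body></html>
"""

messages = [{"role": "user", "content": raw_html}]
out = pipe(messages, max_new_tokens=512, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # the Markdown conversion
```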
why?
how is this any better than a rules-based engine like https://github.com/mixmark-io/turndown?
why use something like reader-lm that's a few orders of magnitude slower?
They also used Turndown, and they answer precisely that question in their blog post: https://jina.ai/news/reader-lm-small-language-models-for-cleaning-and-converting-html-to-markdown/
> At first glance, using LLMs for data cleaning might seem excessive due to their low cost-efficiency and slower speeds. But what if we're considering a small language model (SLM) — one with fewer than 1 billion parameters that can run efficiently on the edge? That sounds much more appealing, right? But is this truly feasible or just wishful thinking?
You can compare the LM vs. the rule-based approach here: https://huggingface.co/spaces/maxiw/HTML-to-Markdown
I wonder whether hallucination as a failure mode means LLM-based and rules-based converters will hit drastically different edge cases in practice.
(I say this having tried the q4 1.5b model in Ollama and gotten hallucinated output for pasted HTML. I haven't tried the API version; perhaps the chat template mangled the input.)
It has a ton of use cases.
Existing approaches are fragile and a pain to maintain. If this works, being able to deploy a small model inside a container and run it basically anywhere with a few lines of code is much more attractive for a lot of data processing pipelines, especially if you don't massively care about speed.
Also, best of luck handling multiple languages The Old Way. It's a nightmare wrapped in pain.
I'm about to try it... will be interesting to see if it works. 🙏
> (I say this having tried the q4 1.5b model in Ollama and gotten hallucinated output for pasted HTML. I haven't tried the API version; perhaps the chat template mangled the input.)
temperature = 0 ?
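For what it's worth, greedy decoding (the temperature = 0 case) is easy to force with the Transformers snippet from the top of the page; whether it actually eliminates the hallucinated output reported above is something you'd have to test. The sample HTML and token budget here are illustrative only.

```python
# Sketch: force deterministic (greedy) decoding, i.e. the temperature = 0 case.
# Whether this fixes the hallucinated output reported above is untested here.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jinaai/reader-lm-1.5b")
model = AutoModelForCausalLM.from_pretrained("jinaai/reader-lm-1.5b")

raw_html = "<html><body><p>Example <a href='https://example.com'>link</a></p></body></html>"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": raw_html}],
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy decoding
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

In Ollama the equivalent is setting `temperature` to 0, either in the request's `options` or via `PARAMETER temperature 0` in a Modelfile.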