# SmolLM2-135M-FlashNorm
FlashNorm-prepared checkpoint of HuggingFaceTB/SmolLM2-135M, mathematically equivalent to the source model. The per-channel RMSNorm weight tensors (`input_layernorm.weight`, `post_attention_layernorm.weight`, `model.norm.weight`) are folded into the linear layers that immediately follow them and then removed from the state dict entirely.
Framework support note. Stock vLLM currently does not load this checkpoint because the norm weight tensors are absent. The upstream patch that lets the loader accept missing norm tensors is tracked at: TBD (vLLM issue link). Until that patch lands, use HuggingFace Transformers: it loads this checkpoint with a warning that the norm weights were not initialized and defaults them to ones, which is exactly the correct behavior for FlashNorm.
Two additional Llama-family verification checkpoints are published as Llama-3.2-1B-FlashNorm-test and Llama-3.1-8B-FlashNorm-test. These retain the norm tensors as all-ones (compatibility layout) so they load in stock vLLM today and are intended for experimentation. They will be republished as weightless variants once vLLM's loader supports absent norm tensors.
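For example, a minimal stock-vLLM run against one of the -test checkpoints (a sketch using vLLM's standard offline API; the `open-machine/` repo id is an assumption inferred from the naming above):

```python
from vllm import LLM, SamplingParams

# The -test checkpoints keep all-ones norm tensors, so stock vLLM loads them.
# Repo id below is assumed from this card's naming convention.
llm = LLM(model='open-machine/Llama-3.2-1B-FlashNorm-test')
params = SamplingParams(temperature=0, max_tokens=50)  # greedy decoding
print(llm.generate(['Once upon a time there was'], params)[0].outputs[0].text)
```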
## What FlashNorm does
An exact reformulation of the RMSNorm -> Linear pair:
- Fold the per-channel normalization weight `g` into the following linear layer: `W_star = W @ diag(g)`, computed once at checkpoint conversion.
- After folding, the RMSNorm layer has no learnable per-channel scale; at runtime it simply divides by `rms(x)`.
- The resulting model computes the same output as the original, by Proposition 1 of the FlashNorm paper.
See the paper (Section 3.1 and Proposition 1) and the transformer-tricks repo for details.
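The identity can be checked numerically in a few lines (a sketch with illustrative names, not the transformer_tricks implementation):

```python
import torch

torch.manual_seed(0)
d = 8
x = torch.randn(3, d, dtype=torch.float64)       # activations
g = torch.randn(d, dtype=torch.float64)          # RMSNorm per-channel weight
W = torch.randn(4 * d, d, dtype=torch.float64)   # following linear layer

def rms(x, eps=1e-6):
    return torch.sqrt(x.pow(2).mean(-1, keepdim=True) + eps)

# Original: weighted RMSNorm, then linear layer W
y_ref = ((x / rms(x)) * g) @ W.T

# FlashNorm: weightless RMSNorm, then folded weights W_star = W @ diag(g)
W_star = W @ torch.diag(g)
y_flash = (x / rms(x)) @ W_star.T

print(torch.allclose(y_ref, y_flash))  # True
```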
## What's different from the source checkpoint
| Tensor | Source | This FlashNorm checkpoint |
|---|---|---|
| `model.layers.*.input_layernorm.weight` | learned per-channel `g` | absent |
| `model.layers.*.self_attn.{q,k,v}_proj.weight` | `W` | `W @ diag(g_input_layernorm)` |
| `model.layers.*.post_attention_layernorm.weight` | learned per-channel `g` | absent |
| `model.layers.*.mlp.{gate,up}_proj.weight` | `W` | `W @ diag(g_post_attention_layernorm)` |
| `model.norm.weight` | learned per-channel `g` | absent |
All dtype conventions match the source (bfloat16). Mathematical identity to the source holds by construction.
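To confirm the layout difference without loading the model, the tensor names can be listed straight from the checkpoint (a sketch; assumes the weights live in a single `model.safetensors` file):

```python
from huggingface_hub import hf_hub_download
from safetensors import safe_open

# Assumes a single-shard safetensors checkpoint, as is typical at this size.
path = hf_hub_download('open-machine/SmolLM2-135M-FlashNorm', 'model.safetensors')
with safe_open(path, framework='pt') as f:
    names = list(f.keys())

# No RMSNorm weights survive the folding:
print(any('layernorm' in n or n == 'model.norm.weight' for n in names))  # False
```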
## Usage
### Regenerate locally with transformer_tricks
```python
# Conversion tool from the transformer-tricks repo (pip install transformer-tricks).
import transformer_tricks as tt

# Fold the RMSNorm weights of the source model and write out a FlashNorm copy.
tt.flashify_repo('HuggingFaceTB/SmolLM2-135M', strict=True)
```
### Via HuggingFace Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('open-machine/SmolLM2-135M-FlashNorm')
model = AutoModelForCausalLM.from_pretrained('open-machine/SmolLM2-135M-FlashNorm')

# Greedy generation; expect a missing-norm-weights warning on load (see below).
ids = tok('Once upon a time there was', return_tensors='pt').input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```
A warning about missing norm weights is expected; Transformers defaults those to ones, which is the correct value for a FlashNorm checkpoint.
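As a quick sanity check (a sketch reusing `model` from the snippet above; `model.model.norm` and the per-layer norm attributes are the Llama-style module paths SmolLM2 uses), the defaulted weights can be inspected directly:

```python
import torch

# Every RMSNorm weight was absent from the checkpoint, so Transformers
# initialized it to ones, i.e. the value FlashNorm expects.
assert torch.all(model.model.norm.weight == 1.0)
for layer in model.model.layers:
    assert torch.all(layer.input_layernorm.weight == 1.0)
    assert torch.all(layer.post_attention_layernorm.weight == 1.0)
```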
### Via vLLM
Not yet supported. See the tracking issue linked above.
## Verification
Under fp32 inference, greedy generation from this checkpoint is bit-identical to the source SmolLM2-135M model. Under fp16 inference the output is within benchmark noise (see the Quality table in Section 5 of the paper).
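A sketch of that fp32 check with standard HuggingFace Transformers calls (the prompt and generation length are arbitrary choices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained('HuggingFaceTB/SmolLM2-135M')
ids = tok('Once upon a time there was', return_tensors='pt').input_ids

outs = []
for repo in ('HuggingFaceTB/SmolLM2-135M', 'open-machine/SmolLM2-135M-FlashNorm'):
    # Force fp32 so the comparison is bit-exact rather than within fp16 noise.
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float32)
    outs.append(model.generate(ids, max_new_tokens=50, do_sample=False))

print(torch.equal(outs[0], outs[1]))  # expect True under fp32
```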
## License
Apache-2.0, inherited from the source model.