Breaking the Low-Rank Dilemma of Linear Attention
Paper: [arXiv:2411.07635](https://arxiv.org/abs/2411.07635)
This model card describes the Rank-Augmented Vision Linear Transformer (RAVLT), introduced in the paper "Breaking the Low-Rank Dilemma of Linear Attention". RAVLT achieves state-of-the-art performance on ImageNet-1k classification while maintaining linear complexity.
Key Features:
RAVLT is based on Rank-Augmented Linear Attention (RALA), a novel attention mechanism that addresses the low-rank limitations of standard linear attention.
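For context, below is a minimal sketch of standard kernelized linear attention, the mechanism whose rank limitation RALA targets. This is not the paper's RALA implementation; the ELU-based feature map and the normalization used here are common choices assumed for illustration. Because all token mixing passes through the d×d summary φ(K)ᵀV, the effective attention map has rank at most the head dimension, which is the "low-rank dilemma" referenced in the title.

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    """Generic kernelized linear attention (illustrative sketch, not RALA).

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    Complexity is O(N * d^2) in sequence length N, but the mixing goes
    through the (d x d) summary phi(K)^T V, so its rank is bounded by d.
    """
    phi_q = torch.nn.functional.elu(q) + 1.0  # positive feature map (assumed choice)
    phi_k = torch.nn.functional.elu(k) + 1.0

    kv = torch.einsum("bhnd,bhne->bhde", phi_k, v)                      # per-head (d, d) summary
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", phi_q, phi_k.sum(dim=2)) + eps)
    return torch.einsum("bhnd,bhde,bhn->bhne", phi_q, kv, z)

# Example: 196 tokens (14x14 feature map), 4 heads, head dim 64
q = k = v = torch.randn(1, 4, 196, 64)
out = linear_attention(q, k, v)  # shape (1, 4, 196, 64)
```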
Several RAVLT variants were trained, offering different tradeoffs between accuracy, parameters, and FLOPs:
| Model | Params (M) | FLOPs (G) | Checkpoint |
|---|---|---|---|
| RAVLT-T | 15 | 2.4 | RAVLT-T |
| RAVLT-S | 26 | 4.6 | RAVLT-S |
| RAVLT-B | 48 | 9.9 | RAVLT-B |
| RAVLT-L | 95 | 16.0 | RAVLT-L |
Usage instructions will be provided once the code repository is available at https://github.com/qhfan/RALA.
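Until the official code is released, loading a checkpoint will likely follow the usual PyTorch pattern sketched below. The module path `ravlt`, the constructor `ravlt_t`, and the checkpoint filename are placeholders (assumptions), not the released API.

```python
import torch
# Hypothetical usage sketch: the "ravlt" module, the "ravlt_t" constructor,
# and the checkpoint filename are assumed names pending the official release.
from ravlt import ravlt_t  # assumed import path

model = ravlt_t(num_classes=1000)
state_dict = torch.load("RAVLT_T.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```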
Citation:
@inproceedings{fan2024breakinglowrank,
  title={Breaking the Low-Rank Dilemma of Linear Attention},
  author={Qihang Fan and Huaibo Huang and Ran He},
  booktitle={CVPR},
  year={2025},
}