Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution
Paper: arXiv:2504.11271
The evaluation environment we used is recorded in requirements.txt. After setting up a basic Python environment (Python 3.9 in our setting) via virtualenv or Anaconda, please install matching packages as follows:
Step 1: install PyTorch first:
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
Step 2: install the other libraries via:
pip install -r requirements.txt
or use requirements.txt as a reference against your existing environment.
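Before running the demo, it can help to confirm the Step 1 packages are importable. This check is not part of the repository; it is a minimal sketch that only assumes the package names from the install command above.

```python
import importlib.util

# Sanity check (not part of the repo): verify the core packages from
# Step 1 can be found before running the test script.
for pkg in ("torch", "torchvision", "torchaudio"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")
```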
To test the model, run (see run.sh):

CUDA_VISIBLE_DEVICES=0 python test_demo.py --data_dir [path to your data dir] --save_dir [path to your save dir] --model_id 23
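For reference, the command-line surface implied by the command above can be sketched with argparse. Only the flag names come from this README; the help strings, types, and default are assumptions, not the repository's actual test_demo.py.

```python
import argparse

# Hypothetical sketch of test_demo.py's CLI (flag names from the README;
# everything else here is an assumption for illustration).
def build_parser():
    parser = argparse.ArgumentParser(description="DSCLoRA test demo (sketch)")
    parser.add_argument("--data_dir", type=str, required=True,
                        help="directory containing the test images")
    parser.add_argument("--save_dir", type=str, required=True,
                        help="directory where outputs are written")
    parser.add_argument("--model_id", type=int, default=23,
                        help="identifier selecting the model to evaluate")
    return parser

# Example invocation with placeholder paths:
args = build_parser().parse_args(
    ["--data_dir", "testsets/LR", "--save_dir", "results", "--model_id", "23"]
)
print(args.model_id)  # → 23
```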
Be sure to set --data_dir and --save_dir to valid paths.

If our work is useful to you, please use the following BibTeX for citation.
@inproceedings{Chai2025DistillationSupervisedCL,
title={Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution},
author={Xinning Chai and Yao Zhang and Yuxuan Zhang and Zhengxue Cheng and Yingsheng Qin and Yucai Yang and Li Song},
year={2025},
url={https://api.semanticscholar.org/CorpusID:277787382}
}
This code repository is released under the MIT License.