Mixture-of-LoRAs: An Efficient Multitask Tuning for Large Language Models
https://arxiv.org/pdf/2403.03432
We evenly sample about 10k training examples and 2k validation examples from each dataset.
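As a rough illustration of this per-dataset subsampling, the sketch below (our own, not from the paper) uses the Hugging Face `datasets` library; the dataset name, split name, and exact sizes are assumptions.

```python
# Minimal sketch (assumed): evenly subsample a dataset to ~10k train / ~2k validation examples.
from datasets import load_dataset, DatasetDict

def subsample(dataset_name: str, train_size: int = 10_000, val_size: int = 2_000, seed: int = 42) -> DatasetDict:
    # Shuffle first so the subsample is a uniform random draw from the full split.
    # Assumes the dataset has a "train" split with at least train_size + val_size rows.
    ds = load_dataset(dataset_name, split="train").shuffle(seed=seed)
    train = ds.select(range(train_size))
    val = ds.select(range(train_size, train_size + val_size))
    return DatasetDict({"train": train, "validation": val})

# Hypothetical usage; the dataset identifier is a placeholder.
splits = subsample("laion/OIG")
```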
From laion/OIG, only the following subsets were taken: