RYS-XLarge

After testing several smaller models (Llamas and smaller Qwen2s), I set up the config for Qwen2-72B and let it sweep. Each $(i, j)$ configuration took a few minutes: load the re-layered model, run the math probe, run the EQ probe, record the scores, move on. Days of continuous GPU time on the 4090s. But far less compute than a fine-tune! In fact, I didn't even have the hardware for a LoRA fine-tune with just 48 GB of VRAM.
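The sweep loop itself is straightforward; below is a minimal sketch of that load-probe-record cycle. The function names (`load_relayered_model`, `run_math_probe`, `run_eq_probe`) are hypothetical stand-ins for the actual harness, which the post does not show:

```python
import itertools

# Hypothetical stand-ins for the real harness. In practice these would
# build the re-layered checkpoint for the (i, j) range and score it.
def load_relayered_model(i, j):
    # Placeholder: represent the model by its duplicated layer range.
    return {"dup_start": i, "dup_end": j}

def run_math_probe(model):
    return 0.0  # placeholder score

def run_eq_probe(model):
    return 0.0  # placeholder score

def sweep(i_values, j_values):
    """Evaluate every valid (i, j) configuration and record both scores."""
    results = []
    for i, j in itertools.product(i_values, j_values):
        if j <= i:
            continue  # skip degenerate layer ranges
        model = load_relayered_model(i, j)
        results.append({
            "i": i,
            "j": j,
            "math": run_math_probe(model),
            "eq": run_eq_probe(model),
        })
    return results

if __name__ == "__main__":
    # Example: coarse grid over an 80-layer model in steps of 8.
    scores = sweep(range(0, 80, 8), range(0, 80, 8))
    print(f"{len(scores)} configurations evaluated")
```

Since each configuration is independent, the loop checkpoints naturally: if a run dies, only the current $(i, j)$ pair is lost.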
Author: Nikita Khromin (night-shift duty editor)
Dmitry Voronin