
Indeed, a LoRA finetune of Llama 3.1 8B works on a single 24 GB GPU and takes anywhere from a few hours to a few days, depending on the dataset size.
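For context, a minimal sketch of what such a run can look like with Hugging Face transformers and peft; the model id, LoRA hyperparameters, and dataset file below are illustrative assumptions, not details from the comment above:

    # Minimal single-GPU LoRA fine-tuning sketch (illustrative assumptions:
    # model id, hyperparameters, and the train.txt dataset are not from the comment).
    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "meta-llama/Llama-3.1-8B"   # gated repo; requires access approval
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token

    # bf16 weights (~16 GB) plus LoRA adapters and activations fit in 24 GB
    # with gradient checkpointing and a small per-device batch size.
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.bfloat16, device_map="auto"
    )
    model.enable_input_require_grads()       # needed for grad checkpointing with frozen base weights
    model.gradient_checkpointing_enable()

    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
        task_type="CAUSAL_LM",
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()       # typically well under 1% of total params

    # Hypothetical dataset: any plain-text corpus tokenized to fixed-length blocks.
    dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="llama31-8b-lora",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=16,
            num_train_epochs=1,
            learning_rate=2e-4,
            bf16=True,
            logging_steps=10,
        ),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("llama31-8b-lora")  # saves only the small adapter weights

Wall-clock time scales roughly with the number of training tokens and epochs, which is why the same setup can run for a few hours on a small instruction set or a few days on a larger corpus.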

