Everyone is fine-tuning constantly though. Training an entire model with more than a few billion parameters is on pretty much nobody's personal radar; only a handful of well-funded groups use PyTorch for that. The masses are still using PyTorch, just on small training jobs.
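For a sense of scale, the kind of "small training job" I mean is closer to this than to pretraining. A minimal sketch, assuming Hugging Face transformers on top of PyTorch; the model name, toy data, and hyperparameters are placeholders:

```python
# Minimal sketch of a "small training job": fine-tuning a tiny causal LM
# with plain PyTorch + Hugging Face transformers. The model name, data,
# and hyperparameters are placeholders, not recommendations.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for whatever small model you actually tune
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Toy "data in hand": a handful of domain examples.
texts = [
    "Ticket: printer offline. Resolution: restart the spooler service.",
    "Ticket: VPN drops hourly. Resolution: update the client to 5.2.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = AdamW(model.parameters(), lr=5e-5)
for epoch in range(3):
    optimizer.zero_grad()
    out = model(**batch, labels=labels)  # causal LM loss vs. shifted inputs
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {out.loss.item():.3f}")

model.save_pretrained("finetuned-model")
tokenizer.save_pretrained("finetuned-model")
```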
Fine-tuning is great for known, concrete use cases where you have the data in hand already, but how much of the industry does that actually cover? Managers have hated those use cases since the beginning of the deep learning era — huge upfront cost for data collection, high latency cycles for training and validation, slow reaction speed to new requirements and conditions.
That's wrong. Llama.cpp / Candle don't bring anything to the table, design-wise, that PyTorch cannot do. What they offer is a smaller deployment footprint.
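To make the footprint point concrete: you ship a quantized GGUF file plus the llama.cpp runtime, with no full CUDA/PyTorch stack. A minimal sketch, assuming the llama-cpp-python binding and a locally downloaded GGUF model (the path is a placeholder):

```python
# Minimal sketch of llama.cpp inference via the llama-cpp-python binding.
# The model path is a placeholder; any quantized GGUF file works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,     # context window
    n_threads=4,    # CPU threads; no GPU or PyTorch install required
)

out = llm(
    "Q: What file format does llama.cpp load?\nA:",
    max_tokens=64,
    stop=["\n"],
)
print(out["choices"][0]["text"])
```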
What's modern about LLMs is the training infrastructure and the single-coordinator pattern, which PyTorch has only just started to address and which is still inferior to many internal implementations: https://pytorch.org/blog/integration-idea-monarch/
PyTorch is still pretty dominant in cloud hosting; I’m not aware of anyone not using it (usually by way of vLLM or similar; sketch below). It’s also completely dominant for training, where I’m not aware of anyone using anything else.
It’s not dominant for self-hosting, where llama.cpp wins, but there’s also not that much self-hosting going on (at least compared with the number of requests that hosted models are serving).
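To be concrete about the hosted side, "by way of vLLM" means the PyTorch-backed vLLM engine, which in its simplest offline form looks like this (a minimal sketch; the model name and sampling settings are placeholders):

```python
# Minimal sketch of PyTorch-backed serving via vLLM's offline API.
# The model name and sampling parameters are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small placeholder model
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(
    ["Summarize why PyTorch dominates hosted inference:"],
    params,
)
for out in outputs:
    print(out.outputs[0].text)
```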
Wait, are LLMs not built with PyTorch?