
> toward LLM-based AI, moving away from more traditional PyTorch use cases

Wait, are LLMs not built with PyTorch?



GP is likely saying that “building with AI” these days is mostly prompting pretrained models rather than training your own (using PyTorch).


Everyone is fine-tuning constantly though. Training an entire model in excess of a few billion parameters is pretty much on nobody's personal radar; you have a handful of well-funded groups using PyTorch to do that. The masses are still using PyTorch, just on small training jobs.
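As a rough illustration of what those "small training jobs" look like, here is a minimal sketch: freezing an imagined pretrained backbone and fine-tuning only a small linear head in PyTorch. The feature dimensions, data, and hyperparameters are all placeholders, not anything from the thread.

```python
# Sketch of a small fine-tuning job in PyTorch: train only a tiny
# classification head on top of (pretend) pretrained features.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for pretrained-backbone outputs: 100 samples, 16-dim features.
features = torch.randn(100, 16)
labels = torch.randint(0, 2, (100,))

# The only part we train: a small classification head.
head = nn.Linear(16, 2)
opt = torch.optim.AdamW(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

losses = []
for step in range(50):
    opt.zero_grad()
    loss = loss_fn(head(features), labels)
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

This is the whole point of the "small jobs" claim: a loop like this runs on a laptop, while pretraining a multi-billion-parameter model does not.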

Building AI, and building with AI.


Fine-tuning is great for known, concrete use cases where you have the data in hand already, but how much of the industry does that actually cover? Managers have hated those use cases since the beginning of the deep learning era — huge upfront cost for data collection, high latency cycles for training and validation, slow reaction speed to new requirements and conditions.


llama.cpp and Candle are a lot more modern for these things than PyTorch/libtorch, though libtorch is still the de-facto standard.


That's wrong. llama.cpp / Candle don't bring anything to the table that PyTorch can't do (design-wise). What they offer is a smaller deployment footprint.

What's modern about LLMs is the training infrastructure and the single-coordinator pattern, which PyTorch has only just started on and which is still inferior to many internal implementations: https://pytorch.org/blog/integration-idea-monarch/


PyTorch is still pretty dominant in cloud hosting. I’m not aware of anyone not using it (usually by way of vLLM or similar). It’s also completely dominant for training; I’m not aware of anyone using anything else.

It’s not dominant in self-hosted deployments, where llama.cpp wins, but there’s also not really that much self-hosting going on (at least compared with the volume of requests that hosted models are serving).



