
And it's not entitled to claim that "Most people have extra hardware lying around at home"? Your story doesn't sound plausible at all.


Most people who would want to be running machine learning models probably have some hardware at home that can handle a slow task for playing around and determining if it is worthwhile to pay out for something more performant.

This is undoubtedly entitled, but thinking to yourself, "huh, I think it's time to try out some of this machine learning stuff," is a pretty inherently entitled thing to do.


This project is literally aiming to run on devices like old phones.

I don't think having an old phone is particularly entitled.

I think casually slapping down $100 on a whim to play with an API... probably, yeah.

/shrug


According to this tweet, Llama 3 costs about $0.20 per million tokens running on an M2.

https://x.com/awnihannun/status/1786069640948719956

In comparison, GPT-3.5 Turbo costs $0.50 per million tokens.

Do you think an old iPhone will be less than half as efficient as an M2?


FWIW, it depends on the cost of power. Where I live, electricity costs less than half the stated average.
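
For a rough sense of where such per-token figures come from, here is a back-of-the-envelope sketch in Python. The wattage, throughput, and electricity price below are placeholder assumptions, not numbers from the linked tweet, so treat the output as illustrative only.

    # Back-of-the-envelope estimate of the electricity cost of generating
    # one million tokens locally. All inputs are placeholder assumptions.

    def cost_per_million_tokens(watts: float, tokens_per_sec: float,
                                price_per_kwh: float) -> float:
        """Electricity cost (same currency as price_per_kwh) per 1M tokens."""
        seconds = 1_000_000 / tokens_per_sec      # time to generate 1M tokens
        kwh = watts * seconds / 3_600_000         # watt-seconds -> kWh
        return kwh * price_per_kwh

    # Hypothetical figures: a laptop-class chip drawing ~40 W at ~15 tokens/s,
    # with electricity at roughly the US residential average of ~$0.17/kWh.
    print(cost_per_million_tokens(40, 15, 0.17))   # ~$0.13 per million tokens

    # Halving the electricity price halves the cost, which is the point above.
    print(cost_per_million_tokens(40, 15, 0.085))  # ~$0.06 per million tokens
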



