
Is she going to pay enough to cover the multitrillion-dollar cost of running the current AI landscape?




Yeah, she is, because when reality sets in, these models will probably cost about as much per month as a cellphone or internet plan. And training is the main money sink, whereas inference is cheap.

500,000,000 people paying $80/mo roughly pays back a $2T investment in five years.
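Rough arithmetic behind that claim, using nothing beyond the figures in the comment itself:

    subscribers = 500_000_000
    price_per_month = 80                       # USD
    annual_revenue = subscribers * price_per_month * 12
    print(f"{annual_revenue:,}")               # 480,000,000,000 -> $480B/yr
    print(2_000_000_000_000 / annual_revenue)  # ~4.2 years to cover $2T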

I cannot believe that on a tech forum I need to explain the "get them hooked on the product, then jack up the price" business model that probably keeps 40% of the people here employed.

Right now they are (very successfully) getting everyone dependent on LLMs. They will pull the rug, and people will pay to get it back. And none of the labs care if 2% of people use local or Chinese models.


I personally don't know a single person who would pay $80 for an LLM. Most people I know pay nothing, or got a one-year subscription bundled with a phone purchase or something similar.

Also, everyone here conveniently forgets the huge upfront hardware and datacenter investment that MS has already made. That cost alone will never be recouped at current prices.

If you can't even run the thing close to profitably, how will you ever actually profit?

But don't worry guys, your robotaxi will recoup your Tesla purchase within a year while you sleep.


> And training is the main money sink, whereas inference is cheap.

False. Training happens once per model, but inference happens again and again, every time users use the product. Inference is the main money sink.

"according to a report from Google, inference now accounts for nearly 60% of total energy use in their AI workloads. Meta revealed something even more striking: within their AI infrastructure, power is distributed in a 10:20:70 ratio among experimentation, training, and inference respectively, with inference taking the lion’s share."

https://blogs.dal.ca/openthink/the-hidden-cost-of-ai-convers...
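Putting that quoted 10:20:70 split into numbers, taken at face value (the units are whatever Meta measures power in):

    experimentation, training, inference = 10, 20, 70
    print(inference / training)                      # 3.5x the training spend
    print(inference / (experimentation + training))  # ~2.3x everything else combined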


They get paid for inference; those tokens might as well be monetary tokens.

This is exactly the problem.

Companies are currently being sold on the idea that they can replace employees with little agents that cost $20 to $200 a month.

But then they realize that the $200 lasts about 3.5 hours on day 1 of the month and the rest gets charged by the token, which ends up costing as much as or more than the employee did, but with a nice guaranteed quota of non-determinism and failures included.
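A back-of-envelope sketch of how fast a flat plan evaporates under an agent workload; the per-token price and burn rate below are illustrative assumptions, not figures from this thread:

    price_per_million_tokens = 15.0        # USD, assumed blended rate
    tokens_per_hour = 4_000_000            # assumed heavy agentic workload
    monthly_budget = 200.0                 # USD, the plan from the comment above
    cost_per_hour = tokens_per_hour / 1_000_000 * price_per_million_tokens
    print(monthly_budget / cost_per_hour)  # ~3.3 hours of runway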


The problem is when $20 or $40 a month is expected to cover inference that costs $50 or $80 a month to provide. Electricity is not going to get cheaper.

> 500,000,000 people paying $80/mo

Simply not going to happen


> 500,000,000 people paying $80/mo

Or better yet, you just need 100 people paying $400 million a month each to get the same amount!

> "Get them hooked on the product, then jack up the price"

That only works if the product is actually good. The average person isn't going to pay EIGHTY dollars a month to generate recipes or whatever; that's just delusional.


For $960 a year, you could probably buy a recipe-and-ingredient-tracker app outright.

I could sell my car and get my groceries delivered and save a ton of money.

Thankfully, like LLMs, cars have wide utility and use cases. I don't use my car solely for driving to the grocery store.




