
The author misses the forest for the trees. He accurately describes the current state of the tools he's using, but neither acknowledges nor extrapolates the next derivative, i.e. the rate of improvement of these tools.

That being said, everything is overvalued and a lot of this is ridiculous.



> He's accurately articulating the current state of tools he's using but isn't acknowledging or extrapolating the next derivative

Extrapolation would reasonably show that they're approaching an asymptote: plot cost against improvement on a chart and you'll see the two are not proportional.
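To make the "not proportional" point concrete, here is a minimal sketch with purely hypothetical, illustrative numbers (not real benchmark or cost data): if each model generation costs roughly 10x the previous one but the benchmark gain shrinks each time, the marginal improvement per order of magnitude of spend falls, which is what an asymptote looks like on that chart.

```python
# Hypothetical, illustrative figures only -- not real training costs or scores.
costs = [1, 10, 100, 1000]         # training cost (arbitrary units), ~10x per generation
scores = [60.0, 75.0, 83.0, 87.0]  # benchmark score, gains shrinking each generation

# Marginal improvement bought by each successive 10x increase in spend.
gains = [b - a for a, b in zip(scores, scores[1:])]
print(gains)  # [15.0, 8.0, 4.0]

# If cost and improvement were proportional, each 10x of spend would buy a
# constant gain; here each 10x buys roughly half the previous gain, i.e. the
# curve flattens toward a ceiling rather than scaling with cost.
assert all(later < earlier for earlier, later in zip(gains, gains[1:]))
```

Whether the real numbers follow this exact shape is an empirical question, but this is the kind of diminishing-returns pattern the asymptote argument assumes.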


I think you are the one missing the forest for the trees.

- The energy-efficiency and cost improvements of LLMs have plateaued of late. https://arxiv.org/html/2507.11417v1

- The improvements from each subsequent model have also plateaued, with noticeable regressions in some cases.

- The biggest players are so wildly unprofitable that they are already changing their pricing plans, or squeezing their current user base by raising rates:

https://news.ycombinator.com/item?id=44598254

https://www.wheresyoured.at/anthropic-is-bleeding-out/

- And, as it turns out, one study found experienced developers were 19% less productive when using LLMs: https://www.theregister.com/2025/07/11/ai_code_tools_slow_do...

> i.e. the rate of improvement of these tools.

Their rate of improvement has stopped keeping pace with the rate at which their costs are growing. It's simple arithmetic: efficiency gains aren't matching cost increases, the companies are extremely unprofitable, and all of that data points to a bubble.

It's one of the most obvious bubbles I have ever seen, propped up only by vibes, X posts, and Sama's promise that AGI is just around the corner, just inject a couple trillion more, trust me bro. All that for a fancy autocomplete.



