camdenreslink's comments | Hacker News

It does seem like things are moving very quickly, and at an even deeper level than what you're describing. Less than a year ago, LangChain, model fine-tuning, and RAG were the cutting edge and the “thing to do”.

Now, with models improving, context windows getting bigger, and commercial offerings maturing, I hardly hear about them.


This is very interesting to me. I’ve been working on a side project with interactive Python tutorials in the browser, and I’ve been somewhat discouraged recently by how LLMs have been changing the landscape.

It seems SEO for this sort of thing is dead, so another funnel/channel is needed. Also, CS enrollment seems to be down this past fall for the first time in a while (based on the CERP pulse survey).

But maybe there is still a market for that sort of educational content.


Are we sure LLM agents aren't the cause of these increasing outages?

Is the time it takes for an engineer to implement PRs the bottleneck in generating revenue for a software product?

In my experience it takes humans to know what to build to generate revenue, and most of the time spent building that product is not spent coding at all. Coding is like the last step. Spending $1k/day in tokens only makes sense if you already know exactly what to build to generate that revenue. Otherwise, what exactly are you building? Is the LLM also doing the job of the business side of the house and deciding what to build?


I would say I'm like 1.2x more productive, and I think I'm more of the typical case (of course I read all of the code the LLM produces, so maybe that's where I've gone wrong).

Well-maintained, popular frameworks have GitHub issues that frequently get resolved in newly patched versions of the framework. Sometimes bugs get fixed that you haven't even run into yet, so everybody benefits.

Will your bespoke LLM code have that? Every issue will actually be an issue in production experienced by your customers, one that has to be identified (better have good logging and instrumentation) and fixed in your codebase.
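
To make that concrete, here's a minimal sketch of what "good logging and instrumentation" can look like using just the standard library; the handle_order function and its field names are made up for illustration:

    import logging
    import uuid

    # Every log line carries a correlation id, so one customer report
    # can be traced to one request's log trail instead of grepping blind.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s request_id=%(request_id)s %(message)s",
    )
    logger = logging.getLogger("orders")

    def handle_order(payload):
        ctx = {"request_id": str(uuid.uuid4())}
        logger.info("received order, keys=%s", sorted(payload), extra=ctx)
        try:
            total = sum(item["qty"] * item["price"] for item in payload["items"])
            logger.info("computed total=%.2f", total, extra=ctx)
            return total
        except (KeyError, TypeError):
            # Log enough context to reproduce the failure without
            # digging through production data by hand.
            logger.exception("malformed order payload: %r", payload, extra=ctx)
            raise

    handle_order({"items": [{"qty": 2, "price": 9.99}]})

A framework gives you a community doing this triage for you; with bespoke code, the correlation ids and exception context are all you'll have to go on.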


If you handed a human an image and said please give me back this image totally unmodified, I bet the human could do it.

Not if you were asking them to redraw the image as they saw it. That's what's happening in this particular case, only with an LLM.

The new Cloudflare products for blocking bots and AI scrapers might be worth a shot if you've put so much work into the content.

Further, some low-effort bots can be quickly handled with CF by blocking specific countries (e.g., Brazil and Russia, for one of my sites).
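
A country block like that can be clicked together as a custom WAF rule in the Cloudflare dashboard, or scripted. Here's a rough sketch against the Rulesets API; the endpoints, payload shape, and the CF_ZONE_ID / CF_API_TOKEN environment variables are assumptions you should double-check against Cloudflare's current docs:

    import os
    import requests

    API = "https://api.cloudflare.com/client/v4"
    zone_id = os.environ["CF_ZONE_ID"]  # hypothetical env vars
    headers = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}

    # Look up the zone's entrypoint ruleset for the custom WAF rules phase.
    entrypoint = requests.get(
        f"{API}/zones/{zone_id}/rulesets/phases/http_request_firewall_custom/entrypoint",
        headers=headers,
        timeout=10,
    ).json()["result"]

    # Add a rule blocking the countries the bot traffic comes from.
    rule = {
        "action": "block",
        "description": "Block low-effort bot traffic by country",
        "expression": '(ip.geoip.country in {"BR" "RU"})',
    }
    requests.post(
        f"{API}/zones/{zone_id}/rulesets/{entrypoint['id']}/rules",
        headers=headers,
        json=rule,
        timeout=10,
    ).raise_for_status()

The same expression can be pasted into a custom WAF rule in the dashboard if you'd rather not script it.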

I work for a publisher that serves the Chinese market as a secondary market. It sucks that we can't do this across the board, since we get hammered by Chinese bots daily. We also have an extremely old codebase (Drupal), which makes blanket caching difficult. At least we're working to migrate from CloudFront to Cloudflare.

This was all possible pre-AI. The reasons that some SaaS companies win have nothing to do with how quickly or cheaply code can be written for the SaaS.

The stock market is trying to predict how people will vote with their dollars in the future. I'm not quite sure people are really replacing enterprise SaaS at large corporations yet. It's more of a projection.
