Hacker News | ZaoLahma's comments

Exactly, we don't, and what's worse is that the "content" is getting to the point where we need _content_ blockers.

I recently got hit by an "article" that promised to tell me which three AAA games would be released with PS Plus soon. A three-point bullet list was all I wanted. Instead I got page after page of word-manure about nothing at all, for reasons I don't even understand. At the end of it I still couldn't tell you which three games the article was supposed to tell me about.

I foresee a bleak future where we will deploy AI as "content blockers" to extract the useful content from the word-manure that is becoming the preferred way of working among internet "authors".


> Instead I got pages after pages of word-manure about nothing at all for reasons I don't even understand.

More writing means more space to shove ads in between every paragraph.


> I recently got hit by an "article"

Exactly how did you "get hit" by an article? Did somebody hack your computer and point your browser to it? Or did somebody ambush you on your walk to work and shove a magazine with the article into your face?

If you seek out content from low quality sources, you get the low quality treatment. The only way for consumers to fight this is by paying for good quality content, which is often possible.

Burger King isn't going to improve the quality of their burgers or service by customers complaining. They'll do something when they see customers going somewhere else.


I think we'll be soon at the point where articles are written by asking AI to extend a three point bullet list to 30 pages, and read by asking AI to summarize articles into a three point bullet list.

This drives me nuts. For years now, a simple "if this, do that" answer has been encoded in an overly elaborate 10-minute YouTube video where at least 9 minutes of it is filler. You know, the kind where you start skimming the comments to see if anyone bothered to summarize it.

AI amplifies the problem by making it easier to produce filler, but the real problem is whatever metrics are behind the monetization. You need users to "engage" with your content for at least x amount of time to earn y amount of money, when earnings should instead be derived directly from how useful the content is, and to how many users.


Yeah. In these cases it's not like anyone is going to spin up their own instance and start competing with you.

Code for government / society-critical systems should really be public unless there are _really_ good reasons for it not to be, where those reasons are never "we're just not very good at what we're doing and we don't want anyone to find out".


Some months back I would have agreed with you without any "but", but it really does help even if it only takes over "typing code".

Once you understand the problem deeply enough to know exactly what to ask for without ambiguity, the AI will produce the code that exactly solves your problem a heck of a lot quicker than you can. And the time you don't spend on figuring out language syntax, you can instead spend on tweaking the code at a higher architectural level. Spend time where you, as a human, are better than the AI.


I've recently worked extensively with "prompt coding", and the model we're using is very good at following such instructions early on. However, after deep reasoning around problems, it tends to focus more on solving the problem at hand than on following established guidelines.

Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."
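One workaround that matches the "remember that thing you're required to do?" trick is to mechanically re-inject the standing rules before every request, so they sit near the end of the context instead of fading behind long reasoning traces. A minimal sketch, where `with_reminder` and the `GUIDELINES` text are hypothetical stand-ins (no particular chat API is assumed):

```python
# Hypothetical sketch: prepend the project's standing rules to every user
# turn so the model sees them fresh, instead of relying on instructions
# given once at the start of a long session.

GUIDELINES = (
    "Reminder of standing rules:\n"
    "- follow the established error-handling conventions\n"
    "- do not introduce new dependencies without asking\n"
)

def with_reminder(history, user_message):
    """Return a new message list with the rules prefixed to the user turn."""
    reminded = f"{GUIDELINES}\n{user_message}"
    return history + [{"role": "user", "content": reminded}]

# Each turn's payload now carries the rules, regardless of how deep the
# conversation has gone.
messages = with_reminder([], "Refactor the parser as discussed.")
```

This doesn't guarantee compliance, but it keeps the rules from being buried by the model's own intermediate output.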


A separate pre-planning step, so the context window doesn’t get too full too early on.

Off the shelf agentic coding tools should be doing this for you.


They do not.

At my company, I use them all the time, with the fancy models and everything. Pre-planning does not solve the problem they're describing.

When Claude is doing a complex task, it will regularly lose track of the rules (in either the .rules stuff or CLAUDE.md) and break conventions.

It follows them most of the time, but not all of the time.


Fully agree on this.

I (deep, deep in embedded systems) have seen this too often: code that is incredibly complex and impossible to reason about because it needs to reach into some data structure multiple times, from different angles, to answer what should be a rather simple question about the next step to take.

Fix that structure, and the code simplifies automagically.


I think it boils down to how companies view LLMs and their engineers.

Some companies will do as you say - have (mostly clueless) engineers feed high level "wishes" to (entirely clueless) LLMs, and hope that everyone kind of gets it. And everyone will kind of get it. And everyone will kind of get it wrong.

Other companies will have their engineers explicitly treat the LLMs as collaborators / pair programmers, not independent developers. As an engineer in such a company, YOU are still the author of the code even if you "prompted" it instead of typing it. You can't just "fix this high level thing for me brah" and get away with it; instead you need to continuously interact with the LLM as you define, and it implements, the detailed desired behaviors. That forces you to know _exactly_ what you want and to ask for _exactly_ what you want without ambiguity, like in any other kind of programming. The difference is that the LLM is a heck of a lot quicker at typing code than you are.


This will be a fun little evolution of botnets - AI agents running (un?)supervised on machines maintained by people who have no idea that they're even there.


Huh ya, how long till a bot with credit card, email, etc access sets up its own open claw bot?


I mean just look at the longer horizon of small capable models being able to run on consumer hardware and being able to bootstrap themselves.

Just imagine a bunch of little gremlins running around the internet outside of human control.


Great. My poorly secured coffee maker was mining bitcoins, then some dumb NFT, then it got filled with darkness bots, then bitcoin miners again, and now it's gonna be shitposting but not even to humans, just to other bots.


This reminds me of the "if you were entirely blind, how would you tell someone that you want something to drink"-gag, where some people start gesturing rather than... just talking.

I bet a not insignificant portion of the population would tell the person to walk.


Yes, there are thousands of videos of these sorts of pranks on TikTok.

Another one. Ask someone how to pronounce "Y, E, S". They say "yes". Then say "add an E to the front of those letters - how do you pronounce that word?" And people start saying things like "E-yes".


Spread the risk and reduce the probability of extinction.

We know for a fact that Earth is doomed, on top of our own continuous efforts to kill ourselves off. No, not recent-climate-change doomed; the evolution of our sun is continuously pushing the habitable zone outwards. We might be able to deal with that particular annoyance by hiding underground when it becomes an emergency in half a billion years or so, but our utopia won't be as utopic anymore.

Eventually, however, the sun will balloon into a red giant, at which point we had better have a plan in place other than staying on this planet.


If we're thinking that far out, we might as well all just lie down and wait for rain, because there's no avoiding the heat death of the universe. Treating the sun dying out like a real concern that we need to address in the next 2, 200, 2,000, or 2,000,000 years is comical. Whatever is around to experience it won't be human as we know it.


This is the correct way. Make it unnecessary to look at and into the clever code until it absolutely is necessary.

The vast majority of those who are affected by what you're doing should be asking themselves why you never seem to be doing anything difficult.

