Hacker News | vanillameow's comments

This LinkedIn translator was a genius move by Kagi, honestly. A lot of people are incredibly tired not only of AI(-adjacent) writing, but overall of people with a stick up their ass thinking they're this generation's Aristotle.

Having their slop written out in plain English really shows you how vain it all is.


They actually did not add LinkedIn specifically. It's an AI translator that accepts anything in the `to` field.

https://translate.kagi.com/?from=en&to=Crypto%20Scammer&text...


So I've seen. It's just that the LinkedIn one is what they advertised. Speaks to the fact that it's probably some slopcoded thing, which I'd usually get mildly upset about, but who can muster the effort in this economy? I think the point still stands though.

You shouldn't pick fights with them; even if they run out of ammo, they'll just use the stick up their ass as a backup weapon.

I'll show myself out of the Citadel.


They should make an extension for it (assuming it really is a small model behind the scenes and is relatively cheap to serve)

Most people don’t read Aristotle. If thousands of wannabe philosophers can paraphrase some of his ideas and make them accessible to the masses, that’s a net plus. If they can stroke their vanity along the way, even more people win.

It’s much better than writing bitter diatribes.


To be fair, he’s a much, much drier read than Plato.

...what is your actual point? I'm pretty sure none of the shit I read on LinkedIn is making "philosophical ideas accessible to the masses", it's churned out 20x regurgitated self-promotional material.

Is this a bot post?


haha. No, actual human here.

My point is that some people find this stuff valuable. You're not the target audience.


Well, their bio is full of LinkedIn speak, so draw your own conclusions...

I just translated that to English via Kagi so that I could understand it.

I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?

"I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle."

"Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one."

etc. I do believe AI currently accelerates businesses, especially in software dev. We work with a contractor who uses Claude Code to reach an incredible development pace for the size of their team, but when we sit down with them in meetings they also understand what's being created, they are able to argue their architectural choices, and they know how to propose business value.

You can't just buy a Claude subscription and have it magically solve your problems. The thing is, as soon as Claude can do this without a business-savvy human in the loop, then a) everyone can do it, so you won't actually have any value to propose, and b) once the AI can run businesses without humans in the loop, you can bet your ass they will not keep giving that ability away for $20 out of the goodness of their hearts.

In summary, AI, if used to accelerate businesses, _CAN_ be good. Buying it as a magic bullet to bring you out of poverty is probably a worse choice than just buying a lottery ticket.


That really reminds me of the "mashup" bubble in the late 2000s, when all services started to provide APIs and people were calling themselves "entrepreneurs" for combining 2 sources of data, like putting Craigslist ads on a map.

That didn't last long!


Are you sure? We have many SaaS and end products that are just other SaaS stitched together. We have a very vocal part of the HN community always reminding you to buy a SaaS solution and connect it to your business instead of maintaining an in-house bespoke solution.

Isn't almost everyone doing that? Deploy Docker to AWS, connecting to Slack, OpenAI and Anthropic to do X, Y, Z.

That's like saying my job is to transfer money from my employer to the homeowner. Technically true, but something else happens in the process.

I think that there's a "time window" right now, before most people realize the scale of AI. Those who jump in first can monetize it. It certainly won't last forever, but you can earn some money while it lasts. And you will have years of AI-relevant experience afterwards.

Not incorrect, but it honestly borders on grifting a lot of the time imo. At least it's a spectrum. If you are supercharging your existing technical and domain knowledge, and actually caring about the security of your customers while doing so, fair play. That is real entrepreneurship.

Then there's people who are "well intentioned", I guess, but lack the technical knowledge. A friend of a friend with no technical background is selling websites to companies that he writes with Claude. They look shiny, everyone's happy in the short run, but I don't doubt issues will come up down the line that someone will have to be responsible for. I'd personally feel like I was ripping people off doing this, but I think also Dunning-Kruger prevents you from knowing any better if you are the type of person doing this.

Then there's the whole B2B SaaS gang that are basically just producing vaporware and telling other people how to produce more vaporware. This is no different from crypto, NFTs etc. before it really. Just people trying to hustle others.

And then there's the whole clawdbot gang, probably burning more in tokens every day than normal people use in a month so they can sort 18 e-mails.

So yeah I mean you're right, there certainly is a subset of people who are using this ethically (as ethically as you can use LLMs but that's another story) to make some money on the side. Certainly not the majority though I'd say.


If the technology becomes cheaper, this creates more market pressure by changing the cost base of certain products. For example, when the printing press was invented, books went from a luxury to something expensive but more affordable. In software markets that means we will have more software, more competition, and in free market segments profits will evaporate.

The pseudo "entrepreneurs" who think they can outsmart the market by working less are just naive. In a free market economy optimization is brutal, and a freelance developer will sell the same "product" cheaper, because he has the same technology available to him.

So the only way to get the gains from these AI technologies is to have something that can't be easily copied like market knowledge, data access or sweetheart deals with big companies that can pay more because their profits support the higher spend.

Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear on vacation. But the margins will go waaay down. $25 for a set of forms and a database is not gonna cut it anymore.


> Also, services-based SaaS, especially B2B, will not die, because a tyre shop won't have the time to write/debug/host its own solution and will not want to depend on a single contractor who can disappear on vacation.

True in the current state of LLMs, possibly not true forever if someone finds the magic bullet that turns the one-shotting (reliable) software dream that companies like Anthropic and Perplexity currently peddle into reality. Seems far-fetched ATM but the gains since GPT-2 have been very real.

We're quite a ways away from this though, even with Opus 4.6 and the like. And even further from it being part of Claude Code rather than some proprietary $1000/mo. closed-source solution.

As you say though, _if_ such a technology were to exist, it's Anthropic that holds all the cards, not random entrepreneur #25721 who is asking the Anthropic API the same thing that the actual customer could just be asking directly. At that point you're an undesirable middleman, not a business.


> I can't help but feel a little bit of ... pity for a lot of the people who call themselves "entrepreneurs" in this survey?

Fake it till you make it mentality that degenerated completely once we got the internet. It used to be "crypto will make you rich, buy my coin/course", now it's "AI will make you rich, buy my tool/course". The same type of people will get fleeced.

These are the people getting all the attention: https://www.youtube.com/watch?v=NwaUMBQ3Wgg


That's what really gets me. These folks who are "so rich from said technology" always need you to buy their course for $5,000… Like, buddy, if you were bringing in so much money you probably wouldn't be pestering people to take your "course", and you certainly aren't going to give away info that only has value because it's obscure or hard to do… They are also almost ALWAYS self-proclaimed experts. Overnight everyone became an AI expert. Before ChatGPT they probably had zero background in it; AI was a large field and machine learning is one small part of it.

It's funny how so much of market demand just ends up boiling down to basic needs. Everyone's always trying to hustle so they don't have to worry about financial instability.

The quote about being temporarily embarrassed millionaires comes to mind….


A great AI future is robots doing stuff so we can be free. But none of the major isms, i.e. capitalism or communism, are geared up to provide that. Maybe it's hackable with a mix of UBI and capitalism.

Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts but his proofreading prompt should pretty much be the line on what constitutes acceptable AI use for writing.

https://simonwillison.net/guides/agentic-engineering-pattern...

Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.

"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."

Give me a fucking break


Your reaction is worse than the article. There's no way you could know for sure what their writing process was, but that doesn't stop you from making overconfident claims.

I’m sorry but no attempt was made here. It contains all the red flags in the first few paragraphs.

Sorry, but it seems like most people don't care, or even like AI writing more:

https://x.com/kevinroose/status/2031397522590282212


That's the problem with AI writing in a nutshell. In a blind, relatively short comparison (similar to what's used for RLHF), AI writing has a florid, punchy quality that intuitively feels like high-quality writing.

But then after you read the exact same structure a dozen times a day on the web, it becomes like nails on a chalkboard. It's a combination of "too much of a good thing" with little variation throughout a long piece of prose, and basic pattern recognition of AI output, as models coalesce to a consistent style that can be spotted as if 1-3 human ghostwriters wrote 1/4 of the content on the web.


One thing I've learned recently is a lot of guys (like here) have been out here reading each word of a given company's tech blog, closely parsing each sentence construction... I really can't imagine even being conscious of the prose for something like this. A corporate blog, to me, has some base level of banality to it. It's like reading a cereal box and getting angry at the lack of nuance.

Like who cares? Is there really some nostalgia for a time before this? When reading some press release from a cybersecurity company was akin to Joyce or Nabokov or whatever? (Maybe Hemingway...)

We really gotta be picking our battles here imo, and this doesn't feel like a high priority target. Let companies be the weird inhuman things that they are.

Read a novel! They are great, I promise. Then when you read other stuff, maybe you won't feel so angry?


I've picked up reading again over the last year or so! Maybe, if anything, that is why I feel so angry. Writing and reading are how we communicate thoughts and ideas between people, humans, at scale. A grand fantasy novel evokes a thirst for adventure, a romance evokes a yearning for true love.

What makes me angry is using the feelings we associate with this process to disingenuously pretend that there is a human who wants to tell me something, just for it to be generated drivel.

Don't get me wrong, I don't mind reading AI content, but it should read like this: "Our AI agent 'hacked' (found unexposed API endpoints) x or y company, we asked it to summarize and here's what it said:" - now I know I am about to read generated content, and I can decide myself if I want to engage with it or not. Do you ever notice how nobody that uses AI writing does this? If using AI to produce creative media, including art, music, videos, and writing, is so innocuous, why do all the "AI creatives" so desperately want to hide it from you? Because they don't want you to know that it's generated. Their literal goal is to pretend to have a deeper understanding, a better outlook, on a given topic, than they actually have. I think it is sad for them to feel the need to do this, and sad for me to have to use my limited lifespan discerning it. That is why I am angry.

Anyway, there's no need to "closely parse each sentence construction" at all to identify this post is fully AI generated. It's about as clear as they come. If you have trouble identifying that, well, in the short term you're probably at a disadvantage. In the long term, if AI does ever become able to fully mimic human expression, it won't matter anyway, I guess.

ps: FWIW, I agree with you that of all places, some random AI company with an AI generated website reporting on their AI pentesting with AI is the least surprising thing - the entire company is slop, and it's very easy to see that. My initial post was more of a projection at the dozens of posts I've read from personal blogs in recent weeks where I had to carefully decide if someone's writing that they publish under their own name actually contains original thought or not.


Ah well I guess you are on the right side of this either way! No need to even explain. It seems that people really really do care, and it's wrong to say maybe it's ok that they don't have to in this case. I guess I get it; I am generally more wrong than right anyway, and yes, at the very least, I am clearly in some way sub-literate and uncritical as a reader, who can't tell the difference anyway. Not really the guy to be giving his opinion here. I will go find some slop to enjoy while the adults figure out the important stuff! Thanks for teaching me the lesson here.

Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.

> If the primary appeal of your VR universe is that your avatar can be an anthropomorphic banana, an anime girl, a furry, a giant penis with legs - that's never going to become a 300-million-user platform.

I mean the inherent appeal of VR is self-expression; being who you want to be, seeing the worlds you want to see. You won't get 300 million users with corporate slop either. That maybe works once (if ever) VR headsets become an interface suitable for white-collar work, which they currently very much aren't, and then it wouldn't be the next Facebook - it'd be the next Microsoft Teams. Which is not really in line with Meta's other offerings, though they certainly wouldn't say no to it I guess. But I think a 500-user survey is all it would take to get a very clear signal that current VR is NOT about to replace Teams.


>Is there realistically any way to CAPTCHA anymore?

Talking to someone in real life.

Of course that's taking the piss, but realistically that is sort of the answer: having access to a side channel through which you can identify that a person is in fact a person, coupled with the trust that they won't try to deceive you into wasting your attention. Honestly, with how much LLM drivel is being generated and the sort-of renaissance of the personal website, blog, and RSS, I wouldn't be surprised if some kind of consensus-trust-based network for human authors were established.


Considering the topic of this article I'm giving you the benefit of the doubt, but to be honest - if you're not writing your articles with LLMs, you should strongly consider changing your writing style. I peeked at some of your other articles, like the one about half your readers being bots, and it reads straight out of ChatGPT. I trust, given your framing in this article, that you know that's not a good thing.

Yep... reads exactly like AI slop. It became boring after the first paragraph or so, so I didn't read all of it.

This is a project where I actually kind of like the idea, but the implementation looks incredibly soulless.


It really is slop, isn't it?


I would agree if not for the fact that they just let a $200M contract slip through over it. You could argue it's "safety theater" in itself but that seems like a risky gambit especially with this administration. I definitely trust Anthropic more than OpenAI. In fact I'd go as far as to say it's probably pretty imperative that Anthropic stays a frontrunner in this race and doesn't leave the field exclusively to OAI (and maybe Google which is just as bad). That doesn't mean I'm exactly happy with Anthropic's comments like "mass surveillance bad but only for the US". But Anthropic at least regularly asks questions about the direction of AI development. I haven't seen the other frontier model companies do any such thing.


What does $200m mean for someone who thinks a trillion in revenue is likely among AI companies in the next 5 years? Which is a real quote.


Regardless, I think if you are thinking purely from a ruthless business standpoint then standing up to the DoD was an incredibly ill-advised move. It's basically free financial and technological backing at the cost of ethics. Additionally, basically everyone with functioning eyeballs knows that the current US administration is incredibly vindictive, reckless and short-tempered. I would agree that in a tamer administration, you might do something like this as a publicity stunt. In the Trump administration, and while the AI arms race is still in full force, it feels like there has to be at least somewhat genuine sentiment behind it, otherwise it just doesn't really make sense. Like what do they accomplish from this? You'll get some users who will view you more favourably for it, but it probably won't make up for the lost revenue, and no matter how many people like you, if you are first to AGI in this industry you win. The prior sentiment basically won't matter at that point. In the most critical interpretation I guess you could say that if the bubble pops it might be more of a matter of sentiment. I don't know; in my mind the math just doesn't work for it to be a business move.


>Regardless, I think if you are thinking purely from a ruthless business standpoint then standing up to the DoD was an incredibly ill-advised move.

It wasn't: there's been non-stop talk here for days about how Anthropic is a step above, better than the rest, the "only good AI" company. Enough already. It is a marketing tactic they are taking in opposition to OpenAI.


And in reality most of what does need a heartbeat loop can also easily be automated by just asking Claude to set up a cronjob. I think genuinely the most "novel" thing about something like OpenClaw is just that it "feels" more like a "real entity", like a partner rather than a chatbot, and for some reason that resonates with people. Whether that's by itself kind of a huge red flag or kind of a nothingburger, everyone has to decide for themselves.
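To be concrete, here's a minimal sketch of what that looks like (purely illustrative - the script name and schedule are made up, not anything OpenClaw or Anthropic actually ship):

    # crontab entry: run a "heartbeat" script every 15 minutes instead of keeping an agent process alive
    */15 * * * * /usr/bin/python3 /home/me/triage_inbox.py >> /home/me/heartbeat.log 2>&1

One line in crontab covers most of the "always-on assistant" use case; the script only has to call the model's API when it actually has something to do.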

