
This is a bunch of hand-wringing. If the AI reply is what I would have said anyway, awesome! Finding thumbnails is hard, but if AI comes up with a good one? Fantastic!

None of your interactions except the ones with your friends and family are authentic, and they never were. It's a bit silly to get annoyed over OnlyFans models or some mega YouTube celebrity using AI because you're losing out on "authentic human interaction" - you only mattered to them insofar as you provided them with money to begin with.



I think you're missing the part where they fake it being real.

Just like when you get a formal letter with the CEO's signature printed on it - you know that guy is not going to sign millions of copies.

But I feel offended that they think a badly printed, pixelated signature will fool me or make it somehow better.

I don't miss interacting with the CEO, but I know someone put in effort to fool me.

Of all the Christmas bonuses and gifts from companies over the years, I remember only one: a manager of 100 or so people in a business unit actually wrote 100 cards, one with each of our names.


Even US presidents don't sign all laws by hand. They've been using machines for a long time.

https://en.m.wikipedia.org/wiki/Autopen


Politicians should not only be required to sign the laws they back by hand but to fully recite them without error - anything less means the law isn't important enough to make the books.


> Just like when you get a formal letter with the CEO's signature printed on it - you know that guy is not going to sign millions of copies.

No, but I assume it would still be considered a valid signature in case of some legal dispute. The CEO may not have signed the document by hand (nor even read it), but the company placing the likeness of the CEO's signature on the document signals that the CEO accepts responsibility for it. The CEO is still "in the loop" anyway; they had to personally approve the use of their signature like this.

Which is to say, I consider such "fake signatures" perfectly OK. I just don't consider them as a sign of care or personal interest.

Now, marketing communication that does this is another story. It's bullshit all the way through, signature included.

> I remember only one: a manager of 100 or so people in a business unit actually wrote 100 cards, one with each of our names.

Which reminds me - even actually hand-written letters can be fake. Have you ever found a hand-written letter inviting you to a Bible study?

I grew up in a religion that's big on preaching; mostly door-to-door, but when that's for some reason impossible (e.g. time or health constraints), people would write letters instead. Some people were real "high performers" here, in the sense that they would sit down over a couple of evenings and hand-write a couple dozen letters, to be distributed around some neighborhood instead of going through it personally. I used to be impressed by the dedication, but it eventually dawned on me - it's just exploiting the faux personal connection. They're selling something (which they may feel is genuinely worth it), and hand-written letters are just a sales tactic. They're hoping you pick one up and think about how much effort someone put into a personal letter to you. But the effort is not genuine; it's a fake signal. In reality, the author probably had a good time spending an evening with friends, writing letter after letter after letter.

So while I 100% believe the intentions of that manager of yours were pure and his heart was in the right place, I post this as a warning for the general case: high effort doesn't automatically imply it's genuine and honest. If it feels like sales, it probably is.

Related: the secret to pulling off a magic trick is to put much, much more effort into preparing the trick than a reasonable person would expect. Same applies to sales.


> Just like when you get a formal letter with the CEO's signature printed on it - you know that guy is not going to sign millions of copies.

Then don't. It is pathetic and disingenuous to pretend to be personal when you are not. Especially those who spend extra on a single squiggle of blue printer ink.


This is a silly take, I think.

The point is that people pay to have a genuine interaction. If everything is fake, why interact at all? Imagine you were typing into the void and no one saw what you said. Would you continue doing it if you knew? How about on this forum?


I've actually been in this exact scenario before. A friend and I were avidly into Hearthstone, and a third person was running a Hearthstone cheat bot in a university study room. We asked to watch for a while.

After a while, some dread was setting in. We started asking questions:

* Why did it hover that card?

"To pretend it's human. The card has less than a 10% play rate on that class."

* Did it... just spam the Well Met emote while going face?

"Of course. Because people do it"

* Wait, it ropes the opponents?

"Yes. You can set it to rope back"

We kept seeing more and more behaviors. It would squelch noisy opponents. It would even tap the ground pretending to be a bored person. And then it hit me: 95% of Hearthstone players are bots. Every single human behavior that you could perhaps use to identify 'people', it faked.

I quit Hearthstone within that week.


People looking for dates on Tinder.

Yes, more than 70% of Tinder profiles are probably already bots at this point. The CEO said they had an AI strategy.

People will be more and more lonely.


It's hard for me to understand how Tinder is not dead yet. One big pile of pop-up ads (even when you pay, you get upsell popups) and some Chinese scammer bots.


It's even more of a shame that OkCupid is dead. They had such a nice method, which was actually introducing good people to me.


I'd argue that what Match Group did with OKC was bordering on criminal. Everything on it worked, and it worked for small, often marginalized groups. It was turned into a worse version of Tinder.


I met my wife through OKC, she's a lovely lady.


It's highly dependent on location. I use it while traveling, and yes in a few countries it's useless, but I've met 300-400 people over the last 7 years. It's added more value to my life than any other single app (even though I've never paid a dime for it).

For the record, I'm male, mid 30's, and average looking.


An average of 1 person per week over a period of 7 years? Impressive. Is it fair to assume you're using a relatively shallow definition of "value" here, or was there something else you had in mind?


> People will be more and more lonely.

Or people might be less lonely. There will come a point where the online experience becomes worthless and people will place greater importance on face-to-face interaction. That's how it was 20 years ago.


Don’t worry, robots are coming for that too https://en.wikipedia.org/wiki/Surrogates

It will be very hard to tell the difference until you are intimate.


Why wouldn't surrogates be able to function as teledildonic proxies for 'intimacy'?


You'd be able to tell the difference.


Come on, how would you know the difference if you'd grown up without ever having known the real thing?

To extrapolate the theme of that movie just a little bit further?


Why would you quit Hearthstone over the presence of bots?

It's not like you have meaningful interactions with the other players anyway (beyond the occasional post-game friend request which has a 50/50 chance to be abusive).

(Honest question, I'm not arguing you're wrong)


If you are just playing against bots, the challenge is arbitrary. What would top 10% actually mean? 5000 Elo? Admittedly, the ranks are already slightly arbitrary, as the matchmaking algorithm significantly influences your competitive experience. But when you reach a higher rank, you know you have proven you are better than an increasing number of real players. If everyone were actually playing a single-player variant of the game (matched against bots), the reward for climbing the ranked ladder would be significantly diminished.


In multiplayer games much of the fun in winning is someone else losing.


> I quit Hearthstone within that week.

So it had a positive effect in the end. I wish I had something like this that would stop me from using social media.


> I wish I had something like this that would stop me use any social media.

We must be looking at different social media, as this is pretty much all I see.


> Every single human behavior that you could perhaps use to identify 'people', it faked.

Except, you know, actually playing the game well. The bots all play aggro decks and get lower than average winrates.

> I quit Hearthstone within that week.

Why? Their presence makes climbing the ladder easier, so you get more rewards.


Well, very soon you can quit the Internet LOL


Well unlike with modern multiplayer games where the matchmaking algorithm decides who you play with, on the Internet you can still choose what websites to visit.


> 95% of Hearthstone players are bots

Pure cope, but whatever helps you sleep better at night.


Honestly, if more people wondered about who'll read what they write online, and whether it matters that those folks read it, the Internet would probably be a better place.

I really don't care about youtubers using tools like this. If it works for them and saves them time: great! If it reduces their level of authenticity: that'll lead to a correction of their popularity.

For OnlyFans... meuh. To me, the idea of interaction there reminds me strongly of the "adult" phone lines of old. You want someone to tease you and say naughty things to you? Well, you can get that for 99 cents/minute.

If you want to buy the attention of someone specific, when they can let someone/something else provide it for cents on the dollar... don't be surprised to be tricked.


I think that is the point of this article.

On OnlyFans, people are paying extra for individual, specific attention.

But AI can be good enough to fool people that are specifically looking for that experience.

Sure, from a free-market view, if the automated chat isn't good enough, people will leave. The point is, it is good enough.

I don't think anybody is complaining that people on OnlyFans are being duped. The more concerning point is that if you can have realistic video and chat, and people are being fooled, then there are wider impacts on society. Large chunks of people could be fooled by any number of relationships that aren't real.


> pay to have a genuine interaction

If you're paying for it, how genuine is it really?


Depends on the price.


I wonder if signed content will ultimately be needed?


If I use AI to generate the text of a happy birthday greeting to grandma, if I review and approve of the message, is it genuine?


Humans have been doing this long before AI. It's called Hallmark.

(and no, it's not genuine)


Hallmark cards tend to leave a lot of blank space to write. It’s absolutely genuine when you grab a pen and start writing. Bonus points if the Hallmark message is relevant to the person receiving it and maybe something you can riff on in the handwritten part. Funny cards are also great!

So I would say the golden rule continues to apply: the more effort is visible, the more genuine it is.


You could even say that Hallmark is giving you a prompt.


If you paid someone to do it for you, is it genuine? Ghostwriting birthday cards is…quite grim, whether that’s an AI or not.


To me, yes. Key point being, "if I review and approve of the message" - if you actually do this, then yes.

Being genuine is all about how much actual care and heart you pour into the thing. That's really something only you can truly know. Using generative AI doesn't automatically make it not genuine, much like using a grammar checker or a thesaurus doesn't.

Conversely, not using AI tools doesn't suddenly make YouTube and OnlyFans creators' comments genuine. They never were. There is no care and heart in there, only salesmanship, and it's most likely outsourced to a brand management company anyway.


Only if you also tell me you did. Then I know I'm not worth your time.


I recently went to a social interest group meeting. It begins with a presentation, which the organizer generally confirms in advance based on a text abstract submitted by the future presenter. This time, the presentation turned out to be staggeringly bad, mostly consisting of filler words and interjections, and the presenter was struggling to express even the basic idea. I could not fathom how that could happen, considering the idea was presumably expressed in the abstract. I had to write one before, and it felt annoying and difficult to write down what seemed all ready to be said in my head, but as a result I knew what I wanted to say and how I could say it in front of a dozen people.

At some point, the organizer (to help the situation) quoted from the abstract that was submitted, and it was indeed apparently decently written. After that, I could not help thinking that the likelihood the abstract was written by an LLM is very high. In that case, the presenter certainly reviewed it, but crucially did not write it—the thought process that makes the idea part of your active vocabulary, and makes you capable of expressing it, would not have taken place.

To reiterate, I don’t know whether it happened or not, but even if no LLM was involved in this instance (perhaps it was just a particularly violent case of stage fright, despite the event being very small and the IRL vibe extremely casual) it would be beyond silly to assume that it would not be happening going forward.

I used to think that it is beneficial if an LLM can help in handling certain boring signaling communication for people who are very bad at it, acting as a sort of an equalizer. A model writes stuff, you approve it, and you gain access somewhere without having, say, the written language flair of someone who went to a prestigious school, or having to spend time on something that seems unnecessary.

I am changing my mind now. Sure, the case I have described is one of the more extreme ones, but it made me think about how signals are actually signals for a reason[0], and when some signals go away the communication field does not become equalized—instead, other signals and barriers are used: money, IRL meetings, invitations, some sort of privacy-violating invasive check of humanity, etc., or the communication that relied on those signals simply stops happening. When the Web and tech in general removed a whole lot of constraints on communication, we could still rely on those signals, but that is apparently coming to an end.

Writing a birthday card is another endangered signal. The impression from the movies, that these gestures become very cheap when they come from some rich CEO who certainly has a personal assistant for this sort of thing, now applies to everyone (including people who would never touch an LLM with a ten-foot pole). Once we all know that a birthday card can be reduced to “I have read and approved this message”, as social beings we won't stop needing the psychological impact of such positive gestures—we'll only stop receiving it.

I am not sure I see all of the above as positive (even if in the latter case I am slightly optimistic that some viable substitute for those signals could be found within personal relationships).

[0] Even when that reason is dubious, like discrimination by social criteria, it's not going to go anywhere. Tech is not going to solve that human problem, besides perhaps a fleeting handful of months when some techies gain an edge while everyone else is still catching on. New barriers will be erected, and the core problem will remain unaddressed.


I would say no.


What if you use it to give you ideas and you come up with your own greeting?

And then you generate one more, and it's the greeting you came up with.


If the commenter or chatbot doesn't say [AI simulation] and instead presents itself as an authentic interaction, then it is wrong.


Is it? Is it also wrong that they so far weren't labeled as [copy-pasted], and/or [outsourced to influencer management agency], and/or [not by ${influencer name}]? YouTube influencers and vloggers are brands, not people; once they go big enough, they start outsourcing this stuff, which to my mind is just like "AI slop", except produced by a protein parrot instead of a silicon one.

Nah, first and foremost, the comment page and the video itself should start with Surgeon General's warning: "You're watching a long-form, semi-interactive ad. None of this is authentic, and none of it is meant to do anything good or nice for you."

(And perhaps also: "You're probably better off going for a smoke instead of consuming this.")


A lot of it is labeled as "you are talking to me personally".

It is akin to a movie stating "a true story" - some liberties may be taken, but if the protagonist becomes world president, then travels to Mars and becomes king of the Martians, I am going to start looking for citations.


It sounds like we agree that misleading interaction should be labelled as such, you just think it's already happening and wonder why the uproar now.


> Is it also wrong that they so far weren't labeled as [copy-pasted], and/or [outsourced to influencer management agency], and/or [not by ${influencer name}]?

IMO, yes. Copy-pasted might be acceptable if it is the author himself selecting what to copy and paste.

If the main value of the comments is interacting with the person in question then anything less is fraud. If authenticity doesn't matter then the label won't impact the desired effect.

Should people know that these interactions have never been genuine? Yes. Is that an excuse for scamming people? Absolutely not.


Sensible take.

Expecting internet celebrities to have "authentic" interactions with you is just a parasocial relationship. It always has been, and gen-AI just reveals it.


> It always has been, and gen-AI just reveals it.

Yeah, and it doesn't even change it - it just makes it more appealing for news outlets to run with the story. Of course, the focus is on the AI/authenticity angle; I'd have thought they'd give some space to the plight of brand-agency marketers (and the cheap labor they subcontract to) being pushed out of their jobs by LLMs, but I guess that would require first explaining to people that it was those marketers whom people were having their "relationships" with all along. But that's too sad and complex to report on; it's more op-ed stuff anyway.


It reduces the cost of the scam significantly, which makes it available to many more scammers. It's the same with other spam content on the internet - it was possible before, but AI makes the problem so much worse.

Or in other words: yes, we should be concerned about the crime syndicate moving into town even if petty theft existed before.


God forbid people think they matter to each other, or are worth even an iota of one another's time.

The fans make the person famous. The famous person isn't doing the fans a favor. They should remember that.


True, fans are the foundation of any creator's success


Fans should remember their porno hero doesn't care about them. Worse, probably despises them.

The whole ecosystem is toxic and, at the risk of sounding like a prude, should be banned.


Some do, some don't. Some people like their bosses, some don't. Some people like their clients, some don't.

It's not all black and white.


Why should it be banned? It's not like it's the worst of mainstream humanity.


Well, for one, only someone with severe mental issues would ever consider paying for porn, considering how much of it is available. The corollary is that anyone making money from the porn industry is taking advantage of people incapable of making rational purchasing decisions.

And the adult industry knows this, which is why sites like OnlyFans emphasize exclusivity and direct interaction. Faking those makes the whole thing even more of a scam.


They should, but "should have known better" has never been a good excuse for scamming the ignorant.


I think it should be banned too. And maybe they do think that, but I'm just saying: anyone whose livelihood is based on the appreciation of other people should remember not to alienate those people too much.


I mean, I guess if your sole goal with consuming content online is to kill free time and pound dopamine out of your brain without putting in any effort, then yeah, fair enough. I'm sure Mr. Beast appreciates your viewership.

I don't follow any creators for such a purpose. I follow creators who make interesting and meaningful things, be they YouTube videos or otherwise. The sorts of people who make a thumbnail that explains what the video is, not just one that's most likely to get attention in the algorithm. Conversely, those folks often respond to people who comment on their interesting and thoughtful things with interesting and thoughtful replies. This is called a conversation, and it exists for purposes far beyond engagement, and is not a task well suited to AI automation (thank fucking god).

One might counter that if your replies (assuming you are indeed some sort of creator) are so easily automated, and your thumbnails too, then the question must be asked... who needs you? How long until YouTube replaces you with a bot, trained on your previous videos, and tells you to kick rocks? After all, you're an unnecessary expense.


What if I'm prone to saying things some might perceive as edgy? I doubt their AI would also know where I draw the line.


>None of your interactions except the ones with your friends and family are authentic

Please speak for yourself only.


> If the AI reply is what I would have said anyways; awesome! Finding thumbnails is hard, but if AI comes up with a good one?

If the AI can replace you so easily then your interactions are worthless. In practice it won't say what you would have said but some corporate approved PC-filtered version of it devoid of any soul.

> None of your interactions except the ones with your friends and family are authentic, and they never were.

Speak for yourself. Not everyone is a sociopath.

> It's a bit silly to get annoyed over OnlyFans models or some mega YouTube celebrity using AI because you're losing out on "authentic human interaction" - you only mattered to them insofar as you provided them with money to begin with.

If (when) this catches on, it won't just be "mega celebrities" using these dystopian methods. But no, even for them expanding the scope of their ability to delude and take advantage of simps is a negative.



