Eisenstein's comments | Hacker News

If failing to validate a page because it points to an RSS feed triggers a spam flag and de-indexes all of the rest of the pages, that seems important to fix. By losing legit content to such an error they are lowering the legit:spam ratio, causing more harm than an indexed spam page would. It might not appear so bad for one instance, but it is indicative of a larger problem.

I doubt that the porn in the 70s was less bad than the porn today. Legal CSAM was being sold openly, so what makes you think that it was more tame than modern stuff?

The fact is that, as difficult as it was to get, you got hold of it and watched it. Why would 'ease of access' make any difference if you didn't have easy access and got it anyway?


Are you implying that perhaps 15-25 minutes' worth of porn video, total, throughout all of someone's teenage years, due to such rare access to the material, would have a similar emotional and mental impact as having the ability to see that much daily for years, as is possible now?

There could have been years between the opportunities we had. I don't think you conceptualize just how infrequently the opportunity would present itself.


I'm not making any claims about mental or emotional impacts, you are. What are they?

A couple of comments above, you said: “Why would 'ease of access' make any difference if you didn't have easy access and got it anyway?”

So what exactly is the target of the “difference” you are referring to here? You are referencing a differential in something… if not the psychological impact of viewing said material, what would that something be?


For instance [1]. I am speaking from experience, as a Gen Z person who was first introduced to the entire world of sex and porn at EIGHT years old. I myself feel it has harmed my brain in ways which I'll likely never fully understand.

[1] https://eprints.qut.edu.au/217360/1/__qut.edu.au_Documents_S...


> If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?

I am going to assume that the doctors are just working longer hours and/or aren't as attentive as they could be, so care quality declines but revenue doesn't. Overworking existing staff to make up for having fewer staff is a tried-and-true play.

> I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.

By conflating 'doctors' and 'dentists' you are basically equating 'all doctors' with 'doctors of a certain specialty'. Dentists are 'doctors for teeth' the way a pediatrician is a 'doctor for children' or an orthopedist is a 'doctor for bones'.

Teeth need maintenance, which is the time-consuming part of most visits, and the dentist has staff to do that part of it. That in itself makes the specialty not really comparable to a lot of others.


The type of doctor doesn't really matter; spending all their time on revenue-generating activities would seem to be better than spending only 75% generating revenue and 25% on "administrative and bureaucratic tasks" that don't generate revenue and could be accomplished by a much lower-paid employee ("secretaries and assistants").

Perhaps you're correct that the doctors are simply working much longer hours, but doctors are one group of a hospital's staff who generally do have a lot of power and aren't too easy to make extraordinary demands of.


There are reasons why the claim might be right, as noted by others, and reasons why it may not be, as noted by you. If you think your idea of how doctors operate in a hospital is more compelling than other people's explanations of why the claim is legitimate, then keep believing that.

I feel like that's how you get Microsoft, where each division has a gun pointed at the others.

Except it makes no sense, because as a scammer your goal is to get as many people as possible in contact with you so that you can scam them. You can only score on the goals you attempt, so cutting out any person, no matter the reason, is illogical.

You’re assuming that there’s no cost involved in moving a potential victim through the pipeline. I’m sure AI has changed the game, but the general idea was that beyond the initial blast of spam you would have someone actually responding to those who fell for it. Putting in signals that it was a scam filtered out individuals who would waste scammer time because they would eventually figure it out before falling victim. By selecting for people who literally can’t pick up on obvious signs of a scam, you save yourself a lot of time and energy.

Keep following through the logic... You manage to hook someone who absolutely knows you're a scammer, and they keep responding to you taking up precious time you could be spending with someone who is actually likely to give you money. So, what is the upside to getting a response from someone who is never ever going to give you anything?

Occam's razor says they are just bad at English grammar because it isn't their native language/dialect and their education probably wasn't that great.

This is easily demonstrable by conversing with the scammers and noting that their actual English ability is the same as exists in the initial letter. Even when they have no chance of the scam succeeding and have been outed they write the same way. You can see plenty of evidence here:

* https://www.419eater.com/html/letters.php


Occam's razor says the sun orbits the earth, everybody dies from Sudden Unexplained Death Syndrome, and the correct way to spell Occam's razor is Okams Raza (in all languages, because languages other than English are difficult).

It's literally a platitude. It's like the saying 'when the going gets tough, the tough get going': it's really memorable and descriptive and is maybe a good guideline in many situations.

But using it to evaluate the tensile strength of various metals according to their velocity would be wild, because it has never pretended to be anything like a rule. It's not like the theory of gravity or 'i before e except after c', which are based on actual analysis and results.

Genuinely assuming that everything is as simple as it can be, that the most obvious idea to occur to any untrained observer is the most accurate, is literally a guaranteed way to go through life without understanding anything at all. Using it to argue with people who obviously know what they're talking about (and there are so, so many undisputed studies on the exact reasons scammers do what they do: it's to filter people out. There is no debate, academically) is a pretty slippery slope to 'anybody who doesn't think and act exactly like me is lying, because no reasons or facts exist unless I personally hold or agree with them', and it's definitely a thought process worth challenging.

Although to be fair, its best application might be re. online arguments that you don't really care that much about. So if you just meant that the previous poster had given a reason and you were going with that because it's easier, my bad.


> Genuinely assuming that everything is as simple as it can be, that the most obvious idea to occur to any untrained observer is the most accurate, is literally a guaranteed way to go through life without understanding anything at all.

That's a misunderstanding of Occam's razor. Occam's razor says that when you don't know the answer and have a choice between competing explanations, pick the one that requires fewer assumptions.

The explanation that they are using incorrect grammar on purpose to screen out intelligent people is logically questionable, unproven by any evidence, and relies on a bunch of assumptions: sophistication, that their time is worth more than leads, enough education and experience in English to write it well, and coordination between scammers.

The 'being bad at speaking a language or dialect they are not native in and having poor access to education' explanation is logically complete and requires far fewer assumptions.


Worker efficiency is an order of magnitude greater than it was 50 years ago. An office worker with Excel and the internet can accomplish in an hour what would have taken their 1975 counterpart days or weeks with a calculator and a telephone.

Who has gained from the efficiency? We haven't gotten more vacation days and we haven't gotten more share of the money.

I think it should be natural that jobs end up being mostly pointless. Why should we produce exponentially more value without getting a share of that value?


> we haven't gotten more share of the money.

But your money buys stuff that 50 years ago would have been too expensive for the richest men in the world. A pocket supercomputer, advanced diagnostics and medicine, instant access to information anywhere in the world.


Material gains (produced by more productive workers) don't offset the increases in:

* the number of expenses required to minimally live (e.g. utilities, transportation, insurance, communications), and

* the ever-escalating costs of those added requirements.

Nor do they offset the accelerating increase in the complexity of basic living factors, complexity that consumes internal resources and time.

More to the point, a pocket supercomputer is an irrelevancy for a typical wage worker, whose earnings are far from sufficient for even the barest self-sufficiency.


Try koboldcpp with the kcppt config file. It's the easiest way by far.

Download the release here

* https://github.com/LostRuins/koboldcpp/releases/tag/v1.103

Download the config file here

* https://huggingface.co/koboldcpp/kcppt/resolve/main/z-image-...

Set the executable bit (+x) on the koboldcpp binary and launch it, select 'Load config' and point it at the config file, then hit 'Launch'.

Wait until the model weights have downloaded and the model has loaded, then open a browser and go to:

* http://localhost:5001/sdui

EDIT: This will work for Linux, Windows and Mac
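
If you'd rather script generation than click through the web UI, something like this should work. A minimal sketch, assuming your koboldcpp build exposes the A1111-compatible /sdapi/v1/txt2img endpoint on the default port (check your version if it 404s); the prompt and parameters are placeholder values:

    # Minimal sketch: call koboldcpp's (assumed) A1111-compatible image
    # endpoint directly instead of using the /sdui web page.
    import base64
    import json
    import urllib.request

    payload = {
        "prompt": "a watercolor fox in a snowy forest",  # placeholder prompt
        "width": 512,
        "height": 512,
        "steps": 20,
    }

    req = urllib.request.Request(
        "http://localhost:5001/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # A1111-style APIs return base64-encoded PNGs in the "images" list.
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))

Save it as, say, genimage.py and run it with python3 once the server reports it is ready.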


I think that real honesty works well as long as you have the character to stand up for yourself. An unflinchingly honest self-assessment which shows that you understand the error and rectified it is almost always the path to take.

Acknowledgement of mistakes does not invoke much of a mob reaction unless there is wavering, self-pity, or appeals for leniency. Self-preservation should be assumed and not set as a goal -- once you appear to be doing anything that can be thought of as covering up, minimizing, or blaming others, the mob will latch on to that and you get no consideration from then on.


The Chinese Room is just a roundabout way of pleading human exceptionalism. To any particular human, all other humans are a Chinese Room, but that doesn't get addressed. Nor does it address what difference it makes if something is using rules as opposed to, what, exactly? It neither posits a reason why rules preclude understanding nor why understanding is not made of rules. All it does is say 'I am not experiencing it, and it is not human, therefore I dismiss it'. It is lazy and answers nothing.

> The Chinese Room is just a roundabout way of pleading human exceptionalism

Au contraire, LLMs have proven that Chinese Rooms that can casually fool humans do exist.

ELIZA could be considered a rudimentary Chinese Room, Markov chains a bit more advanced, but LLMs have proven that, given enough resources, they can be surprisingly convincing Chinese Rooms.

I agree that our consciousness might be fully explained by a long string of deterministic electrochemical reactions, so we could be not that different; and until we can fully explain consciousness we can't close the possibility that a statistical calculation is conscious to some degree. It just doesn't seem likely IMO right now.

Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?


> ELIZA could be considered a rudimentary Chinese Room, Markov chains a bit more advanced, but LLMs have proven that, given enough resources, they can be surprisingly convincing Chinese Rooms.

ELIZA is not a Chinese Room because we know how it works. The whole point of the Chinese Room is that you don't. It is a thought experiment that says 'since we don't know how this is producing output, we should consider that it is just following rules (unless it is human)'.

> Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?

Well, I never conceded that language models are thinking, all I did was say that the Chinese Room is a lazy way of concluding human exceptionalism.

But, I would have to conclude that if you were able to produce output which was coherent and appropriate, and exhibited all signs of what I understand a thinking system to do, then it is a possibility.


According to what, exactly? How did you come up with that analogy?

Start with "LLMs are not humans, but they're obviously not 'not intelligent' in some sense" and pick the wildest difference that comes to mind. Not OP, but it makes perfect sense to me.

I think a good reminder for many users is that LLMs are not based on analyzing or copying human thought (#), but on analyzing human written text communication.

--

(#) Human thought is based on real-world sensory data first of all. Human words have invisible depth behind them based on the accumulated life experience of the person. So two people using the same words may have very different thoughts underneath them. Somebody with only textbook knowledge and somebody who has done a thing in practice for a long time may use the same words, but underneath there is a lot more going on for the latter person. We can see this expressed in the common bell curve meme -- https://www.hopefulmons.com/p/the-iq-bell-curve-meme -- while it seems to be about IQ, it really is about experience. Experience in turn is mostly physical, based on our physical senses and physical actions. Even when we just "think", it is based on the underlying physical experiences. That is why many of our internal metaphors, even for purely abstract ideas, are still based on physical concepts, such as space.


They analyse human perception too, in the form of videos.

Without any of the spatial and physical object perception you train on right after birth (see toddlers playing), or the underlying wired infrastructure we are born with to understand the physical world (there was an HN submission about that not long ago). Edit, found it: https://news.ucsc.edu/2025/11/sharf-preconfigured-brain/

They are not a physical model like humans have. Ours is based on deep interaction with space and objects (the reason why touching things is important for babies), plus the aforementioned preexisting wiring for this purpose.


Multimodal models have perception.

If a multimodal model were considered human, it would be diagnosed with multiple severe disabilities in its sensory systems.

Isn't it obvious that the way AI works and "thinks" is completely different from how humans think? Not sure what particular source could be given for that claim.

I wonder if it depends on the human and the thinking style? E.g. I am very inner-monologue driven, so to me it feels like I think very similarly to how AI seems to think via text. I wonder if that also gives me an advantage in working with AI. I only recently discovered there are people who don't have an inner monologue, and there are people who think in images, etc. That would be unimaginable for me, especially as I think I have a sort of aphantasia too, so really I am ultimately a text-based next-token predictor myself. I don't feel that whatever I do, at least, is much more special compared to an LLM.

Of course I have other systems, such as reflexes and physical muscle coordination, but these feel like systems largely separate from the core brain, i.e. they don't matter to my intelligence.

I am naturally weak at several things that I think are not so much related to text, e.g. navigating in the real world.


Interesting... I rarely form words in my inner thinking; instead I make a plan with abstract concepts (some of them have words associated, some don't). Maybe because I am multilingual?

English is not my native language, so I'm bilingual, but I don't see how this relates to that at all. I have monologue sometimes in English, sometimes in my native language. But yeah, I don't understand any other form of thinking. It's all just my inner monologue...

No source could be given because it's total nonsense. What happened is not in any way akin to a psychopath doing anything. It is a machine-learning function that has been trained on a corpus of documents to optimise performance on two tasks: first a sentence-completion task, then an instruction-following task.

I think that's more or less what marmalade2413 was saying and I agree with that. AI is not comparable to humans, especially today's AI, but I think future actual AI won't be either.

This debate is a huge red herring. No one is ever going to agree on what 'thinking' means, since we can't even prove that other people are thinking, only that oneself is.

What we should concentrate on is agency. Does the system have its own desires and goals, and will it act on its own accord to achieve them? If a system demonstrates those things, we should accord it the benefit of the doubt that it should have some rights and responsibilities if it chooses to partake in society.

So far, no AI can pass the agency test -- they are all reactive such that they must be given a task before they will do anything. If one day, however, we wake up and find that an AI has written a book on its own initiative, we may have some deciding to do.


> they are all reactive such that they must be given a task before they will do anything.

Isn't that just because that's what they're being trained on though?

Wonder what you would get if the training data, instead of being task based, would consist of "wanting" to do something "on someone's own initiative".

Of course then one could argue it's just following a task of "doing things on its own initiative"...

