
This begs the question. You are assuming they wanted an LLM-generated response but were too lazy to generate one. Isn't it more likely that the reason they didn't use an LLM is that they didn't want an LLM response, so giving them one is...sort of clueless?

If you asked someone how to make French fries and they replied with a map-pin-drop on the nearest McDonald's, would you feel satisfied with the answer?


It's more like someone asks if there are McDonald's locations in San Francisco, and then someone else searches "mcdonald's san francisco" on Google Maps and replies with the result. It would have been faster for the asker to just type the question elsewhere and get the result immediately, instead of someone else doing it for them.

Right. If someone asks "What does ChatGPT think about ...", I'd fully agree that they're being lazy. But if that's _not_ what they ask, we shouldn't assume that that's what they meant.

We should at least consider that maybe they asked how to make French fries because they actually want to learn how to make them themselves. I'll admit the XY problem is real, and people sometimes fail to ask for what they actually want, but we should, as a rule, give them the benefit of the doubt instead of just assuming that we're smarter than them.


Such open-ended questions are not the kind I'm referring to.

Go ahead, use em—let the haters stew in their own typographically-impoverished purgatory.

Did you agree with it before the AI wrote it though (in which case, what was the point of involving the AI)?

If you agree with it after seeing it, but wouldn't have thought to write it yourself, what reason is there to believe you wouldn't have found some other, contradictory AI output just as agreeable? Since one of the big objections to AI output is that models uncritically agree with nonsense from the user, sycophancy-squared is even more objectionable. It's worth taking the effort to avoid falling into this trap.


Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details, sometimes showing new ways.

I find the second paragraph contradictory: either you fear that I would agree with random stuff the AI writes, or you believe that the sycophantic AI is writing what I already believe. I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?


> Well - the point of involving the AI is that very often it explains my intuitions way better than I can. It instantiates them and fills in all the details

> I like to think that I can recognise good arguments, but if I am wrong here, then why would you prefer my writing over an LLM-generated one?

Because the AI will happily argue either side of a debate; in both cases the meaningful/useful/reliable information in the post is constrained by the limits of _your_ knowledge. The LLM-based version will merely be longer.

Can you think of a time when you asked AI to support your point, and upon reviewing its argument, decided it was unconvincing after all and changed your mind?


You could instead ask Kimi K2 to demolish your point, and you may have to hold it back from insulting your mom in the P.S.

Generally, if your point still holds up after polishing under Kimi pressure, by all means post it on HN, I'd say.

Other LLMs do tend to be more gentle with you, but if you ask them to be critical or to steelman the opposing view, they can be powerful tools for actually understanding where someone else is coming from.

Try this: ask an LLM to read the view of the person you're replying to, and ask it to steelman their arguments. Then consider whether your point is still defensible, or what kinds of sources or data you'd need to bolster it.
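
A minimal sketch of that workflow, assuming the official OpenAI Python client; the model name and prompt wording are purely illustrative:

    # Hypothetical steelman helper; assumes the `openai` package is installed
    # and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def steelman(their_comment: str) -> str:
        """Ask the model for the strongest, most charitable version of the
        other side's argument, before any critique."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable chat model works
            messages=[
                {"role": "system",
                 "content": "Steelman the following argument: restate it in "
                            "its strongest, most charitable form."},
                {"role": "user", "content": their_comment},
            ],
        )
        return response.choices[0].message.content

    print(steelman("LLM-generated replies have no place in forum threads."))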


> why would you prefer my writing over an LLM-generated one?

Because I'm interested in hearing your voice, your thoughts, as you express them, for the same reason I prefer eating real fruit, grown on a tree, to sucking high-fructose fruit goo squeezed fresh from a tube.


This article is almost incoherent. The author (a philosopher turned science journalist, I gather) presents everything from a "which side are you on" perspective, as if physics were a branch of sociology. Little wonder they seem to have trouble with the notion that physics can (and should) be possible without the concept of "an observer".

I stopped reading at "Let’s put this moon thing to rest. It’s true. We can’t say the moon is there if no one’s observing it."


> a philosopher turned science journalist,

Interesting, I read it as the other way round.

I wonder which of the many worlds is correct :p

The moon example is painful, but I assumed it was meant as an "if a tree falls in the forest... yada yada yada..." example to justify words on a page. Although at the time my brain was screaming about things like tidal forces and gravitational effects, as if I were about to start discussing the retrograde motion of Venus with a flat-earther who doesn't actually want to learn anything with rigour...

Personally I'm more worried by the comparison of Planck's constant in the small to c in GR. Yes, they represent asymptotic limits in many regards, but they are certainly not equivalent imho.


>> a philosopher turned science journalist,

> Interesting, I read it as the other way round.

I cheated and looked at the author's bio. :)


> if physics was a branch of sociology

ehhhhh, but this is way more apt a description of how it actually works (than you'd probably like) once you venture outside the realm of the testable.

PBS Space Time recently did an episode on the multiverse [0]; watch it and you'll get the feeling that sections of this really do read like sociology/psychology.

[0] https://www.youtube.com/watch?v=HX1EfW3euY4


That's a bit like reading Psychology Today to understand the DSM-VI committee.

That says more about PBS Space Time than about physics.

To experimentalists, anything outside the realm of the testable isn't worth discussing, so it might as well be a non-quantifiable field.

Although sociology is perfectly quantifiable and measurable, even if the underlying relationships between the measurements are arguably extremely difficult to extract.

A better analogy is pure philosophy to maths, rather than sociology to particle theory. But then again, nobody ever accused QFT of being too simple, so maybe I'm arguing against my own point there.


From what I can tell, they are very closely related (i.e. the shared representational structures would likely make good candidates for Platonic representations, or rather, representations of Platonic categories). In any case, it seems like there should be some sort of interesting mapping between the two.

My first thought was that this was somehow distilling universal knowledge. Platonic ideals. Truth. Beauty. Then I realized: this was basically just saying that, given some “common sense”, the learning essence of a model is the most important piece, and a lot of learned data is garbage and doesn’t help with many tasks. That’s not some ultimate truth, that’s just optimization. It’s still a faulty LLM, just more efficient for some tasks.

"How did you go bankrupt?" Bill asked.

"Two ways," Mike said. "Gradually and then suddenly."

-- Ernest Hemingway


Nvidia's (fiscal) Q3 2026 financial statements have been released, and are what this is all about. Fiscal years may be correlated with calendar years, but sometimes (as in this case) the correlation is rather elastic.
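
For the curious, a toy sketch of that elasticity, assuming (as Nvidia reports) a fiscal year ending in late January of the same-numbered calendar year; exact week boundaries are glossed over:

    # Rough mapping from an Nvidia-style fiscal quarter to calendar months:
    # FY N runs from roughly Feb of calendar year N-1 through Jan of year N.
    def fiscal_quarter_span(fy: int, q: int) -> str:
        starts = {1: ("Feb", fy - 1), 2: ("May", fy - 1),
                  3: ("Aug", fy - 1), 4: ("Nov", fy - 1)}
        ends = {1: ("Apr", fy - 1), 2: ("Jul", fy - 1),
                3: ("Oct", fy - 1), 4: ("Jan", fy)}
        (sm, sy), (em, ey) = starts[q], ends[q]
        return f"FY{fy} Q{q} ~ {sm} {sy} to {em} {ey}"

    print(fiscal_quarter_span(2026, 3))  # FY2026 Q3 ~ Aug 2025 to Oct 2025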

This is as much a failing of "peer review" as anything. Importantly, it is an intrinsic failure, which won't go away even if LLMs were to go away completely.

Peer review doesn't catch errors.

Acting as if it does, and thus assuming that the fact of publication (and where it was published) is an indicator of veracity, is simply unfounded. We need to go back to the food-fight system where everyone publishes whatever they want, their colleagues and other adversaries try their best to shred it, and the winners are the ones that stand up to the maelstrom. It's messy, but it forces critics to put forth their arguments rather than quietly gatekeeping, passing what they approve of and suppressing what they don't.


Peer review was never supposed to check every single detail and every single citation. Reviewers are not proofreaders. They are not even really supposed to agree or disagree with your results. They should check the soundness of the method, the general structure of the paper, that sort of thing. They do catch some errors, but the expectation was never that they would redo the study independently.

Passing peer review is the first basic bar that has to be cleared. It was never supposed to be all there is to the science.


Agreed. But too often it's treated as a golden ticket confirmation of veracity, giving the process more epistemological authority than it warrants.

It would be crazy to expect them to verify that every author is correct on a citation and to cross-verify everything. There's tooling that could be built for that, and it's kinda wild that it isn't something run automatically on paper submission.
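
The most basic version of that tooling could be little more than a DOI resolver run over the reference list; a rough sketch using the public Crossref REST API (the endpoint is real, the DOI list and pass/fail policy are illustrative):

    # Flag citations whose DOI doesn't resolve to a Crossref record.
    # Requires the `requests` package; checks existence only, not accuracy.
    import requests

    def doi_resolves(doi: str) -> bool:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    refs = ["10.1371/journal.pmed.0020124"]  # illustrative reference list
    for doi in refs:
        if not doi_resolves(doi):
            print(f"Unresolvable citation: {doi}")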

Peer review definitely does catch errors when performed by qualified individuals. I've personally flagged papers for major revisions or rejection as a result of errors in approach or misrepresentation of source material. I have peers who say they have done similar.

I'm not sure why you think this isn't the case?


Poor wording on my part.

I should have said "Peer review doesn't catch _all_ errors" or perhaps "Peer review doesn't eliminate errors".

In other words, being "peer reviewed" is nowhere close to "error free," and if (as is often the case) the rate of errors is significantly greater than the rate at which errors are caught, peer review may not even significantly improve the quality.

https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/


Thanks for clarifying, I fully agree with your take. Peer review helps, particularly where reviewers are equipped and provided the time to do the role correctly.

However, it is not alone a guarantor of quality. As someone proximate to academia, it's becoming obvious that many professors are beginning to throw in the towel, or are sharply reducing the time they spend verifying quality, when faced with the rising tide of slop.

The window for avoiding the natural consequences of these trends feels like it is getting scarily small.

Thanks for taking the time to reply!


I don't think many researchers take peer review alone as a strong signal, unless it is a venue known for having serious reviewing (e.g. in CS theory, STOC and FOCS have a very high bar). But it acts as a basic filter that gets rid of obvious nonsense, which on its own is valuable. No doubt there are huge issues, but I know my papers would be worse off without reviewer feedback.

Peer review is as useless as code review and unit tests, yes.

It's much more useful if everyone including the janitor and their mom can have a say on your code before you're allowed to move to your next commit.

(/s, in case it's not obvious :D )


No, it's not "as much".

The dominant "failing" here is that this is fraudulent on a professional, intellectual, and moral level.


Holy cow, we've found an exception to Betteridge's Law of Headlines! Talk about burying the lede!

If you read the article, then this is not an exception to the law.

If you read the article _and_ agree with the author's conclusions. I did the former, but not the latter.

Why would it not be? It's like the distinction between "a crocodile" and "being mauled" or "a credit card" and "crippling debt"; while they may frequently co-occur, either can exist without the other. Further, recognizing that they are distinct allows you to build causal models, which are vital to taking productive action.

Honest question: Are there alternative ways people can get AIDS? If so, it's news to me.

Various other viral (and even less commonly, microbial) challenges, though it's rare. HIV is special in this regard because it's the only example (so far as I know) that's transmissible.
