lumenwrites's comments | Hacker News

Yaay, one step closer to the torment nexus.


Low-effort comment whose content is a stale reference to other low-effort memes.


For a person so eager to psychoanalyze others, the author sure seems oblivious to his own biases.


Yeah. That's the impression I came away with as well. Yes, I fully believe that he is right to point out that we are prone to many psychological failings. We're also right to look out for them and realise that we will be fooled from time to time. But we still all need to live in this world. We can't all just sit around and wait for science to tell us whether what we're doing is fine. Until evidence amasses, the reasonable thing to do is to make a careful assessment and proceed cautiously, not sit still and tremble at the thought of acting. I don't want to be dismissive - I think the author makes some good points, just that he also strongly overstates those points, IMO.


I thought at some point he’d touch upon that but then he went on about “how authors just write to reinforce their own beliefs” and then I couldn’t take the irony any longer.


I learned about YCombinator, Hacker News, Paul Graham, and startups in general through one of his essays. I was first blown away by the brilliance and clarity of his writing, and only then did I learn that he's a prominent tech figure.

So many years later, I still haven't read a better writer (except maybe Scott Alexander). So, at least from my perspective, if anyone has the authority to write about good writing, it's this guy.


Reminds me of this classic tweet:

"Guy who has only seen The Boss Baby watching his second movie: Getting a lot of 'Boss Baby' vibes from this..." https://x.com/afraidofwasps/status/1177301482464526337?lang=...


You gotta think about it in terms of cost vs. benefit. How much damage will a malicious AI do, vs. how much value will you get out of a non-neutered model?


Is that really a contradiction? We all have our ideals, and we all fail to live up to them sometimes, because life can be brutal.


Socrates allowed himself to be put to death even though his supporters had bribed the jailer to allow him to escape. Given his philosophy of ethics, he felt that even though his trial had been unjust, it would be incompatible with his teachings to avoid the sentence that had been handed down to him.

Some people believe that their ideals are important enough to live up to even though life can be brutal.


To be fair, Rand herself said (to paraphrase) that because the state took it from her against her will, it was fair play to take it back, and I think that was self-consistent.

That said, she wanted to let the disabled starve to death so I don't think anyone really has to be fair to her at all. Empathy is only for the empathetic.


"Selfish person happily takes from the government, but feels bad about having ever given the government anything" seems pretty consistent to me too.


Ayn Rand did not "want to let the disabled starve to death". What a ridiculous lie.


It is not a lie. She felt that the government had no right to assist, and that they should be left to depend on "charity" (i.e., begging).

There are also tapes of her saying that the retarded should not "be allowed to come near children," and that children cannot deal with the "spectacle of a handicapped human being."


Question from audience: [muffled audio which sounds like:] "...why is this culture..."

[loud noise which sounds as if it represents a point where the tape has been edited]

Rand: [mid-sentence] "...for healthy children to use handicapped materials. I quite agree with the speaker's indignation. I think it's a monstrous thing — the whole progression of everything they're doing — to feature, or answer, or favor the incompetent, the retarded, the handicapped, including, you know, the kneeling buses and all kinds of impossible expenses. I do not think that the retarded should be *allowed* to come *near* children. Children cannot deal, and should not have to deal, with the very tragic spectacle of a handicapped human being. When they grow up, they may give it some attention, if they're interested, but it should never be presented to them in childhood, and certainly not as an example of something *they* have to live down to."

- Ayn Rand, "The Age of Mediocrity," Q&A, Ford Hall Forum, April 1981

*EDIT* YouTube video: https://youtu.be/Q1HD8KXn-kI


Great pull, thank you for the quote and the link.


> Children cannot deal, and should not have to deal, with the very tragic spectacle of a handicapped human being. When they grow up, they may give it some attention, if they're interested, but it should never be presented to them in childhood, and certainly not as an example of something *they* have to live down to.

There's an irony in here, since this is more or less a summary of the ideology that wants "safe spaces" in schools.

Just, you know, with an entirely different set of things that proponents want to shield children/young adults from.


I'm pretty sure you're wrong about at least two of those:

For 3D models, check out blender-mcp:

https://old.reddit.com/r/singularity/comments/1joaowb/claude...

https://old.reddit.com/r/aiwars/comments/1jbsn86/claude_crea...

Also this:

https://old.reddit.com/r/StableDiffusion/comments/1hejglg/tr...

For teaching, I'm using it every day to learn about tech I'm unfamiliar with; it's one of the things it's most amazing at.

For the things where the tolerance for mistakes is extremely low and human oversight is extremely important, you might be right. It won't have to be perfect (just better than an average human) for that to happen, but I'm not sure if it will get there.


Just think about the delta between what the LLM does and what a human does, or why the LLM can't replace the human, e.g. in a game studio.

If it can replace a teacher or an artist in 2027, you’re right and I’m wrong.


It's already replacing artists; that's why they're up in arms. People don't need stock photographers or graphic designers as much as they used to.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4602944


I know that artists don't like AI because it's trained on their stolen work. And yet, AI can't create a sprite sheet for a 2D game.

This is because it can steal a single artwork but it can’t make a collection of visually consistent assets.


Bro, what are you even talking about? ControlNet has been able to produce consistent assets for years.

How exactly do you think video models work? Frame-to-frame coherency has been possible for a long time now. A sprite sheet?! Are you kidding me? People have literally been churning them out with AI since 2023.


Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?


So I think there's an assumption you've made here: that the models are currently "60-80% as good as human programmers".

If you look at code being generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% of the output of domain experts (programmers) steering the models.

I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.

Will an LLM read a person's mind about what they want to build better than they can communicate it?

That's already what recommender systems (like the TikTok algorithm) do.

But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?

I think that's where there's a gap in (basically) belief systems about the future.

If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.

This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.


It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value.

It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP and it's almost never capable of making a meaningful change to that MVP.

If your experiences have been different that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.


I'm pretty good at what I do, at least according to myself and the people I work with, and I'm comparing its capabilities (the latest version of Claude used as an agent inside Cursor) to myself. It can't fully do things on its own and makes mistakes, but it can do a lot.

But suppose you're right, and it's 60% as good as "Stack Overflow copy-pasting programmers". Isn't that a pretty insanely impressive milestone to just dismiss?

And why would it just get to this point and then stop? Like, we can all see AIs continuously beating the benchmarks, and the progress feels very fast in the day-to-day experience of using these tools.

I'd need to hear a pretty compelling argument to believe that it'll suddenly stop, something more compelling than "well, it's not very good yet, therefore it won't be any better", or "Sam Altman is lying to us because incentives".

Sure, it can slow down somewhat because of the exponentially increasing compute costs, but that's assuming no more algorithmic progress, no more compute progress, and no more increases in the capital that flows into this field (I find that hard to believe).


I appreciate your reply. My tone was a little dismissive; I'm currently deep deep in the trenches trying to unwind a tremendous amount of LLM slop in my team's codebase so I'm a little sensitive.

I use Claude every day. It is definitely impressive, but in my experience only marginally more impressive than ChatGPT was a few years ago. It hallucinates less and compiles more reliably, but still produces really poor designs. It really is an overconfident junior developer.

The real risk, and what I am seeing daily, is colleagues falling for the "if you aren't using Cursor you're going to be left behind" FUD. So they learn Cursor, discover that it's an easy way to close tickets without using your brain, and end up polluting the codebase with very questionable designs.


GPT-4 was released almost exactly two years ago, so “a few years ago” means GPT-3.5.

And Claude 3.7 + Cursor agent is, for me, way more than "marginally more impressive" compared to GPT-3.5.


Oh, sorry to hear that you have to deal with that!

The way I'm getting a sense of the progress is using AI for what AI is currently good at, using my human brain to do the part AI is currently bad at, and comparing it to doing the same work without AI's help.

I feel like AI is pretty close to automating 60-80% of the work I would've had to do manually two years ago (as a full-stack web developer).

That doesn't mean the remaining 20-40% will be automated very quickly; I'm just saying that I don't see the progress getting any slower.


Try this: launch Cursor.

Type: print all prime numbers which are divisible by 3 up to 1M

The result is that it will write a sieve. There's no need for that; the only prime divisible by 3 is 3 itself.
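
(A minimal sketch to sanity-check that claim; the prompt doesn't specify a language, so Python here is just an assumption:)

  # Every multiple of 3 other than 3 has 3 as a proper divisor,
  # so 3 is the only prime divisible by 3. Brute force confirms it:
  def is_prime(n):
      return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

  print([n for n in range(2, 1_000_000) if n % 3 == 0 and is_prime(n)])
  # -> [3]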


Just tried this with Gemini 2.5 Pro. It got it right, with a meaningful thought process.


Because we still haven't figured out fusion, but it's been promised for decades. Why would everything that's been promised by people with highly vested interests pan out any differently?

One is inherently a more challenging physics problem.


I don't think the world where a mob of people can gang up on a person and take their stuff is as idyllic as you think it is. If the person who has figured out how to earn a lot of food doesn't get to "hoard" it, it'll just get hoarded by a person with the biggest stick.

What's worse (for society) is that in this world nobody has an incentive to create wealth, because they know it'll just be taken away. When rich people aren't in power, people with political capital and big guns are. I don't think that's better.

If AGI takes over, that changes things, somewhat. If it creates unlimited abundance, then it shouldn't matter who has the most (if everyone has plenty). Yes, it would create power disparity, but the thing is, there'll always be SOMEBODY at the top of the social hierarchy, with most of the money and power - in the AGI scenario, that is someone who is in charge of AGI's actions.

Either it's AGI itself (in which case all bets are off, since it's an alien god we cannot control), or the people who have developed AGI, or the politicians who have nationalized it.

Personally, I'm uncomfortable with anyone having that much power, but if I had to pick the lesser evil - I'd prefer it to be a CEO of an AI company (who, at least, had the competence and skill to create it), instead of the AGI itself (who has no reason to care about us unless we solve alignment), or one of the political world leaders (all of whom seem actively insane and/or evil).


> If the person who has figured out how to earn a lot of food doesn't get to "hoard" it, it'll just get hoarded by a person with the biggest stick.

Where are you getting the 'earn' part from, and why isn't it in scare quotes like 'hoard'? It seems like you're just changing the parameters of the argument to support the conclusion you prefer.


> What's worse (for the society), is that in this world nobody has an incentive to create wealth, because they know it'll just be taken away.

I know this has been discussed at length in many places, but I just want to point out that it isn't binary. There is some kind of distribution where "sovereign ownership" (full protection, no taxes, no redistribution) would entice the most people to create wealth (and even then, I doubt it would be 100% of the population), all the way to "mob rule" where a minimal number of people would be enticed to create wealth (and I don't think it would be 0%). People do things for multidimensional reasons.

That said, our societies have tried many variations along the spectrum between these two extremes, and I think we have uncovered the importance of protecting wealth and the incentive to create it.


Most hunter-gatherer societies have big differences in productivity between members. I remember reading about one example where one man did all of the hunting for a tribe of about 40 people. He really enjoyed both the hunting and the status of being the best hunter. He shared the meat freely. No one was taking it away.


Nothing about the current system (capitalism) prevents people from sharing freely; that's just charity. I think it's wonderful and admirable when people do that, and I fully support it, as long as it's voluntary.

I'd be happy to live in a version of society where there's enough abundance and good will that people just give to charity, and that is enough to support everyone, and nobody is being forced to do anything they don't want.

I only dislike it when people advocate for involuntary redistribution of wealth, because it has a lot of negative side effects people aren't thinking through. Also, because I think it's evil and results in the sort of society and culture that would be a nightmare to live in.


Isn't "involuntary redistribution of wealth" literally every country on earth though (aside from a few that have such a lack of rule of law that the state can't tax the population)? Do you consider the entire developed world (and most of the rest) a nightmare to live in?

I live in Germany where we have taxes & don't consider it a nightmare.


I think it's a gradient. When I think about the "nightmare to live in", I think of the Soviet Union or North Korea. Those are the places that went all-in on redistribution.

Most western countries mostly respect individual freedom and property, taxes being an exception to that, somewhat limited and controlled. I see that as a necessary evil - something we can't fully avoid (at least, I can't figure out how we'd do that), but should try to minimize, to avoid sliding down the spectrum towards more and more evil versions of that.

I think most western countries are nice to live in because they do a comparatively good job of respecting people's freedom, property, and the right to keep the stuff they earn.

Advocating for more redistribution is taking steps away from that, in the direction people don't realize they don't want to go in.


  > Advocating for more redistribution is taking steps away from that, in the direction people don't realize they don't want to go in.
With shared ownership (e.g. a cooperative business) there isn't a forced redistribution in the first place; I think that's the point of the original poster?


Do you consider redistributing everything from the people to the dictator indistinguishable from the reverse?


> Isn't "involuntary redistribution of wealth" literally every country on earth though (aside from a few that have such a lack of rule of law that the state can't tax the population)?

Places with governments that weak also tend to prominently feature involuntary redistribution of wealth. It tends to be more self-service at the hands of the end recipients, and without the kind of ethical theory behind it that is at least the notional framework of redistribution by functional governments, but it still very much occurs.


It's not as involuntary as I made it sound. I think if he had decided not to share the meat, he would have had problems with the rest of the tribe.


You also can't eat more than so much meat anyway, and it spoils at some point (especially in a society without electricity/refrigeration).


> in which case all bets are off, since it's an alien god we cannot control

I hate to break it to you, considering how much effort you put into your comment, but this is already the case. Global economics is something we CANNOT control, so the world you live in is ALREADY governed by an alien god. The self-described "optimists" here are naive at best and delusional at worst.


Abundance is going to be limited by raw materials.

The less need there is for human labor, the less disincentive humans have for killing each other over raw materials.


So, a cutting-edge AI model turned out to be much cheaper and easier to produce than we thought. Weird reason to call something a "fad". Here's hoping nobody invents a way to produce much cheaper and faster cars, or this whole Car Fad will be over too.


From the M-W dictionary: "Fad: a practice or interest followed for a time with exaggerated zeal: CRAZE"

Books are also a fad, if "for a time" stretches across lifetimes; they're an advanced technology that requires some infrastructure and depends on a complex system (though not nearly as much as computers do, never mind AI), and they're popular among a subset of people. Ignore earth-systems collapse and the underlying technology that keeps these fads afloat will cease to function.

Computers are a cool diversion but not essential for human "survival and thrival", nor is our widespread embrace of the technology without consequences for life on earth.

I would far rather talk with other biased humans than the regurgitations of some biased amalgam-bot made with stolen data, even if it can act syncretically. My bias is as a high-school science teacher who likes helping students gain understanding about the world, and hopefully wisdom, a sense of awe and responsibility, and their own sense of purpose.


Remember in the mid-90s when people talked about the internet being a fad? I think AI is a fad in the same way, which is to say not a fad at all no matter how much certain entrenched interests wish it were so.


You're agreeing with the author. The Internet lasted. Webvan and pets.com not so much.


Now we have Instacart and Chewy, so the concepts lasted, but the early adopters not so much.


> Remember in the mid-90s when people talked about the internet being a fad?

People keep bringing this up and it is pure bullshit.

I lived through the '90s, and the Internet was never seen as a fad back then. It was hyped throughout the whole decade as the future, in many ways rightfully so. The hype was so strong that it culminated in the dotcom bubble early the next decade.

You may think that AI skepticism is due to "entrenched interests" that want it to be a fad; I argue that AI hype is due to "entrenched interests" wishing that all the overpromises were real.

I regularly use AI - mostly local models with either Ollama or Stable Diffusion.

I find it mildly useful in some specific scenarios, but very far from being comparable to the internet in terms of how ubiquitous and necessary it might become.


Fad wasn't the word used, but the Internet was described as a "bubble" widely and often.


Well, it was a bubble. There was a major burst at the turn of the century that proved it, with many consequences.

A bubble just meant that the valuations of internet companies at the time were overinflated and detached from reality, not that the Internet as a technology was useless.

I think comparing AI to the internet in terms of usefulness is absolute wishful thinking. The Internet was a major inflection point in the history of the world, maybe of the same magnitude as the advent of computers or the industrial revolution.

AI (and we should be clear that we are actually talking about Generative AI in this context) is an interesting tech, may be pretty useful in some contexts, but it is not in the same league of the previous examples.


Can you specify what these "entrenched interests" are?


> Can you specify what these "entrenched interests" are?

Labor (meaning anyone who works for a living) and anyone who's not prioritizing shareholder value above all else.


Good luck with that. It's not your fault (hopefully). No really we all wanna pat your head, but not until we see results. Bring it fucker.

In the meantime you should expect everything to fall apart for reasons you're completely ignorant of and disconnected from. Maybe your fault? Who knows and who cares? hahahahahah

You can roll over your investments as we all do. You gotta think about number one. Act quickly. You're supposed to be smart money not dumb money...


The fad pertains to the huge valuations and market hype, which a much cheaper and easier model crashes.


“Bubble” is probably more apt.


The fad part is that LLMs don’t work. People keep mistaking simplistic demos for practical applications.

They don’t provide good summaries. They don’t help non-experts simulate expert work. They don’t provide reliable search results.

If someone promoted a calculator that gets 90% of the digits correct in its answers and 90% of the time those digits were in the right order, that would be a useless calculator.
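
(A quick back-of-the-envelope sketch of why that's useless, assuming a ten-digit answer and independent per-digit errors, and ignoring the ordering failures entirely; even then the whole answer comes out right only about a third of the time:)

  >>> 0.9 ** 10  # chance that all ten digits are individually correct
  0.3486784401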

I have not spoken with any AI fanboy who can substantiate his claims about the usefulness of LLMs. I have used DeepSeek twice now, and both times its results were unusable for engineering purposes but would have impressed tipsy people at a party.

I have heard credible reports that Copilot is helpful. And I routinely use ChatGPT for prototyping tools, but that makes it one more interesting tool, not a revolution.


I mean, just because something is a fad doesn't mean it's not _real_. "There was a fad for X in the 1980s" doesn't necessarily, or even usually, mean that X doesn't happen today, just that it is no longer such a big deal. You could call the late 19th century railway bubble a fad of sorts; railways obviously still exist and are very important, but speculative railway building is no longer a double-digit percentage of the global economy, say.

Like, in ten years it is likely that LLMs will be used for some things. It is less likely that people will be talking about spending literally trillions of dollars on LLM arms races, however.


I keep seeing people on the internet distrust him with a lot of confidence, but I haven't heard any tangible evidence that he's lying about anything.

Can you name a couple of examples of the things he said that we know are lies? Or is it all just people making uninformed assumptions or being snarky?


The main thing is OpenAI itself. Altman long pitched it as an open non-profit and raised hundreds of millions on that. Turns out it's not open, and it has now been positioned as a for-profit entity owned by a non-profit that is trying to convert itself to a for-profit.


On top of that, he repeatedly used the fact that he had no equity to push back on criticism and to appear more altruistic and trustworthy. Turns out, that was just part of the con.


Sam Altman is currently being sued by his sister, who alleges that he raped and sexually abused her over a period of nine years. He denies this; I guess that counts as lies.

https://www.cnn.com/2025/01/08/business/sam-altman-denies-si...


Changing his company from a non-profit to a for-profit once he saw the $$$ seems pretty untrustworthy to me.


OpenAI is not changing to a for-profit. It[1] always was a for-profit entity, owned by a non-profit.

The big change is that the non-profit no longer owns the entirety (or even a majority) of the for-profit entity; it is now a minority owner.

[1] OpenAI as we know it today. OpenAI was once just the non-profit entity, but back then it was just an AI think tank. In 2019, it formed the for-profit corporation as a subsidiary to raise money and build the tech that we now know about (and make money from any products built on that tech).


Secretly funding the FrontierMath benchmark, with contributors unaware of the COI, and having access to the questions and answers with only a "verbal agreement" not to train on them.

https://techcrunch.com/2025/01/19/ai-benchmarking-organizati...


I don't know if this is technically a lie, but he said that he'd purchase electricity made from fusion power within the next couple of years.

I don't know if he believes that himself. But I can tell you with extreme high confidence that this is not going to happen, and it's not even close to anything remotely realistic.


The literal bait-and-switch of "Oh hey, OpenAI is a nonprofit, I promise, guys!" and then turning around and converting it to a for-profit.

That alone is enough to walk away from him.


One doesn't have to be a liar to be untrustworthy.

Why was he removed from leadership positions at Loopt, YC, and OpenAI?

https://www.disconnect.blog/p/kara-swishers-story-about-sam-...

