Neither did the dismissal of AI in the article. I'd classify it as "not even wrong": the factual parts are true, but the conclusions are utter nonsense, since ChatGPT can be extremely useful regardless of whether those claims are true.
After having spoken with one of the people there I'm a lot less concerned to be honest.
They described it as something akin to an emotional vibrator, that they didn't attribute any sentience to, and that didn't trigger their PTSD that they normally experienced when dating men.
If AI can provide emotional support and an outlet for survivors who would otherwise not be able to have that kind of emotional need fulfilled, then I don't see any issue.
Most people who develop AI psychosis have a period of healthy use beforehand. It becomes very dangerous when a person cuts back time with their real friends to spend more time with the chatbot: there is no one left to anchor you to reality, and that can create a feedback loop.
Wow, are we already in a world where we can say "Most people who develop AI psychosis..." because there are now enough of them to draw meaningful conclusions from?
I'm not criticising your comment, by the way; it just feels a bit mind-blowing. The world is moving very fast at the moment.
I think there's a difference between "support" and "enabling".
It is well documented that family members of someone suffering from an addiction will often do their best to shield the person from the consequences of their acts. While well-intentioned ("If I don't pay this debt they'll have an eviction on their record and will never find a place again"), these acts prevent the addict from seeking help because, without consequences, the addict has no reason to change their ways. Actually helping them requires, paradoxically, letting them hit rock bottom.
An "emotional vibrator" that (for instance) dampens that person's loneliness is likely to result in that person taking longer (if ever) to seek help for their PTSD. IMHO it may look like help when it's actually enabling them.
The problem is that chatbots don't provide emotional support. To support someone with PTSD, you help them gradually untangle the strong feelings around a stimulus and develop a less intense response. It's not fast and it's not linear, and it requires a mix of empathy and facilitation.
Using an LLM for social interaction instead of real treatment is like taking heroin because you broke your leg, and not getting it set or immobilized.
> To support someone with PTSD you help them gradually untangle the strong feelings around a stimulus and develop a less strong response.
It's about replaying frightening thoughts and activities in a safe environment. When the brain notices they don't trigger suffering, it fears them less in the future. A chatbot can provide such a safe environment.
It may not be a concern now, but it comes down to their level of critical thinking. Epistemic drift, when you have a system that is designed (or reinforced) to empathize with you, can create long-term effects not noticeable in any single interaction.
I don't disagree that AI psychosis is real. I've met people who believed they were going to publish at NeurIPS because of the nonsense ChatGPT told them, who believed the UI mockups Claude gave them were actually producing insights into its inner workings rather than just being blinking SVGs, and I even encountered someone at a startup event pitching an idea that I'm 100% sure is AI slop.
My point was just that the interaction I had with someone from r/myboyfriendisai wasn't one of those delusional ones.
For that I would take r/artificialsentience as a much better example. That place is absolutely nuts.
Not necessarily: transactional, impersonal directions to a machine to complete a task don't automatically imply, in my mind, the sorts of feedback loops necessary to induce AI psychosis.
All CASE tools, however, displace human skills, and all unused skills atrophy. I struggle to read code without syntax highlighting after decades of using it to replace my own ability to parse syntactic elements.
Perhaps the slow shift risk is to one of poor comprehension. Using LLMs for language comprehension tasks - summarising, producing boilerplate (text or code), and the like - I think shifts one's mindset to avoiding such tasks, eventually eroding the skills needed to do them. Not something one would notice per interaction, but that might result in a major change in behaviour.
I think this is true but I don't feel like atrophied Assembler skills are a detriment to software development, it is just that almost everyone has moved to a higher level of abstraction, leaving a small but prosperous niche for those willing to specialize in that particular bit of plumbing.
As LLM-style prose becomes the new Esperanto, we all transcend the language barriers (human and code) that unnecessarily reduced the collaboration between people and projects.
Won't you be able to understand some greater amount of code and do something bigger than you would have if your time was going into comprehension and parsing?
I broadly agree, in the sense of providing the vision, direction, and design choices for the LLM to do a lot of the grunt work of implementation.
The comprehension problem isn't really so much about software per se, though it can apply there too. LLMs do not think; they compute statistically likely tokens from their training corpus and context window. So if I can't understand the thing any more, and I'm just asking the LLM to figure it out, produce a solution, and tell me I did a good job while I sit there doomscrolling, then I'm adding zero value to the situation and may as well not even be there.
If I lose the ability to comprehend a project, I lose the ability to contribute to it.
Is it harmful to me if I ask an LLM to explain a function whose workings are a bit opaque to me? Maybe not. It doesn't really feel harmful. But that's the parallel to the ChatGPT social thing: it doesn't really feel harmful in each small step, it's only harmful when you look back and realise you lost something important.
I think comprehension might just be that something important I don't want to lose.
I don't think, by the way, that LLM-style prose is the new Esperanto. Having one AI write some slop that another AI reads and coarsely translates back into something closer to the original prompt like some kind of telephone game feels like a step backwards in collaboration to me.
Accepting vibe-coded prompt-response answers from chatbots without understanding the underlying mechanisms comes to mind as akin to accepting the advice of a chatbot therapist without critically thinking about the response.
> If AI can provide emotional support and an outlet for survivors who would otherwise not be able to have that kind of emotional need fulfilled, then I don't see any issue.
Surely something that can be good can also be bad at the same time? Like the same way wrapping yourself in bubble wrap before leaving the house will provably reduce your incidence of getting scratched and cut outside, but there are also reasons you shouldn't do that...
"PTSD" is going through the same semantic inflation as the word "trauma". Or perhaps you could say the common meaning is an increasingly more inflated version of the professional meaning. Not surprising since these two are sort of the same thing.
BTW, a more relevant word here is schizoid (as in schizoid personality), not to be confused with schizophrenia. Or at least a very strongly avoidant attachment style.
The parent post is getting flak, but it’s hard to see why it is controversial. I have heard “women want a man who will provide and protect” from every single woman I have ever dated or been married to, from every female friend with whom I could have such deep conversations, and from the literature I read in my anthropology-adjacent academic field. At some point one feels one has enough data to reasonably assume it’s a heterosexual human universal (in the typological sense, i.e. not denying the existence of many exceptions).
I can believe that many women are having a hard time under modernity, because so many men no longer feel bound by the former expectations of old-school protector and provider behavior. Some men, like me, now view relationships as two autonomous individuals coming together to share sublime things like hobbies, art or travel, but don’t want to be viewed as a source of security. Other men might be just extracting casual sex from women and then will quickly move on. There’s much less social pressure on men to act a certain way, which in turn impacts on what women experience.
You’re probably consuming too much red pill nonsense if it’s hard for you to see why claiming that women who experience multiple sexual partners are mentally damaged is controversial.
The veneer of modern pop psych doesn’t change that this is just slut shaming, no different fundamentally from the claim that women who have multiple partners have loose vaginas. There’s no science behind these sorts of claims. It’s just a mask for insecurity.
Your understanding of the "anthropology-adjacent academic field" is wrong. There are so many ways humans have organized their societies and so many ways men and women have interacted, that to pretend there is some primeval hunter-gatherer society that generated all human evolutionary behaviours is silly. And a typical patriarchal construct that benefits men.
Making claims about "evolution" of "women" without even demonstrating a passing familiarity with the (controversial!) field of evolutionary psychology is a choice.
Because the post is making an unfounded claim about human female evolution along with another unfounded claim about modernity being different from the rest of history, which involves a ton of cultures and societies.
I think the claim that modernity is different is easily defendable. No society during the rest of history had such effective birth control, nor welfare states that removed pressure to produce offspring or even interact so much with family or other members of society. Again, as a man I feel like I am able to live a life very different than I would have been pressured into before, and this surely has ramifications for modern dating and relationships.
This is from the evolutionary psychology book The Moral Animal:
>"What the theory of natural selection says, rather, is that people's minds were designed to maximize fitness in the environment in which those minds evolved. This environment is known as the EEA—the environment of evolutionary adaptation. Or, more memorably: the 'ancestral environment.'...
>"What was the ancestral environment like? The closest thing to a twentieth-century example is a hunter-gatherer society, such as the !Kung San of the Kalahari Desert in Africa, the Inuit (Eskimos) of the Arctic region, or the Ache of Paraguay.
>"Inconveniently, hunter-gatherer societies are quite different from one another, rendering simple generalization about the crucible of human evolution difficult. This diversity is a reminder that the idea of a single EEA is actually a fiction, a composite drawing; our ancestral social environment no doubt changed much in the course of human evolution. Still, there are recurring themes among contemporary hunter-gatherer societies, and they suggest that some features probably stayed fairly constant during much of the evolution of the human mind. For example: people grew up near close kin in small villages where everyone knew everyone else and strangers didn't show up very often. People got married—monogamously or polygamously—and a female typically was married by the time she was old enough to be fertile."
--
The idea that modern life is different is obvious.
I get the impression that there's some other conversation going on here that has nothing to do with evolution, and you are not saying "let's all live in igloos...".
Nonsense. Chimpanzees and bonobos are our closest living relatives. Have a look at how they operate.
From what I can tell, men have caused significant damage to women's psyche. Men that turn women into a commodity plaything instead of a fellow human being.
Women are human beings just like men, they aren't some alien species. Trauma hurts their psyche, not pleasure. If women were in a safe, supportive, mature society, some would be monogamous, some would be poly, some would be non-committal (but honest), and some would be totally loose. Just like men. In every case they would be safe to be who they are without abuse.
Instead, and this is where men and women deviate, it is not safe. Men will often kill or crush women, physically, professionally, and often at random. Women are not allowed to walk around at night because some men having a bad day or a wild night may not be able to control themselves, and most of society is just okay with this. Police in large swaths of the world do not help make anything safer, in fact they make it more dangerous.
The only reason women who are more monogamous can seem better off is because society does not make room for those who aren't that way. And there are many who aren't that way. There are many who are forced to mask as that way because it is often dangerous otherwise. At large, a prison for women has been created. I think that people may even enjoy how dangerous it is, in order to force women to seek the safety of a man.
Most of society doesn't make room for liberated women and it is heartbreaking. I will dream of a future where I can meet women as total equals, in all walks of life, without disproportionate power, where all of us as humans are free to be who we are in totality.
If you read journalism about why women are frustrated with dating today, one of the number-one complaints is that the men they are meeting are “flaky”, women can’t trust that the man will be there for her. Your depiction that “women don’t really need men” completely misses the current trend that this thread is about.
> complaints is that the men they are meeting are “flaky”, women can’t trust that the man will be there for her.
No, that's not a complaint that the "modern" man isn't some sort of 1950s provider, it's a complaint that he does not text back. Everyone on the apps suffers from ghosting. It's exhausting because you have to be "On" in 100% of your interactions and texts but there's only like a 2% chance it will continue in any shape no matter what you do.
Even the "tradwife" trend is not actually harkening back to the 50s and a strong provider man, and instead lionizes a reality that never existed and is much more about wanting to check out of the rat race that harms us all. These women do not want to be a 1950s homemaker, they just want to focus on their hobbies and not worry about money.
I never said women don't need men, did I? Let me read what I said again.
No, I never said that. I said women need safety, and society is largely not safe for them.
Human beings are social creatures. Women need men. Women need women. Men need women. Men need men. We all need each other.
The system patterns of online dating cultivate undesirable traits in both men and women which result in side effects that no one would want. "Flakiness" is one such side effect.
Online dating dynamics create high abundance, low commitment environments that systematically produce “flakiness,” so the issue isn’t about women needing men or not, but that both sexes operate in a degraded safety/trust landscape shaped by platform incentives rather than by real world social cues. Restore actual interpersonal safety and the entire pattern shifts positive, with less defensive behavior, less attrition, less pain, and more ethical orgasms.
All people, regardless of gender, should cultivate a safety in both society and in themselves. This safety is liberating. Instead of controlling people, you free them. Instead of binding, you uplift. Instead of harming, you heal. This is the basis of safety.
Perhaps one of the problems with modern dating is that women expect a man to provide safety, but many men don’t want to be viewed as a source of safety? Me, I am only interested in relationship for companionship, someone with whom I can share interesting experiences, because joy is not complete unless it is shared. But when it comes to safety and security, a partner is on her own. That’s not to say that I wouldn’t do this or that for a partner, but it would be supererogatory. My male friends have a similar complaint, this isn’t just a HN thing.
Again, this is probably an outcome of modernity. I likely wouldn’t think this way as a man, if I didn’t grow up in a modern age hearing that women are strong, they can take care of themselves and no longer depend on men.
Safety doesn't mean you're a provider. It means you are safe to be authentic with. Safe to share truth with.
That safety takes many forms.
You cannot have depth without that safety. It is physical, it is also emotional and intellectual.
For instance, without safety a partner would never join you on many interesting experiences. If you want those experiences, they need to be able to trust you.
Now extend that idea of safety to a broad society context, and that is approaching what I was speaking to.
The safety I have heard demanded directly from women to me as a partner – or from female friends about the man they seek – is the safety of being a provider, giving them a feeling of security that they can’t manage to achieve on their own. It’s not just about a man being safe to be with. Again, you are speaking about something I haven’t heard from actual women, and I think I’ll trust the latter (and reportage matching it) over an HN stranger for forming my assumption of what women want from relationships.
And again, maybe part of why women might be having problems with dating is that many men today don’t want to be seen as a big emotional support for a partner either. That’s draining and time-consuming. This might bother you, but my whole point is that the social pressures are no longer there to compel men (or women) to act a certain way, and that is impacting dating.
> from women to me as a partner – or from female friends about the man they seek
How many people are you talking about here? Like if you had to rephrase this point using numbers would you say “I’ve heard half a dozen women say this”?
That aside, can you elaborate on safety as a demand? I’ve never had a partner or friend demand safety from me, ever. The only times in my life that I have seen someone demand safety from another is when the latter is acting violent or reckless to the point that their behavior poses a threat.
I fear our friend we're replying to here may have never had a deep relationship with the opposite sex.
This is unfortunately the reality of countless men, often going their entire lives like this, with bitterness and resentment growing outwardly instead of reflection inwardly.
Hijacking this response now for some advice / thoughts.
So for the lurking straight men: women are simply human beings trapped in a form you desire. The game here is simple. Don't try and control women as objects. Instead, try and control your desire.
I can promise with certainty, if you control your desire, everything you've ever dreamed and more will appear. This is not an easy game to play. But it is the only way to win.
Don't pursue women as romantic interests. Ever. Leave them alone. Instead, connect with them only as friends, and only as they initiate. This is the first step to escape the brainwashing we've all been subjected to.
This means you will be going through a withdrawal. It is difficult. Take a hike. Pour yourself into work. Take on new hobbies. Grow yourself.
Friends will appear. It doesn't matter what sex they are, they are friends, treat them with the same respect and kindness as you would anyone. This is your first test. This could appear in months, it could appear in years, it all depends on you.
We need to start seeing the light in each other, beyond the skin. Every single person, regardless of how you view them, has a universe in them. Help them become their universe. Don't trap them in yours.
I would wish we existed in a world where these things are lived by, and need not be said. But I know that someday, it will be this way. We will all see each other's humanity. We will inspire each other, enabling the maximum in creative output for everyone, regardless of our lineage and forms. We won't desire vengeance towards nor suffering for anyone any longer because the vastness of the ever expanding cosmos is so much larger than the finite histories of our pain.
It is from that place I try to share some thoughts. I wouldn't think I'd have to say "women are people too" from that place, but it has broad applicability and seems to be necessary in today's world.
You just proved my point. Men are undoubtedly stronger than women. Men are evolved to "spread their seed". Some men will take advantage of women whenever possible. Therefore a woman walking alone at night is not safe. Therefore a woman needs the protection of a man. You cannot change the behavior of every man. You can change some of them, even most of them. At the end, some men will keep being violent. Therefore a woman without a man's protection will never be safe. And this is already burned into their psyche.
> nobody is yet ready to have a serious discussion about this.
There are a ton of people that are happy to have serious discussions about how their superior knowledge of biology gives them oracular insight into the minds of women. These discussions happen every day in Discord chats full of pubescent boys, Discord chats full of young men, and YouTube comments sections full of older men.
Agreed, but this is also a male-dominated space with a lot of men with relationship issues, so objectivity goes out the window when it comes to women here.
I enjoy all the technical discourse here but the views on women are alarming to say the least.
>I enjoy all the technical discourse here but the views on women are alarming to say the least.
You are Gell-Mann amnesia'ing. The takes on technology or anything else are just as buttfuck stupid and off.
The other day HN was full of people insisting that there would be some "unforeseen downside" of dropping the penny and making stores round purchase amounts to the nickel.
Meanwhile, the first cash registers were only able to operate on 5 cent increments because in the early 1900s pennies were "inconvenient"!
Similarly, it's extremely common for people here to insist that "sales tax in the US is complicated", but it just isn't. The entry-level cash register from the 90s supports "US, Canada, and VAT" tax schemes plus 4 custom tax regimes, and that is treated as fully expected functionality; it was the norm in earlier systems as well.
I am still slightly worried about accepting emotional support from a bot. I don't know if that slope is slippery enough to end in some permanent damage to my relationships and I am honestly not willing to try it at all even.
That being said, I am fairly healthy in this regard. I can't imagine how it would go for other people with serious problems.
A friend broke up with her partner. She said she was using ChatGPT as a therapist. She showed me a screenshot; ChatGPT wrote "Oh [name], I can feel how raw the pain is!".
All humans want sometimes is to be told whether what they're feeling is real or not. A sense of validation. It doesn't necessarily matter that much whether it's an actual person doing it.
Yes, it really, truly does. It's especially helpful if that person has some human experience, or even better, up-to-date training in the study of human psychology.
An LLM chat bot has no agency, understanding, empathy, accountability, etc. etc.
I completely agree that it is certainly something to be mindful of.
It's just that I found the people from there a lot less delusional than the people from e.g. r/artificialsentience, who always believed that AI Moses was giving them some kind of tech revelation through magical alchemical AI symbols.
Avoid it and you're good; you just have to accept that a big part of the language is not worth its weight. I guess at that point a lot of people get disillusioned and abandon the whole thing, when in reality you can just choose to ignore that part of the language.
(I'm rewriting my codex-rs fork to remove all traces of async as we speak.)
If "any amount" means millions of concurrent connections maybe. But in reality we've build thread based concurrency, event loops and state machines for decades before automatic state machine creation from async code came along.
Async doesn't have access to anything that sync Rust doesn't have access to; it just provides syntactic sugar for an opinionated way of working with it.
On the contrary, async is very incompatible with mmap, for example, because a page fault pauses the whole thread, blocking the entire executor (or at least one executor thread).
I'd even argue that once you hit that scale you want more control than async offers, so it's only good at that middle ground where you have a lot of stuff, but you also don't really care enough to architect the thing properly.
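To make the "syntactic sugar for state machines" point concrete, here is a minimal sketch of roughly the kind of state machine an async fn desugars to. The `Double` type is hypothetical and collapsed to a single state transition:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Roughly what `async fn double(x: u32) -> u32 { x * 2 }` becomes:
// an enum with one variant per suspension point, driven by poll().
enum Double {
    Start(u32),
    Done,
}

impl Future for Double {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // `Double` contains no self-references, so it is Unpin
        // and get_mut() is safe here.
        let this = self.get_mut();
        match *this {
            Double::Start(x) => {
                *this = Double::Done;
                // A real async fn would register the waker and return
                // Poll::Pending at each .await point before finishing.
                Poll::Ready(x * 2)
            }
            Double::Done => panic!("polled after completion"),
        }
    }
}
```

A real async fn gets one variant per .await point; the compiler just writes this boilerplate for you.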
What these LLMs enable is fixing the foundations. If you considered writing a novel database, operating system, or other foundational piece of software two years ago, you had to be mad. Now you still do, but at least you got a chance.
I can highly recommend these talks to get your eyes slightly opened to how stuck we are in a local minimum.
People downvote your sarcasm, but if you do the calculations you're kinda right.
1 kg of beef costs:
- The energy equivalent of 60,000 ChatGPT queries.
- The water equivalent of 50,000,000 ChatGPT queries.
Applied to their metric, Mistral Large 2 used:
- The water equivalent of 18.8 tons of beef.
- The CO2 equivalent of 204 tons of beef.
France produces 3,836 tons of beef per day,
and one large LLM per 6 months.
So yeah, maybe use ChatGPT to ask for vegan recipes.
People will try to blame everything else they can get hold of before changing the stuff that really has an impact, if it means touching their lifestyle.
Wow, thanks. I’m even coming up with 500K ChatGPT queries for the CO2 emitted per kg of beef, though I might have moved a decimal place somewhere - feel free to check my math :)
“average query uses about 0.34 watt-hours of energy” - or 0.00034 kWh
Using this calculator: https://www.epa.gov/energy/greenhouse-gas-equivalencies-calc... - in my zip, that works out to about 0.0002 kg of CO2 per query. (Though I suppose it depends more on the zip where they’re doing inference; this translation didn’t seem to vary much when I tried other zips.)
Then, 99.48 kg / 0.0002 kg = 497,400 ChatGPT queries worth of CO2 per kg of beef?
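For anyone following along, here's that arithmetic spelled out. All inputs are the rough estimates quoted upthread (per-query energy, an assumed grid intensity, a commonly cited beef footprint), not measurements:

```rust
fn main() {
    // Rough estimates from the thread above, not measurements.
    let kwh_per_query = 0.00034; // 0.34 Wh per ChatGPT query
    let kg_co2_per_kwh = 0.5; // assumed grid intensity; varies by region
    let kg_co2_per_query = kwh_per_query * kg_co2_per_kwh; // ≈ 0.00017 kg
    let kg_co2_per_kg_beef = 99.48; // CO2e footprint of 1 kg of beef

    let queries = kg_co2_per_kg_beef / kg_co2_per_query;
    println!("{queries:.0} queries ≈ CO2 of 1 kg beef");
    // ≈ 585,000 with these inputs; with the rounded 0.0002 kg/query
    // figure above it comes out to the ≈ 497,400 cited.
}
```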
This is spot on because there can’t be two issues that exist simultaneously. There can only be one thing that wastes enormous amounts of energy and that thing is beef
You can try to misconstrue and ridicule the argument,
but that won't change the math: if you have one thing that causes 1 unit of damage, and another thing that causes 100,000 units of damage, then for all intents and purposes the thing that produces 1 unit of damage is irrelevant.
And any discussion that tries to frame them as somewhat equally important issues is dishonest and either malicious or delusional.
My guess, as I've expressed earlier in the comment chain, is that it's emotionally easier for people to bike-shed about the 0.01% of their environmental impact, than to actually tackle things that make up 20%.
And no, it's not only beef (which is a stand-in for meat and dairy); another low-hanging fruit is transport, like switching your car for a bike.
But switching from meat and dairy to a vegan diet would reduce up to 20% of your personal environmental impact, in terms of CO2.
And about 80-90% of rainforest deforestation is driven directly or indirectly by livestock production.
So it's simply the easiest, most impactful thing everyone can do. (Switching your car for a bike isn't possible for people in rural areas, for example.)
>1 unit of damage, and another thing that causes 100,000 units of damage, then for all intents and purposes the thing that produces 1 unit of damage is irrelevant
You make a good point. A problem is only a real problem if you can’t find a bigger thing that makes it look small by comparison. For example, the worldwide concrete industry creates more CO2 than beef does, so there is no reason to stop eating beef if you enjoy it.
Now I know that some might say that “all of this is cumulative” or “the material problems that stem from entrenched industries are actually a reason not to invent completely novel wasteful things rather than a justification for them”, but in reality only two things are true: only the biggest problem is real, and the only problem is definitely some other guy’s doing. If I waste x energy and my neighbor wastes y amount, a goal of reducing (x+y) is oppressive, whereas a goal where I just need to try to keep x lower than y feels a lot nicer.
I agree. Humans have been eating meat and doing construction for the entire history of civilization, they are not the sort of things that could be affected by posting online. LLMs on the other hand are new, largely in the hands of a small handful of companies, and a couple of those companies are bleeding cash in such a way that they might actually respond to consumer pressure. It is cynical to compare them to things that we know will not change as a justification for a blanket excuse for them.
Seeing as these models being wasteful is integral to the revenue of companies like OpenAI and Anthropic, the more people that tell them that the right business strategy is to start perpetually building data centers and power plants, the less incentive they have to build models that run efficiently on consumer hardware.
They just suggested a different bike shed — one that, for the purposes of their argument, won’t ever get fixed. j-pb’s point is that running a bunch of generators 24/7 in Memphis is fine because people eat meat. Inefficient LLMs in the real world are okay because people could theoretically become vegan but have not. It’s just a thought experiment.
If something costs too much, and you find a way to completely pay for it, that's not bikeshedding.
And it's not a thought experiment. It's a very real suggestion. If you're worried about the resource cost from your personal use, doing something to 100% offset it lets you stop worrying.
> become vegan
For one day per year. Replacing a day you would have otherwise eaten meat. That is an extremely attainable action for anyone that cares enough about LLM resource use enough to strongly consider avoiding them. It's not something that "will not change".
By the way, your goal of running efficiently on consumer hardware isn't as great as it sounds. One of the best ways to improve efficiency is batching multiple requests, and datacenter hardware generally uses more efficient nodes and runs at more efficient clock speeds. There's an efficiency sweet spot where models are moderately too big to run at home.
And it really undermines your argument when you throw in this stupid strawman about Elon's toxic generators. You know j-pb was talking about typical datacenter resource use, not that. Get that insulting claim out of here.
It is only a “very real suggestion” if you believe that your argument might be effective.
Do you believe that “skip meat for a day, use LLMs for a year” will have a climate impact?
Because if not then you agree with me that in this case theoretical vegans are just being used to justify more real consumption, not less
>stupid strawman about elon's toxic generators
They exist in the real world, right now. It is a real phenomenon and no matter how many vegans I imagine it’s still there. I’m not really clear on why the real thing that’s really happening is a strawman unless you think that the existence of that system is so bad that it undermines your position. Even then it wouldn’t be a strawman though, just a thing that doesn’t support your position that using LLMs is categorically fine because you can picture a vegan in your head
> Do you believe that “skip meat for a day, use LLMs for a year” will have a climate impact?
If "use LLMs for a year" is enough to count as having a climate impact (negatively), then yes I believe "skip meat for a day use LLMs for a year" is enough to count (positively).
I'd be tempted to write off both of those, but the whole point of your argument is to consider LLM resource use important, so I'm completely accepting that for the sake of the above argument.
There are no theoretical vegans involved.
And the suggestion doesn't even involve vegans, unless there's a massive contingent of americans that only eat meat one day per year that I wasn't aware of.
And to get at what I think is your core objection: The fact that people can do this isn't being used to let companies off the hook. If only 2% of LLM users set up a meat skipping day, then LLM companies are only 2% let off the hook.
But at the same time let's keep a proportional sense of how big the hook is.
> They exist in the real world, right now. It is a real phenomenon
The strawman is you accusing people of supporting those generators.
> your position that using LLMs is categorically fine
>If "use LLMs for a year" is enough to count as having a climate impact (negatively), then yes I believe "skip meat for a day use LLMs for a year" is enough to count (positively).
Sorry, I should have clarified. In this case I meant “argument” as a thing that leads real people to either understand or agree with your position, not the construction of an idea in your mind.
With that in mind, do you think that “skip meat for a day, use LLMs for a year” will convince enough real people, in real life, to not eat meat that it offsets the emissions from LLM use?
Like imagine the future.
Since LLM use is a new category of energy use, you would have to convince people that haven’t already been convinced to skip meat by animal cruelty, health, philosophy, or existing climate concerns. People that were vegan before LLMs became popular obviously don’t count. The group of people that resisted decades of all that messaging will now make a meaningful adjustment to their consumption to cancel that out — and there will be enough of these new part time/full time vegans that it offsets the entire chat bot industry’s energy usage.
Do you imagine that being what happens?
If not it’s just somebody advocating for increased consumption in real life by invoking imaginary vegans.
As somebody that’s spent years as a vegan I am incredibly wary of “vegans can recruit” as a pitch. I’ve only ever heard that from people that have never tried to recruit in earnest or charlatans. Like I’ve mostly heard that from people that are not, never have been, and have no interest in being vegan.
Edit:
>The strawman is you accusing people of supporting those generators.
That’s not what a strawman is and it’s not an accusation, it’s an observation. If you say “I want subscription based online batched mega-high-compute language models” you are advocating for that industry, and those generators are part of it. Saying you feel that they’re somehow special and different because they’re icky does not make them any different from the thing that you say is necessarily the future. That you want!
I think anyone that does get convinced and skip meat should be able to use LLMs without shame or guilt, while we continue to pressure everyone else to save resources and we continue to pressure LLM companies to save resources.
LLM companies only get let off the hook if a very large fraction of their users do the meat skip thing, which is not very likely but could theoretically happen.
LLMs being a new category of energy use should get them some extra scrutiny, but only some. Maybe 3x scrutiny per wasted kilowatt hour compared to entrenched uses? If our real motivation is resource use, and not overreacting to change, LLMs should get some pressure but most of the pressure should go toward preexisting wasteful uses.
Nobody is advocating to ignore LLMs. But we shouldn't overstate them too much either.
And the giving up meat defense is not a defense for the companies, it's a defense for individual users that actually do it.
Like not an if or maybe thing, what do you see when you picture the future?
Do you think “Skip meat for a day, use LLMs for a year” will produce enough new vegans to offset the energy usage and CO2 produced by the LLM architecture of your choice?
Not asking if you want it to happen or if it’s something you can imagine could happen, I’m asking if you think it will
[_] yes
[_] no
Because if no, then the idea is just advocating for increased real consumption by invoking imaginary vegans!
Edit:
>LLM companies only get let off the hook if a very large fraction of their users do the meat skip thing, which is not very likely but could theoretically happen.
The person I was initially talking to took the position that LLM companies have negligible impact because people can be vegan. j-pb was saying that LLM companies shouldn’t be on anybody’s radar because, uh, meat is 100,000 times worse.
The person you hopped in to defend was saying that LLM companies do not and should not have a “hook” because meat eaters exist.
> It was a yes or no question [...] I’m asking if you think it will
[x] no
> Because if no, then the idea is just advocating for increased real consumption by invoking imaginary vegans!
Wrong.
> The person I was initially talking to took the position that LLM companies have negligible impact because people can be vegan.
He said "LLMs are not the problem here", which is true.
And he was arguing for individual use being offset when he said "maybe use ChatGPT to ask for vegan recipes".
The top level comment was also about individual use. "I would really like it if an LLM tool would show me the power consumption and environmental impact of each request I’ve submitted."
The comments right before you replied were also about individual use. "lifestyle choice".
> j-pb was saying that LLM companies shouldn’t be on anybody’s radar because, uh, meat is 100,000 times worse.
The 100,000 number was a throwaway hypothetical to make a point. Not a number he was applying to LLMs in particular. Two lines later he threw in a 2,000x too.
And what he said is that LLM companies are not "somewhat equally important". Which is true. He didn't say you should ignore them entirely, just to have a sense of proportion.
-
Edit: Here is an important distinction that I think isn't getting through. There are multiple separate points being made by j-pb:
Point A, about not eating meat for a day, is only excusing anyone that actually does it. It's not a hypothetical that excuses the entire company.
Point B, about the size of the impact, suggests caring less about LLMs based on raw resource use. Point B does not care about the relatively small group of people that take up the offer in Point A. Point B is just looking at the big picture.
Then it is not a “very real suggestion”. It is a thought experiment, which should be taken with commensurate weight.
>Wrong
Explain what “skip a day of meat, do a year of LLMs” is then. If it’s not just an ad for feeling good about using LLMs, what is it?
>The 100,000 number was a throwaway hypothetical to make a point
>Two lines later he threw in a 2,000x too.
Alright he said that meat is 2,000 times worse than language models as well as 100,000 times worse than language models. He might have meant 100k but could also mean 2k.
Do you have a real problem in real life where if somebody called you and said “it’s gotten two thousand times worse” versus “it’s gotten a hundred thousand times worse?” the former would be fine and the latter alarming?
If yes, what is the problem? Why was it a problem at 1x? 2000x? 100,000x? Why was it a problem at at 1x and 100,000x but not 2000x?
> Explain what “skip a day of meat, do a year of LLMs” is then. If it’s not just an ad for feeling good about using LLMs, what is it?
You can stop being part of the problem if you do it. The problem still exists, but you are no longer part of it. You reduced it by more than your fair share. While the problem would stop existing if everyone made the same choice, there's no pretense that that's actually going to happen. LLM companies are not being excused by such an unlikely hypothetical.
j-pb also made an argument to not care much about LLMs at all, but it was separate from the "skip a day of meat" argument. That's where the big multiplier comes in. But again, separate argument.
I don't want to argue about the example ratio he used. The real ratio is very big if the numbers cited earlier are correct. So if you're going to sit here and say 2000x might as well be arbitrarily large then I think you just joined the "LLM resource use doesn't matter" team, because going by the above citation 2000x is in the ballpark of the correct number, so LLM use is 1 divided by arbitrarily large, making it negligible. Congrats.
Just wanted to chime in and say you represented my case perfectly and got all my points (and their separation) 100%!
You're right, I never said we should not care about LLMs because we also "rightfully don't care about meat".
To me the whole AI resource discussion is just a distraction for people who want to rally against a new scary thing, but not look at the real scary thing that they've gotten used to over the years.
In a sense it's the `banality of evil`, or maybe the `banality of self-destruction`:
The “banality of evil” is the idea that evil does not have the Satan-like, villainous appearance we might typically associate it with. Rather, evil is perpetuated when immoral principles become normalized over time by people who do not think about things from the standpoint of others.
We've gotten so used to using huge amounts of resources in our day-to-day lives that we are completely unwilling to stop and reflect on what we could readily change. Instead we fight against the new and shiny, because it tells a better story, distracting us from what really matters.
In a sense we are procrastinating on changing.
It's not a Skynet-like AI that is going to be the doom of humankind, but the hot dogs, taking your car for the commute, and shitty insulation.
> Whatever you need to tell yourself to keep eating meat buddy.
I’m not the one that brought up moralizing or food. I can’t really comment on your relationship with your diet, but it kind of seems like you saw somebody mention power usage and, unprompted, shared “well I don’t eat meat or cheese or yogurt”. So I guess keep that up while you use enough energy to power your home to write some code slower than you would without it?
Where does a Youtube LetsPlay video fall into that calculation? My understanding is that a single watch of a video is orders of magnitude more than a day's active use of ChatGPT.
I haven't eaten meat for the last 10 years, don't own a car, and take cold showers. Can we talk about the energy and resource efficiency of ChatGPT now, or are you gonna gatekeep the conversation till I convince all of humanity to be vegan?
I'm gonna (mathematically speaking, rightfully) consider the conversation an absolute waste of time and resources until a significant portion of humanity does that, yes. Because you falling over dead and consuming 0 resources would only reduce global resource consumption by 0.00000001%.
I've been vegan for a decade, if that matters. I also opted not to get a driver's license, even though I was lured with getting a car when I turned 18. I'm also straight and I don't want children. The latter (not having children) has a bigger impact than the first two together... should we be anti-natalists?
Anyway, you're also missing the copyright violations and the whole plethora of other problems coming from generative AI, like bias and misinformation. Water and electricity are just a tiny point on this great pile of shit. Also, this low percentage still uses US coal power most of the time and drains water from nearby houses. I'm not a rat to dry-bathe in talc powder because some greedy entrepreneur wants their 6 figures in check.
I disagree; elegant software is explicit.
Tbh I wouldn't mind if we got rid of derives tomorrow. Given the ability of LLMs to generate and maintain all that boilerplate for you, I don't see a reason for having "barely enough smarts" heuristic solutions to this.
I'd rather have a simple and explicit language with a bit more typing than a Perl that tries to include 10,000 convenience hacks.
(Something like Uiua is OK too, but its tacitness comes from simplicity, not convenience.)
Debug is a great example of this. Is derived Debug convenient? Sure. Does it produce good error messages? No. How could it? Only you know which fields are important and how they should be presented (maybe convert the binary fields to hex, or display the bitset as a bit matrix), as in the sketch below.
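For instance, a hand-written Debug impl can render each field the way you'd actually want to read it. The `Packet` type here is made up for illustration:

```rust
use std::fmt;

// Hypothetical type, purely for illustration.
struct Packet {
    header: u16,
    flags: u8,
    payload: Vec<u8>,
}

impl fmt::Debug for Packet {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Packet")
            // Hex for the wire-format header, binary for the flag bits,
            // and just a length for the payload instead of a byte dump.
            .field("header", &format_args!("{:#06x}", self.header))
            .field("flags", &format_args!("{:#010b}", self.flags))
            .field("payload", &format_args!("{} bytes", self.payload.len()))
            .finish()
    }
}

fn main() {
    let p = Packet { header: 0x45a1, flags: 0b0001_0010, payload: vec![0; 1500] };
    println!("{p:?}");
    // Packet { header: 0x45a1, flags: 0b00010010, payload: 1500 bytes }
}
```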
We're leaving so much elegance and beauty in software engineering on the table, just because we're lazy.
Welcome to the new normal. Love it or hate it, there are now a bunch of devs who use LLMs for basically everything. Some are producing good stuff, but I worry that many don't understand the subtleties of the code they're shipping.
For me, the thing that convinced me was the ability to write so much more documentation, specification, tests, and formal verification than I could before, such that the LLM basically has no choice but to build the right thing.
OpenAI's Codex model is also ridiculously capable compared to everything else, which helps a lot.
I’ve never tried to use it for formal verification. Does it work well for that? Is it smart enough to fix formal verification errors?
The place this approach falls down for me is in refactoring. Sure, you can get ChatGPT to help you write a big program. But when I work like that, I don’t have the sort of insights needed to simplify the program and make it more elegant. Or if I missed some crucial feature that requires some of the code to be rewritten, ChatGPT isn’t anywhere near as helpful. Especially if I don’t understand the code as well as I would have if I had authored it from scratch myself.
I never said to use LLMs to generate Uiua. I said that Uiua is an edge case where tacitness is indeed elegance.
I wouldn't write anything but Rust via LLMs, because that's the only language where I feel that the static type checking story is strong enough, in large part thanks to Kani.
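To give a flavour of what that looks like, here is a minimal Kani harness; `clamp_percent` is a hypothetical function under test:

```rust
// Hypothetical function under test.
fn clamp_percent(x: i32) -> i32 {
    x.clamp(0, 100)
}

// Kani explores all possible inputs symbolically, not sampled ones.
#[cfg(kani)]
#[kani::proof]
fn clamped_value_is_in_range() {
    let x: i32 = kani::any(); // symbolic: stands for every i32 at once
    let y = clamp_percent(x);
    // Either proven for all inputs, or Kani reports a counterexample.
    assert!((0..=100).contains(&y));
}
```

Running `cargo kani` either proves the assertion for every input or hands back a concrete counterexample, which gives the LLM something precise to iterate against.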
If it internally has an `InnerFuncErr::WriteFailed` error, you might handle it and then not pass it back at all, or you might wrap it in an `OuterFuncErr::BadIo(inner_err)`, or throw it away and make `BadIo` parameterless if you feel that the caller won't care anyway.
Errors are not exceptions: you don't fling them across half of your codebase until they crash the process; you try to diligently handle them and do what makes sense.
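As a minimal sketch of that wrapping (only the `InnerFuncErr::WriteFailed` and `OuterFuncErr::BadIo` names come from the text above; the function bodies are hypothetical):

```rust
enum InnerFuncErr {
    WriteFailed,
}

enum OuterFuncErr {
    BadIo(InnerFuncErr),
}

fn write_things() -> Result<(), InnerFuncErr> {
    Err(InnerFuncErr::WriteFailed) // pretend the write failed
}

fn do_io() -> Result<(), OuterFuncErr> {
    // Handle what we can here; wrap what the caller actually needs to know.
    write_things().map_err(OuterFuncErr::BadIo)
}

fn main() {
    if let Err(OuterFuncErr::BadIo(InnerFuncErr::WriteFailed)) = do_io() {
        eprintln!("write failed; retrying or reporting makes sense here");
    }
}
```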