The Dangerous Ideas of “Longtermism” and “Existential Risk” (currentaffairs.org)
23 points by shafyy on June 6, 2023 | 41 comments


So longtermism seems to be basically a crude form of utilitarianism that takes all possible future human-like life forms as input to its happiness calculation? This means it multiplies all problems of utilitarianism by the uncertainty of predicting the future.

This sounds like an insane ethical framework, though I'm sure (I hope) the actual writings of longtermists are more nuanced than what is portrayed in the article.


It’s kind of “genius” as a hack to solve utilitarianism’s problems - we can label all potential future utility as effectively infinite, and thus any extinction scenario as having negatively infinite utility. Then anything that influences the odds of extinction by a nonzero amount (say, the invention of AI) gets multiplied by that negatively infinite utility and so carries negatively infinite utility itself. It doesn’t actually matter what the odds are, because the factor of all future utility is so powerful.

So utilitarianism’s primary problem - that we can’t make a consistent, calculable, useful, and agreeable utility function to aid decision making - is solved by overruling everything with the risk of extinction (consistent and calculable insofar as one can make a plausible argument that the odds of extinction from X are greater than zero, useful in that it informs how to proceed wrt X, and arguably agreeable because everyone knows extinction is bad).

Anyway, my personal POV is that a good utility function should discount future utility, which brings us back to the incalculability issue. But if you treat future utility as undiscounted, it becomes very easy to convince yourself of crazy things.
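
(A minimal sketch of that arithmetic in Python, with made-up numbers, just to make the point concrete: with no discounting, total future utility grows without bound as the horizon grows, so any nonzero extinction probability eventually dominates the expected-value calculation; with a discount rate it converges to a finite number and a tiny probability stays a tiny expected loss.)

    # Toy numbers only -- a sketch of the argument above, not anyone's actual model.
    UTILITY_PER_YEAR = 1.0  # arbitrary units of aggregate wellbeing per year

    def total_future_utility(discount_rate, years):
        """Sum per-year utility over `years`, discounted at `discount_rate` per year."""
        total, factor = 0.0, 1.0
        for _ in range(years):
            total += UTILITY_PER_YEAR * factor
            factor /= 1.0 + discount_rate
        return total

    p_extinction = 1e-9  # any nonzero probability works for the argument

    # Undiscounted: the total scales linearly with the horizon, so for a long enough
    # horizon the expected loss p * total outweighs any fixed near-term concern.
    print(p_extinction * total_future_utility(0.00, 1_000_000))  # 1e-3 here; unbounded as the horizon grows

    # Discounted at 3%: the total converges to about (1 + r) / r ~= 34.3 regardless of
    # horizon, so the same tiny extinction probability stays a tiny expected loss.
    print(p_extinction * total_future_utility(0.03, 1_000_000))  # ~3.4e-8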


What I never understood is how longtermists would deal with the existence of future longtermists - and why, by their own logic, this wouldn't basically lead to an eternity of misery.

Basically, longtermism is willing to sacrifice everything in the present - health, wellbeing, a livable present - in order to save the existence of hypothetical human (or formerly human) beings in a presumed "glorious" future.

Except, no matter which point in time you are at, there is always a future - so the moment we arrived in the supposed glorious future, there would be another future to work towards and another population of future humans to sacrifice everything for. In that regard, longtermism would be the proverbial donkey chasing the carrot in front of its face.

With the one difference that the donkey is following an incentive, i.e. it discounts the present because the future appears more desirable to it. In contrast, longtermism is presented as a moral obligation: Because the "death by nonexistence" of future humans is seen as equivalent to actual death, you have to prioritize future humans if you don't want to be saddled with an infinite amount of guilt. Whether the future is seen as more desirable is irrelevant.

Consequently, the "glorious future" can never appear, because at that very moment we'd have to stop sacrificing the present for the future - and would thus be committing an atrocity against the future humans.

So my impression is that if you took longtermism seriously, it would really lead to all times being miserable, not just the present.


I don’t think I would give such an uncharitable interpretation

Longtermism isn’t about acceleration per se, it’s about avoiding extinction

Right now it’s plain that mankind could face extinction in a few hundred or thousand years - from a freak meteor, for example. Or we could (maybe) irrevocably set back human civilization with a global thermonuclear war.

Humanity and nature both have some capacity to violently and suddenly end humanity. In theory, if we were distributed to the stars, that might[0] no longer be the case. At that point, the risk of extinction goes to 0, and longtermism should become the same as bog-standard utilitarianism - or at least that’s how I see it.

[0] unless we invent weapons to suddenly destroy ourselves on a galactic scale, like antimatter bombs delivered on FTL missiles
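
(A back-of-the-envelope sketch in Python, with made-up numbers, of why dispersal drives the risk toward 0: if each self-sufficient colony faces an independent chance of catastrophe, the probability that all of them are wiped out at once shrinks exponentially with the number of colonies. The footnote's caveat is exactly where the independence assumption breaks.)

    # Toy model: chance that *every* self-sufficient colony is destroyed in the same
    # period, assuming the per-colony catastrophe risks are independent.
    p_per_colony = 0.01  # hypothetical per-century catastrophe probability for one colony

    for n_colonies in (1, 2, 5, 10):
        p_all_lost = p_per_colony ** n_colonies
        print(n_colonies, p_all_lost)
    # 1 -> 1e-02, 2 -> 1e-04, 5 -> 1e-10, 10 -> 1e-20
    # Independence is the load-bearing assumption: a weapon (or correlated risk)
    # that reaches every colony at once puts us back at the single-planet case.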


Provocative question, but what would be so bad about extinction actually?

Personally, I'm interested in living a good life and for as many other people as possible (who are currently alive or are very likely to be born soon) to be able to do so as well.

I honestly couldn't care less whether Earth in the distant future is populated by lifeforms that share some DNA with me or by other lifeforms that don't.

Or to take the freak meteor you mentioned. Such an event would no doubt be horrible - but more because of the immediate experience of the people who had to endure it: nuclear winter, the collapse of civilisation, etc. However, if the meteor really managed to wipe out all humans, then... what? There would be no one left to judge. Earth would go on like it did for billions of years before humans.

On the other hand, why should we endure greatly reduced quality of life (through e.g. increasing effects of climate change) only so we get the economic output required to launch some rocket to Mars and plant some DNA there?

So why exactly the focus on the survival of something abstract like "the human race" instead of the lives of the actual humans?


Thanks for explaining why this is an apocalyptic religion. It's all there:

- reliance on apocalyptic fantasies to defend unethical decisions (while actual suffering and real threats flourish)

- inability to comprehend pragmatic humanistic and ethical thinking unless it's justified by irrational utopian fantasies (e.g. space civilization)

It's some kind of a perverse attempt to define agnostic morals.

But sincerely, I don't even believe that - it's more like a modern scam religion to distract from actual problems.

E.g., AI could be used to limit resource consumption, provide efficient public transport, feed the world.

Instead it'll be used to maximize resource consumption of rich people (cough, autonomous self-driving electric cars, but not for everyone...)


> it's more like a modern scam religion to distract from actual problems.

This is more or less how I feel. I’m sure some (most?) proponents of longtermism genuinely agree with the philosophy. But then, it’s a time honored tradition of the rich and powerful to soothe their conscience with philosophies that conveniently paint their actions and power as a moral good.

One thing I’ve learned in life is that brilliant folks are more than capable of convincing themselves of what they want to believe, whether or not it sounds compelling to someone outside their bubble.


It's not just that it introduces uncertainty by trying to predict the future, it's that it contends that the most important issues are the ones that affect people in the extremely distant future - which is the time that's the most difficult to predict, meaning the predictions have the largest possible uncertainty.

In other words, Longtermism tells you to give preeminent weight to matters you have no way of knowing the nature of or whether you're improving things or not. At that point, you might as well switch to astrology.
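
(For a sense of why the distant future carries the largest uncertainty, here's a toy random-walk sketch in Python - purely illustrative, not a claim about how the future actually evolves: if each year adds one unit of unpredictable change, the spread of possible outcomes widens with the square root of the horizon.)

    # Toy model: spread of outcomes after N years of small, unpredictable yearly changes.
    import random

    def outcome_spread(years, trials=2000):
        """Empirical standard deviation of a unit-step random walk after `years` steps."""
        finals = []
        for _ in range(trials):
            x = 0.0
            for _ in range(years):
                x += random.gauss(0.0, 1.0)  # one unit of unpredictable change per year
            finals.append(x)
        mean = sum(finals) / trials
        return (sum((f - mean) ** 2 for f in finals) / trials) ** 0.5

    for horizon in (10, 100, 1000):
        print(horizon, round(outcome_spread(horizon), 1))
    # roughly 3.2, 10.0, 31.6 -- the longer the horizon, the wider the range of outcomes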


Just remember, some time in the past there was a "crazy" person saying we should prepare for bad seasons by storing food and securing water, and a bunch of other people who killed the "crazy" person for having dangerous ideas.


I'm no historian, but I'm pretty sure that storing food for hard times is something humans have done at least since the invention of pottery.


I believe they're referring to that cranky guy who argued to save even more. Every culture has one, I'm sure. I'll be that guy today. Every industrialized country should have decades worth of dried food stockpiled for every person. We have barely enough food stored to survive one major crop failure! That's insanely close to the edge considering our capacity.

When I suggest this, people mostly dismiss it as bizarre and just too outside-the-box to really appreciate what I'm even suggesting. They don't live in the same world I do with the same perceived risks.

To the people I've suggested it to, a volcanic eruption disrupting agriculture for years just isn't real, not in a way that would make them want to save food in planning for it. Just as a 1 in 1000 year harsh winter or drought wasn't real to a lot of neolithic farmers. Until it was. And I'm sure one of their neighbours probably even suggested the possibility to them.


> Every culture has one, I'm sure. I'll be that guy today. Every industrialized country should have decades worth of dried food stockpiled for every person.

The Mormons come closest - they stockpile for a year. They're a good resource when you need a shopping list for the next pandemic:

https://www.churchofjesuschrist.org/study/ensign/2006/03/ran...

Decades is too long. Nothing lasts forever, even if it's the container itself degrading. You get rat infestation, rot, botulism, etc.


Let me rewrite this to be a bit more explicit:

For every cranky guy warning against an "apparently theoretical problem which is not immediately affecting our daily life", society will contain some other people who criticize the idea as wasteful, or as anticipating problems that won't occur or won't have nearly the impact claimed, and who will push back.

Some subset of the warnings are legitimate and represent a rational response to rare but impactful events. Identifying and planning for existential threats is something that successful societies do. Only extremely lucky societies that fail to plan survive, while those that do plan are statistically more likely to survive.

I don't think we exist in a time and place where saying "we should multi-home humanity to solve the meteor problem" will lead to people spending trillions on Mars bases, though.


This article had the opposite of its intended effect on me. Longtermism seems like pretty sound thinking. I don't know why the article calls it a religion, it just seems like a good idea.


Judging by their writings, longtermists take the point of view that it's acceptable to basically ignore the real problems of today in order to address fantasy problems in the distant future.

That seems like a poor idea to me.


This is unhelpful framing.

What if we could both care about the problems of today and also care about the long term survival of our species? They’re not incompatible. In fact care and concern are probably self reinforcing. The more the better.

Criticism of longtermism seems to center on two concerns: "I will discount these ideas because of who holds them", and "don't think about the big picture because the immediate picture is so bad".

Neither of which are very good arguments.


> What if we could both care about the problems of today and also care about the long term survival of our species?

Oh, I'm on board with that. But that's not what I hear longtermists talking about.


> it's acceptable to basically ignore the real problems of today in order to address fantasy problems in the distant future.

Longtermism came from the "Effective Altruism" movement, which focuses a large portion of its effort on things like Malaria nets - https://forum.effectivealtruism.org/topics/against-malaria-f...

I don't think that you can argue that donating $58 million to fight Malaria is "basically ignoring real problems"


Yet, there's nothing in longtermism that tells us to donate to fight Malaria now, not really. Even though effective altruists might, a longtermist could argue we should stop all malaria aid and donate the money to SpaceX instead. Or enslave the poorest billion people to work in Bezos' asteroid mines. Or maybe we should execute the most obese 10% of people.

These are all perfectly defensible positions in longtermism, and if one of these obviously morally dubious positions turns out to benefit 'our potential' the most, it would even be immoral not to pursue it.


> These are all perfectly defensible positions in longtermism

What are you basing this on? They are only defensible if you have convincing arguments that they are indeed more beneficial than the alternative. Imagining that they might doesn't make them defensible.


> I don't think that you can argue that donating $58 million to fight Malaria is "basically ignoring real problems"

Certainly not.

But that's not longtermism. That's, as you say, part of EA. I have a ton of issues with EA as well, but far less than with longtermists.


With all their trigger warnings for things that deviate slightly from the classic Yudkowsky line, Less Wrong seems to be a center for free thinking right up there with a weekend where half the conference center is rented out to transsexual maximalists and the other half to anti-vaxxers.

But seriously, betting it all on a very rare outcome that requires multiple unknowns to line up doesn't make a lot of sense. I could just as easily make up 10 other improbable scenarios of effectively measure zero that demand people's attention just as much.


Keep in mind, every scenario that Less Wrong warns about (nuclear war, climate change, bio-terrorism, AI risk, etc.) is backed up by a large number of experts in the field. Can you point to even one improbable scenario you made up that has anywhere near that level of expert agreement?


The underlying crazy scenario is that they say you have to evaluate all of the above relative to the possibility of losing a glorious future where we colonize the galaxy with self-replicating machines and put Dyson spheres around all of the stars, and can then simulate millions of copies of all the people who had the foresight to be cryonically frozen or something…

And in practice those folks are dismissive of climate change relative to the other risks, and are generally not interested in technology relevant to their “long term” vision, such as space industrialization; rather, they spam us with trolley problems in an attempt to erase our moral sensibilities.

Overall there is a religious orientation to the problem and a complete indifference to the complex social and technical problems of, for instance, switching to a carbon neutral energy system and otherwise managing risks.


I mean, you can drop everything about Dyson spheres and the like and still appreciate "I would prefer the human race not to go extinct". The central arguments aren't about cryonics or anything, they're about "I would prefer the human race not to go extinct".

To say those folks are "dismissive and generally not interested in technology" seems pretty wild, given those folks include Elon Musk, Bill Gates, the CEO of OpenAI, etc. - despite any flaws you might find with them, this is hardly a group known for being uninterested in tech.


As far as I know, those people (Musk, Gates, et al.) aren’t hanging about on Less Wrong. Sure, anti-capitalists impute great significance to Musk giving nearly $2M to a longtermist organization, but for him that is like me giving 78 cents to a bum and then getting blamed for the latest explosion at the homeless colony.

If you don’t want mankind to go extinct I would recommend you look at

https://en.wikipedia.org/wiki/The_Logic_of_Collective_Action

for some explanation of why problems that could get solved never get solved. Civilizations don’t fail because they have problems; they fail because they can’t solve the problems in front of them. Some mash-up of H.P. Lovecraft, Teilhard de Chardin, the Jehovah’s Witnesses and Scientology isn’t going to help.


I would take longtermists seriously if they were among the poor people they talk so much about. But right now, there is an obvious conflict of interest when they speak about longtermism.


Keep in mind that Longtermism is a branch of Effective Altruism, which is also doing things like supporting Malaria nets to save lives: https://forum.effectivealtruism.org/topics/against-malaria-f....


The Anthroposophy movement has also produced a number of unquestionably good things - e.g. Waldorf schools - but still has an esoteric quasi-religion at its core. One does not preclude the other.


I think we should be able to talk about ideas independently of the people who are talking about them. A speaker's biases have no bearing on the actual merits of the idea.


I agree with you regarding many areas of thought, where conclusions can be reached by deductive or inductive reasoning from premises.

However, these people are proposing a "new axiom set" of sorts, where lives now have the same utility as the possibility of saving life in the future. Ethical principles have no further premises and are derived entirely from one's ideas and feelings. The ideas and feelings here are obviously biased. I think we should look critically at ideas that spring up from biased people.


To say that climate change is not an "existential risk" is not the same as saying that it's not a massive problem, or that we should not address it, right?


Correct. An existential risk is something that threatens to wipe out all of humanity - "extinction risk" would probably be a better term.


According to the article, in longtermism 'existential risk' is something more specific: everything that threatens our long term 'potential'.

So if becoming 'a multi-planetary species', as Musk puts it, is an essential part of our potential, then destroying our capability of achieving that - in whatever way - puts us at existential risk. Not because we might all die on this planet, but simply because we would stay stuck here within the limits of Earth.


Existential Risk includes the edge case that we might damage civilization so badly that we're trapped on this planet until the sun dies, yes. Or become nothing more than pets to some AI. But it's a pretty niche edge case - if we meaningfully continue as a species, expanding out into space seems pretty inevitable. And, honestly, I don't think anyone really views the edge case of "we collapsed back to medieval times" as a good ending.

In particular, there's nothing about our "long term potential" that requires us to get off Earth today, or even in the next century or two - that's why it's called long-term potential.


This is the same thing as extinction. If humans don’t escape Earth, then with probability 1 we will go extinct relatively soon (relative to the lifetime of the universe) - if not from a standard extinction event like meteors and volcanoes, then from the sun becoming a red giant (ETA ~5 billion years).


The article drips with bias. I gave up reading and will research this myself.


Something I've never quite understood is the (tacit) judgment that the end of the human race would be a bad thing, and that creating more people is inherently a good thing. And in fact, that it would maybe be acceptable to allow a large fraction of people to suffer if it ensured the propagation of humanity.

It makes sense why this is such a common judgment – individuals with the drive to reproduce and ensure the perpetuation of their progeny are more likely to have surviving progeny, so the population is dominated by this kind of individual.

But people talk about a species as if it were a living individual. It's not! A species can't feel hurt; its members can feel hurt. A species cannot prefer to continue to live (over the millennia); its members can prefer to live (for the duration of their own time on earth).

In some cases talking about a species is just metonymy, or shorthand, but in other cases I don't think that usage fits. For example, if "we must ensure the human race persists" is just shorthand, it's shorthand for "people should have children", and it's strange to insist that other persons have the personal desire to have children.

When someone is suffering, e.g. from depression, we can encourage them to continue to live for hope of a time where they aren't suffering (by helping them imagine a plausible, happier future), which to them might be worth enduring their present circumstances. In other words, we want to help them make a better-informed decision about their own life or death by reminding them that their current outlook is strongly colored by their current mental state.

But when it comes to a species, the future will not be experienced by any of its currently living members – it will be experienced by individuals who don't yet exist! And if they are never born, they've not been robbed of life or had a preference violated (because there's no one to rob or disregard).

The philosophy is much more complicated than I'm presenting, but I wanted to say something on the matter, if only because it's the sort of questioning I wish I saw more often.


One thing that annoys me about longtermism is the focus on the survival of the human race. I can't tell if this means humans as they currently exist or humans as they will exist in the future, because humans will change a lot while conquering the galaxy and surviving for a long span of time. How much change is allowed before we're considered non-human? Do all human descendants count? Does this include modifications or cyborgs? Or uploaded minds?

Most importantly, they don't seem to think of AIs as human descendants. Is it better for human-like AIs to conquer the galaxy, or for humans to be trapped on Earth for a million years, if that were the only choice?


Good and bad are judgments based on our evolutionary goal of survival, eventually culminating in some idea of society. Lots of these ideas become contradictory in the end, since the point of morality is to organize society and there are more and less effective strategies to that end: prevent bad things, make things predictable, ensure that others can be relied on, and so on. But there is no omnipresent moral law akin to, say, mathematical platonism or the physical laws of our universe. So it doesn't make much sense for an actor in a society to say it might be moral to end the society/species, as that ignores the purpose of that "morality" in the first place. Such a morality is "detached" and needs to rest on other "axioms", say "every rock is sacred". But those are no more universally valid, despite people's hope for an ethical platonism, and are potentially an inferior choice for the only objective that matters: continuation of the system (species/individual/etc.).

In your example, optimizing at all costs to remove suffering ignores that suffering and pain are part of the warning process that aids survival, and that removing them is an inferior objective within the overall goal of survival. Sure, you can pursue it at all costs, but it's a poor strategy - a greedy algorithm, as it were.

To be forthright and to illustrate this line of thought, I responded to you in the hopes that you will understand part of this, and continue to live and keep this system going, as it appears to be a better overall strategy to continue society. Not morality but "strategy". And frankly, from personal preference, it's more interesting than some inky blackness with nothing to do.

You do hit on an often-overlooked point: a lot of moral systems start to break down over timescales of more than a couple of generations. For example, condemning a bad action in the past seems a bit self-defeating if undoing it would result in oneself never existing. And what about the countless others who would exist under just minor variations of that history?

Anyway, try to challenge your initial thoughts and see if you come to this or something else. And keep thinking about moral systems at long timescales, as it's interesting to think about.


Thank you so much for your thoughtful response!

We may disagree from the very start (with the premise you wrote) depending on what you meant,

> Good and bad are judgments based on our evolutionary goal of survival eventually culminating in some idea of society.

but we may also be in heated agreement. [If we disagree, this will continue a nice conversation, and if not, well, it's useful for me to periodically put my thoughts into words so I can see whether they make sense or whether I really believe them ^_^]

So I agree that this is almost certainly why I (or anyone else) has a sense of morality. We've evolved to be social creatures (as a strategy) and our sense of good and bad lines up pretty well with prosocial and antisocial behavior (e.g. a sense of justice or fairness as a means to enable cooperation).

But I don't think this endows us with any obligation to obey the biological imperative. We may agree that we have some social obligations – at the very least to avoid causing harm to each other, perhaps also to actively help each other. (And if not out of altruism, out of self-interest and good sense.) But I don't think building a good society requires that we aim to reproduce (let alone perpetually borrow against future generations, as we've been doing lately).

Consider supernormal stimuli: there are birds that seem to sit on their eggs not because they care for their young, but because they feel a compulsion to sit on large, round, colorful objects. If you present them with a larger, rounder, or more colorful object than their own eggs, they will preferentially sit on the faux egg (abandoning their own). The only thing necessary for the trait to be selected (passed on) is that it ultimately results in viable offspring, even if the mechanism is absurd, and even if having viable offspring is a side effect rather than the goal.

What I'm hoping to establish is that what it feels like to be a certain way – and the underlying values or priorities you actually hold – can be rather divorced from the reason you have those feelings. So we may have a sense of morality because we're social creatures because that's the strategy that let our species survive, but it does not automatically follow that you can reverse the arrows of implication – without more argument, you can only claim that if we want our species to survive, we should be moral (not that we ought to have our species survive). To argue that we ought to have our species survive, you'd need to appeal to some common goal or feeling – either a desire for the survival of species itself or some other goal that the survival of our species enables. But we may not necessarily have this feeling, as the example with the birds and their eggs shows.

[Stepping away from theory for a moment: people obviously are trying to ensure the perpetuation of humanity, but I wonder whether people don't actually desire it directly. Maybe what they actually desire is some preservation of their legacy (as a way to feel remembered or connected), or a sense of victory, or vicariously experiencing in some small part a happier future that they are imagining. And of course, this is only when people do stop to consider the long-term; people may usually reproduce simply because sex can be awesome.]

In any case, I think it's alright to value something even if it doesn't serve the ultimate purpose of reproduction. Because of other values that I hold, I've tried to extrapolate or generalize useful social feelings into a sense of good and bad that considers and gives significant weight to the lives of other species, or that cares more about the eudaimonia/suffering of people than about life/death. This thwarts the reproduction of humanity (indeed, I'm evolutionarily a dead end, because as a result of my thoughts and feelings I've gone to lengths to ensure I can't create new life with my genes), but it makes me personally feel better to live this way, and I think it also works out better for others, too.

In the absence of any (strong evidence for) objective, universal morality – as you pointed out! – we're left to determine it for ourselves, and in a sense there are no right or wrong answers, because we're also left to do our own meaning-making and to determine our own goals (which I suppose would be true even if there were objective, universal morality). Perhaps you could say life on earth has an inherent goal, namely reproduction, but I'd say that's not a goal so much as an observation about what living things on earth do, definitionally.

To wrap up (good lord I wrote more than I meant to), morality is a useful tool, something we have for practical purposes, and I think it can be for social purposes and personal satisfaction.

I do agree having a diversity of life on earth is more interesting, and I'm glad to be alive to witness it. It's also my preference. I just wouldn't want to keep someone or something alive that was suffering, or didn't choose to be alive, just to satisfy my curiosity. Or perhaps I would, but the idea at least makes me uncomfortable.



