> It surprises me how hyper focused people are on AI risk when we’ve grown numb to the millions of preventable deaths that happen every year.
Companies are bombarding us with AI in every piece of media they can, obviously with a bias toward the positive. This focus is an expected counter-response to said pressure, and it is actually good that we're not just focusing on what they want us to hear (i.e. just the pros and not the cons).
> If anything, AI may help us reduce preventable deaths.
Maybe, but as long as its development is coupled to short-term metrics like DAUs, it won't.
Not focusing only on what they want us to hear is a good thing, but adding more noise we knowingly consider low value may actually be worse, IMO, both for the overall discourse and for how much people end up buying into the positive bias.
I.e. "yeah, I heard plenty of counters to all the AI positivity, but it just seemed to be people screaming back with whatever they could rather than making any impactful counterarguments" is a much worse situation, because you've lost the "is it really so positive?" doubt by not taking the time to bring up the most meaningful negatives when responding.
Fair point. I don't know how to actually respond to this one without an objective measure, or at least a proxy measure, of the sentiment of the discourse and its public perception.
Anecdotally, I would say we're just in a reversal/pushback of the narrative, and that's why it feels more negative/noisy right now. But I'd also add that (1) it hasn't been a prolonged situation, as it only started gaining momentum in late 2024 and 2025; and (2) it probably won't be permanent.
Fair point. I actually wish Altman/Amodei/Hassabis would stop overhyping the technology and also focus on the broader humanitarian mission.
Development coupled to DAUs… I’m not sure I agree that’s the problem. I would argue AI adoption is more due to utility than addictiveness. Unlike social media companies, they provide direct value to many consumers and professionals across many domains. Just today it helped me write 2k lines of code, think through how my family can negotiate a lawsuit, and plan for Christmas shopping. That’s not doom scrolling, that’s getting sh*t done.
You can say "shit" on the internet, as in "I bet those two thousand lines of code are shit quality", or "I hope ChatGPT will still think for you when your brain has rotted away to shit".