I don't think it's so strange to expect the reaction to be proportionate to the threat they see? An open letter seems several orders of magnitude short of anything that would be effective if this really is a threat to the species. I think of the ways people react when the stakes are very high: resigning in protest, going on a hunger strike, demonstrations/raids, setting themselves on fire, etc. But there's none of that, just a low-stakes open letter? Can we even tell that apart from mere posturing? Even the comments saying that some of the signers are just interested in delaying competitors for their own benefit are making better arguments.
I agree there's a lot more than "sign an open letter" they could be doing. I'm mostly objecting to the "they need to engage in terrorism or they're not serious" assertion.
As for resigning in protest, my understanding is that Anthropic was founded by people who quit OpenAI saying it was acting recklessly. That seems like the best skin-in-the-game signal. I find that much more compelling than I would the Unabomber route.
People like Musk and Woz should probably be pouring money into safety research and lobbying, but I don't think a hunger strike from anyone would make a difference, and resigning only makes sense if you work for OpenAI, Google, or a handful of other places where most of these people presumably don't work.
What should I, an employee of a company not developing AI, be doing? The only reasonable actions I can see are 1. work on AI safety research, 2. donate money to AI safety research/advocacy, and 3. sign an open letter.
(I did none of these, to be fair, and am even paying a monthly fee to OpenAI to use ChatGPT with GPT-4. But my partner, an AI researcher who is seriously concerned, tried to sign the letter [it was rate-limited at the time] and is considering working in AI safety post-graduation. If she weren't on a PhD student's salary, she might be donating money as well, though it's not super clear where to direct that money.)
Yes, engaging in terrorism would be too much for most signers on the list, but the point is more that there is a wide gap between what they're saying and what they're doing. You make another good point: at the very least, they should be putting their money where their mouth is.
Anthropic seems to be competing against OpenAI? And getting funds from Google? So they would probably benefit economically from delaying development, since they are currently behind. Personally I think it's more important to look at what people are doing than to just listen to what they say, as there is a strong tendency toward posturing.
If a terrorist bombs one data center, security increases at all the other data centers. Bombing all data centers (and chip fabs, so they can't be rebuilt) simultaneously requires state-level resources.
Going down that line of thought, not even a state could realistically bomb the data centers of all other states, so it's kind of pointless. But I wasn't really arguing that they need to destroy all data centers, rather that raiding a data center would be a more appropriate response to the threats they're claiming exist. They wouldn't even need to succeed in vandalizing one; they'd just have to try.