2 Comments

I'm skeptical of placing too much weight on the historical record. Examples of preventions gone wrong are legible in ways that successful interventions are not. What were the consequences of developing atomic and hydrogen weapons in secret? Who knows, but it's not difficult to imagine a scenario in which a different decision could have led to catastrophe. Yet the lack of a real historical counterfactual limits how persuasive that argument can be. Prediction uncertainty cuts in both directions: you can't retroactively identify the disasters that would have happened if proper design thinking and scenario planning had never taken place.


I agree with your arguments. The élan vital comparison is apt.

I have written two posts that may be of interest to some of you:

In the first, I argue that nuclear weapons combined with our primitive social systems imply that we live in an age of acute existential risk, and that replacing our flawed governance with AI-based government is our best chance of survival.

https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of

In the second, I argue that given the kind of specialized AI we have been training so far, existential risk from AI remains negligible, and regulation would be premature.

https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against
