
How would this resolve if they agree that it was a net positive for the AI risk community to be as vocal as it was, because that helped produce useful regulations and interpretability tools, but disagree that the world would literally have been destroyed without them?
@IsaacKing Whose probability/concern needs to be justified? Laypeople? Computer scientists? Computer scientists who responded to the AI Impact survey? Existential safety advocates / the AI existential risk community? Eliezer Yudkowsky?
I mainly ask because I think the probabilities of, say, extinction would range from something like 5% (maybe laypeople and computer scientists) to 50% (average existential safety advocate) to >99.9% (Yudkowsky).
I wonder if a question like, "What probability of AI existential risk will experts in 2050 think was justified for a well-informed observer in 2023?" would be analytically cleaner, though it may be more confusing for the average Manifold user.