
How valuable is it to work on AI alignment TODAY, compared to other problems in AI?
29
Never closes
Much less valuable
Less valuable
As valuable as other problems
More valuable
Much more valuable
This question is managed and resolved by Manifold.
People are also trading
Is AI alignment computable?
50% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will there be a well accepted formal definition of value alignment for AI by 2030?
25% chance
Will I focus on the AI alignment problem for the rest of my life?
45% chance
How difficult will Anthropic say the AI alignment problem is?
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
Will Inner or Outer AI alignment be considered "mostly solved" first?
Will ARC's Heuristic Arguments research substantially advance AI alignment before 2027?
26% chance
Conditional on AI alignment being solved, will governments or other entities be capable of enforcing use of aligned AIs?
37% chance
@robm I'm not familiar enough with either's use of the word, but in general you can say that alignment research aims to make artificial general intelligence (AGI) aligned with human values and human intent.
My understanding is that AI alignment doesn't only deal with safety but also with ensuring the model is aligned to the goals of the user. Right now GPT-4 feels less aligned to my goals than it was a couple of months ago.
@Soli How is it that all these models are getting worse with time, but the promises are getting bigger?
You see GPT-4 get worse each generation, whatever the reason. Each of these Claude models scores worse than its predecessor on benchmarks.