The authors identified four key narratives surrounding AI discourse:
1. Existential risk narrative: warns that future AI advancements could create systems beyond human control, with potentially catastrophic outcomes.
2. Effective accelerationist narrative: supports rapid AI development and argues that the benefits of solving global problems outweigh the (minimal) risks.
3. Real, immediate societal risks narrative: focuses on current threats (including deepfakes and AI's environmental impact) rather than speculative long-term risks.
4. Balanced risks narrative: advocates for integrated AI governance, addressing both existential and immediate risks to create comprehensive policies for harm reduction.
Read more about the effect these narratives have on research and policy in the full paper.