How To Make AGI Not Kill Everyone: An Optimistic Vision
Artificial General Intelligence (AGI) could be humanity's greatest achievement or its final invention. How do we ensure it solves our challenges without rendering humans irrelevant? Join us as we explore pivotal questions: How can we align superintelligent AI with human values? How do we prevent unintended consequences and preserve human agency? We'll discuss strategies for steering this unprecedented power toward human flourishing. AGI is coming, promising either unparalleled prosperity or existential risk. Let's turn this potential threat into an extraordinary opportunity for humanity!
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.
Nora Ammann
Advanced Research and Invention Agency
Judd Rosenblatt
AE Studio
Eliezer Yudkowsky
Machine Intelligence Research Institute