How To Make AGI Not Kill Everyone
AGI may be humanity’s final invention—for better or worse. We face a unique challenge: building superintelligent systems that reliably do what we want, where even small misalignments could lead to catastrophe. Can we solve the technical challenges of alignment in time? What would success even look like? How do we stay clear-eyed about existential risk amid the race to AGI? Join us as we tackle why most proposed solutions fall short and which paths might truly lead to survival. AGI is coming, and it’s vital we grasp what’s at stake while we still have time to shape its future.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.
Nora Ammann
Advanced Research and Invention Agency
Judd Rosenblatt
AE Studio
Eliezer Yudkowsky
Machine Intelligence Research Institute