Artificial intelligence, even as it improves many aspects of everyday life, introduces new and unanticipated errors, failures, and accidents into the world. These failures arise not merely from technical flaws but from unexpected interactions between an AI system and the social context in which it operates. From the offensive responses produced by Microsoft’s Tay chatbot, to the overpromised capabilities of IBM Watson for Healthcare, to the fatal accident caused by an Uber self-driving car, the intelligence of AI can fall short in devastating ways. This presentation will examine several examples of unintelligent AI from an anthropological perspective and offer a set of guiding principles to help designers anticipate and mitigate potential AI system failures.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.