As technology is woven ever deeper into public and private life, and innovation introduces new and sometimes clashing perspectives, it is increasingly critical to understand the practical, moral, and ethical implications of that integration. As AI-capable systems become more pervasive and machines learn more about us, who is responsible for protecting us from what they learn and the actions they take? Can machine learning resolve the Trolley Problem? Should next-generation development include addressing ethical concerns as a core requirement? How do we, as the creators of this technology, build a governance system that is both ethical and effective, ensuring that our creations do not overrule the tenets that underpin our society?