Developing Fair Technology Without Unfair Bias
In this presentation and Q&A, participants will explore industry-wide challenges in testing machine learning algorithms for product fairness. They will gain an understanding of algorithmic unfairness and of how qualitative product fairness testing can uncover unfair or prejudicial bias and improve model outcomes. Our goal is to pull back the curtain on how Google's AI ethics team, Responsible Innovation, works to ensure that we design models aligned with our AI Principles that are inclusive for everyone and do not perpetuate harm against communities. Our work is sociotechnical, merging the social sciences and sociological context with technology.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.