Artificial Intelligence is increasingly ubiquitous. Algorithms make important decisions that affect our lives, from how we are policed to which ads we see online, yet the datasets on which they are built are often inconsistent, unrepresentative, and not always appropriately vetted or used. In other words, the problem with bad outcomes is not always the machine or the algorithm; it is often the health of the data itself. Several initiatives and methods are currently being tested to address dataset health. This panel will bring together experts from industry, academia, and government to discuss methods for identifying bad data and ways to appropriately handle problematic inputs.
[Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.]