AI isn't always explainable. But does it matter? AI can have a "black box" effect: often you can't retrace its steps to understand how a prediction or decision was made. This effect can make AI implementation difficult, as it requires a high level of trust and perhaps even a leap of faith in the algorithm. Algorithms can be built to improve explainability, but often at the expense of accuracy, which isn't always something you want to sacrifice. This leaves you with an important decision: do you prioritize accuracy or explainability? Or is there a way to achieve both? Learn how algorithm explainability can impact deployment, as well as the tradeoffs between the near-term and long-term implications for your business and culture.
In this session, the speakers will explore this dilemma and walk through the different paths, questions, and considerations to weigh before beginning your AI deployment.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.