AR is a new medium for displaying information. It has great potential to enhance users' quality of life, and it can be particularly useful for people with hearing impairments. Recently, emerging AI technology has begun to enable computers to understand the various sounds we hear in daily life. In this session, we would like to share our Looking to Listen (L2L) project, which combines AR with machine-listening AI to enrich the lives of the hearing impaired. L2L aims to visualize surrounding sound in real time so that users can visually understand what's going on around them. Dr. Yoonchang Han and Subin Lee will explain what kinds of sounds can be identified, and how acoustic information can be effectively translated into visual information while making the most of AR's potential.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.