Yoonchang is a music/audio signal processing engineer and co-founder of Cochlear.ai. He studied Electronic Engineering at King's College London, holds a first-class honours MEng degree from Queen Mary, University of London, and a PhD in Music Information Retrieval from Seoul National University.
Cochlear.ai creates an AI system that understands the semantics of audio. The company was named a top-4 AI startup of 2018 in autonomous systems by NVIDIA Inception, and is a consecutive winner of IEEE DCASE ('17, '18), the largest challenge in the field of acoustic scene/event analysis.
He spoke at NVIDIA GTC'18 on "Audio recognition, context-awareness, and its applications" and will present "Building and Optimizing Cloud Platform for Audio Cognition System" at GTC'19.
At SXSW, he will present "Let AI Hear What's Going On: Machine Listening" on March 13.
Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.