From music analysis to urban audio-visual sound source localization
By M. Fuentes
M. Fuentes talks to ADASP about her research in audio data analysis, spanning music and sound scene analysis.
Abstract
I am going to talk about my research trajectory from my PhD to the present. I will span topics from computational rhythm and performance analysis, to self-supervised learning for the classification and localization of visual sound sources, and audio-only sound source localization for urban traffic. I will close the talk by discussing future research directions and ideas.
Bio
Magdalena Fuentes Lujambio is a Provost’s Postdoctoral Fellow at the Music and Audio Research Lab and the Center for Urban Science and Progress at New York University. Before that, she completed her Ph.D. at Université Paris-Saclay and her B.Eng. in Electrical Engineering at Universidad de la República, where she also worked as a research and teaching assistant at the Engineering School and the Music School.
Her research interests include Human-Centered Machine Learning, Machine Listening, Self-Supervised Learning, Music Information Retrieval, Environmental Sound Analysis, and Sound Source Localization.
More on the speaker’s website.