Companion page for the paper: Identify, Locate and Separate: Audio-Visual Object Extraction in Large Video Collections using Weak Supervision
By S. Parekh

We tackle the problem of audiovisual scene analysis for weakly labeled data. To this end, we build upon our previous audiovisual representation learning framework to perform object classification in noisy acoustic environments and to integrate audio source enhancement capability. This is made possible by a novel use of non-negative matrix factorization (NMF) for the audio modality. Our approach is founded on the multiple instance learning paradigm. Its effectiveness is established through experiments on a challenging dataset of musical instrument performance videos. We also show encouraging visual object localization results.
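NMF decomposes a non-negative matrix, such as an audio magnitude spectrogram, into spectral templates and their time activations. The sketch below is a minimal illustration of this idea using standard multiplicative updates for the Frobenius cost; the matrix sizes, component count, and iteration budget are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nmf(V, n_components, n_iter=200, eps=1e-9, seed=0):
    """Factorize V (freq x time, non-negative) as V ~ W @ H.

    W holds spectral templates (freq x components) and H holds their
    activations over time (components x time). Multiplicative updates
    keep both factors non-negative at every step.
    """
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, n_components)) + eps
    H = rng.random((n_components, T)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update activations
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)  # update templates
    return W, H

# Toy stand-in for a magnitude spectrogram (64 frequency bins, 100 frames)
V = np.abs(np.random.default_rng(1).normal(size=(64, 100)))
W, H = nmf(V, n_components=8)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```

In a source-enhancement setting, the activations of the components associated with a target object can be used to reconstruct that object's contribution to the mixture while suppressing the rest.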

The paper:

(Parekh et al., 2019): Parekh, S., Ozerov, A., Essid, S., Duong, N., Pérez, P., & Richard, G. (2019, October). Identify, Locate and Separate: Audio-Visual Object Extraction in Large Video Collections Using Weak Supervision. In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA).


Visit the author’s website for examples and supplementary material.