Resources for the paper: User-guided one-shot deep model adaptation for music source separation
By G. Cantisani
In this work, we propose to exploit a user-provided temporal segmentation indicating when each instrument is active in order to fine-tune a pre-trained deep source separation model and adapt it to one specific mixture. We refer to this paradigm as user-guided one-shot deep model adaptation for music source separation, since the adaptation acts on the target song instance only.
The adaptation is made possible by a proposed loss function that minimizes the energy of the sources marked as silent by the user while simultaneously enforcing perfect reconstruction of the mixture.
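As a rough illustration of this idea, below is a minimal PyTorch sketch, assuming a generic separator that maps a mixture waveform to stacked source estimates. All names here (`model`, `adaptation_loss`, `activations`, the weight `alpha`, the optimizer settings) are hypothetical placeholders and do not reproduce the paper's actual implementation, which may differ in its exact loss formulation and may operate on spectrograms rather than waveforms.

```python
import torch
import torch.nn.functional as F

def adaptation_loss(est_sources, mixture, activations, alpha=1.0):
    """Illustrative one-shot adaptation loss (sketch, not the paper's code).

    est_sources: (n_src, n_samples) estimated source waveforms
    mixture:     (n_samples,) observed mixture waveform
    activations: (n_src, n_samples) user-provided 0/1 activity masks,
                 0 wherever the segmentation marks the instrument as silent
    """
    # Penalize energy of the estimates where the user marked the source silent.
    silence = ((1.0 - activations) * est_sources).pow(2).mean()
    # Mixture-consistency term: the estimates should sum back to the mixture.
    recon = F.mse_loss(est_sources.sum(dim=0), mixture)
    return silence + alpha * recon

def adapt(model, mixture, activations, steps=100, lr=1e-4):
    """Fine-tune a pre-trained separator on the single target mixture."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Assumes model takes (batch, n_samples) and returns (batch, n_src, n_samples).
        est_sources = model(mixture.unsqueeze(0)).squeeze(0)
        loss = adaptation_loss(est_sources, mixture, activations)
        loss.backward()
        opt.step()
    return model
```

Note that no ground-truth sources are needed: both terms of the loss are computed from the mixture and the user's segmentation alone, which is what makes the one-shot, instance-level adaptation possible.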
The results are promising and show that state-of-the-art source separation models leave large margins for improvement, especially for instruments that are underrepresented in the training data. Below you can find some audio examples from the MUSDB18 test set.
The paper
Cantisani, G., Ozerov, A., Essid, S., & Richard, G. (2021, October). User-guided one-shot deep model adaptation for music source separation. In 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). https://telecom-paris.hal.science/hal-03219350