Resources for the paper: User-guided one-shot deep model adaptation for music source separation
By G. Cantisani

In this work, we propose to exploit a temporal segmentation provided by the user, indicating when each instrument is active, to fine-tune a pre-trained deep source separation model and adapt it to one specific mixture. This paradigm can be referred to as user-guided one-shot deep model adaptation for music source separation, since the adaptation acts on the target song instance only. The adaptation is enabled by a proposed loss function that minimizes the energy of the sources during their silent segments while at the same time forcing perfect reconstruction of the mixture. The results are promising and show that state-of-the-art source separation models have considerable room for improvement, especially for instruments that are underrepresented in the training data. Below you can find some audio examples from the MUSDB18 test set.
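
To make the adaptation objective concrete, here is a minimal PyTorch-style sketch assuming a model that outputs one magnitude spectrogram per source. The names (adaptation_loss, adapt, silence_masks) are illustrative, and the L1 form of both terms is an assumption for this sketch; the paper's actual loss variants (e.g., P-L1:D) and training details are given in the reference below.

import torch
import torch.nn.functional as F

def adaptation_loss(estimates, mixture, silence_masks, alpha=1.0):
    # estimates:     (n_sources, n_frames, n_bins) source spectrograms
    #                predicted by the model for the target mixture
    # mixture:       (n_frames, n_bins) mixture spectrogram
    # silence_masks: (n_sources, n_frames) binary user annotation,
    #                1 where the user marked the source as silent
    # Energy of each estimated source over its annotated silent frames:
    # the annotation says the source should be inaudible there.
    silent_energy = (estimates.abs() * silence_masks.unsqueeze(-1)).mean()
    # Mixture-consistency term: the estimated sources should still sum
    # to the observed mixture everywhere (perfect reconstruction).
    reconstruction = F.l1_loss(estimates.sum(dim=0), mixture)
    return silent_energy + alpha * reconstruction

def adapt(model, mixture, silence_masks, steps=100, lr=1e-4):
    # One-shot adaptation: fine-tune the pre-trained model on this
    # single mixture only, guided by the user's temporal segmentation.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        estimates = model(mixture)
        loss = adaptation_loss(estimates, mixture, silence_masks)
        loss.backward()
        optimizer.step()
    return model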

The paper

Cantisani, G., Ozerov, A., Essid, S., & Richard, G. (2021, October). User-guided one-shot deep model adaptation for music source separation. 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). https://telecom-paris.hal.science/hal-03219350

Demo

For each excerpt, we provide the mixture, the ground-truth target source, the estimate from the original pre-trained model, and the estimate from the model adapted with the P-L1:D variant.

OTHER

AM Contra - Heart Peripheral

Mix
Ground truth
Original model
Adapted model (P-L1:D)

Bobby Nobody - Stitch Up

Mix
Ground truth
Original model
Adapted model (P-L1:D)

BASS

Buitraker - Revo X

Mix
Ground truth
Original model
Adapted model (P-L1:D)

Cristina Vane - So Easy

Mix
Ground truth
Original model
Adapted model (P-L1:D)

DRUMS

Arise - Run Run Run

Mix
Ground truth
Original model
Adapted model (P-L1:D)

Angels In Amplifiers - I'm Alright

Mix
Ground truth
Original model
Adapted model (P-L1:D)

VOCALS

Ben Carrigan - We'll Talk About It All Tonight

Mix
Ground truth
Original model
Adapted model (P-L1:D)

Buitraker - Revo X

Mix
Ground truth
Original model
Adapted model (P-L1:D)
