References
- CFL+18
Estefania Cano, Derry FitzGerald, Antoine Liutkus, Mark D Plumbley, and Fabian-Robert Stöter. Musical source separation: an introduction. IEEE Signal Processing Magazine, 36(1):31–40, 2018.
- CKCJ21
Woosung Choi, Minseok Kim, Jaehwa Chung, and Soonyoung Jung. LaSAFT: latent source attentive frequency transformation for conditioned source separation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 171–175. IEEE, 2021.
- CKC+20
Woosung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, and Soonyoung Jung. Investigating u-nets with various intermediate blocks for spectrogram-based singing voice separation. In Proc. International Society for Music Information Retrieval Conference (ISMIR). 2020.
- CHRP19
Alice Cohen-Hadria, Axel Roebel, and Geoffroy Peeters. Improving singing voice separation using deep u-net and wave-u-net with data augmentation. In 2019 27th European Signal Processing Conference (EUSIPCO), 1–5. 2019. doi:10.23919/EUSIPCO.2019.8902810.
- DefossezUBB19a
Alexandre Défossez, Nicolas Usunier, Léon Bottou, and Francis Bach. Demucs: deep extractor for music sources with extra unlabeled data remixed. arXiv preprint arXiv:1909.01174, 2019.
- DefossezUBB19b
Alexandre Défossez, Nicolas Usunier, Léon Bottou, and Francis Bach. Music source separation in the waveform domain. arXiv preprint arXiv:1911.13254, 2019.
- HKVM20
Romain Hennequin, Anis Khlif, Felix Voituret, and Manuel Moussallam. Spleeter: a fast and efficient music source separation tool with pre-trained models. Journal of Open Source Software, 5(50):2154, 2020. Deezer Research. doi:10.21105/joss.02154.
- KCL+21
Qiuqiang Kong, Yin Cao, Haohe Liu, Keunwoo Choi, and Yuxuan Wang. Decoupling magnitude and phase estimation with deep ResUNet for music source separation. In Proc. International Society for Music Information Retrieval Conference (ISMIR). 2021.
- LKJX21
Liwei Lin, Qiuqiang Kong, Junyan Jiang, and Gus Xia. A unified model for zero-shot music source separation, transcription and synthesis. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR). 2021.
- LXWY20
Haohe Liu, Lei Xie, Jian Wu, and Geng Yang. Channel-Wise Subband Input for Better Voice and Accompaniment Separation on High Resolution Music. In Proc. Interspeech 2020, 1241–1245. 2020. doi:10.21437/Interspeech.2020-2555.
- LluisPS19
Francesc Lluís, Jordi Pons, and Xavier Serra. End-to-end music source separation: is it possible in the waveform domain? Proc. Interspeech 2019, pages 4619–4623, 2019.
- MSS20
Ethan Manilow, Prem Seetharaman, and Justin Salamon. Open Source Tools & Data for Music Source Separation. October 2020. URL: https://source-separation.github.io/tutorial.
- MFUS21
Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, and Fabian-Robert Stöter. Music demixing challenge 2021. arXiv preprint arXiv:2108.13559, 2021.
- RLStoter+17
Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel Bittner. The MUSDB18 corpus for music separation. December 2017. doi:10.5281/zenodo.1117372.
- RLStoter+18
Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, Derry FitzGerald, and Bryan Pardo. An overview of lead and accompaniment separation in music. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(8):1307–1335, 2018.
- RLS+19
Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel Bittner. MUSDB18-HQ - an uncompressed version of MUSDB18. August 2019. doi:10.5281/zenodo.3338373.
- StoterLI18
Fabian-Robert Stöter, Antoine Liutkus, and Nobutaka Ito. The 2018 signal separation evaluation campaign. In Latent Variable Analysis and Signal Separation: 14th International Conference, LVA/ICA 2018, Surrey, UK, 293–305. 2018.
- StoterULM19
Fabian-Robert Stöter, Stefan Uhlich, Antoine Liutkus, and Yuki Mitsufuji. Open-Unmix - a reference implementation for music source separation. Journal of Open Source Software, 4(41):1667, 2019.
- TM20
Naoya Takahashi and Yuki Mitsufuji. D3Net: densely connected multidilated DenseNet for music source separation. arXiv preprint arXiv:2010.01733, 2020.
- UPG+17
Stefan Uhlich, Marcello Porcu, Franck Giron, Michael Enenkl, Thomas Kemp, Naoya Takahashi, and Yuki Mitsufuji. Improving music source separation based on deep neural networks through data augmentation and network blending. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 261–265. 2017. doi:10.1109/ICASSP.2017.7952158.
- VGFevotte06
Emmanuel Vincent, Rémi Gribonval, and Cédric Févotte. Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 14(4):1462–1469, 2006.
- VVG18
Emmanuel Vincent, Tuomas Virtanen, and Sharon Gannot. Audio source separation and speech enhancement. John Wiley & Sons, 2018.