Abstract

Supervised multi-channel audio source separation requires extracting useful spectral, temporal, and spatial features from the mixed signals. The success of many existing systems is therefore largely dependent on the choice of features used for training. In this work, we introduce a novel multi-channel, multi-resolution convolutional auto-encoder neural network that works on raw time-domain signals to determine appropriate multi-resolution features for separating the singing voice from stereo music. Our experimental results show that the proposed method can achieve multi-channel audio source separation without the need for hand-crafted features or any pre- or post-processing.
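The core idea — learning features at several time resolutions directly from the raw stereo waveform — can be illustrated with a minimal sketch. This is not the paper's implementation: the filter counts, filter lengths, and random filters below are hypothetical placeholders standing in for the learned encoder filters of a multi-resolution convolutional auto-encoder.

```python
import numpy as np

def multires_encode(x, filter_banks):
    """Convolve a multi-channel time-domain signal with several filter
    banks of different lengths and stack the resulting feature maps.

    x            : (channels, samples) raw waveform
    filter_banks : list of arrays, each (n_filters, channels, length)
    """
    features = []
    for bank in filter_banks:
        n_filters, n_ch, _ = bank.shape
        out = np.zeros((n_filters, x.shape[1]))
        for f in range(n_filters):
            for c in range(n_ch):
                # sum channel contributions per filter, same-length output
                out[f] += np.convolve(x[c], bank[f, c], mode="same")
        features.append(out)
    # concatenate feature maps from all resolutions along the filter axis
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
stereo = rng.standard_normal((2, 1024))           # toy stereo mixture
banks = [rng.standard_normal((4, 2, L)) * 0.1     # hypothetical sizes:
         for L in (5, 25, 125)]                   # short/medium/long filters
feats = multires_encode(stereo, banks)
print(feats.shape)  # (12, 1024): 3 resolutions x 4 filters each
```

Short filters capture fine temporal detail while long filters capture coarser spectral structure; a decoder with transposed convolutions would map these stacked features back to estimated source waveforms.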

BibTeX


@inproceedings{Grais_2018d,
  author = {{Grais}, E. M. and {Ward}, D. and {Plumbley}, M. D.},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Raw Multi-Channel Audio Source Separation using Multi-Resolution Convolutional Auto-Encoders},
  year = {2018},
  pages = {1577-1581},
  keywords = {maruss},
  doi = {10.23919/EUSIPCO.2018.8553571},
  issn = {2076-1465},
  month = sep,
  address = {Rome, Italy},
  url = {https://ieeexplore.ieee.org/document/8553571},
  openaccess = {http://epubs.surrey.ac.uk/848607/}
}