Abstract

Most single-channel audio source separation approaches produce separated sources that are still accompanied by interference from the other sources and by other distortions. To tackle this problem, we propose to separate the sources in two stages. In the first stage, the sources are separated from the mixed signal. In the second stage, the interference between the separated sources and the distortions are reduced using deep neural networks (DNNs). We propose two methods that use DNNs to improve the quality of the separated sources in this second stage: in the first method, each separated source is improved individually by its own trained DNN, while in the second method all the separated sources are improved together by a single DNN. To further improve the quality of the separated sources, the DNNs in the second stage are trained discriminatively to decrease the interference and distortions in the separated sources. Our experimental results show that, compared with a single stage of separation, using two stages improves the quality of the separated signals by decreasing the interference between the separated sources and the distortions.
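To make the pipeline concrete, the sketch below shows the overall structure: a first-stage network maps each mixture magnitude-spectrogram frame to initial estimates of all sources, and second-stage enhancement networks refine those estimates to reduce the remaining interference and distortion. This is a minimal illustrative sketch in PyTorch; the frame size, layer widths, activations, and class names are assumptions for illustration, not the configuration used in the paper.

import torch
import torch.nn as nn

N_FREQ = 513     # assumed spectrogram frame size (illustrative)
N_SOURCES = 2    # e.g. vocals and accompaniment

class SeparationDNN(nn.Module):
    """Stage 1: map a mixture frame to initial estimates of all sources."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FREQ, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, N_FREQ * N_SOURCES), nn.ReLU(),
        )

    def forward(self, mixture):                 # (batch, N_FREQ)
        return self.net(mixture).view(-1, N_SOURCES, N_FREQ)

class EnhancementDNN(nn.Module):
    """Stage 2 (per-source variant): refine one separated source."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FREQ, 1024), nn.ReLU(),
            nn.Linear(1024, N_FREQ), nn.ReLU(),
        )

    def forward(self, estimate):                # (batch, N_FREQ)
        return self.net(estimate)

# Two-stage inference on a batch of dummy mixture frames.
separator = SeparationDNN()
enhancers = nn.ModuleList([EnhancementDNN() for _ in range(N_SOURCES)])

mixture = torch.rand(8, N_FREQ)                 # stand-in mixture magnitudes
initial = separator(mixture)                    # stage 1: initial estimates
refined = torch.stack(
    [enhancers[s](initial[:, s, :]) for s in range(N_SOURCES)], dim=1
)                                               # stage 2: refined estimates

The second variant described in the abstract would replace the per-source enhancers with a single joint DNN that takes the concatenated initial estimates (N_SOURCES * N_FREQ inputs) and outputs all refined sources at once.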

BibTeX


@article{Grais_2017b,
  author = {Grais, E. M. and Roma, G. and Simpson, A. J. R. and Plumbley, M. D.},
  journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title = {Two-Stage Single-Channel Audio Source Separation Using Deep Neural Networks},
  year = {2017},
  volume = {25},
  number = {9},
  pages = {1773--1783},
  doi = {10.1109/TASLP.2017.2716443},
  issn = {2329-9290},
  month = sep,
  openaccess = {http://epubs.surrey.ac.uk/841432/},
  keywords = {"maruss"}
}