Overview

MaD TwinNet

This project presents our recent work on monaural sound source separation, using the Masker-Denoiser (MaD) architecture [1] with Twin Networks [2].
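
To give a rough idea of the masker-denoiser concept, below is a minimal, hypothetical PyTorch sketch. It is not the actual MaD TwinNet implementation (see the repository for that): all module structures, layer sizes, and names are assumptions, and the TwinNet regularization of the recurrent layers (a backward-running twin used only during training) is omitted.

    # Hypothetical, highly simplified masker-denoiser sketch (NOT the actual MaD TwinNet code).
    import torch
    import torch.nn as nn

    class Masker(nn.Module):
        # Predicts a time-frequency mask and applies it to the mixture magnitude spectrogram.
        def __init__(self, n_freq=1025, hidden=256):
            super().__init__()
            self.rnn = nn.GRU(n_freq, hidden, batch_first=True, bidirectional=True)
            self.fc = nn.Linear(2 * hidden, n_freq)

        def forward(self, mix_mag):                    # mix_mag: (batch, frames, n_freq)
            h, _ = self.rnn(mix_mag)
            mask = torch.sigmoid(self.fc(h))           # mask values in [0, 1]
            return mask * mix_mag                      # skip-filtering: mask applied to the input

    class Denoiser(nn.Module):
        # Refines the masked spectrogram to suppress remaining interference.
        def __init__(self, n_freq=1025, hidden=512):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_freq), nn.ReLU())

        def forward(self, masked_mag):
            return self.net(masked_mag)

    masker, denoiser = Masker(), Denoiser()
    mixture = torch.rand(4, 60, 1025)                  # a toy batch of magnitude spectrogram excerpts
    voice_estimate = denoiser(masker(mixture))         # estimated source magnitude spectrogram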

The full description of MaD TwinNet is given in our paper, entitled "MaD TwinNet: Masker-Denoiser Architecture with Twin Networks for Monaural Sound Source Separation". The paper is available online on arXiv and in the Documents section of this Redmine project.

You can find an online demo of the MaD TwinNet at the project's homepage.

The code for MaD TwinNet is based on the PyTorch framework and can be found at our GitHub repository and in the repository of this Redmine project. This Redmine project also hosts a repository with the code we used for our demo website. If you find it useful, please let us know.

The pre-trained weights and the full results are available online and in the Files section of this Redmine project.
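
As a hint on usage, the snippet below shows the standard PyTorch pattern for loading downloaded weights. The file name and the use of the toy Masker module from the sketch above are assumptions, not the names used in the actual repository or weight files.

    # Hypothetical loading pattern; 'masker_weights.pt' is a placeholder file name.
    import torch

    masker = Masker()                                  # toy module from the sketch above
    state_dict = torch.load('masker_weights.pt', map_location='cpu')
    masker.load_state_dict(state_dict)
    masker.eval()                                      # switch to inference mode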

If you encounter any issue, please let us know by opening an issue here or at our GitHub repository.

Enjoy!


[1] S.-I. Mimilakis, K. Drossos, J.-F. Santos, G. Schuller, T. Virtanen, and Y. Bengio, "Monaural singing voice separation with skip-filtering connections and recurrent inference of time-frequency mask," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), accepted for publication, 2018.
[2] D. Serdyuk, N. R. Ke, A. Sordoni, C. Pal, and Y. Bengio, "Twin Networks: Using the Future as a Regularizer," arXiv e-print, arXiv:1708.06742, 2017.

Issue tracking

          Open    Closed    Total
Bug          0         0        0
Feature      0         0        0
Support      0         0        0

View all issues

Repositories

Main repository
mad-twinnet-web-site

See other repositories