The corresponding participants are:

* Antoine Liutkus, Telecom ParisTech, CNRS LTCI, Paris, FRANCE
* Zafar Rafii, Northwestern University, EECS Department, Evanston, IL, USA
* Roland Badeau, Telecom ParisTech, CNRS LTCI, Paris, FRANCE
* Bryan Pardo, Northwestern University, EECS Department, Evanston, IL, USA
* Gaël Richard, Telecom ParisTech, CNRS LTCI, Paris, FRANCE

(Note that Zafar and Bryan are involved in the separation of the vocals only; Antoine, Roland and Gaël are involved in all signals.)

The algorithm is based on a specialized technique per signal, i.e. vocals, bass, drums and other.

The vocals are extracted using the adaptive REPET algorithm, which will be submitted to ICASSP 2012.

The drums are extracted using the technique discussed in:
- Gaussian Processes for Underdetermined Source Separation, IEEE Trans. on Sig. Proc., 59(7), July 2011

The bass is extracted by first computing its time-varying fundamental frequency, then by designing a soft time-frequency mask to remove the corresponding harmonics.

The whole process has been designed to be computationally rather efficient: all separated files for one mixture are computed in about a minute, on one 3 GHz core with 8 GB of RAM.
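To illustrate the core idea behind REPET-style vocal separation (this is a minimal sketch of the original REPET principle, not the adaptive variant used here, and the repeating period is assumed known rather than estimated from a beat spectrum):

```python
import numpy as np

def repet_mask(V, period, eps=1e-12):
    """Simplified REPET-style soft mask for the repeating background.

    V      : magnitude spectrogram, shape (n_freq, n_frames)
    period : repeating period in frames (assumed given; REPET
             estimates it automatically from a beat spectrum)
    Returns a soft mask in [0, 1] for the repeating accompaniment.
    """
    n_freq, n_frames = V.shape
    n_seg = int(np.ceil(n_frames / period))
    # Pad so the spectrogram splits evenly into one-period segments.
    pad = n_seg * period - n_frames
    Vp = np.pad(V, ((0, 0), (0, pad)), mode="edge")
    segs = Vp.reshape(n_freq, n_seg, period)
    # The element-wise median across repetitions models the
    # repeating background; the vocals are what deviates from it.
    W = np.median(segs, axis=1)                   # (n_freq, period)
    W_full = np.tile(W, n_seg)[:, :n_frames]
    # The background model cannot exceed the mixture magnitude.
    W_full = np.minimum(W_full, V)
    # Soft mask: ratio of background model to mixture.
    return W_full / (V + eps)
```

Applying the mask to the complex STFT yields the accompaniment estimate; `1 - mask` applied to the same STFT yields the vocal estimate.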
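The bass step above can be sketched as follows. This is an illustrative reconstruction, not the submitted implementation: the f0 tracker is not shown (the track is taken as input), and the Gaussian lobe shape, its width, and the number of harmonics are assumptions made for the example.

```python
import numpy as np

def harmonic_mask(freqs, f0_track, n_harmonics=8, width_hz=15.0):
    """Soft time-frequency mask around the harmonics of a time-varying f0.

    freqs      : STFT bin center frequencies in Hz, shape (n_freq,)
    f0_track   : fundamental frequency per frame in Hz, shape (n_frames,);
                 0 marks frames where the bass is inactive
    Returns a mask of shape (n_freq, n_frames) with values in [0, 1],
    built as Gaussian lobes centered on each harmonic (the lobe width
    width_hz is an illustrative choice, not the authors' setting).
    """
    n_freq, n_frames = len(freqs), len(f0_track)
    mask = np.zeros((n_freq, n_frames))
    for h in range(1, n_harmonics + 1):
        centers = h * f0_track                     # (n_frames,)
        d = freqs[:, None] - centers[None, :]      # (n_freq, n_frames)
        lobe = np.exp(-0.5 * (d / width_hz) ** 2)
        lobe[:, f0_track <= 0] = 0.0               # silence inactive frames
        mask = np.maximum(mask, lobe)
    return mask
```

Multiplying the complex STFT by this mask isolates the bass harmonics; the complementary mask `1 - mask` removes them from the mixture.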