Research Article

A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

Mohammad H Radfar1*, Richard M Dansereau2 and Abolghasem Sayadiyan1

Author Affiliations

1 Department of Electrical Engineering, Amirkabir University, Tehran 15875-4413, Iran

2 Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada


EURASIP Journal on Audio, Speech, and Music Processing 2007, 2007:084186  doi:10.1155/2007/84186

Published: 16 November 2006


We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate these components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log-spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters, along with the extracted fundamental frequencies, are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models through a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation technique and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
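To make the estimation step concrete, the following is a minimal sketch, not the authors' exact estimator. It assumes a common simplification from single-channel separation work: the mixture's log-spectrum in each frequency bin is approximated by the elementwise maximum of the two sources' log-spectra (the "log-max" model), and each source's vocal-tract-related log-spectral vector is drawn from a codebook of Gaussian mean vectors with fixed variance. The function and variable names (`ml_pair`, `codebook1`, etc.) are illustrative, not from the paper.

```python
import math

def norm_pdf(z, mu, s):
    """Gaussian density N(z; mu, s^2)."""
    return math.exp(-0.5 * ((z - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def norm_cdf(z, mu, s):
    """Gaussian CDF at z for N(mu, s^2)."""
    return 0.5 * (1.0 + math.erf((z - mu) / (s * math.sqrt(2.0))))

def max_model_loglik(z, mu1, mu2, s=1.0):
    """Log-likelihood of the mixed log-spectrum z under z = max(x, y),
    with x ~ N(mu1, s^2) and y ~ N(mu2, s^2) independent per bin.
    The density of the max is p1(z)*Phi2(z) + p2(z)*Phi1(z)."""
    ll = 0.0
    for zb, m1, m2 in zip(z, mu1, mu2):
        p = (norm_pdf(zb, m1, s) * norm_cdf(zb, m2, s)
             + norm_pdf(zb, m2, s) * norm_cdf(zb, m1, s))
        ll += math.log(max(p, 1e-300))  # floor to avoid log(0)
    return ll

def ml_pair(z, codebook1, codebook2):
    """Exhaustive maximum-likelihood search over all pairs of
    vocal-tract-related filter codebook entries for the two speakers."""
    best = None
    for i, mu1 in enumerate(codebook1):
        for j, mu2 in enumerate(codebook2):
            ll = max_model_loglik(z, mu1, mu2)
            if best is None or ll > best[0]:
                best = (ll, i, j)
    return best[1], best[2]
```

For example, with codebooks `[[0, 0], [4, 4]]` and `[[0, 4], [4, 0]]` and an observed mixture log-spectrum `[4, 0]`, the search selects the pair whose elementwise maximum matches the observation. The selected mean vectors then stand in for the estimated vocal-tract-related filters that, combined with fundamental-frequency estimates, drive the reconstruction described above.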