Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/28200
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings
Author(s): Gogate, Mandar; Adeel, Ahsan; Marxer, Ricard; Barker, Jon; Hussain, Amir
Title: DNN Driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation
Citation: Gogate M, Adeel A, Marxer R, Barker J & Hussain A (2018) DNN Driven Speaker Independent Audio-Visual Mask Estimation for Speech Separation. In: Proceedings of the Annual Conference of the International Speech Communication Association. Interspeech 2018, 02.09.2018-06.09.2018. Baixas, France: ISCA, pp. 2723-2727. https://doi.org/10.21437/Interspeech.2018-2516
Issue Date: 2-Sep-2018
Date Deposited: 9-Nov-2018
Conference Name: Interspeech 2018
Conference Dates: 2018-09-02 - 2018-09-06
Abstract: The human auditory cortex excels at selectively suppressing background noise to focus on a target speaker. The process of selective attention in the brain is known to contextually exploit the available audio and visual cues to better focus on the target speaker while filtering out other noise sources. In this study, we propose a novel deep neural network (DNN) based audio-visual (AV) mask estimation model. The proposed AV mask estimation model contextually integrates the temporal dynamics of both audio and noise-immune visual features for improved mask estimation and speech separation. For optimal AV feature extraction and ideal binary mask (IBM) estimation, a hybrid DNN architecture is exploited to leverage the complementary strengths of a stacked long short-term memory (LSTM) network and a convolutional LSTM network. Comparative simulation results in terms of speech quality and intelligibility demonstrate significant performance improvements of the proposed AV mask estimation model over audio-only and visual-only mask estimation approaches, in both speaker-dependent and speaker-independent scenarios.
Status: VoR - Version of Record
Rights: Publisher policy allows this work to be made available in this repository. Published in Proceedings of Interspeech 2018 by ISCA. The original publication is available at: https://doi.org/10.21437/Interspeech.2018-2516.
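The abstract above describes a hybrid mask-estimation network that combines a stacked LSTM over audio features with a convolutional LSTM over visual (lip-region) features to predict an ideal binary mask. The sketch below is a minimal, illustrative Keras/TensorFlow model of that kind of architecture, not the authors' implementation; all layer sizes, input shapes (STFT frames for audio, lip-image crops for video), and the binary cross-entropy training objective are assumptions made for illustration.

```python
# Illustrative sketch of a hybrid audio-visual IBM estimator of the kind
# described in the abstract: a stacked LSTM over audio spectrogram frames,
# a convolutional LSTM over lip-region image frames, and a per-frame sigmoid
# layer predicting a mask value for each frequency bin.
# Shapes, layer sizes, and the loss are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, Model

T = 50          # assumed number of time frames per utterance chunk
F = 257         # assumed STFT frequency bins (512-point FFT)
H, W = 48, 48   # assumed lip-region crop size

# Audio branch: stacked LSTMs over log-magnitude spectrogram frames.
audio_in = layers.Input(shape=(T, F), name="audio_logspec")
a = layers.LSTM(256, return_sequences=True)(audio_in)
a = layers.LSTM(256, return_sequences=True)(a)

# Visual branch: ConvLSTM over grayscale lip frames, flattened per frame.
video_in = layers.Input(shape=(T, H, W, 1), name="lip_frames")
v = layers.ConvLSTM2D(16, kernel_size=(3, 3), padding="same",
                      return_sequences=True)(video_in)
v = layers.TimeDistributed(layers.Flatten())(v)
v = layers.TimeDistributed(layers.Dense(256, activation="relu"))(v)

# Fuse the two temporal streams and predict a mask value per frequency bin.
av = layers.Concatenate()([a, v])
av = layers.LSTM(256, return_sequences=True)(av)
mask = layers.TimeDistributed(layers.Dense(F, activation="sigmoid"),
                              name="ibm_estimate")(av)

model = Model(inputs=[audio_in, video_in], outputs=mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

At inference time, a mask estimated this way would typically be applied point-wise to the noisy magnitude spectrogram before inverse-STFT resynthesis to obtain the separated target speech.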
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 2516.pdf | Fulltext - Published Version | 556.07 kB | Adobe PDF |
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.