Please use this identifier to cite or link to this item:
Appears in Collections:Computing Science and Mathematics Book Chapters and Sections
Title: An investigation into audiovisual speech correlation in reverberant noisy environments
Author(s): Cifani, Simone
Abel, Andrew
Hussain, Amir
Squartini, Stefano
Piazza, Francesco
Editor(s): Esposito, A
Vích, R
Citation: Cifani S, Abel A, Hussain A, Squartini S & Piazza F (2009) An investigation into audiovisual speech correlation in reverberant noisy environments. In: Esposito A & Vích R (eds.) Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions: COST Action 2102 International Conference Prague, Czech Republic, October 2008. Lecture Notes in Computer Science, 5641. Berlin, Germany: Springer-Verlag, pp. 331-343.
Issue Date: 2009
Date Deposited: 6-Feb-2013
Series/Report no.: Lecture Notes in Computer Science, 5641
Abstract: As evidence of a link between the various human communication production domains has become more prominent in the last decade, the field of multimodal speech processing has undergone significant expansion. Many different specialised processing methods have been developed to analyze and utilize the complex relationship between multimodal data streams. This work uses information extracted from an audiovisual corpus to investigate and assess the correlation between audio and visual features in speech. A number of different feature extraction techniques are assessed, with the intention of identifying the visual technique that maximizes the audiovisual correlation. Additionally, this paper aims to demonstrate that a noisy and reverberant audio environment reduces the degree of audiovisual correlation, and that the application of a beamformer remedies this. Experimental results, obtained in a synthetic scenario, confirm the positive impact of beamforming, not only on audiovisual correlation but also within a complete audiovisual speech enhancement scheme. This work thus highlights an important consideration for the development of future bimodal speech enhancement systems.
Rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.
DOI Link: 10.1007/978-3-642-03320-9_31

Files in This Item:
File: Abel_2009_An_Investigation_into_Audiovisual_Speech_Correlation.pdf
Description: Fulltext - Published Version
Size: 455.12 kB
Format: Adobe PDF
Under Embargo until 3000-12-01. Request a copy.

Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author by clicking the padlock icon above. However, this facility is dependent on the depositor still being contactable at their original email address.

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved

If you believe that any material held in STORRE infringes copyright, please contact us, providing details, and we will remove the work from public display in STORRE and investigate your claim.