Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/25317
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings
Peer Review Status: Refereed
Author(s): Poria, Soujanya
Chaturvedi, Iti
Cambria, Erik
Hussain, Amir
Contact Email: amir.hussain@stir.ac.uk
Title: Convolutional MKL based multimodal emotion recognition and sentiment analysis
Editor(s): Bonchi, F
Domingo-Ferrer, J
Baeza-Yates, R
Zhou, Z-H
Wu, X
Citation: Poria S, Chaturvedi I, Cambria E & Hussain A (2017) Convolutional MKL based multimodal emotion recognition and sentiment analysis. In: Bonchi F, Domingo-Ferrer J, Baeza-Yates R, Zhou Z-H & Wu X (eds.) Proceedings - IEEE 16th International Conference on Data Mining, ICDM 2016. 2016 IEEE 16th International Conference on Data Mining, Barcelona, Spain, 12.12.2016-15.12.2016. Los Alamitos, CA, USA: IEEE Computer Society, pp. 439-448. https://doi.org/10.1109/ICDM.2016.178
Issue Date: 2-Feb-2017
Date Deposited: 8-May-2017
Conference Name: 2016 IEEE 16th International Conference on Data Mining
Conference Dates: 2016-12-12 - 2016-12-15
Conference Location: Barcelona, Spain
Abstract: Technology has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. Much of the content being posted and consumed online is multimodal. With billions of phones, tablets and PCs shipping today with built-in cameras and a host of new video-equipped wearables like Google Glass on the horizon, the amount of video on the Internet will only continue to increase. It has become increasingly difficult for researchers to keep up with this deluge of multimodal content, let alone organize or make sense of it. Mining useful knowledge from video is a critical need that will grow exponentially, in pace with the global growth of content. This is particularly important in sentiment analysis, as both service and product reviews are gradually shifting from unimodal to multimodal. We present a novel method to extract features from visual and textual modalities using deep convolutional neural networks. By feeding such features to a multiple kernel learning classifier, we significantly outperform the state of the art of multimodal emotion recognition and sentiment analysis on different datasets.
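For readers wanting a concrete picture of the pipeline the abstract describes (per-modality CNN features fused through a multiple kernel learning classifier), the following Python sketch approximates the idea. The feature arrays, dimensions and kernel weights below are hypothetical stand-ins, and since scikit-learn ships no MKL solver, the learned kernel combination is replaced by a fixed convex mixture of per-modality RBF kernels feeding a precomputed-kernel SVM; the paper itself learns such weights jointly with the classifier.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_train, n_test = 200, 50

    # Hypothetical stand-ins for the deep CNN features the paper extracts
    # per modality (textual and visual); random vectors play that role here.
    text_tr = rng.normal(size=(n_train, 128))
    vis_tr = rng.normal(size=(n_train, 256))
    y_tr = rng.integers(0, 2, size=n_train)   # binary sentiment labels
    text_te = rng.normal(size=(n_test, 128))
    vis_te = rng.normal(size=(n_test, 256))

    # One RBF kernel per modality. True MKL optimises the mixing weights
    # jointly with the SVM; a fixed convex combination is a simplification.
    w_text, w_vis = 0.6, 0.4
    K_tr = w_text * rbf_kernel(text_tr) + w_vis * rbf_kernel(vis_tr)
    K_te = w_text * rbf_kernel(text_te, text_tr) + w_vis * rbf_kernel(vis_te, vis_tr)

    clf = SVC(kernel="precomputed")
    clf.fit(K_tr, y_tr)
    predictions = clf.predict(K_te)           # one sentiment label per test item

A full reproduction would replace the random arrays with features taken from trained text and visual CNNs and would optimise the kernel weights with an MKL algorithm such as SimpleMKL rather than fixing them by hand.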
Status: AM - Accepted Manuscript
Rights: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Files in This Item:
File: convolutional-mkl-based-mulimodal-sentiment-analysis.pdf
Description: Fulltext - Accepted Version
Size: 530.7 kB
Format: Adobe PDF

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of records in the Repository is available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk with details; we will remove the Work from public display in STORRE and investigate your claim.