Convolutional MKL based multimodal emotion recognition and sentiment analysis
Conference Publication

Abstract
Technology has enabled anyone with an Internet connection to easily create and share their ideas, opinions, and content with millions of other people around the world. Much of the content being posted and consumed online is multimodal. With billions of phones, tablets, and PCs shipping today with built-in cameras, and a host of new video-equipped wearables such as Google Glass on the horizon, the amount of video on the Internet will only continue to increase. It has become increasingly difficult for researchers to keep up with this deluge of multimodal content, let alone organize or make sense of it. Mining useful knowledge from video is a critical need that will grow exponentially, in pace with the global growth of content. This is particularly important in sentiment analysis, as both service and product reviews are gradually shifting from unimodal to multimodal. We present a novel method to extract features from visual and textual modalities using deep convolutional neural networks. By feeding such features to a multiple kernel learning classifier, we significantly outperform the state of the art in multimodal emotion recognition and sentiment analysis on multiple datasets.
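The abstract only sketches the pipeline, so the following is a minimal illustration in Python rather than the authors' code. It stands in for CNN feature extraction with random feature matrices and approximates multiple kernel learning (MKL) with a fixed convex combination of per-modality RBF kernels fed to a precomputed-kernel SVM; a true MKL solver (e.g., SimpleMKL) would learn the kernel weights jointly with the classifier. All feature dimensions, weights, and hyperparameters here are illustrative assumptions.

```python
# Hedged sketch of CNN-feature + MKL-style fusion, NOT the authors' method.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for CNN feature vectors (e.g., penultimate-layer activations)
# from the visual and textual modalities, one row per utterance/clip.
n_train, n_test = 200, 50
visual_tr = rng.normal(size=(n_train, 512))
visual_te = rng.normal(size=(n_test, 512))
text_tr = rng.normal(size=(n_train, 300))
text_te = rng.normal(size=(n_test, 300))
y_tr = rng.integers(0, 2, size=n_train)  # binary sentiment labels

def combined_kernel(xa_v, xb_v, xa_t, xb_t, weights=(0.5, 0.5), gamma=1e-3):
    """Convex combination of one RBF kernel per modality. The weights are
    fixed here; an MKL solver would optimize them jointly with the SVM."""
    w_v, w_t = weights
    return (w_v * rbf_kernel(xa_v, xb_v, gamma=gamma)
            + w_t * rbf_kernel(xa_t, xb_t, gamma=gamma))

# Train kernel is (n_train x n_train); test kernel is (n_test x n_train).
K_train = combined_kernel(visual_tr, visual_tr, text_tr, text_tr)
K_test = combined_kernel(visual_te, visual_tr, text_te, text_tr)

clf = SVC(kernel="precomputed").fit(K_train, y_tr)
predictions = clf.predict(K_test)  # predicted sentiment per test clip
```

This fusion is admissible because a convex combination of positive semi-definite kernels is itself a valid kernel, which is the property MKL exploits when weighting each modality's contribution.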
Publication Name
IEEE International Conference on Data Mining (ICDM)
ISBN/ISSN
978-1-5090-5473-2
Page Count
10
Location
Barcelona, Spain
Publisher
Institute of Electrical and Electronics Engineers
Publisher Location
Piscataway, NJ, USA
DOI
10.1109/ICDM.2016.0055