Learning visual concepts in images using temporal convolutional networks
Conference Publication
Abstract
With the advancement of technology and the rapid rise of social media, people now tend to share their experiences and feelings through images. Effectively mining opinions and identifying sentiments from the images available on various platforms is useful for tasks such as social media marketing and user profiling. Visual sentiment concepts extracted from the tags of online images are known as adjective-noun pairs, and the sentiment they convey is usually predicted using opinion mining methods. However, this pre-training work is time-consuming and requires substantial storage. Instead of collecting tags for the images, in this paper we propose to automatically predict the corresponding sentiments using deep convolutional networks. Next, to model the sequence of different sentiments in a single image, we employ a recurrent neural network. Such a model is able to remember the context of the different sentiments within a single image. We applied our method to a Flickr dataset, and our approach outperformed baselines by 3-20%.
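To make the described architecture concrete, the sketch below illustrates the general idea of pairing a convolutional feature extractor with a recurrent network that reads a sequence of sentiment-bearing regions from one image. This is not the authors' implementation: the backbone (ResNet-18), region count, hidden size, and class count are all hypothetical choices, assuming a PyTorch setup.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNRNNSentimentSketch(nn.Module):
    """Illustrative CNN+RNN sentiment model (assumed design, not the paper's)."""
    def __init__(self, num_sentiments=3, hidden_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN backbone could stand in here
        # Drop the final classification layer; keep the 512-d pooled features.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        # LSTM models the sequence of sentiment concepts across image regions,
        # letting later predictions depend on the context of earlier ones.
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_sentiments)

    def forward(self, image_regions):
        # image_regions: (batch, num_regions, 3, 224, 224) -- crops of one image
        b, r, c, h, w = image_regions.shape
        feats = self.features(image_regions.view(b * r, c, h, w)).view(b, r, -1)
        out, _ = self.rnn(feats)               # per-region hidden states with context
        return self.classifier(out[:, -1])     # sentiment logits from the final step

# Usage with random data (hypothetical shapes):
model = CNNRNNSentimentSketch()
dummy = torch.randn(2, 4, 3, 224, 224)
print(model(dummy).shape)  # torch.Size([2, 3])
```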
Journal
N/A
Publication Name
Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018
Volume
N/A
ISBN/ISSN
978-1-5386-9276-9
Edition
N/A
Issue
N/A
Pages Count
5
Location
Bengaluru, India
Publisher
Institute of Electrical and Electronics Engineers
Publisher Url
N/A
Publisher Location
Piscataway, NJ, USA
Publish Date
N/A
Url
N/A
Date
N/A
EISSN
N/A
DOI
10.1109/SSCI.2018.8628703