
Inproceedings3598



Discovering Connotations as Labels for Weakly Supervised Image-Sentence Data



Published: April 2018

Book title: The Web Conference (Cognitive Computing Track)
Publisher: ACM

Refereed publication


Abstract
We address the task of labeling image-sentence pairs at large scale with varied concepts representing connotations. That is, for any given query image-sentence pair, we aim to annotate it with the connotations that capture its intrinsic intension. To achieve this, we propose a Connotation Multimodal Embedding Model (CMEM) with a novel loss function. Its unique characteristics over previous models include that it (i) can leverage multimodal data as opposed to only visual information, (ii) is robust to outlier labels in a multi-label scenario, and (iii) works well with large-scale weakly supervised data. With extensive quantitative evaluation, we exhibit the effectiveness of CMEM for the detection of multiple labels over other state-of-the-art approaches. Also, we show that in addition to the annotation of images with connotation labels, a byproduct of the model inherently supports cross-modal retrieval.
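The paper's actual CMEM formulation and its novel loss are not reproduced here, but a minimal sketch can illustrate the idea the abstract describes: project image and sentence features into one shared embedding space, score connotation labels by dot product in that space, and train with a multi-label loss. The sketch below uses PyTorch; all layer sizes, names, and the plain binary cross-entropy loss (which, unlike CMEM's loss, is not designed to be robust to outlier labels) are illustrative assumptions.

import torch
import torch.nn as nn

class ConnotationEmbedder(nn.Module):
    # Hypothetical stand-in for CMEM: fuses image and sentence features
    # in a shared space and scores connotation labels there.
    def __init__(self, img_dim=2048, txt_dim=300, embed_dim=512, num_labels=1000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)   # image branch
        self.txt_proj = nn.Linear(txt_dim, embed_dim)   # sentence branch
        # One embedding vector per connotation label.
        self.label_emb = nn.Embedding(num_labels, embed_dim)

    def forward(self, img_feat, txt_feat):
        # Leverage both modalities, not only visual information.
        img = nn.functional.normalize(self.img_proj(img_feat), dim=-1)
        txt = nn.functional.normalize(self.txt_proj(txt_feat), dim=-1)
        fused = img + txt
        # Score of every connotation label for each pair in the batch.
        return fused @ self.label_emb.weight.t()

model = ConnotationEmbedder()
img_feat = torch.randn(4, 2048)                    # e.g. CNN image features
txt_feat = torch.randn(4, 300)                     # e.g. averaged word vectors
targets = torch.randint(0, 2, (4, 1000)).float()   # weak multi-label targets
logits = model(img_feat, txt_feat)
# Plain multi-label BCE; CMEM's own loss additionally handles outlier labels.
loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
loss.backward()

Because images and sentences are mapped into the same space, nearest-neighbor search across modalities comes essentially for free, which is the cross-modal retrieval byproduct mentioned above.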

Download: Ctp147-mogadalaA.pdf



Research Group

Web Science


Research Area

Information Retrieval, Machine Learning, Artificial Intelligence, WWW Systems