
Inproceedings3583



Cross-modal Knowledge Transfer: Improving the Word Embedding of Apple by Looking at Oranges



Published: December 2017

Book title: K-CAP2017, The 9th International Conference on Knowledge Capture
Publisher: ACM
Organization: International Conference on Knowledge Capture

Refereed publication


Abstract
Capturing knowledge via learned latent vector representations of words, images and knowledge graph (KG) entities has shown state-of-the-art performance in computer vision, computational linguistics and KG tasks. Recent results demonstrate that learning such representations across modalities can be beneficial, since each modality captures complementary information. However, those approaches are limited to concepts with cross-modal alignments in the training data, which are available for only a few concepts. In particular, far fewer embeddings exist for visual objects than for words or KG entities. We investigate whether a word embedding (e.g., for "apple") can still capture information from other modalities even if there is no matching concept within the other modalities (i.e., no images or KG entities of apples, but of oranges). The empirical results of our knowledge transfer approach demonstrate that word embeddings do benefit from extrapolating information across modalities even for concepts that are not represented in the other modalities. Interestingly, this applies most to concrete concepts (e.g., dragonfly), while abstract concepts (e.g., animal) benefit most if aligned concepts are available in the other modalities.
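The transfer idea described in the abstract can be illustrated with a minimal sketch: learn a mapping between embedding spaces on the concepts that are aligned across modalities, then apply it to a concept that exists only as a word. The vocabulary, dimensions, and the simple least-squares mapping below are illustrative assumptions, not the exact method of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy embedding spaces: 50-dim word embeddings, 20-dim visual embeddings.
# "apple" has a word embedding but no image embedding (the unaligned case).
word_emb = {w: rng.normal(size=50) for w in ["apple", "orange", "pear"]}
img_emb = {w: rng.normal(size=20) for w in ["orange", "pear"]}

# Aligned concepts are those present in both modalities.
aligned = sorted(set(word_emb) & set(img_emb))
W = np.stack([word_emb[c] for c in aligned])   # shape (n_aligned, 50)
V = np.stack([img_emb[c] for c in aligned])    # shape (n_aligned, 20)

# Learn a linear map from word space to visual space by least squares.
M, *_ = np.linalg.lstsq(W, V, rcond=None)      # shape (50, 20)

# Extrapolate a visual vector for "apple" (which has no image) and
# enrich its word embedding by concatenating the predicted vector.
predicted_visual = word_emb["apple"] @ M
enriched_apple = np.concatenate([word_emb["apple"], predicted_visual])
print(enriched_apple.shape)  # (70,)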



Research group

Web Science


Research area