
Inproceedings332: Difference between revisions

 
(4 intermediate revisions by 2 users not shown)

{{Publikation Author
|Rank=2
|Author=Adrian Oberföll
}}
{{Inproceedings
|Referiert=Ja
|Title=Right for the Right Reasons: Making Image Classification Intuitively Explainable
|Year=2021
|Month=März
|Booktitle=Advances in Information Retrieval. ECIR 2021. Lecture Notes in Computer Science
|Publisher=Springer, Cham
|Volume=12657
}}
{{Publikation Details
|Abstract=The effectiveness of Convolutional Neural Networks (CNNs) in classifying image data has been thoroughly demonstrated. In order to explain the classification to humans, methods for visualizing classification evidence have been developed in recent years. These explanations reveal that sometimes images are classified correctly, but for the wrong reasons, i.e., based on incidental evidence. Of course, it is desirable that images are classified correctly for the right reasons, i.e., based on the actual evidence. To this end, we propose a new explanation quality metric to measure object aligned explanation in image classification which we refer to as the ObAlEx metric. Using object detection approaches, explanation approaches, and ObAlEx, we quantify the focus of CNNs on the actual evidence. Moreover, we show that additional training of the CNNs can improve the focus of CNNs without decreasing their accuracy.
|ISBN=978-3-030-72240-1
|Download=ObAlEx.pdf
|Link=https://doi.org/10.1007/978-3-030-72240-1_32
|DOI Name=10.1007/978-3-030-72240-1_32
|Forschungsgruppe=Web Science
}}
{{Forschungsgebiet Auswahl
|Forschungsgebiet=Künstliche Intelligenz
}}
{{Forschungsgebiet Auswahl
|Forschungsgebiet=Maschinelles Lernen
}}

Current revision as of 2 July 2021, 11:57


Right for the Right Reasons: Making Image Classification Intuitively Explainable



Published: March 2021

Book title: Advances in Information Retrieval. ECIR 2021. Lecture Notes in Computer Science
Volume: 12657
Publisher: Springer, Cham

Refereed publication


Abstract
The effectiveness of Convolutional Neural Networks (CNNs) in classifying image data has been thoroughly demonstrated. In order to explain the classification to humans, methods for visualizing classification evidence have been developed in recent years. These explanations reveal that sometimes images are classified correctly, but for the wrong reasons, i.e., based on incidental evidence. Of course, it is desirable that images are classified correctly for the right reasons, i.e., based on the actual evidence. To this end, we propose a new explanation quality metric to measure object aligned explanation in image classification which we refer to as the ObAlEx metric. Using object detection approaches, explanation approaches, and ObAlEx, we quantify the focus of CNNs on the actual evidence. Moreover, we show that additional training of the CNNs can improve the focus of CNNs without decreasing their accuracy.
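The paper's exact definition of ObAlEx is not reproduced on this page. As a rough illustration only of what an object-aligned explanation score can look like, the sketch below relates the positive relevance mass of a saliency map to an object mask (the function name and formula are assumptions for illustration, not the published metric):

```python
import numpy as np

def object_alignment_score(explanation: np.ndarray, object_mask: np.ndarray) -> float:
    """Fraction of the total positive explanation relevance that falls
    inside the object mask (1.0 = all evidence lies on the object)."""
    relevance = np.clip(explanation, 0, None)  # keep positive evidence only
    total = relevance.sum()
    if total == 0:
        return 0.0
    return float(relevance[object_mask.astype(bool)].sum() / total)

# Toy example: a 4x4 saliency map; the object occupies the left two columns.
saliency = np.array([[0.0, 0.0, 0.1, 0.1],
                     [0.4, 0.2, 0.0, 0.0],
                     [0.3, 0.1, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])
mask = np.zeros((4, 4))
mask[:, :2] = 1

print(object_alignment_score(saliency, mask))  # → 0.8333... (1.0 of 1.2 relevance on the object)
```

In the setting the abstract describes, the mask would come from an object detector and the saliency map from an explanation method (e.g., a gradient-based attribution); a low score flags an image classified for incidental rather than actual evidence.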

ISBN: 978-3-030-72240-1
Download: Media:ObAlEx.pdf
Further information at: https://doi.org/10.1007/978-3-030-72240-1_32
DOI Link: 10.1007/978-3-030-72240-1_32



Research group

Web Science


Research area

Machine Learning, Artificial Intelligence, Semantic Web