
Methods for Strengthening Explainable AI in Industrial Applications



Event type:
Graduate colloquium




With the growing adoption of Artificial Intelligence (AI) and the successful application of deep learning methods in various domains, AI, and particularly deep learning, increasingly influences people's lives. However, depending on the use case, wrong decisions can be costly and dangerous (e.g., an AI medical diagnosis system misclassifying patients' diseases). The emerging field of Explainable Artificial Intelligence (xAI) offers approaches and algorithms that introduce transparency into black-box models by producing explanations of AI systems' inner workings and decisions. Specifically, in industrial use cases, where complex problems and decision-making processes are widespread, enabling transparent automation and decision support is crucial. However, while research on xAI is trending, applying xAI in industrial use cases is challenging. For many data types (e.g., images or tabular data), xAI methods are well studied. Support for time series, however, which are ubiquitous in industrial settings, is still missing. Further, to use any xAI method in deployment, understanding an explainer's quality, strengths, and weaknesses is of utmost importance to prevent ambiguous or incorrect explanations. Well-performing xAI methods can help users understand the reasons behind a deep learner's predictions and enable the recognition of spurious correlations learned by the model or of missing information in the collected data. Especially in industrial settings, where only a limited amount of (often noisy) data is available, revising incorrect model decisions and explanations provides the opportunity to incorporate users' domain knowledge: providing human feedback on the explanation enables a deep learner to infer the missing context and close this gap. To address these obstacles to applying xAI in industrial settings, we introduce methods for xAI on time series, for the evaluation of xAI, and for xAI-based model revisions.

(Jacqueline Höllig)




Start: 19 April 2024, 14:00
End: 19 April 2024, 15:30


Building 05.20, Room 1C-04



Organizer: Research group(s) Web Science
Information: Media:2024-04-19_Hoellig_FZI_Graduiertenkolloquium.pdf