

An Assessment of Deep Learning Models and Word Embeddings for Toxicity Detection within Online Textual Comments





Published: 2021

Journal: MDPI Electronics
Volume: 10
Number: 7

Non-refereed publication





Abstract
Today, increasing numbers of people interact online, and the explosion of online communication produces large amounts of textual comments. However, a major drawback of online environments is that comments shared within digital platforms can hide hazards such as fake news, insults, harassment and, more generally, comments that may hurt someone’s feelings. In this scenario, detecting this kind of toxicity plays an important role in moderating online communication. Deep learning technologies have recently delivered impressive performance in Natural Language Processing applications, encompassing Sentiment Analysis and emotion detection, across numerous datasets. Such models do not need any pre-defined hand-picked features; instead, they learn sophisticated features from the input datasets by themselves. In this domain, word embeddings have been widely used as a way of representing words in Sentiment Analysis tasks, proving to be very effective. Therefore, in this paper, we investigate the use of deep learning and word embeddings to detect six different types of toxicity within online comments. In doing so, we evaluate the most suitable deep learning layers and state-of-the-art word embeddings for identifying toxicity. The results suggest that Long Short-Term Memory (LSTM) layers in combination with mimicked word embeddings are a good choice for this task.
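The pipeline the abstract describes — embedded comment tokens fed through an LSTM, followed by a sigmoid head that scores six toxicity types independently — can be illustrated with a minimal, untrained pure-Python sketch. All dimensions and weights here are random placeholders, and the six label names are borrowed from the widely used Jigsaw toxic-comment taxonomy as an assumption; they do not reflect the paper's actual model, training, or category names.

```python
import math
import random

random.seed(0)

# Hypothetical tiny dimensions, for illustration only.
EMB_DIM, HIDDEN, N_LABELS = 4, 3, 6
# Assumed labels (Jigsaw taxonomy); the paper's categories may differ.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(W, v):
    return [sum(w * vi for w, vi in zip(row, v)) for row in W]

def lstm_step(x, h, c, params):
    """One LSTM cell step over the concatenated input [x; h]."""
    z = x + h
    i = [sigmoid(a) for a in matvec(params["Wi"], z)]   # input gate
    f = [sigmoid(a) for a in matvec(params["Wf"], z)]   # forget gate
    o = [sigmoid(a) for a in matvec(params["Wo"], z)]   # output gate
    g = [math.tanh(a) for a in matvec(params["Wg"], z)] # candidate cell state
    c_new = [fi * ci + ii * gi for fi, ii, gi, ci in zip(f, i, g, c)]
    h_new = [oi * math.tanh(ci) for oi, ci in zip(o, c_new)]
    return h_new, c_new

# Random, untrained parameters (biases omitted for brevity).
params = {name: rand_matrix(HIDDEN, EMB_DIM + HIDDEN)
          for name in ("Wi", "Wf", "Wo", "Wg")}
W_out = rand_matrix(N_LABELS, HIDDEN)

def predict(embedded_comment):
    """Run the LSTM over a sequence of word vectors; score each toxicity type."""
    h, c = [0.0] * HIDDEN, [0.0] * HIDDEN
    for x in embedded_comment:
        h, c = lstm_step(x, h, c, params)
    # Sigmoid (not softmax): a comment can carry several toxicity types at once.
    return {lab: sigmoid(l) for lab, l in zip(LABELS, matvec(W_out, h))}

# A mock 5-token comment as random "embeddings".
comment = [[random.uniform(-1, 1) for _ in range(EMB_DIM)] for _ in range(5)]
scores = predict(comment)
print(scores)
```

The per-label sigmoid head reflects that toxicity detection is a multi-label problem; in practice the embeddings would come from a pretrained model and the weights from training, neither of which this sketch includes.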

ISSN: 2079-9292
Download: Media:2021 - An Assessment of Deep Learning Models and Word Embeddings for Toxicity Detection within Online Textual Comments.pdf
Further information: Link
DOI Link: 10.3390/electronics10070779



Research group

Information Service Engineering


Research area