Journal Article Details
Electronics
An Assessment of Deep Learning Models and Word Embeddings for Toxicity Detection within Online Textual Comments
Diego Reforgiato Recupero 1  Harald Sack 2  Danilo Dessì 2
[1] Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy; [2] FIZ Karlsruhe – Leibniz Institute for Information Infrastructure, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
Keywords: deep learning; word embeddings; toxicity detection; binary classification
DOI: 10.3390/electronics10070779
Source: DOAJ
【 Abstract 】

Today, increasing numbers of people interact online, and the explosion of online communication produces a large volume of textual comments. However, a major drawback of online environments is that comments shared on digital platforms can hide hazards such as fake news, insults, harassment, and, more generally, comments that may hurt someone’s feelings. In this scenario, the detection of this kind of toxicity plays an important role in moderating online communication. Deep learning technologies have recently delivered impressive performance in Natural Language Processing applications, including Sentiment Analysis and emotion detection, across numerous datasets. Such models do not need any pre-defined hand-picked features; instead, they learn sophisticated features from the input datasets by themselves. In this domain, word embeddings have been widely used to represent words in Sentiment Analysis tasks, proving to be very effective. Therefore, in this paper, we investigate the use of deep learning and word embeddings to detect six different types of toxicity within online comments. In doing so, we evaluate the most suitable deep learning layers and state-of-the-art word embeddings for identifying toxicity. The results suggest that Long Short-Term Memory layers in combination with mimicked word embeddings are a good choice for this task.
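
The abstract gives no implementation details, but the general approach it describes (pre-trained word embeddings fed into a Long Short-Term Memory layer, with one sigmoid output per toxicity type) can be illustrated with a short, hypothetical Keras sketch. This is not the authors' exact architecture: the vocabulary size, sequence length, embedding dimension, layer sizes, and the randomly filled embedding matrix are assumptions made only for illustration.

# Minimal sketch of an LSTM-based toxicity classifier, assuming the Keras API.
# NOT the authors' exact model: vocabulary size, sequence length, embedding
# dimension, and layer sizes below are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense
from tensorflow.keras.initializers import Constant

VOCAB_SIZE = 50_000   # assumed vocabulary size
MAX_LEN = 200         # assumed maximum comment length in tokens
EMBED_DIM = 300       # typical dimension of pre-trained word embeddings
NUM_LABELS = 6        # the six toxicity types targeted in the paper

# Placeholder for a pre-trained embedding matrix (e.g., loaded word vectors);
# random values are used here only so the sketch runs end to end.
embedding_matrix = np.random.rand(VOCAB_SIZE, EMBED_DIM).astype("float32")

model = Sequential([
    Input(shape=(MAX_LEN,)),
    Embedding(VOCAB_SIZE, EMBED_DIM,
              embeddings_initializer=Constant(embedding_matrix),
              trainable=False),                 # keep pre-trained vectors fixed
    LSTM(128),                                  # recurrent layer highlighted in the results
    Dropout(0.5),
    Dense(NUM_LABELS, activation="sigmoid"),    # one independent probability per type
])

# Binary cross-entropy treats each toxicity type as its own binary decision,
# matching the "binary classification" keyword of the paper.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Example: score a batch of two padded token-id sequences.
dummy_batch = np.random.randint(0, VOCAB_SIZE, size=(2, MAX_LEN))
print(model.predict(dummy_batch).shape)  # -> (2, 6)

With six sigmoid outputs and a binary cross-entropy loss, each comment can be assigned any subset of the six toxicity types independently, which is the usual multi-label framing of this task.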

【 License 】

Unknown   
