IEEE Access
TSViz: Demystification of Deep Learning Models for Time-Series Analysis
Sheraz Ahmed¹, Mohsin Munir¹, Shoaib Ahmed Siddiqui¹, Dominique Mercier¹, Andreas Dengel¹
[1] German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
Keywords: deep learning; representation learning; convolutional neural networks; time-series analysis; time-series forecasting; feature importance
DOI: 10.1109/ACCESS.2019.2912823
Source: DOAJ
【Abstract】
This paper presents a novel framework for the demystification of convolutional deep learning models for time-series analysis. It is a step toward making informed, explainable decisions in time-series domains powered by deep learning. There have been numerous efforts to increase the interpretability of image-centric deep neural network models, where the learned features are more intuitive to visualize. Visualization in the time-series domain is significantly more challenging, since the filters and inputs have no direct interpretation compared with the imaging modality. In addition, little attention has been devoted to the development of such tools for time series in the past. TSViz makes it possible to explore and analyze a network along different dimensions and at different levels of abstraction: identifying the parts of the input responsible for a particular prediction (including per-filter saliency), ranking the importance of the different filters in the network, capturing the diversity present in the network through filter clustering, understanding the main sources of variation learned by the network through inverse optimization, and analyzing the network's robustness against adversarial noise. As a sanity check for the computed influence values, we demonstrate results on pruning neural networks based on the computed influence information. These representations allow users to better understand the network, thereby enhancing the acceptability of deep models for time-series analysis. This is extremely important in domains such as finance, Industry 4.0, self-driving cars, health care, and counter-terrorism, where the reasons for reaching a particular prediction are as important as the prediction itself. We assess the proposed framework for interpretability against a set of desirable properties essential for any method in this direction.
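The per-filter input saliency mentioned in the abstract can be illustrated generically as the gradient magnitude of a model's prediction with respect to each input time step. The following is a minimal numpy sketch under that assumption: a toy one-filter 1-D convolutional scorer with finite-difference gradients, not the authors' TSViz implementation, and all names are hypothetical.

```python
import numpy as np

def conv1d_relu_score(x, w):
    """Toy 1-D CNN head: valid cross-correlation with filter w, ReLU, sum-pool to a scalar score."""
    conv = np.convolve(x, w[::-1], mode="valid")  # reversing w turns convolution into cross-correlation
    return np.maximum(conv, 0.0).sum()

def saliency(x, w, eps=1e-4):
    """Saliency per time step: |d score / d x_t|, estimated with central finite differences."""
    grads = np.zeros_like(x)
    for t in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[t] += eps
        xm[t] -= eps
        grads[t] = (conv1d_relu_score(xp, w) - conv1d_relu_score(xm, w)) / (2 * eps)
    return np.abs(grads)

rng = np.random.default_rng(0)
series = rng.normal(size=32)                # stand-in univariate time series
kernel = np.array([0.25, 0.5, 0.25])        # stand-in for a single learned filter
s = saliency(series, kernel)
print(s.round(3))                           # high values mark time steps driving this filter's response
```

In a real network, the finite-difference loop would be replaced by framework autodiff, and repeating the computation per convolutional filter yields the per-filter saliency maps the paper describes.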
【License】
Unknown