
Interpreting and visualizing the decisions of Deep Neural Networks

dc.contributor.advisor: Wörgötter, Florentin Prof. Dr.
dc.contributor.author: Aamir, Aisha
dc.date.accessioned: 2023-04-20T15:15:10Z
dc.date.available: 2023-04-27T00:50:10Z
dc.date.issued: 2023-04-20
dc.identifier.uri: http://resolver.sub.uni-goettingen.de/purl?ediss-11858/14637
dc.identifier.uri: http://dx.doi.org/10.53846/goediss-9848
dc.format.extent: XXX Seiten
dc.language.iso: eng
dc.subject.ddc: 510
dc.title: Interpreting and visualizing the decisions of Deep Neural Networks
dc.type: doctoralThesis
dc.contributor.referee: Wörgötter, Florentin Prof. Dr.
dc.date.examination: 2022-04-29
dc.description.abstracteng: Machine learning and its associated algorithms, in particular deep neural networks, have gained widespread recognition in the computer vision domain. Significant progress has been made in automating application-dependent tasks, especially in medicine, autonomous driving and robotics, and considerable work has been done, and is still underway, to make automated systems secure and robust against failures. Nonetheless, researchers are struggling to explain why a machine learning model made a certain decision. Deep neural networks in particular are considered "black boxes" in this regard, because their distributed encoding of information makes it especially challenging to interpret their decision making.

In view of these challenges, this dissertation establishes methods to visualize and interpret the decisions of such complex machine learning models in image classification tasks. We opt for three types of post hoc methods, i.e., global, hybrid and local interpretability, to understand and assess the reasons behind a decision and the kinds of image features that are vital to it. Hence, we call our approach "visualizing and interpreting the decisions of deep neural networks".

On the global level, we investigate and assess the deep network architecture as a whole, taking into account the connections between adjacent layers, the filters and the functioning of the different hidden layers. We propose a visualization method in the form of a Caffe2Unity plugin that constructs and visualizes a complete AlexNet architecture in a virtual reality environment. This novel approach allows users to become part of the virtual network and gives them the freedom to explore and visualize its internal states. Exploring and visualizing the network in a virtual environment for global assessment and understanding of deep neural networks benefits both novices and experts in our target audience.

On the hybrid level, we embed a local interpretability module within the global virtual model that allows the user to visualize and interpret the network in real time. The user can place an occlusion block on an image, visualize the results, and verify the network's decision via our reframed integrated Shapley values approach. In this way, we can determine which parts of an image the network considers important for its decision.

On the local level, we propose a layer-wise approach that uses influence scores to gain deeper insight into the decision making of pre-trained models. The layer-wise influence scores reveal what each layer has learned and which training images are most influential for a decision. By contrasting the influential images with the network's decision, we also identified a bias of the network towards image texture.

The proposed methods analyze different explainable and interpretable perspectives to study and unlock the "black-box" nature of deep neural networks in image classification tasks. The visualization and interpretability approaches can be applied in many other settings, in particular to understanding action prediction in robotics and object and scene understanding.
dc.contributor.coReferee: Grabowski, Jens Prof. Dr.
dc.contributor.thirdReferee: Damm, Carsten Prof. Dr.
dc.contributor.thirdReferee: Hogrefe, Dieter Prof. Dr.
dc.contributor.thirdReferee: May, Wolfgang Prof. Dr.
dc.contributor.thirdReferee: Sinz, Fabian Prof. Dr.
dc.subject.eng: Deep Neural Networks
dc.subject.eng: Explainable and Interpretable AI
dc.subject.eng: Visualizations of DNNs
dc.subject.eng: Virtual Reality
dc.subject.eng: Interpretability of DNNs Decisions
dc.identifier.urn: urn:nbn:de:gbv:7-ediss-14637-5
dc.affiliation.institute: Fakultät für Mathematik und Informatik
dc.subject.gokfull: Informatik (PPN619939052)
dc.description.embargoed: 2023-04-27
dc.identifier.ppn: 1843371863
dc.notes.confirmationsent: Confirmation sent 2023-04-20T15:15:01
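
The abstract above mentions placing an occlusion block on an input image to see which regions drive the classifier's decision. The following is a minimal occlusion-sensitivity sketch of that general idea only; it uses PyTorch/torchvision with an arbitrary patch size, stride and fill value, not the Caffe2Unity plugin or the reframed integrated Shapley values formulation developed in the thesis, and the untrained AlexNet and random input are placeholders for a real model and a preprocessed image.

# Minimal occlusion-sensitivity sketch (illustration only, see caveats above):
# slide a patch over the image and record how much the target-class probability drops.
import torch
import torchvision.models as models

def occlusion_map(model, image, target_class, patch=32, stride=16, fill=0.0):
    """Return a heatmap where large values mark regions the model relies on."""
    model.eval()
    _, height, width = image.shape
    rows = (height - patch) // stride + 1
    cols = (width - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    with torch.no_grad():
        # Confidence on the unmodified image serves as the reference value.
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        for i in range(rows):
            for j in range(cols):
                occluded = image.clone()
                y, x = i * stride, j * stride
                occluded[:, y:y + patch, x:x + patch] = fill
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heat[i, j] = base - prob  # importance = drop in confidence
    return heat

if __name__ == "__main__":
    # AlexNet matches the architecture visualized in the thesis; the random tensor
    # stands in for a properly preprocessed 224x224 RGB image.
    net = models.alexnet()
    img = torch.rand(3, 224, 224)
    print(occlusion_map(net, img, target_class=0).shape)  # e.g. torch.Size([13, 13])

Occlusion is one of the simplest local attribution probes; in the dissertation such occlusion results are additionally verified with a Shapley-value-based approach inside the virtual reality environment.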

