dc.contributor.advisor | Wörgötter, Florentin Prof. Dr. | |
dc.contributor.author | Aamir, Aisha | |
dc.date.accessioned | 2023-04-20T15:15:10Z | |
dc.date.available | 2023-04-27T00:50:10Z | |
dc.date.issued | 2023-04-20 | |
dc.identifier.uri | http://resolver.sub.uni-goettingen.de/purl?ediss-11858/14637 | |
dc.identifier.uri | http://dx.doi.org/10.53846/goediss-9848 | |
dc.format.extent | XXX Seiten | de |
dc.language.iso | eng | de |
dc.subject.ddc | 510 | de |
dc.title | Interpreting and visualizing the decisions of Deep Neural Networks | de |
dc.type | doctoralThesis | de |
dc.contributor.referee | Wörgötter, Florentin Prof. Dr. | |
dc.date.examination | 2022-04-29 | de |
dc.description.abstracteng | Machine learning algorithms based on deep neural networks have gained widespread attention in the computer vision domain. Significant progress has been made in automating application-dependent tasks, especially in medicine, autonomous driving and robotics, and considerable work has been done, and is still underway, to make automated systems secure and robust against failure. Nonetheless, researchers are still struggling to explain why a machine learning model made a particular decision. Deep neural networks in particular are considered "black boxes" in this regard, because their distributed encoding of information makes their decision-making especially hard to interpret.
In view of these challenges, this dissertation establishes methods to visualize and interpret the decisions of these complex machine learning models in image classification tasks. We adopt three types of post hoc methods, namely global, hybrid and local interpretability, to understand and assess which kinds of image features are vital to a classification decision and why. Hence, we call our approach "visualizing and interpreting the decisions of deep neural networks".
On the global level, we investigate and assess the deep network architecture as a whole, taking into account the connections between adjacent layers, the filters, and the functioning of the different hidden layers. To this end, we propose a visualization method in the form of a Caffe2Unity plugin that constructs and visualizes a complete AlexNet architecture in a virtual reality environment. This novel approach lets users become part of the virtual network and gives them the freedom to explore and visualize its internal states. Exploring the network in a virtual environment for a global assessment of how deep neural networks work benefits both the novices and the experts in our target audience.
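To make the global view concrete, here is a minimal sketch of extracting the kind of per-layer activation data such a virtual network display would render. It is an illustration only: torchvision's pretrained AlexNet (torchvision >= 0.13) stands in for the Caffe model used by the plugin, and the input is a placeholder tensor.

    # Sketch only: per-layer activations of the kind a VR network
    # visualization would display; torchvision's AlexNet stands in
    # for the original Caffe model used by the Caffe2Unity plugin.
    import torch
    from torchvision import models

    model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
    activations = {}

    def hook(name):
        def fn(module, inputs, output):
            activations[name] = output.detach()
        return fn

    # Record the output of every convolutional layer during a forward pass.
    for name, module in model.features.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            module.register_forward_hook(hook(f"features.{name}"))

    with torch.no_grad():
        model(torch.randn(1, 3, 224, 224))  # placeholder input image

    for name, act in activations.items():
        print(name, tuple(act.shape))  # e.g. features.0 (1, 64, 55, 55)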
Using the hybrid approach, we embedded a local interpretability module within our global virtual model that allowed the user to visualize and interpret the network in real time. The user could place an occlusion block on an image, visualize the results, and verify the network's decision via our reframed integrated Shapley values approach. In this way, we achieved our goal of determining which parts of an image the network considers important for its decision.
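The occlusion step can be pictured with a short sketch. The following is a generic occlusion-sensitivity routine, not the dissertation's reframed integrated Shapley values method; the patch size, stride and fill value are illustrative assumptions.

    # Sketch of plain occlusion sensitivity (a simpler stand-in for the
    # reframed integrated Shapley values described above). Slide a patch
    # over the image and record how much the target-class score drops.
    import torch

    def occlusion_map(model, image, target, patch=32, stride=16, fill=0.0):
        model.eval()
        _, _, h, w = image.shape
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        with torch.no_grad():
            base = model(image).softmax(1)[0, target].item()
            for i in range(rows):
                for j in range(cols):
                    occluded = image.clone()
                    y, x = i * stride, j * stride
                    occluded[:, :, y:y + patch, x:x + patch] = fill
                    score = model(occluded).softmax(1)[0, target].item()
                    heat[i, j] = base - score  # big drop => important region
        return heat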
At the local interpretability level, we proposed a layer-wise approach based on influence scores to gain deeper insight into the decision-making of pre-trained models. We used the layer-wise influence scores to determine what each layer has learned and which training data are most influential for a decision. By contrasting the most influential training images with the network's decisions, we also uncovered the network's bias towards image texture.
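As an illustration of the influence idea, the sketch below scores a training example by the dot product between its loss gradient and that of a test example, a simplified single-checkpoint proxy in the spirit of TracIn; it is not the dissertation's layer-wise formulation.

    # Sketch of a gradient-based influence score (simplified proxy, not
    # the layer-wise method of the dissertation): a training example is
    # influential for a test prediction when their loss gradients align.
    import torch

    def influence(model, loss_fn, train_x, train_y, test_x, test_y):
        params = [p for p in model.parameters() if p.requires_grad]
        g_train = torch.autograd.grad(loss_fn(model(train_x), train_y), params)
        g_test = torch.autograd.grad(loss_fn(model(test_x), test_y), params)
        # Sum of per-parameter dot products; larger => more influential.
        return sum((a * b).sum() for a, b in zip(g_train, g_test)).item()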
The proposed methods analyze different explainability and interpretability perspectives to study and unlock the "black-box" nature of deep neural networks in image classification tasks. These visualization and interpretability approaches can be extended to many other applications, particularly to understanding action prediction in robotics and object and scene understanding. | de |
dc.contributor.coReferee | Grabowski, Jens Prof. Dr. | |
dc.contributor.thirdReferee | Damm, Carsten Prof. Dr. | |
dc.contributor.thirdReferee | Hogrefe, Dieter Prof. Dr. | |
dc.contributor.thirdReferee | May, Wolfgang Prof. Dr. | |
dc.contributor.thirdReferee | Sinz, Fabian Prof. Dr. | |
dc.subject.eng | Deep Neural Networks | de |
dc.subject.eng | Explainable and Interpretable AI | de |
dc.subject.eng | Visualizations of DNNs | de |
dc.subject.eng | Virtual Reality | de |
dc.subject.eng | Interpretability of DNN Decisions | de |
dc.identifier.urn | urn:nbn:de:gbv:7-ediss-14637-5 | |
dc.affiliation.institute | Fakultät für Mathematik und Informatik | de |
dc.subject.gokfull | Informatik (PPN619939052) | de |
dc.description.embargoed | 2023-04-27 | de |
dc.identifier.ppn | 1843371863 | |
dc.notes.confirmationsent | Confirmation sent 2023-04-20T15:15:01 | de |