Explaining Deep Neural Networks by Leveraging Intrinsic Methods
Despite their impact on society, deep neural networks are often regarded as black-box models due to their intricate structures and the absence of explanations for their decisions. This opacity poses a significant challenge to the wider adoption and trustworthiness of AI systems. This thesis addresses this issue by contributing to the …