Generating visual explanations from deep networks using implicit neural representations

Explaining deep learning models in a way that humans can easily understand is essential for responsible artificial intelligence applications. Attribution methods constitute an important area of explainable deep learning. The attribution problem involves finding the parts of the network's input that are most responsible for the model's output. In this …
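
To make the attribution problem concrete, below is a minimal sketch of one of the simplest attribution methods, plain gradient saliency, written in PyTorch. It is not the implicit-neural-representation approach described in this paper; the model and the random input image are placeholders chosen purely for illustration.

```python
import torch
import torchvision.models as models

# Illustrative gradient-saliency attribution: score the importance of each
# input pixel by the gradient of the predicted class score with respect to it.
# (Not the INR-based method of the paper; model and input are placeholders.)

model = models.resnet18(weights=None)   # in practice, a trained model would be used
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

scores = model(image)                        # class logits
top_class = scores.argmax(dim=1).item()      # explain the top prediction
scores[0, top_class].backward()              # gradient of that score w.r.t. pixels

# Per-pixel attribution map: max gradient magnitude across colour channels
saliency = image.grad.abs().max(dim=1)[0]    # shape (1, 224, 224)
print(saliency.shape)
```

The resulting saliency map assigns each pixel a relevance score; richer attribution methods refine this basic idea, for example by averaging gradients over perturbed inputs or, as in this work, by generating the explanation with a separate network.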