Incremental Residual Concept Bottleneck Models

Type: Preprint

Publication Date: 2024-04-13

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2404.08978

Abstract

Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts and use those concepts to make predictions, enhancing the transparency of the decision-making process. Multimodal pre-trained models can match visual representations with textual concept embeddings, making it possible to obtain an interpretable concept bottleneck without expert concept annotations. Recent research has focused on concept bank construction and high-quality concept selection. However, it is difficult for humans or large language models to construct a comprehensive concept bank, which severely limits the performance of CBMs. In this work, we propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness. Specifically, the residual concept bottleneck model employs a set of optimizable vectors to complete the missing concepts; the incremental concept discovery module then converts these complemented vectors, whose meanings are initially unclear, into potential concepts drawn from a candidate concept bank. Our approach can be applied to any user-defined concept bank as a post-hoc processing method to enhance the performance of any CBM. Furthermore, to measure the descriptive efficiency of CBMs, we propose the Concept Utilization Efficiency (CUE) metric. Experiments show that Res-CBM outperforms current state-of-the-art methods in both accuracy and efficiency, achieving performance comparable to black-box models across multiple datasets.
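Based on the abstract's description, the core idea can be sketched as follows: the bottleneck scores a visual feature against known concept embeddings plus learnable residual vectors, and discovery grounds each residual vector in its nearest candidate concept by cosine similarity. This is a minimal illustrative sketch, not the paper's implementation — all names, dimensions, and the random embeddings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8            # embedding dimension (illustrative)
n_concepts = 5   # concepts in the user-defined bank
n_residual = 2   # optimizable vectors completing missing concepts
n_candidates = 20  # size of the candidate concept bank

# Known concept embeddings (in practice, from a multimodal text encoder)
# and residual vectors that would be optimized jointly with the classifier.
bank = rng.normal(size=(n_concepts, d))
residual = rng.normal(size=(n_residual, d))

def bottleneck(visual_feat):
    """Score a visual feature against known + residual concepts."""
    full_bank = np.vstack([bank, residual])
    return full_bank @ visual_feat  # one activation per (real or residual) concept

# Incremental concept discovery: map each residual vector to the
# best-matching candidate concept by cosine similarity.
candidates = rng.normal(size=(n_candidates, d))

def discover(residual_vecs, candidate_bank):
    r = residual_vecs / np.linalg.norm(residual_vecs, axis=1, keepdims=True)
    c = candidate_bank / np.linalg.norm(candidate_bank, axis=1, keepdims=True)
    sims = r @ c.T                 # (n_residual, n_candidates) cosine similarities
    return sims.argmax(axis=1)     # index of the nearest candidate concept

scores = bottleneck(rng.normal(size=d))  # (n_concepts + n_residual,) activations
idx = discover(residual, candidates)     # one discovered concept per residual vector
```

In the full method the residual vectors are trained, discovery is applied iteratively to grow the bank, and vectors without a close candidate match would presumably be kept or discarded by a similarity threshold; none of that bookkeeping is shown here.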

Locations

  • arXiv (Cornell University)

Similar Works

  • Concept Bottleneck Models Without Predefined Concepts (2024) — Simon Schrodi, Julian Schur, Max Argus, Thomas Brox
  • V2C-CBM: Building Concept Bottlenecks with Vision-to-Concept Tokenizer (2025) — Hangzhou He, Lei Zhu, Xinliang Zhang, Shuang Zeng, Qian Chen, Yanye Lu
  • Post-hoc Concept Bottleneck Models (2022) — Mert Yüksekgönül, Maggie Haitian Wang, James Zou
  • Improving Concept Alignment in Vision-Language Concept Bottleneck Models (2024) — Nithish Muthuchamy Selvaraj, Xiaobao Guo, Bingquan Shen, Adams Wai‐Kin Kong, Alex C. Kot
  • VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance (2024) — Divyansh Srivastava, Ge Yan, Tsui-Wei Weng
  • Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification (2023) — Injae Kim, Jongha Kim, Joonmyung Choi, Hyunwoo J. Kim
  • Semi-supervised Concept Bottleneck Models (2024) — Lijie Hu, Tianhao Huang, Huanyi Xie, Chenyang Ren, Zhengyu Hu, Lu Yu, Di Wang
  • Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation (2023) — Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
  • Auxiliary Losses for Learning Generalizable Concept-based Models (2023) — Ivaxi Sheth, Samira Ebrahimi Kahou
  • Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery (2024) — Sukrut Rao, Sanket Mahajan, Moritz Böhle, Bernt Schiele
  • Explain via Any Concept: Concept Bottleneck Model with Open Vocabulary Concepts (2024) — Andong Tan, Fengtao Zhou, Hao Chen
  • Concept Bottleneck Model with Additional Unsupervised Concepts (2022) — Yoshihide Sawada, Keigo Nakamura
  • Stochastic Concept Bottleneck Models (2024) — Moritz Vandenhirtz, Sonia Laguna, Ričards Marcinkevičs, Julia E. Vogt
  • Label-Free Concept Bottleneck Models (2023) — Tuomas Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng
  • EQ-CBM: A Probabilistic Concept Bottleneck with Energy-based Models and Quantized Vectors (2024) — Sangwon Kim, Dasom Ahn, Byoung Chul Ko, In‐Su Jang, Kwang-Ju Kim
  • Automatic Concept Extraction for Concept Bottleneck-based Video Classification (2022) — Jeya Vikranth Jeyakumar, Luke Dickens, Luis García, Yu-Hsi Cheng, Diego Ramirez Echavarria, Joseph Noor, Alessandra Russo, Lance Kaplan, Erik Blasch, Mani Srivastava
  • SurroCBM: Concept Bottleneck Surrogate Models for Generative Post-hoc Explanation (2023) — Bo Pan, Zhenke Liu, Yifei Zhang, Liang Zhao
  • Understanding Multimodal Deep Neural Networks: A Concept Selection View (2024) — Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang
  • Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification (2022) — Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar
  • AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model (2024) — Gabriele Dominici, Pietro Barbiero, Francesco Giannini, Martin Gjoreski, Marc Langheinrich

Works That Cite This (0)


Works Cited by This (0)
