Optimizing Rank-Based Metrics With Blackbox Differentiation

Type: Article

Publication Date: 2020-06-01

Citations: 58

DOI: https://doi.org/10.1109/cvpr42600.2020.00764

Abstract

Rank-based metrics are some of the most widely used criteria for performance evaluation of computer vision models. Despite years of effort, direct optimization for these metrics remains a challenge due to their non-differentiable and non-decomposable nature. We present an efficient, theoretically sound, and general method for differentiating rank-based metrics with mini-batch gradient descent. In addition, we address optimization instability and sparsity of the supervision signal that both arise from using rank-based metrics as optimization targets. Resulting losses based on recall and Average Precision are applied to image retrieval and object detection tasks. We obtain performance that is competitive with state-of-the-art on standard image retrieval datasets and consistently improve performance of near state-of-the-art object detectors.
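
The gradient scheme the abstract alludes to can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' released code) of blackbox differentiation applied to a ranking operation: the forward pass calls the piecewise-constant ranker as-is, and the backward pass re-runs the ranker on scores perturbed by the incoming gradient, returning a finite-difference gradient of the implicitly interpolated mapping. The helper `rank`, the class name `BlackboxRank`, and the interpolation strength `lambda_val` are illustrative assumptions.

```python
import torch


def rank(scores: torch.Tensor) -> torch.Tensor:
    """Rank of each element along the last dim (0 = highest score); piecewise constant."""
    return torch.argsort(torch.argsort(scores, dim=-1, descending=True), dim=-1).float()


class BlackboxRank(torch.autograd.Function):
    """Blackbox differentiation of the non-differentiable ranking operation
    (sketch): backward perturbs the input with the incoming gradient, re-runs
    the ranker, and returns a finite-difference gradient."""

    @staticmethod
    def forward(ctx, scores, lambda_val):
        ranks = rank(scores)
        ctx.lambda_val = lambda_val
        ctx.save_for_backward(scores, ranks)
        return ranks

    @staticmethod
    def backward(ctx, grad_output):
        scores, ranks = ctx.saved_tensors
        # Perturb the scores in the direction of the incoming gradient and re-rank.
        perturbed_ranks = rank(scores + ctx.lambda_val * grad_output)
        # Finite-difference gradient of the interpolated ranking map.
        grad_scores = -(ranks - perturbed_ranks) / ctx.lambda_val
        return grad_scores, None


if __name__ == "__main__":
    scores = torch.randn(4, 8, requires_grad=True)
    ranks = BlackboxRank.apply(scores, 5.0)
    # Toy rank-based objective: push the (assumed) relevant first four items to low ranks.
    loss = ranks[:, :4].mean()
    loss.backward()
    print(scores.grad)
```

A rank-based loss such as recall or Average Precision would replace the toy objective above; the interpolation strength trades gradient informativeness against faithfulness to the original piecewise-constant metric.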

Locations

  • arXiv (Cornell University)
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Similar Works

  • Optimizing Rank-based Metrics with Blackbox Differentiation (2019). Michal Rolínek, Vít Musil, Anselm Paulus, Marin Vlastelica, Claudio Michaelis, Georg Martius
  • Optimization of Rank Losses for Image Retrieval (2023). Elias Ramzi, Nicolas Audebert, Clément Rambour, André Araujo, Xavier Bitot, Nicolas Thome
  • Efficient Optimization for Rank-Based Loss Functions (2018). Pritish Mohapatra, Michal Rolínek, C. V. Jawahar, Vladimir Kolmogorov, M. Pawan Kumar
  • Efficient Optimization for Rank-based Loss Functions (2016). Pritish Mohapatra, Michal Rolínek, C. V. Jawahar, Vladimir Kolmogorov, M. Pawan Kumar
  • MetricOpt: Learning to Optimize Black-Box Evaluation Metrics (2021). Chen Huang, Shuangfei Zhai, Pengsheng Guo, Josh Susskind
  • Recall@k Surrogate Loss with Large Batches and Similarity Mixup (2022). Yash Patel, Giorgos Tolias, Jiří Matas
  • Recall@k Surrogate Loss with Large Batches and Similarity Mixup (2021). Yash Patel, Giorgos Tolias, Jiří Matas
  • Robust and Decomposable Average Precision for Image Retrieval (2021). Elias Ramzi, Nicolas Thome, Clément Rambour, Nicolas Audebert, Xavier Bitot
  • Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval (2020). Andrew Brown, Weidi Xie, Vicky Kalogeiton, Andrew Zisserman
  • Optimize What You Evaluate With: A Simple Yet Effective Framework For Direct Optimization Of IR Metrics (2020). Hai-Tao Yu
  • Query-based Hard-Image Retrieval for Object Detection at Test Time (2022). Edward L. Ayers, Jonathan Sadeghi, John Redford, Romain Mueller, Puneet K. Dokania
  • Query-Based Hard-Image Retrieval for Object Detection at Test Time (2023). Edward L. Ayers, Jonathan Sadeghi, John Redford, Romain Mueller, Puneet K. Dokania
  • Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism (2023). Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang
  • Calibration-Aware Margin Loss: Pushing the Accuracy-Calibration Consistency Pareto Frontier for Deep Metric Learning (2023). Qin Zhang, Linghan Xu, Qingming Tang, Jun Fang, Ying Wu, Joe Tighe, Yifan Xing
  • A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection (2020). Kemal Öksüz, Barış Can Çam, Emre Akbaş, Sinan Kalkan
  • Learning With Average Precision: Training Image Retrieval With a Listwise Loss (2019). Jérôme Revaud, Jon Almazán, Rafael Sampaio de Rezende, César De Souza

Works That Cite This (34)

  • Rank-based Decomposable Losses in Machine Learning: A Survey (2023). Shu Hu, Xin Wang, Siwei Lyu
  • A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses (2020). Malik Boudiaf, Jérôme Rony, Imtiaz Masud Ziko, Éric Granger, Marco Pedersoli, Pablo Piantanida, Ismail Ben Ayed
  • Learning Convex Optimization Models (2020). Akshay Agrawal, Shane Barratt, Stephen Boyd
  • Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers (2020). Michal Rolínek, Paul Swoboda, Dominik Zietlow, Anselm Paulus, Vít Musil, Georg Martius
  • Rethinking Ranking-based Loss Functions: Only Penalizing Negative Instances before Positive Ones is Enough (2021). Zhuo Li, Weiqing Min, Jiajun Song, Yaohui Zhu, Shuqiang Jiang
  • Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones Is Enough (2022). Zhuo Li, Weiqing Min, Jiajun Song, Yaohui Zhu, Liping Kang, Xiaoming Wei, Xiaolin Wei, Shuqiang Jiang
  • Intra-Class Adaptive Augmentation With Neighbor Correction for Deep Metric Learning (2022). Zheren Fu, Zhendong Mao, Bo Hu, An-An Liu, Yongdong Zhang
  • Long-tail Detection with Effective Class-Margins (2022). Jang Hyun Cho, Philipp Krähenbühl
  • Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence (2021). Qi Qi, Youzhi Luo, Xu Zhao, Shuiwang Ji, Tianbao Yang
  • Learning Deep Representations via Contrastive Learning for Instance Retrieval (2022). Tao Wu, Tie Luo, Donald C. Wunsch