COCO: a platform for comparing continuous optimizers in a black-box setting

Type: Article

Publication Date: 2020-08-25

Citations: 187

DOI: https://doi.org/10.1080/10556788.2020.1808977

Abstract

We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims to automate, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology make it possible to benchmark deterministic and stochastic solvers for both single- and multi-objective optimization within the same framework. We present the rationale behind the decade-long development of the platform as a general proposition for guidelines towards better benchmarking. We detail the fundamental concepts underlying COCO, such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined as the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
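The abstract's central measurement idea (a problem as a function instance, a set of target values, and runtime counted in function evaluations until each target is reached) can be sketched in a few lines of Python. This is a hypothetical illustration of the concept only, not the actual COCO (`cocoex`) API; `ProblemInstance`, `random_search_runtimes`, and the shifted sphere function are invented for this example.

```python
import random

class ProblemInstance:
    """A 'problem' as a function instance: a base function (here the
    sphere) combined with an instance-specific shift. Hypothetical
    sketch of the concept, not the actual COCO API."""

    def __init__(self, shift):
        self.shift = shift
        self.evaluations = 0  # runtime = number of function calls so far

    def __call__(self, x):
        self.evaluations += 1
        return sum((xi - si) ** 2 for xi, si in zip(x, self.shift))

def random_search_runtimes(problem, targets, budget, seed=1):
    """Run pure random search on `problem` and record, for each target
    value, the runtime (evaluation count) at which it is first reached."""
    rng = random.Random(seed)
    dim = len(problem.shift)
    best = float("inf")
    runtimes = {}
    while problem.evaluations < budget and len(runtimes) < len(targets):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, problem(x))
        for t in targets:
            if t not in runtimes and best <= t:
                runtimes[t] = problem.evaluations
    return runtimes

# Easier targets are reached first, so the recorded runtimes grow
# as the targets get tighter.
problem = ProblemInstance(shift=[1.0, -2.0])
runtimes = random_search_runtimes(problem, targets=[10.0, 1.0, 0.1], budget=10_000)
```

Because runtime is budget-independent once a target is hit, such per-target runtimes can be aggregated across instances and compared between deterministic and stochastic solvers alike, which is the rationale the abstract alludes to.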

Locations

  • Optimization Methods & Software
  • arXiv (Cornell University) (PDF available)
  • HAL (Le Centre pour la Communication Scientifique Directe) (PDF available)
  • DataCite API

Similar Works

  • COCO: The Experimental Procedure (2016). Nikolaus Hansen, Tea Tušar, Olaf Mersmann, Anne Auger, Dimo Brockhoff
  • COCO: Performance Assessment (2016). Nikolaus Hansen, Anne Auger, Dimo Brockhoff, Dejan Tušar, Tea Tušar
  • Benchmarking large-scale continuous optimizers: The bbob-largescale testbed, a COCO software guide and beyond (2020). Konstantinos Varelas, Ouassim Ait El Hara, Dimo Brockhoff, Nikolaus Hansen, Duc Manh Nguyen, Tea Tušar, Anne Auger
  • MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts (2023). Diederick Vermetten, Furong Ye, Thomas Bäck, Carola Doerr
  • Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites (2021). Dimo Brockhoff, Anne Auger, Nikolaus Hansen, Tea Tušar
  • Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics (2018). Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck
  • Anytime Performance Assessment in Blackbox Optimization Benchmarking (2022). Nikolaus Hansen, Anne Auger, Dimo Brockhoff, Tea Tušar
  • Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites (2016). Dimo Brockhoff, Anne Auger, Nikolaus Hansen, Tea Tušar
  • COCO - COmparing Continuous Optimizers: The Documentation (2011). Nikolaus Hansen, Steffen Finck, Raymond Ros
  • Biobjective Performance Assessment with the COCO Platform (2016). Dimo Brockhoff, Tea Tušar, Dejan Tušar, Tobias Wagner, Nikolaus Hansen, Anne Auger
  • IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics (2020). Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck
  • IOHanalyzer: Detailed Performance Analyses for Iterative Optimization Heuristics (2022). Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck
  • Black-Box Optimization Revisited: Improving Algorithm Selection Wizards Through Massive Benchmarking (2021). Laurent Meunier, Herilalaina Rakotoarison, Pak Kan Wong, Baptiste Rozière, Jérémy Rapin, Olivier Teytaud, A. Moreau, Carola Doerr
  • Towards a Theory-Guided Benchmarking Suite for Discrete Black-Box Optimization Heuristics: Profiling $(1+\lambda)$ EA Variants on OneMax and LeadingOnes (2018). Carola Doerr, Furong Ye, Sander van Rijn, Hao Wang, Thomas Bäck
  • IOHanalyzer: Performance Analysis for Iterative Optimization Heuristic (2020). Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, Thomas Bäck
  • Benchmarking in Optimization: Best Practice and Open Issues (2020). Thomas Bartz-Beielstein, Carola Doerr, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, Manuel López-Ibáñez, Katherine M. Malan, Jason H. Moore
  • COCOpf: An Algorithm Portfolio Framework (2014). P. Baudis

Works That Cite This (105)

  • A Comparative Study of Large-Scale Variants of CMA-ES (2018). Konstantinos Varelas, Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Ouassim Ait Elhara, Yann Semet, Rami Kassab, Frédéric Barbaresco
  • CEC Real-Parameter Optimization Competitions: Progress from 2013 to 2018 (2019). Urban Škvorc, Tome Eftimov, Peter Korošec
  • LassoBench: A High-Dimensional Hyperparameter Optimization Benchmark Suite for Lasso (2021). Kenan Šehić, Alexandre Gramfort, Joseph Salmon, Luigi Nardi
  • COCO: The Large Scale Black-Box Optimization Benchmarking (bbob-largescale) Test Suite (2019). Ouassim Ait Elhara, Konstantinos Varelas, Duc Manh Nguyen, Tea Tušar, Dimo Brockhoff, Nikolaus Hansen, Anne Auger
  • TREGO: a trust-region framework for efficient global optimization (2022). Youssef Diouane, Victor Picheny, Rodolphe Le Riche, Alexandre Scotto Di Perrotolo
  • COCO: a platform for comparing continuous optimizers in a black-box setting (2020). Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, Dimo Brockhoff
  • Benchmarking large scale variants of CMA-ES and L-BFGS-B on the bbob-largescale testbed (2019). Konstantinos Varelas
  • Benchmarking large-scale continuous optimizers: The bbob-largescale testbed, a COCO software guide and beyond (2020). Konstantinos Varelas, Ouassim Ait El Hara, Dimo Brockhoff, Nikolaus Hansen, Duc Manh Nguyen, Tea Tušar, Anne Auger
  • A test-suite of non-convex constrained optimization problems from the real-world and some baseline results (2020). Abhishek Kumar, Guohua Wu, Mostafa Z. Ali, Rammohan Mallipeddi, Ponnuthurai Nagaratnam Suganthan, Swagatam Das
  • Permuted Orthogonal Block-Diagonal Transformation Matrices for Large Scale Optimization Benchmarking (2016). Ouassim Ait Elhara, Anne Auger, Nikolaus Hansen