Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link

Type: Article

Publication Date: 2022-02-15

Citations: 6

DOI: https://doi.org/10.1080/03461238.2022.2037016

Abstract

Thanks to its outstanding performance, boosting has rapidly gained wide acceptance among actuaries. To speed up calculations, boosting is often applied to gradients of the loss function rather than to responses (hence the name gradient boosting). When the model is trained by minimizing the Poisson deviance, this amounts to applying the least-squares principle to raw residuals. This exposes gradient boosting to the same problems that led to replacing least squares with Poisson Generalized Linear Models (GLMs) for analyzing low counts (typically, the number of reported claims at policy level in personal lines). This paper shows that boosting can be conducted directly on the responses under the Tweedie loss function and log-link, by adapting the weights at each step. Numerical illustrations demonstrate similar or better performance compared to gradient boosting when trees are used as weak learners, with a higher level of transparency since responses are used instead of gradients.
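The contrast drawn in the abstract can be made concrete in the Poisson special case of the Tweedie family. The following is a minimal sketch, not the paper's exact algorithm: it assumes scikit-learn's `DecisionTreeRegressor` as the weak learner and simulated claim counts. Under log-link, the gradient-boosting step fits least squares to the raw residuals y - mu, whereas a response-based step fits the responses themselves (here via the ratio y / mu, with the current fitted means mu entering as weights, so that the current score acts as an offset).

```python
# Hedged illustration: response-based boosting step for Poisson counts
# under log-link, contrasted (in comments) with the classic gradient step.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))
y = rng.poisson(np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1]))  # simulated claim counts

F = np.full(len(y), np.log(y.mean()))   # initial score on the log scale
lr = 0.1                                # shrinkage / learning rate
for _ in range(50):
    mu = np.exp(F)                      # current fitted means (log-link)
    # (a) classic gradient boosting would least-squares-fit the raw residuals:
    #     tree = DecisionTreeRegressor(max_depth=2).fit(X, y - mu)
    #     F += lr * tree.predict(X)
    # (b) response-based step: fit the ratio y / mu with weights mu, which is
    #     equivalent to modelling the responses with the current score as offset
    tree = DecisionTreeRegressor(max_depth=2, criterion="poisson")
    tree.fit(X, y / mu, sample_weight=mu)
    F += lr * np.log(np.maximum(tree.predict(X), 1e-12))  # multiplicative update
```

Each leaf of the response-based tree predicts the weighted mean sum(y) / sum(mu) over its observations, i.e. a multiplicative correction to the current fitted means, so the Poisson deviance cannot increase at any step with a learning rate in (0, 1].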

Locations

  • Scandinavian Actuarial Journal
  • Institutional repository (Dépôt institutionnel) of the Université libre de Bruxelles (PDF available)

Similar Works

  • From Point to Probabilistic Gradient Boosting for Claim Frequency and Severity Prediction (2024) – Dominique Chevalier‐Lucia, Marie‐Pier Côté
  • Understanding Gradient Boosting Classifier: Training, Prediction, and the Role of $\gamma_j$ (2024) – Hung‐Hsuan Chen
  • Erweiterung inferenzstatistischer Fähigkeiten modellbasierter Gradient-Boosting Algorithmen [Extending the inferential-statistics capabilities of model-based gradient-boosting algorithms] (2019) – Tobias Hepp
  • Zero-Inflated Tweedie Boosted Trees with CatBoost for Insurance Loss Analytics (2024) – Banghee So, Emiliano A. Valdez
  • RiskLogitboost Regression for Rare Events in Binary Response: An Econometric Approach (2021) – Jessica Pesantez-Narvaez, Montserrat Guillén, Manuela Alcañiz
  • Insurance Premium Prediction via Gradient Tree-Boosted Tweedie Compound Poisson Models (2015) – Yi Yang, Wei Qian, Hui Zou
  • Insurance Premium Prediction via Gradient Tree-Boosted Tweedie Compound Poisson Models (2016) – Yi Yang, Wei Qian, Hui Zou
  • Enhanced Gradient Boosting for Zero-Inflated Insurance Claims and Comparative Analysis of CatBoost, XGBoost, and LightGBM (2023) – Banghee So
  • Comment: Boosting Algorithms: Regularization, Prediction and Model Fitting (2007) – Andreas Buja, David Mease, Abraham J. Wyner
  • A Boosted Tweedie Compound Poisson Model for Insurance Premium (2015) – Yi Yang, Wei Qian, Hui Zou
  • Gradient and Newton Boosting for Classification and Regression (2018) – Fabio Sigrist
  • Gradient and Newton Boosting for Classification and Regression (2020) – Fabio Sigrist
  • Boosting Insights in Insurance Tariff Plans with Tree-Based Machine Learning Methods (2019) – Roel Henckaerts, Marie‐Pier Côté, Katrien Antonio, Roel Verbelen
  • Approaches to Regularized Regression – A Comparison between Gradient Boosting and the Lasso (2016) – Matthias Schmid, Olaf Gefeller, Elisabeth Waldmann, Andreas Mayr, Tobias Hepp
  • Boosting Insights in Insurance Tariff Plans with Tree-Based Machine Learning Methods (2020) – Roel Henckaerts, Marie‐Pier Côté, Katrien Antonio, Roel Verbelen
  • GAM(L)A: An Econometric Model for Interpretable Machine Learning (2022) – Emmanuel Flachaire, Gilles Hacheme, Sullivan Hué, Sébastien Laurent
  • L2-Boosting for Economic Applications (2017) – Ye Luo, Martin Spindler
  • Cost-Sensitive Stochastic Gradient Boosting Within a Quantitative Regression Framework (2007) – Richard A. Berk, Brian Kriegler