Generalization Bounds via Information Density and Conditional Information Density
We present a general approach, based on an exponential inequality, to derive bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average generalization error as well as bounds on its tail probability, for both the PAC-Bayesian and single-draw scenarios. Specifically, for the …
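For context, a representative average-case bound in this line of work is the well-known mutual-information bound of Xu and Raginsky, which the information-density approach recovers and refines; it is shown here only as a familiar reference point, not as the exact bound derived in this paper. For a loss that is $\sigma$-sub-Gaussian under the data distribution, with training set $S$ of $n$ i.i.d. samples and hypothesis $W$ produced by the (randomized) learning algorithm:

$$
\bigl|\,\mathbb{E}\!\left[L_{\mu}(W) - L_{S}(W)\right]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W;S)}{n}},
$$

where $L_{\mu}(W)$ is the population risk, $L_{S}(W)$ the empirical risk, and $I(W;S)$ the mutual information between the algorithm's output and the training data. Bounds of this type vanish as $n \to \infty$ whenever the algorithm leaks only a bounded amount of information about its training set.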