PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
Large capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods to address this problem uses various ways to construct a new training sample by mixing a pair (or more) of training samples. …
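To illustrate the class of methods the abstract refers to, the sketch below shows Mixup-style interpolation of two labeled samples, where both inputs and one-hot labels are combined with a coefficient drawn from a Beta distribution. This is a minimal sketch of input-space mixing in general (the function name and the `alpha` parameter are illustrative choices, not part of this paper); PatchUp itself, as the title states, instead operates at the block level in feature space.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0):
    # Illustrative sketch of Mixup-style sample mixing (not PatchUp itself).
    # Sample a mixing coefficient lambda from Beta(alpha, alpha).
    lam = np.random.beta(alpha, alpha)
    # Interpolate the two inputs and their one-hot labels with the same coefficient.
    x_mixed = lam * x1 + (1.0 - lam) * x2
    y_mixed = lam * y1 + (1.0 - lam) * y2
    return x_mixed, y_mixed
```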