Type: Article
Publication Date: 2023-07-16
Citations: 3
DOI: https://doi.org/10.1109/igarss52108.2023.10282614
Knowledge distillation, a well-known model compression technique, is an active research area in both the computer vision and remote sensing communities. In this paper, we evaluate, in a remote sensing context, various off-the-shelf knowledge distillation methods for object detection that were originally developed on generic computer vision datasets such as Pascal VOC. In particular, methods covering both logit-mimicking and feature-imitation approaches are applied to vehicle detection on well-known benchmarks such as the xView and VEDAI datasets. Extensive experiments are performed to compare the relative performance and interrelationships of the methods. The experimental results show high variance and confirm the importance of result aggregation and cross-validation on remote sensing datasets.
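For readers unfamiliar with the two distillation families named in the abstract, the sketch below illustrates how a logit-mimicking loss and a feature-imitation loss are typically formed. It is a minimal illustration, not the implementation evaluated in the paper: the tensor shapes, the temperature value, and the 1x1 convolutional adapter are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation) of the two distillation
# families discussed in the abstract: logit mimicking and feature imitation.
# Shapes, temperature, and the channel adapter are illustrative assumptions.
import torch
import torch.nn.functional as F

def logit_mimicking_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened class distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

def feature_imitation_loss(student_feat, teacher_feat, adapter):
    """MSE between adapted student feature maps and teacher feature maps."""
    return F.mse_loss(adapter(student_feat), teacher_feat)

# Hypothetical usage: detection-head logits for 128 proposals over 20 classes,
# and backbone feature maps of shape (batch, channels, height, width).
student_logits = torch.randn(128, 20)
teacher_logits = torch.randn(128, 20)
student_feat = torch.randn(2, 256, 64, 64)
teacher_feat = torch.randn(2, 512, 64, 64)
adapter = torch.nn.Conv2d(256, 512, kernel_size=1)  # aligns channel dimensions

total_distill_loss = (
    logit_mimicking_loss(student_logits, teacher_logits)
    + feature_imitation_loss(student_feat, teacher_feat, adapter)
)
```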
Title | Year | Authors |
---|---|---|
Self-training and multi-task learning for limited data: evaluation study on object detection | 2023 | Hoàng-Ân Lê, Minh-Tan Pham |