Data-Free Knowledge Distillation for Deep Neural Networks

Type: Preprint

Publication Date: 2017-10

Citations: 210

DOI: https://doi.org/10.48550/arXiv.1710.07535


Locations

  • arXiv (Cornell University)
  • DataCite API
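
For context on the record itself: the paper distills a trained teacher network into a smaller student without the original training set, by regenerating synthetic inputs from activation statistics ("metadata") recorded while the teacher was trained, then running standard knowledge distillation on those inputs. Below is a minimal PyTorch sketch of that two-step idea; the toy models, the single stored layer statistic, and all hyperparameters are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of data-free distillation: (1) optimize synthetic inputs
# so a teacher layer reproduces pre-recorded activation statistics, then
# (2) distill the teacher into a student on that synthetic batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()
for p in teacher.parameters():          # the teacher stays frozen throughout
    p.requires_grad_(False)

# "Metadata" saved at training time: here just the mean activation of one
# hidden layer, randomized so the sketch is self-contained.
stored_mean = torch.randn(256)
hidden = nn.Sequential(*list(teacher.children())[:3])  # Flatten -> Linear -> ReLU

# Step 1: reconstruct a synthetic batch whose activations match the record.
x = torch.randn(32, 1, 28, 28, requires_grad=True)
opt_x = torch.optim.Adam([x], lr=0.1)
for _ in range(200):
    opt_x.zero_grad()
    F.mse_loss(hidden(x).mean(dim=0), stored_mean).backward()
    opt_x.step()

# Step 2: ordinary soft-target distillation on the reconstructed batch.
x = x.detach()
T = 4.0                                  # softmax temperature
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    opt_s.zero_grad()
    s_logits = student(x)
    t_logits = teacher(x)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    loss.backward()
    opt_s.step()
```

The T*T factor is the usual correction from Hinton et al.'s soft-target formulation, keeping gradient magnitudes comparable across temperatures; the paper's actual reconstructions use richer per-layer statistics than a single mean.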

Similar Works

  • Dream Distillation: A Data-Independent Model Compression Framework (2019) - Kartikeya Bhardwaj, Naveen Suda, Radu Mărculescu
  • Few Shot Network Compression via Cross Distillation (2020) - Haoli Bai, Jiaxiang Wu, Irwin King, Michael R. Lyu
  • Few Shot Network Compression via Cross Distillation (2019) - Haoli Bai, Jiaxiang Wu, Irwin King, Michael R. Lyu
  • A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness (2023) - Zongxiong Chen, Jiahui Geng, Derui Zhu, Herbert Woisetschlaeger, Qing Li, Sonja Schimmler, Ruben Mayer, Chunming Rong
  • The Knowledge Within: Methods for Data-Free Model Compression (2019) - Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry
  • Private Model Compression via Knowledge Distillation (2019) - Ji Wang, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, Philip S. Yu
  • Private Model Compression via Knowledge Distillation (2018) - Ji Wang, Weidong Bao, Lichao Sun, Xiaomin Zhu, Bokai Cao, Philip S. Yu
  • Membership Encoding for Deep Learning (2019) - Congzheng Song, Reza Shokri
  • The Knowledge Within: Methods for Data-Free Model Compression (2020) - Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry
  • A Survey on Dataset Distillation: Approaches, Applications and Future Directions (2023) - Jiahui Geng, Zongxiong Chen, Yuandou Wang, Herbert Woisetschlaeger, Sonja Schimmler, Ruben Mayer, Zhiming Zhao, Chunming Rong
  • Dataset Distillation: A Comprehensive Review (2023) - Ruonan Yu, Songhua Liu, Xinchao Wang
  • Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning (2019) - Congzheng Song, Reza Shokri
  • PRANC: Pseudo RAndom Networks for Compacting deep models (2023) - Parsa Nooralinejad, Ali Abbasi, Soroush Abbasi Koohpayegani, Kossar Pourahmadi Meibodi, Rana Muhammad Shahroz Khan, Soheil Kolouri, Hamed Pirsiavash
  • Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis (2021) - Zi Wang
  • PRANC: Pseudo RAndom Networks for Compacting deep models (2022) - Parsa Nooralinejad, Ali Abbasi, Soheil Kolouri, Hamed Pirsiavash

Cited by (130)

  • On Cross-Layer Alignment for Model Fusion of Heterogeneous Neural Networks (2021) - Dang Nguyen, Khai T. Nguyen, Dinh Phung, Hung Bui, Nhat Ho
  • Class-Incremental Domain Adaptation (2020) - Jogendra Nath Kundu, Rahul Venkatesh, Naveen Venkat, Ambareesh Revanur, R. Venkatesh Babu
  • ES Attack: Model Stealing Against Deep Neural Networks Without Data Hurdles (2022) - Xiaoyong Yuan, Lei Ding, Lan Zhang, Xiaolin Li, Dapeng Wu
  • Distilling and transferring knowledge via cGAN-generated samples for image classification and regression (2022) - Xin Ding, Yongwei Wang, Zuheng Xu, Z. Jane Wang, William J. Welch
  • Data-Free Sketch-Based Image Retrieval (2023) - Abhra Chaudhuri, Ayan Kumar Bhunia, Yi-Zhe Song, Anjan Dutta
  • Relational Knowledge Distillation (2019) - Wonpyo Park, Dongju Kim, Yan Lu, Minsu Cho
  • Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion (2020) - Hongxu Yin, Pavlo Molchanov, Jose M. Álvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K. Jha, Jan Kautz
  • Knowledge distillation in deep learning and its applications (2021) - Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad
  • Dataset Condensation with Gradient Matching (2021) - Bo Zhao, Konda Reddy Mopuri, Hakan Bilen
  • Data-Free Learning of Student Networks (2019) - Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Qi Tian
  • Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack (2021) - Xinyi Zhang, Chengfang Fang, Jie Shi
  • Revisiting Knowledge Distillation for Object Detection (2021) - Amin Banitalebi-Dehkordi
  • Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation (2021) - Kenneth Borup, Lars Nørvang Andersen
  • Zero-Shot Knowledge Distillation in Deep Networks (2019) - Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty
  • Dataset Condensation with Gradient Matching (2020) - Bo Zhao, Konda Reddy Mopuri, Hakan Bilen
  • AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression (2021) - Baozhou Zhu, H. Peter Hofstee, Johan Peltenburg, Jinho Lee, Zaid Al-Ars
  • SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data (2023) - Hatef Otroshi Shahreza, Anjith George, Sébastien Marcel
  • Few Sample Knowledge Distillation for Efficient Network Compression (2020) - Tian-Hong Li, Jianguo Li, Zhuang Liu, Changshui Zhang
  • Data-Free Model Extraction (2021) - Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot
  • Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning (2021) - James Smith, Yen-Chang Hsu, Jonathan Balloch, Yilin Shen, Hongxia Jin, Zsolt Kira
  • Generative Zero-shot Network Quantization (2021) - Xiangyu He, Jiahao Lu, Weixiang Xu, Qinghao Hu, Peisong Wang, Jian Cheng
  • Few Shot Network Compression via Cross Distillation (2019) - Haoli Bai, Jiaxiang Wu, Irwin King, Michael R. Lyu
  • Zero-shot Adversarial Quantization (2021) - Yuang Liu, Wei Zhang, Jun Wang
  • Visualizing Adapted Knowledge in Domain Transfer (2021) - Yunzhong Hou, Liang Zheng
  • Applications and Techniques for Fast Machine Learning in Science (2022) - A. M. Deiana, Nhan Viet Tran, Joshua Agar, Michaela Blott, Giuseppe Di Guglielmo, J. Duarte, Philip Harris, Scott Hauck, Mia Liu, M. S. Neubauer
  • Enhancing Data-Free Adversarial Distillation with Activation Regularization and Virtual Interpolation (2021) - Xiaoyang Qu, Jianzong Wang, Jing Xiao
  • Few Sample Knowledge Distillation for Efficient Network Compression (2018) - Tian-Hong Li, Jianguo Li, Zhuang Liu, Changshui Zhang
  • Source-Free Domain Adaptation for Semantic Segmentation (2021) - Yuang Liu, Wei Zhang, Jun Wang
  • Generative Zero-shot Network Quantization (2021) - Xiangyu He, Qinghao Hu, Peisong Wang, Jian Cheng
  • Explicit and Implicit Knowledge Distillation via Unlabeled Data (2023) - Yuzheng Wang, Zuhao Ge, Zhaoyu Chen, Xian Liu, Chuangjia Ma, Yunquan Sun, Lizhe Qi
  • Conditional generative data-free knowledge distillation (2023) - Xinyi Yu, Ling Yan, Yang Yang, Libo Zhou, Linlin Ou
  • Knowledge Distillation as Semiparametric Inference (2021) - Tri Dao, Govinda M. Kamath, Vasilis Syrgkanis, Lester Mackey
  • The Knowledge Within: Methods for Data-Free Model Compression (2020) - Matan Haroush, Itay Hubara, Elad Hoffer, Daniel Soudry
  • Dense depth distillation with out-of-distribution simulated images (2023) - Junjie Hu, Chenyou Fan, Mete Özay, Hualie Jiang, Tin Lun Lam
  • Dual Discriminator Adversarial Distillation for Data-free Model Compression (2021) - Haoran Zhao, Xin Sun, Junyu Dong, Hui Yu, Huiyu Zhou
  • Data-Free Model Extraction (2020) - Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot
  • Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning (2023) - Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, Dacheng Tao
  • Data-Free Knowledge Distillation for Heterogeneous Federated Learning (2021) - Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou
  • A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption (2024) - Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder