Wide Compression: Tensor Ring Nets

Type: Article

Publication Date: 2018-06-01

Citations: 108

DOI: https://doi.org/10.1109/cvpr.2018.00972

Abstract

Deep neural networks have demonstrated state-of-the-art performance in a variety of real-world applications. In order to obtain performance gains, these networks have grown larger and deeper, containing millions or even billions of parameters and over a thousand layers. The tradeoff is that these large architectures require an enormous amount of memory, storage, and computation, thus limiting their usability. Inspired by the recent tensor ring factorization, we introduce Tensor Ring Networks (TR-Nets), which significantly compress both the fully connected layers and the convolutional layers of deep neural networks. Our results show that our TR-Nets approach is able to compress LeNet-5 by 11× without losing accuracy, and can compress the state-of-the-art Wide ResNet by 243× with only 2.3% degradation in CIFAR-10 image classification. Overall, this compression scheme shows promise in scientific computing and deep learning, especially for emerging resource-constrained devices such as smartphones, wearables, and IoT devices.
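To make the parameter-count argument concrete, below is a minimal NumPy sketch of the tensor ring format itself, not the authors' implementation: a d-way tensor is stored as a ring of cores G_k of shape (r_k, n_k, r_{k+1}) with r_{d+1} = r_1, and the sketch reconstructs the full tensor and compares the number of stored values against the dense tensor. The function name, toy shapes, and rank are illustrative assumptions.

```python
import numpy as np

def tr_reconstruct(cores):
    """Rebuild a full tensor from its tensor-ring (TR) cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}), with r_{d+1} = r_1 so the
    chain of ranks closes into a ring. Entry (i_1, ..., i_d) of the full
    tensor equals trace(G_1[:, i_1, :] @ G_2[:, i_2, :] @ ... @ G_d[:, i_d, :]).
    """
    full = cores[0]  # shape (r_1, n_1, r_2)
    for core in cores[1:]:
        # contract the trailing rank axis of the running product with the
        # leading rank axis of the next core
        full = np.tensordot(full, core, axes=1)
    # close the ring: trace over the first and last rank axes
    return np.trace(full, axis1=0, axis2=-1)

# Toy example (hypothetical shapes): an 8x8x8x8 tensor, e.g. a reshaped
# layer weight, approximated with TR rank 3 throughout.
rng = np.random.default_rng(0)
dims, r = [8, 8, 8, 8], 3
cores = [rng.standard_normal((r, n, r)) for n in dims]

full = tr_reconstruct(cores)
print(full.shape)                  # (8, 8, 8, 8)
print(int(np.prod(dims)))          # 4096 values stored densely
print(sum(c.size for c in cores))  # 288 values in TR format (~14x fewer)
```

In TR-Nets, the weight tensors of fully connected and convolutional layers are represented in this factorized form and the cores are trained directly, which is where compression ratios such as the 243× reported for Wide ResNet come from.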

Locations

  • arXiv (Cornell University)

Similar Works

  • Wide Compression: Tensor Ring Nets (2018) - Wenqi Wang, Yifan Sun, Brian Eriksson, Wenlin Wang, Vaneet Aggarwal
  • Reduced storage direct tensor ring decomposition for convolutional neural networks compression (2024) - Mateusz Gabor, Rafał Zdunek
  • Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition (2022) - Marawan Gamal Abdel Hameed, Marzieh S. Tahaei, Ali Mosleh, Vahid Partovi Nia
  • Neural Network Compression Based on Tensor Ring Decomposition (2024) - Kun Xie, Can Liu, Xin Wang, Xiaocan Li, Gaogang Xie, Jigang Wen, Kenli Li
  • Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework (2021) - Miao Yin, Yang Sui, Siyu Liao, Bo Yuan
  • Towards Compact CNNs via Collaborative Compression (2021) - Yuchao Li, Shaohui Lin, Jianzhuang Liu, Qixiang Ye, Mengdi Wang, Fei Chao, Fan Yang, Jincheng Ma, Qi Tian, Rongrong Ji
  • Compressing neural network by tensor network with exponentially fewer variational parameters (2023) - Yong Qing, Pengfei Zhou, Ke Li, Shi-Ju Ran
  • Hybrid tensor decomposition in neural network compression (2020) - Bijiao Wu, Dingheng Wang, Guangshe Zhao, Lei Deng, Guoqi Li
  • Fast and Robust Compression of Deep Convolutional Neural Networks (2020) - Jia Wen, Liu Yang, Chenyang Shen
  • Tensorized Spectrum Preserving Compression for Neural Networks (2018) - Jiahao Su, Jingling Li, Bobby Bhattacharjee, Furong Huang
  • HOTCAKE: Higher Order Tucker Articulated Kernels for Deeper CNN Compression (2020) - Rui Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, Graziano Chesi, Ngai Wong
  • STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression (2022) - Chang Nie, Huan Wang, Lu Zhao
  • PSM-nets: Compressing Neural Networks with Product of Sparse Matrices (2021) - Luc Giffon, Stéphane Ayache, Hachem Kadri, Thierry Artières, Ronan Sicre
  • THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression (2023) - Minghao Li, Ran Ben Basat, Shay Vargaftik, ChonLam Lao, Kevin S. Xu, Xin‐Ran Tang, Michael Mitzenmacher, Minlan Yu
  • ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition (2018) - Zhisheng Zhong, Fangyin Wei, Zhouchen Lin, Chao Zhang
  • ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition (2019) - Zhisheng Zhong, Fangyin Wei, Zhouchen Lin, Chao Zhang
  • Nested compression of convolutional neural networks with Tucker-2 decomposition (2022) - Rafał Zdunek, Mateusz Gabor
  • An impact of tensor-based data compression methods on deep neural network accuracy (2021) - Jakub Grabek, Bogusław Cyganek