Linear Progressive Coding for Semantic Communication using Deep Neural Networks

Type: Article

Publication Date: 2024-03-13

Citations: 0

DOI: https://doi.org/10.1109/ciss59072.2024.10480188

Abstract

We propose a novel linear progressive coding framework for obtaining hierarchical compressed representations (measurements) of data, so that machine learning tasks at hierarchical levels of granularity can be performed in a timely and accurate manner using these representations. We first encode the data into optimized low-rate coarse linear representations or measurements, which can be quickly communicated to the receiver and used for timely and accurate coarse-level classification. We then design an additional set of optimized linear measurements or representations of the data so that the receiver can perform accurate finer-level classification using these newly communicated representations together with the previously received coarse representations. Our proposed method can be viewed as optimized hierarchical compressed learning, or as progressive semantic communication optimized for hierarchical-grain machine learning tasks, using low-cost linear measurements. Our experimental results on the MNIST and CIFAR-10 datasets show that the linear progressive measurements enable coarse-level machine learning tasks to be performed in a timely manner with only a small number of initial measurements, while for finer-level tasks they achieve overall accuracy and efficiency comparable to non-progressive methods.
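
The abstract describes a two-stage scheme: coarse linear measurements are transmitted first for a quick coarse classification, and additional linear measurements are later combined with them for a finer classification. Below is a minimal PyTorch-style sketch of that idea, assuming jointly trained coarse and fine measurement matrices with receiver-side classifier heads; all names (CoarseFineCoder, m_coarse, m_fine, alpha) and the weighted joint loss are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class CoarseFineCoder(nn.Module):
    # Hypothetical module illustrating progressive linear measurements:
    # stage-1 (coarse) measurements feed a coarse classifier; stage-2
    # measurements are concatenated with stage-1 for a finer classifier.
    def __init__(self, dim_in, m_coarse, m_fine, n_coarse, n_fine):
        super().__init__()
        self.A1 = nn.Linear(dim_in, m_coarse, bias=False)  # coarse linear measurements
        self.A2 = nn.Linear(dim_in, m_fine, bias=False)    # additional (refinement) measurements
        self.coarse_clf = nn.Sequential(nn.Linear(m_coarse, 128), nn.ReLU(),
                                        nn.Linear(128, n_coarse))
        self.fine_clf = nn.Sequential(nn.Linear(m_coarse + m_fine, 256), nn.ReLU(),
                                      nn.Linear(256, n_fine))

    def forward(self, x):
        x = x.flatten(1)                    # e.g. MNIST images -> 784-dim vectors
        y1 = self.A1(x)                     # sent first, enables an early coarse decision
        y2 = self.A2(x)                     # sent later, refines the representation
        return self.coarse_clf(y1), self.fine_clf(torch.cat([y1, y2], dim=1))

# Joint training with a weighted sum of the two task losses (one plausible choice).
model = CoarseFineCoder(dim_in=784, m_coarse=16, m_fine=48, n_coarse=2, n_fine=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, coarse_labels, fine_labels, alpha=0.5):
    coarse_logits, fine_logits = model(x)
    loss = alpha * ce(coarse_logits, coarse_labels) + (1 - alpha) * ce(fine_logits, fine_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

At inference time, the receiver could run the coarse classifier as soon as the first measurements arrive and refine its decision once the additional measurements are received, mirroring the progressive transmission described above.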

Locations

  • arXiv (Cornell University)

Similar Works

  • L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning (2022) - Mohammadreza Alimohammadi, Ilia Markov, Elias Frantar, Dan Alistarh
  • Communication-Efficient Split Learning via Adaptive Feature-Wise Compression (2023) - Yongjeong Oh, Jaeho Lee, Christopher G. Brinton, Yo-Seb Jeon
  • Digital-SC: Digital Semantic Communication with Adaptive Network Split and Learned Non-Linear Quantization (2023) - Lei Guo, Wei Chen, Yuxuan Sun, Bo Ai
  • Coded Deep Learning: Framework and Algorithm (2025) - En-Hua Yang, Shayan Mohajer Hamidi
  • CosSGD: Communication-Efficient Federated Learning with a Simple Cosine-Based Quantization (2020) - Yang He, Hui-Po Wang, Maximilian Zenk, Mario Fritz
  • Sparse Binary Compression: Towards Distributed Deep Learning with Minimal Communication (2018) - Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
  • Remote Inference over Dynamic Links via Adaptive Rate Deep Task-Oriented Vector Quantization (2025) - Eyal Fishel, May Malka, Shai Ginzach, Nir Shlezinger
  • Accelerating Relative Entropy Coding with Space Partitioning (2024) - Jiajun He, Gergely Flamich, José Miguel Hernández-Lobato
  • Remote Inference Over Dynamic Links via Adaptive Rate Deep Task-Oriented Vector Quantization (2025) - Eyal Fishel, May Malka, Nir Shlezinger, Shai Ginzach
  • Pufferfish: Communication-Efficient Models at No Extra Cost (2021) - Hongyi Wang, Saurabh Agarwal, Dimitris Papailiopoulos
  • Deep Hierarchy Quantization Compression Algorithm Based on Dynamic Sampling (2023) - Wan Jiang, Gang Liu
  • CosSGD: Nonlinear Quantization for Communication-Efficient Federated Learning (2020) - Yang He, Maximilian Zenk, Mario Fritz
  • Model Compression via Distillation and Quantization (2018) - Antonio Polino, Razvan Pascanu, Dan Alistarh
  • Task-Aware Network Coding over Butterfly Network (2023) - Jiangnan Cheng, Sandeep Chinchali, Antony Tang
  • Entropy-Based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance (2023) - Mackenzie J. Meni, Ryan T. White, Michael L. Mayo, Kevin R. Pilkiewicz
  • Entropy-Based Guidance of Deep Neural Networks for Accelerated Convergence and Improved Performance (2023) - Mackenzie J. Meni, Ryan T. White, Michael Mayo, Kevin R. Pilkiewicz
  • On the Acceleration of Deep Neural Network Inference Using Quantized Compressed Sensing (2021) - Meshia Cédric Oveneke
  • THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression (2023) - Minghao Li, Ran Ben Basat, Shay Vargaftik, ChonLam Lao, Kevin S. Xu, Xin-Ran Tang, Michael Mitzenmacher, Minlan Yu
  • QSGD: Communication-Optimal Stochastic Gradient Descent, with Applications to Training Neural Networks (2016) - Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnović
  • An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models (2024) - Yue Hu, Difan Zou, Dong-Hui Xu

Works That Cite This (0)
