Separable Self-attention for Mobile Vision Transformers

Type: Preprint

Publication Date: 2022-01-01

Citations: 116

DOI: https://doi.org/10.48550/arxiv.2206.02680
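
The listing carries no abstract, so as a rough illustration of what the separable self-attention named in the title refers to, the sketch below follows the commonly described formulation: a single latent query produces per-token context scores, the scores pool the keys into one context vector, and that vector is broadcast over gated values, giving linear rather than quadratic cost in the number of tokens. Module names, shapes, and layer choices here are illustrative assumptions, not the paper's reference implementation.

```python
# Hedged sketch of separable (linear-complexity) self-attention.
# Assumes the commonly described formulation; names/shapes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableSelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_scores = nn.Linear(dim, 1)   # latent query -> per-token context score
        self.to_key = nn.Linear(dim, dim)
        self.to_value = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        scores = F.softmax(self.to_scores(x), dim=1)                   # (B, N, 1)
        context = (scores * self.to_key(x)).sum(dim=1, keepdim=True)   # (B, 1, dim)
        gated = F.relu(self.to_value(x)) * context                     # broadcast over tokens
        return self.out_proj(gated)                                    # (B, N, dim)

if __name__ == "__main__":
    attn = SeparableSelfAttention(dim=64)
    tokens = torch.randn(2, 196, 64)   # e.g. 14x14 patch tokens
    print(attn(tokens).shape)          # torch.Size([2, 196, 64])
```

Because the context vector is computed once per head rather than per token pair, the attention cost grows linearly with the token count, which is the property the title emphasizes for mobile deployment.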

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer (2021). Sachin Mehta, Mohammad Rastegari
  • MoCoViT: Mobile Convolutional Vision Transformer (2022). Hailong Ma, Xin Xia, Xing Wang, Xuefeng Xiao, Jiashi Li, Min Zheng
  • iFormer: Integrating ConvNet and Transformer for Mobile Application (2025). Chuanyang Zheng
  • SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications (2023). Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan
  • ExMobileViT: Lightweight Classifier Extension for Mobile Vision Transformer (2023). Gyeongdong Yang, Yungwook Kwon, Hyunjin Kim
  • CAS-ViT: Convolutional Additive Self-attention Vision Transformers for Efficient Mobile Applications (2024). Tianfang Zhang, Lei Li, Yang Zhou, Wentao Liu, Chen Qian, Xiangyang Ji
  • MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models (2022). Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, Liang-Chieh Chen
  • Lightweight Vision Transformer with Cross Feature Attention (2022). Youpeng Zhao, Huadong Tang, Yingying Jiang, A Yong, Qiang Wu
  • MobileViTv3: Mobile-Friendly Vision Transformer with Simple and Effective Fusion of Local, Global and Input Features (2022). Shakti N. Wadekar, Abhishek Chaurasia
  • Mobile-Former: Bridging MobileNet and Transformer (2022). Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, Zicheng Liu
  • A Simple Approach to Image Tilt Correction with Self-Attention MobileNet for Smartphones (2021). Siddhant Garg, Debi Prasanna Mohanty, Siva Prasad Thota, Sukumar Moharana
  • Vision Transformers for Mobile Applications: A Short Survey (2023). Nahid Alam, Steven Kolawole, Simardeep Sethi, Nishant Bansali, Karina Nguyen
  • Scaling Graph Convolutions for Mobile Vision (2024). William Avery, Mustafa Munir, Radu Mărculescu
  • RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone (2024). Mustafa Munir, Md Mostafijur Rahman, Radu Mărculescu
  • MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (2017). Andrew Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam

Works That Cite This (18)

  • Grafting Vision Transformers (2024). Jongwoo Park, Kumara Kahatapitiya, Donghyun Kim, Shivchander Sudalairaj, Quanfu Fan, Michael S. Ryoo
  • Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks (2023). Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Wen Song, Chul-Ho Lee, S.-H. Gary Chan
  • Rethinking Vision Transformers for MobileNet Size and Speed (2023). Yanyu Li, Hu Ju, Yang Wen, Georgios Evangelidis, Kamyar Salahi, Yanzhi Wang, Sergey Tulyakov, Jian Ren
  • Quality-aware Pretrained Models for Blind Image Quality Assessment (2023). Kai Zhao, Kun Yuan, Ming Sun, Mading Li, Xing Wen
  • A survey of techniques for optimizing transformer inference (2023). Krishna Teja Chitty-Venkata, Sparsh Mittal, Murali Emani, Venkatram Vishwanath, Arun K. Somani
  • EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention (2023). Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan Yuan
  • EGA-Depth: Efficient Guided Attention for Self-Supervised Multi-Camera Depth Estimation (2023). Yunxiao Shi, Hong Cai, Amin Ansari, Fatih Porikli
  • OnDev-LCT: On-Device Lightweight Convolutional Transformers towards federated learning (2023). Chu Myaet Thwal, Minh N. H. Nguyen, Ye Lin Tun, Seong Tae Kim, My T. Thai, Choong Seon Hong
  • Dynamic Perceiver for Efficient Visual Recognition (2023). Yizeng Han, Dongchen Han, Zeyu Liu, Yulin Wang, Xuran Pan, Yifan Pu, Chao Deng, Junlan Feng, Shiji Song, Gao Huang
  • MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications (2023). Mustafa Munir, William Avery, Radu Mărculescu

Works Cited by This (0)
