Listening While Speaking and Visualizing: Improving ASR Through Multimodal Chain

Type: Article

Publication Date: 2019-12-01

Citations: 5

DOI: https://doi.org/10.1109/asru46091.2019.9003899

Abstract

Previously, a machine speech chain based on sequence-to-sequence deep learning was proposed to mimic human speech perception and production. The chain handles listening and speaking separately, through automatic speech recognition (ASR) and text-to-speech synthesis (TTS), and lets the two components teach each other in semi-supervised learning when they receive unpaired data. Unfortunately, that study was limited to the speech and text modalities, whereas natural communication is multimodal and involves both the auditory and visual sensory systems. Moreover, although the speech chain reduces the need for fully paired data, it still requires a large amount of unpaired data. In this research, we take a further step and construct a multimodal chain: a closely knit architecture that combines ASR, TTS, image captioning, and image production models into a single framework, which allows each component to be trained without a large amount of parallel multimodal data. Our experimental results show that the ASR can be further trained even without additional speech and text data, because cross-modal data augmentation remains possible through the proposed chain, which improves ASR performance.
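To make the closed-loop training signal concrete, the sketch below expresses the three loops the abstract describes in PyTorch. Everything here is an illustrative assumption rather than the authors' implementation: the Seq2SeqStub modules, the fixed feature dimensions, and the MSE reconstruction losses are placeholders for the paper's actual attention-based sequence-to-sequence models and their task-specific losses.

```python
# Minimal sketch of the multimodal chain's semi-supervised loops.
# All modules, dimensions, and losses are hypothetical stand-ins.
import torch
import torch.nn as nn

TEXT_DIM, SPEECH_DIM, IMAGE_DIM = 32, 16, 64

class Seq2SeqStub(nn.Module):
    """Stand-in for one chain component (the real models are seq2seq networks)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.net(x)

# The four components of the multimodal chain (feature-vector I/O for brevity).
asr = Seq2SeqStub(SPEECH_DIM, TEXT_DIM)   # listening: speech -> text
tts = Seq2SeqStub(TEXT_DIM, SPEECH_DIM)   # speaking:  text -> speech
ic  = Seq2SeqStub(IMAGE_DIM, TEXT_DIM)    # image captioning: image -> caption
ig  = Seq2SeqStub(TEXT_DIM, IMAGE_DIM)    # image production: caption -> image

mse = nn.MSELoss()

def speech_chain_losses(text_only, speech_only):
    """Original speech chain: ASR and TTS teach each other on unpaired data."""
    loss_text = mse(asr(tts(text_only)), text_only)                 # text -> speech -> text
    loss_speech = mse(tts(asr(speech_only).detach()), speech_only)  # speech -> text -> speech
    return loss_text + loss_speech

def visual_chain_loss(image_only):
    """Visual counterpart: captioning and image production close their own loop."""
    return mse(ig(ic(image_only)), image_only)                      # image -> caption -> image

def cross_modal_asr_loss(image_only):
    """Key idea of the paper: an image alone can still train ASR. IC yields a
    pseudo-caption, TTS converts it to pseudo-speech, and the synthetic pair
    supervises ASR with no ground-truth speech or text at all."""
    pseudo_text = ic(image_only).detach()
    pseudo_speech = tts(pseudo_text).detach()
    return mse(asr(pseudo_speech), pseudo_text)

# One illustrative semi-supervised update over unpaired batches of each modality.
params = [p for m in (asr, tts, ic, ig) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)
loss = (speech_chain_losses(torch.randn(8, TEXT_DIM), torch.randn(8, SPEECH_DIM))
        + visual_chain_loss(torch.randn(8, IMAGE_DIM))
        + cross_modal_asr_loss(torch.randn(8, IMAGE_DIM)))
opt.zero_grad()
loss.backward()
opt.step()
```

Note the detach() calls: each loop treats generated pseudo-data as a fixed target so that gradients update only the component being taught, mirroring how the speech chain uses one model's output to supervise the other.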

Locations

  • arXiv (Cornell University) - View - PDF
  • 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) - View

Similar Works

  • Listening while Speaking and Visualizing: Improving ASR through Multimodal Chain (2019) - Johanes Effendi, Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
  • From Speech Chain to Multimodal Chain: Leveraging Cross-modal Data Augmentation for Semi-supervised Learning (2019) - Johanes Effendi, Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
  • Augmenting Images for ASR and TTS through Single-loop and Dual-loop Multimodal Chain Framework (2020) - Johanes Effendi, Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
  • Listening while Speaking: Speech Chain by Deep Learning (2017) - Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
  • Multimodal Grounding for Sequence-to-Sequence Speech Recognition (2019) - Ozan Çağlayan, Ramon Sanabria, Shruti Palaskar, Loïc Barrault, Florian Metze
  • Multimodal Grounding for Sequence-to-Sequence Speech Recognition (2018) - Ozan Çağlayan, Ramon Sanabria, Shruti Palaskar, Loïc Barrault, Florian Metze
  • Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition (2022) - Xichen Pan, Pei-yu Chen, Yichen Gong, Helong Zhou, Xinbing Wang, Zhouhan Lin
  • SynesLM: A Unified Approach for Audio-visual Speech Recognition and Translation via Language Model and Synthetic Data (2024) - Yichen Lu, Jiaqi Song, Xuankai Chang, Hengwei Bian, Soumi Maiti, Shinji Watanabe
  • Visatronic: A Multimodal Decoder-Only Model for Speech Synthesis (2024) - Akshita Gupta, Tatiana Likhomanenko, Karren Yang, Ruixin Bai, Zakaria Aldeneh, Navdeep Jaitly
  • Improving Multimodal Speech Recognition by Data Augmentation and Speech Representations (2022) - Dan Oneață, Horia Cucu
  • Speech ReaLLM -- Real-time Streaming Speech Recognition with Multimodal LLMs by Teaching the Flow of Time (2024) - Frank Seide, Morrie Doulaty, Yangyang Shi, Yashesh Gaur, Junteng Jia, Chunyang Wu
  • MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition (2023) - Xize Cheng, Linjun Li, Tao Jin, Rongjie Huang, Lin Wang, Zehan Wang, Huangdai Liu, Ye Wang, Aoxiong Yin, Zhou Zhao
  • Analyzing Utility of Visual Context in Multimodal Speech Recognition Under Noisy Conditions (2019) - Tejas Srinivasan, Ramon Sanabria, Florian Metze
  • VILAS: Exploring the Effects of Vision and Language Context in Automatic Speech Recognition (2023) - Minglun Han, Feilong Chen, Ziyi Ni, Linghui Meng, Jing Shi, Shuang Xu, Bo Xu