MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

We propose the first joint audio-video generation framework that delivers engaging watching and listening experiences simultaneously, aiming at high-quality, realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion consists of a sequential …
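
The core idea, as far as the abstract states it, is to couple two per-modality denoisers so that audio and video are denoised jointly rather than independently. The sketch below is a minimal, illustrative take on that coupling: two branches that each predict their own modality's noise while conditioning on the other branch's features at every diffusion timestep. All module names, dimensions, and the concatenation-based fusion here are assumptions for illustration, not the paper's actual U-Net architecture.

```python
import torch
import torch.nn as nn

class CoupledDenoiser(nn.Module):
    """Two modality-specific branches that exchange features before
    predicting per-modality noise. Layer sizes are placeholders."""

    def __init__(self, video_dim=64, audio_dim=32, hidden=128, n_steps=1000):
        super().__init__()
        self.t_embed = nn.Embedding(n_steps, hidden)   # timestep conditioning
        self.video_enc = nn.Linear(video_dim, hidden)  # stand-in for a video U-Net
        self.audio_enc = nn.Linear(audio_dim, hidden)  # stand-in for an audio U-Net
        # Each head sees the concatenated features of both branches.
        self.video_head = nn.Linear(2 * hidden, video_dim)
        self.audio_head = nn.Linear(2 * hidden, audio_dim)

    def forward(self, x_video, x_audio, t):
        te = self.t_embed(t)                           # (B, hidden)
        hv = torch.relu(self.video_enc(x_video) + te)
        ha = torch.relu(self.audio_enc(x_audio) + te)
        # Cross-modal coupling: predict each modality's noise from the
        # joint features, so the two generations stay aligned.
        eps_video = self.video_head(torch.cat([hv, ha], dim=-1))
        eps_audio = self.audio_head(torch.cat([ha, hv], dim=-1))
        return eps_video, eps_audio

# Toy usage: predict noise for a batch of flattened video/audio features.
model = CoupledDenoiser()
t = torch.randint(0, 1000, (4,))
eps_v, eps_a = model(torch.randn(4, 64), torch.randn(4, 32), t)
```

In this toy form the coupling is a simple feature concatenation; the paper's actual model is a sequential multi-modal U-Net, whose details are elided in the truncated abstract above.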