3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Radiance Fields

Type: Article

Publication Date: 2023-10-01

Citations: 2

DOI: https://doi.org/10.1109/iccv51070.2023.00902

Abstract

Motion magnification helps us visualize subtle, imperceptible motion. However, prior methods only work for 2D videos captured with a fixed camera. We present a 3D motion magnification method that can magnify subtle motions from scenes captured by a moving camera, while supporting novel view rendering. We represent the scene with time-varying radiance fields and leverage the Eulerian principle for motion magnification to extract and amplify the variation of the embedding of a fixed point over time. We study and validate our proposed principle for 3D motion magnification using both implicit and tri-plane-based radiance fields as our underlying 3D scene representation. We evaluate the effectiveness of our method on both synthetic and real-world scenes captured under various camera setups.
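
The core idea described in the abstract, Eulerian magnification of a fixed point's embedding over time, can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it assumes magnification reduces to amplifying the deviation of a point's time-varying embedding from its temporal mean, and the names magnify_embedding and alpha are hypothetical. The paper's actual method may additionally apply temporal filtering before amplification.

    import numpy as np

    def magnify_embedding(embeddings: np.ndarray, alpha: float) -> np.ndarray:
        """Amplify the temporal variation of one 3D point's embedding.

        embeddings: (T, D) array holding the embedding of a fixed scene point
                    queried at T time steps from the time-varying radiance field.
        alpha:      magnification factor (alpha = 0 returns the input unchanged).
        """
        mean = embeddings.mean(axis=0, keepdims=True)  # static, time-averaged component
        variation = embeddings - mean                  # subtle temporal variation
        return mean + (1.0 + alpha) * variation        # Eulerian amplification

    # Example: 60 time steps, 32-dim embedding with subtle variation around its mean.
    traj = np.random.randn(60, 32) * 0.01
    magnified = magnify_embedding(traj, alpha=10.0)

Rendering from the magnified embeddings in place of the originals would then exaggerate the point's subtle motion in any synthesized view.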

Locations

  • arXiv (Cornell University)
  • 2023 IEEE/CVF International Conference on Computer Vision (ICCV)

Similar Works

  • 3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Radiance Fields (2023). Brandon Y. Feng, Hadi Alzayer, Michael Rubinstein, William T. Freeman, Jia-Bin Huang
  • Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels (2021). Abdullah Abuolaim, Mahmoud Afifi, Michael S. Brown
  • Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels (2022). Abdullah Abuolaim, Mahmoud Afifi, Michael S. Brown
  • VividDream: Generating 3D Scene with Ambient Dynamics (2024). Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y. Feng, Jia-Bin Huang
  • Fast View Synthesis of Casual Videos (2023). Yao-Chih Lee, Zhoutong Zhang, Kevin Blackburn-Matzen, Simon Niklaus, Jianming Zhang, Jia-Bin Huang, Feng Liu
  • Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video (2021). Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt
  • Learning-based Axial Motion Magnification (2023). Kwon Byung-Ki, Oh Hyun-Bin, Jun-Seong Kim, Tae-Hyun Oh
  • Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video (2020). Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, Christian Theobalt
  • A Portable Multiscopic Camera for Novel View and Time Synthesis in Dynamic Scenes (2022). Tianjia Zhang, Yuen-Fui Lau, Qifeng Chen
  • Dynamic Neural Radiance Field From Defocused Monocular Video (2024). Xianrui Luo, Huiqiang Sun, Juewen Peng, Zhiguo Cao
  • SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields (2024). Jung-Ho Lee, Dogyoon Lee, Minhyeok Lee, Dong-Hyung Kim, Sangyoun Lee
  • Modeling Ambient Scene Dynamics for Free-view Synthesis (2024). Meng-Li Shih, Jia-Bin Huang, Chang-Il Kim, Rajvi Shah, Johannes Kopf, Chen Gao
  • Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video (2024). Byeongjun Park, Changick Kim
  • DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video (2024). Huiqiang Sun, Xingyi Li, Liao Shen, Xinyi Ye, Ke Xian, Zhiguo Cao
  • D-NeRF: Neural Radiance Fields for Dynamic Scenes (2021). Albert Pumarola, Enric Corona, Gerard Pons-Moll, Francesc Moreno-Noguer
  • Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video (2023). Byeongjun Park, Changick Kim
  • Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling (2024). Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang, Pedro Miraldo, Suhas Lohit, Moitreya Chatterjee
  • CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video (2024). Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Yang Long, Yefeng Zheng
