Two-Stream Convolutional Networks for Dynamic Texture Synthesis

Type: Article

Publication Date: 2018-06-01

Citations: 49

DOI: https://doi.org/10.1109/cvpr.2018.00701

Abstract

We introduce a two-stream model for dynamic texture synthesis. Our model is based on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow prediction. Given an input dynamic texture, statistics of filter responses from the object recognition ConvNet encapsulate the per-frame appearance of the input texture, while statistics of filter responses from the optical flow ConvNet model its dynamics. To generate a novel texture, a randomly initialized input sequence is optimized to match the feature statistics from each stream of an example texture. Inspired by recent work on image style transfer and enabled by the two-stream model, we also apply the synthesis approach to combine the texture appearance from one texture with the dynamics of another to generate entirely novel dynamic textures. We show that our approach generates novel, high-quality samples that match both the frame-wise appearance and temporal evolution of the input texture. Finally, we quantitatively evaluate our texture synthesis approach with a thorough user study.
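
To make the optimization concrete, below is a minimal PyTorch sketch of the two-stream Gram-matching procedure the abstract describes. It is an illustrative approximation, not the authors' implementation: torchvision's VGG19 stands in for the object recognition ConvNet, FlowNetStub is a hypothetical placeholder for the pre-trained optical flow network, and the exemplar is random data purely so the snippet runs end to end.

import torch
import torch.nn as nn
from torchvision.models import vgg19

def gram(feat):
    # feat: (B, C, H, W) -> (B, C, C) normalized Gram matrix of channel correlations
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class FlowNetStub(nn.Module):
    """Hypothetical frozen motion encoder over consecutive frame pairs
    (a stand-in for the paper's pre-trained optical flow ConvNet)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, pair):  # pair: (B, 6, H, W), two frames stacked on channels
        return self.conv(pair)

# Frozen pre-trained streams: appearance (per frame) and dynamics (per frame pair).
appearance = vgg19(weights="IMAGENET1K_V1").features[:21].eval()
dynamics = FlowNetStub().eval()
for p in list(appearance.parameters()) + list(dynamics.parameters()):
    p.requires_grad_(False)

def stream_stats(frames):
    # frames: (T, 3, H, W); Gram statistics per frame and per consecutive frame pair.
    app = [gram(appearance(f[None])) for f in frames]
    dyn = [gram(dynamics(torch.cat([frames[t], frames[t + 1]])[None]))
           for t in range(len(frames) - 1)]
    return app, dyn

# Target statistics from an exemplar texture (random here, for illustration only).
exemplar = torch.rand(4, 3, 128, 128)  # T=4 frames
with torch.no_grad():
    tgt_app, tgt_dyn = stream_stats(exemplar)

# Optimize a randomly initialized sequence to match both streams' statistics.
synth = torch.rand_like(exemplar).requires_grad_(True)
opt = torch.optim.LBFGS([synth], max_iter=200)

def closure():
    opt.zero_grad()
    app, dyn = stream_stats(synth)
    loss = sum((a - t).pow(2).sum() for a, t in zip(app, tgt_app)) \
         + sum((d - t).pow(2).sum() for d, t in zip(dyn, tgt_dyn))
    loss.backward()
    return loss

opt.step(closure)

Swapping the appearance targets for those of a second exemplar while keeping the dynamics targets gives the appearance/dynamics style-transfer variant mentioned in the abstract.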

Locations

  • arXiv (Cornell University) - View - PDF
  • Institutional Repository (York University) - View - PDF

Similar Works

  • Two-Stream Convolutional Networks for Dynamic Texture Synthesis (2017) - Matthew Tesfaldet, Marcus A. Brubaker, Konstantinos G. Derpanis
  • Optimal Textures: Fast and Robust Texture Synthesis and Style Transfer through Optimal Transport (2020) - Eric Risser
  • Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses (2017) - Eric Risser, Pierre Wilmot, Connelly Barnes
  • Style Transfer Via Texture Synthesis (2017) - Michael Elad, Peyman Milanfar
  • StyleMaster: Stylize Your Video with Artistic Generation and Translation (2024) - Zixuan Ye, Huijuan Huang, X. L. Wang, Pengfei Wan, Di Zhang, Wenhan Luo
  • U-Attention to Textures: Hierarchical Hourglass Vision Transformer for Universal Texture Synthesis (2022) - Shouchang Guo, Valentin Deschaintre, Douglas C. Noll, Arthur Roullier
  • GramGAN: Deep 3D Texture Synthesis From 2D Exemplars (2020) - Tiziano Portenier, Siavash Arjomand Bigdeli, Orçun Göksel
  • GIST: Towards Photorealistic Style Transfer via Multiscale Geometric Representations (2024) - Renán A. Rojas-Gómez, Minh N. Do
  • DeepMovie: Using Optical Flow and Deep Neural Networks to Stylize Movies (2016) - Alexander G. Anderson, Cory P. Berg, Daniel P. Mossing, Bruno A. Olshausen
  • Texture Networks: Feed-forward Synthesis of Textures and Stylized Images (2016) - Dmitry Ulyanov, V. Lebedev, Andrea Vedaldi, Victor Lempitsky
  • Texture Synthesis Using Shallow Convolutional Networks with Random Filters (2016) - Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge
  • Improved Texture Networks: Maximizing Quality and Diversity in Feed-Forward Stylization and Texture Synthesis (2017) - Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
  • Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter (2020) - Guilin Liu, Rohan Taori, Ting-Chun Wang, Zhiding Yu, Shiqiu Liu, Fitsum A. Reda, Karan Sapra, Andrew Tao, Bryan Catanzaro
  • TextureGAN: Controlling Deep Image Synthesis with Texture Patches (2017) - Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Fang Chen, Fisher Yu, James Hays
  • TextureGAN: Controlling Deep Image Synthesis with Texture Patches (2018) - Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Fang Chen, Fisher Yu, James Hays
  • A spatiotemporal style transfer algorithm for dynamic visual stimulus generation (2024) - Antonino Greco, Markus Siegel

Works That Cite This (26)

  • Dynamic Variational Autoencoders for Visual Process Modeling (2020) - Alexander Sagel, Hao Shen
  • Learning Energy-Based Spatial-Temporal Generative ConvNets for Dynamic Patterns (2019) - Jianwen Xie, Song‐Chun Zhu, Ying Wu
  • Kernelized Similarity Learning and Embedding for Dynamic Texture Synthesis (2022) - Shiming Chen, Peng Zhang, Guo-Sen Xie, Qinmu Peng, Zehong Cao, Wei Yuan, Xinge You
  • Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns (2020) - Jianwen Xie, Ruiqi Gao, Zilong Zheng, Song‐Chun Zhu, Ying Wu
  • Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns (2019) - Jianwen Xie, Ruiqi Gao, Zilong Zheng, Song‐Chun Zhu, Ying Wu
  • Similarity-DT: Kernel Similarity Embedding for Dynamic Texture Synthesis (2019) - Shiming Chen, Peng Zhang, Xinge You, Qinmu Peng, Xin Liu, Zehong Cao
  • Intelligent Home 3D: Automatic 3D-House Design from Linguistic Descriptions Only (2020) - Qi Chen, Qi Wu, Rui Tang, Yuhan Wang, Shuai Wang, Mingkui Tan
  • Conditional Generative ConvNets for Exemplar-based Texture Synthesis (2019) - Ziming Wang, Menghan Li, Gui-Song Xia
  • Endless Loops: Detecting and Animating Periodic Patterns in Still Images (2021) - Tavi Halperin, Hanit Hakim, Orestis Vantzos, Gershon Hochman, Netai Benaim, Lior Sassy, Michael Kupchik, Ofir Bibi, Ohad Fried