Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation

Type: Preprint

Publication Date: 2024-01-01

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2401.14257

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Control3D: Towards Controllable Text-to-3D Generation (2023). Yang Chen, Yingwei Pan, Yehao Li, Ting Yao, Tao Mei
  • A Unified Approach for Text- and Image-guided 4D Scene Generation (2023). Yufeng Zheng, Xueting Li, Koki Nagano, Sifei Liu, Karsten Kreis, Otmar Hilliges, Shalini De Mello
  • SeMv-3D: Towards Semantic and Mutil-view Consistency simultaneously for General Text-to-3D Generation with Triplane Priors (2024). Xiao Cai, Pengpeng Zeng, Lianli Gao, Junchen Zhu, Jiaxin Zhang, Sitong Su, Heng Tao Shen, Jingkuan Song
  • SketchDream: Sketch-based Text-to-3D Generation and Editing (2024). Fenglin Liu, Hongbo Fu, Yu‐Kun Lai, Lin Gao
  • Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields (2023). Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang, Jing Liao
  • VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation (2024). Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei
  • Grounded Compositional and Diverse Text-to-3D with Pretrained Multi-View Diffusion Model (2024). Xiaolong Li, Jiawei Mo, Ying Wang, Chethan M. Parameshwara, Xiaohan Fei, Ashwin Swaminathan, Chris Taylor, Zhuowen Tu, Paolo Favaro, Stefano Soatto
  • Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields (2024). Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang, Jing Liao
  • Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion (2023). Yuanxun Lu, Jingyang Zhang, Shiwei Li, Tian Fang, David McKinnon, Yanghai Tsin, Long Quan, Xun Cao, Yao Yao
  • Focus on Neighbors and Know the Whole: Towards Consistent Dense Multiview Text-to-Image Generator for 3D Creation (2024). Bonan Li, Zicheng Zhang, Xingyi Yang, Xinchao Wang
  • Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation (2023). Chaohui Yu, Qiang Zhou, Jingliang Li, Zhe Zhang, Zhibin Wang, Fan Wang
  • Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior (2024). Cheng Chen, Xiaofeng Yang, Yang Fan, Chengzeng Feng, Zhoujie Fu, Chuan-Sheng Foo, Guosheng Lin, Fayao Liu
  • DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion (2024). Yuanze Lin, Ronald A. Clark, Philip H. S. Torr
  • EfficientDreamer: High-Fidelity and Robust 3D Creation via Orthogonal-view Diffusion Prior (2023). Minda Zhao, Chaoyi Zhao, Xinyue Liang, Lincheng Li, Zeng Zhao, Zhipeng Hu, Changjie Fan, Yu Xin
  • Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation (2024). Yuanbo Yang, Jiahao Shao, Xinyang Li, Yujun Shen, Andreas C. Geiger, Yiyi Liao
  • ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models (2024). Lukas Höllein, Aljaž Božič, Norman Müller, David Novotný, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner
  • 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation (2024). Frank Zhang, Yibo Zhang, Quan Zheng, Rui Ma, Wei Hua, Hujun Bao, Weiwei Xu, Changqing Zou

Works That Cite This (0)


Works Cited by This (0)
