A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models

Type: Preprint

Publication Date: 2019-01-01

Citations: 0

DOI: https://doi.org/10.48550/arxiv.1911.03373

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models (2019) - Chris Kedzie, Kathleen McKeown
  • PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation (2020) - Bin Bi, Chenliang Li, Chen Wu, Ming Yan, Wei Wang, Songfang Huang, Fei Huang, Luo Si
  • Neural Language Generation: Formulation, Methods, and Evaluation (2020) - Cristina Gârbacea, Qiaozhu Mei
  • Improving Compositional Generalization with Self-Training for Data-to-Text Generation (2021) - Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur P. Parikh, Emma Strubell
  • Semantic Noise Matters for Neural Natural Language Generation (2019) - Ondřej Dušek, David M. Howcroft, Verena Rieser
  • DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation (2022) - Yuxi Feng, Xiaoyuan Yi, Xiting Wang, Laks V. S. Lakshmanan, Xing Xie
  • AugNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation (2021) - Xinnuo Xu, Guoyin Wang, Young-Bum Kim, Sung-Jin Lee
  • Mechanistic Behavior Editing of Language Models (2024) - Joykirat Singh, Subhabrata Dutta, Tanmoy Chakraborty
  • ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation (2020) - Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
  • Survey of Hallucination in Natural Language Generation (2022) - Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, Pascale Fung
  • Informed Sampling for Diversity in Concept-to-Text NLG (2021) - Giulio Zhou, Γεράσιμος Λάμπουρας
  • DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation (2023) - Yuxi Feng, Xiaoyuan Yi, Xiting Wang, Laks V. S. Lakshmanan, Xing Xie
  • Informed Sampling for Diversity in Concept-to-Text NLG (2020) - Giulio Zhou, Γεράσιμος Λάμπουρας
  • GanLM: Encoder-Decoder Pre-training with an Auxiliary Discriminator (2022) - Jian Yang, Shuming Ma, Dong Li, Shaohan Huang, Haoyang Huang, Yuwei Yin, Dongdong Zhang, Liqun Yang, Zhoujun Li, Furu Wei

Works That Cite This (0)


Works Cited by This (0)
