Controllable Generation from Pre-trained Language Models via Inverse Prompting
Large-scale pre-trained language models have demonstrated strong capabilities for generating realistic text. However, it remains challenging to control the generated results. Previous approaches such as prompting are far from sufficient, which limits the practical use of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better …