Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as …