Self-Instructed Derived Prompt Generation Meets In-Context Learning: Unlocking New Potential of Black-Box LLMs

Type: Preprint

Publication Date: 2024-09-02

Citations: 0

DOI: https://doi.org/10.48550/arxiv.2409.01552

Abstract

Large language models (LLMs) have shown success in generating high-quality responses. To better align LLMs with human preferences, various methods have been proposed that rely on specific optimization procedures; these, however, are not applicable to black-box LLMs such as GPT-4, whose parameters are inaccessible. For black-box LLMs, performance depends heavily on the quality of the provided prompts. Existing methods for improving response quality often involve a prompt-refinement model, but such approaches can suffer from semantic inconsistencies between the refined and original prompts and typically overlook the relationship between them. To address these challenges, we introduce a self-instructed in-context learning framework that enables LLMs to deliver more effective responses by generating reliable derived prompts that construct informative contextual environments. Our approach incorporates a self-instructed reinforcement learning mechanism that interacts directly with the response model during derived-prompt generation to achieve better alignment. We then formulate querying as an in-context learning task, combining responses from the LLM with the derived prompts to build a contextual demonstration for the original prompt. This strategy ensures alignment with the original query, reduces discrepancies introduced by refined prompts, and maximizes the LLM's in-context learning capability. Extensive experiments demonstrate that the proposed method not only generates more reliable derived prompts but also significantly improves LLMs' ability to deliver effective responses, including black-box models such as GPT-4.
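As a reading aid, the following minimal Python sketch illustrates the query flow the abstract describes: generate a derived prompt, obtain the model's response to it, and then use that (derived prompt, response) pair as an in-context demonstration for the original query. Everything in the sketch is an assumption for illustration, not the paper's implementation: query_llm stands in for any black-box LLM API, the instruction template is hypothetical, and the paper's RL-trained derived-prompt generator is approximated by a single self-instruction call.

def query_llm(prompt: str) -> str:
    """Placeholder for a black-box LLM call (e.g., a chat-completion API request)."""
    raise NotImplementedError("Wire this to an LLM provider of your choice.")

def generate_derived_prompt(original_prompt: str) -> str:
    """Self-instruct the model to produce a derived prompt.

    In the paper this generator is trained with a self-instructed RL
    mechanism whose reward comes from interacting with the response
    model; a single self-instruction call is used here as a stand-in.
    """
    instruction = (
        "Rewrite the following query so it is clearer and more specific, "
        "without changing its meaning:\n" + original_prompt
    )
    return query_llm(instruction)

def answer_with_derived_context(original_prompt: str) -> str:
    # Step 1: generate a derived prompt aligned with the original query.
    derived_prompt = generate_derived_prompt(original_prompt)
    # Step 2: get the response model's answer to the derived prompt.
    derived_response = query_llm(derived_prompt)
    # Step 3: combine them into an in-context demonstration, then answer
    # the ORIGINAL prompt within that contextual environment.
    demonstration = (
        f"Q: {derived_prompt}\nA: {derived_response}\n\n"
        f"Q: {original_prompt}\nA:"
    )
    return query_llm(demonstration)

The design point worth noting, per the abstract, is that the original prompt is still the one ultimately answered; the derived prompt only supplies context, which avoids the semantic drift that can arise when a refined prompt replaces the original outright.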

Locations

  • arXiv (Cornell University)

Similar Works

  • QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning (2024). Yilun Kong, Hangyu Mao, Qi Zhao, Bin Zhang, Jingqing Ruan, Shen Li, Yongzhe Chang, Xueqian Wang, Rui Zhao, Dacheng Tao
  • PACE: Improving Prompt with Actor-Critic Editing for Large Language Model (2023). Yihong Dong, Kangcheng Luo, Xue Jiang, Zhi Jin, Ge Li
  • Towards Goal-oriented Large Language Model Prompting: A Survey (2024). Haochen Li, Jonathan Leung, Zhiqi Shen
  • A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications (2024). Pranab Sahoo, Ayush Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, Aman Chadha
  • Efficient Prompting Methods for Large Language Models: A Survey (2024). Kaiyan Chang, Songcheng Xu, Chenglong Wang, Yingfeng Luo, Tong Xiao, Jingbo Zhu
  • Towards Hierarchical Multi-Agent Workflows for Zero-Shot Prompt Optimization (2024). Yuchi Liu, Jaskirat Singh, Gaowen Liu, Ali Payani, Zheng Liang
  • Causal Prompting: Debiasing Large Language Model Prompting based on Front-Door Adjustment (2024). Congzhi Zhang, Linhai Zhang, Deyu Zhou, Guoqiang Xu
  • Intent-based Prompt Calibration: Enhancing prompt optimization with synthetic boundary cases (2024). Elad Levi, Eli Brosh, Matan Friedmann
  • Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision (2023). Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, Chuang Gan
  • Context-Tuning: Learning Contextualized Prompts for Natural Language Generation (2022). Tianyi Tang, Junyi Li, Wayne Xin Zhao
  • Fairness-guided Few-shot Prompting for Large Language Models (2023). Zhenqiang Ma, Changqing Zhang, Yatao Bian, Lemao Liu, Zhirui Zhang, Peilin Zhao, Shu Zhang, Huazhu Fu, Qinghua Hu, Bingzhe Wu
  • Refining the Responses of LLMs by Themselves (2023). Tianqiang Yan, Tiansheng Xu
  • Automatic Prompt Rewriting for Personalized Text Generation (2023). Cheng Li, Mingyang Zhang, Qiaozhu Mei, Weize Kong, Michael Bendersky
  • Enable Language Models to Implicitly Learn Self-Improvement From Data (2023). Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji
  • Automatic Prompt Selection for Large Language Models (2024). Viet-Tung Do, Van Khanh Hoang, Duy‐Hung Nguyen, Shahab Sabahi, Jeff Yang, Hajime Hotta, Minh-Tien Nguyen, Lê Thái Hùng
  • How Does In-Context Learning Help Prompt Tuning? (2023). Simeng Sun, Yang Liu, Dan Iter, Chenguang Zhu, Mohit Iyyer
  • CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization (2024). Zheyan Qu, Lu Yin, Zitong Yu, Wenbo Wang, Xing Zhang
  • Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization (2024). Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Sercan Ö. Arık
  • Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL (2023). Hao Sun
  • A Practical Survey on Zero-shot Prompt Design for In-context Learning (2023). Y. R. Li

Works That Cite This (0)

Works Cited by This (0)