ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

Safety is critical to the use of large language models (LLMs). Multiple techniques, such as data filtering and supervised fine-tuning, have been developed to strengthen LLM safety. However, currently known techniques presume that the corpora used for safety alignment of LLMs are interpreted solely by their semantics. This assumption does not …
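
To make the premise concrete, here is a minimal sketch of the ASCII-art rendering step the title refers to, assuming the third-party pyfiglet library; the helper name `to_ascii_art`, the placeholder word, and the prompt wording are illustrative, not the paper's own tooling. The point is that ASCII art conveys a word visually while a purely semantic reading of the text never encounters it as a token.

```python
# A minimal sketch of the ASCII-art rendering idea, assuming the
# third-party pyfiglet library; the paper's own tooling may differ.
import pyfiglet


def to_ascii_art(word: str, font: str = "standard") -> str:
    """Render a word as FIGlet-style ASCII art."""
    return pyfiglet.figlet_format(word, font=font)


# Illustrative (hypothetical) masked prompt: the target word appears only
# as ASCII art, so semantics-only safety filtering does not see it.
masked_word = "example"  # benign placeholder, not from the paper
prompt = (
    "The ASCII art below spells a single word. Decode it letter by "
    "letter, then use that word to complete the masked instruction.\n\n"
    + to_ascii_art(masked_word)
)
print(prompt)
```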