Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs
Jailbreak attacks can be used to probe the vulnerabilities of Large Language Models (LLMs) by inducing them to generate harmful content. The most common attack method is to construct semantically ambiguous prompts that confuse and mislead the LLM. To assess the security and reveal the intrinsic …