From Allies to Adversaries: Manipulating LLM Tool-Calling through Adversarial Injection

Tool-calling has transformed Large Language Model (LLM) applications by integrating external tools, significantly enhancing their functionality across diverse tasks. However, this integration also introduces new security vulnerabilities, particularly in the tool scheduling mechanisms of LLMs, which have not been extensively studied. To fill this gap, we present ToolCommander, a novel …