Prompt injection is a system-level risk that lets attackers manipulate LLM behavior by exploiting how models interpret mixed instructions across prompts, content, and tools. Mitigation requires layered controls like least privilege, adversarial testing, and strict separation of untrusted content.






























































































