Securing LLM Systems Against Prompt Injection

Originally published at: https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

This post explains prompt injection and shows how the NVIDIA AI Red Team identified vulnerabilities where prompt injection can be used to exploit three plug-ins included in the LangChain library.