Originally published at: https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/
This post explains prompt injection and shows how the NVIDIA AI Red Team identified vulnerabilities where prompt injection can be used to exploit three plug-ins included in the LangChain library.
jwitsoe
1
Related topics
Topic | Replies | Views | Activity | |
---|---|---|---|---|
Best Practices for Securing LLM-Enabled Applications | 0 | 395 | November 15, 2023 | |
Mitigating Stored Prompt Injection Attacks Against LLM Applications | 0 | 363 | August 4, 2023 | |
Tips for Building a RAG Pipeline with NVIDIA AI LangChain AI Endpoints | 10 | 485 | August 28, 2024 | |
Building Safer LLM Apps with LangChain Templates and NVIDIA NeMo Guardrails | 1 | 145 | May 31, 2024 | |
How to Safeguard AI Agents for Customer Service with NVIDIA NeMo Guardrails | 1 | 21 | January 16, 2025 | |
Boost Llama 3.3 70B Inference Throughput 3x with NVIDIA TensorRT-LLM Speculative Decoding | 3 | 155 | February 3, 2025 | |
Demystifying AI Inference Deployments for Trillion Parameter Large Language Models | 3 | 188 | April 17, 2025 | |
Building LLM-Powered Production Systems with NVIDIA NIM and Outerbounds | 1 | 26 | October 2, 2024 | |
Building a Simple VLM-Based Multimodal Information Retrieval System with NVIDIA NIM | 2 | 20 | February 26, 2025 | |
Dont understand how to finish - DLI Course ‘Building RAG Agents for LLMs’ | 37 | 1543 | January 27, 2025 |