Build a Log Analysis Multi-Agent Self-Corrective RAG System with NVIDIA Nemotron

Hi forum.

Regarding the article Build a Log Analysis Multi-Agent Self-Corrective RAG System with NVIDIA Nemotron on the NVIDIA Technical Blog.

I am an AI beginner but would still like to try out the concept from this article. I've installed Python 3.12 for Windows and followed the article's quick-start guide. I've cloned the NVIDIA/GenerativeAIExamples GitHub repository and used pip to install the requirements file from the GenerativeAIExamples/community/log_analysis_multi_agent_rag folder. I then run the command:

python example.py C:\Logs\error.log --question "What caused the memory has been paged out errors?"

Notice the slashes: I've changed them from "/" to "\". But the call does not work; it produces the following output:

"GenerativeAIExamples\community\log_analysis_multi_agent_rag\utils.py:4: LangChainDeprecationWarning: As of langchain-core 0.3.0, LangChain uses pydantic v2 internally. The langchain_core.pydantic_v1 module was a compatibility shim for pydantic v1, and should no longer be used. Please update the code to import from Pydantic directly.

For example, replace imports like: from langchain_core.pydantic_v1 import BaseModel
with: from pydantic import BaseModel
or the v1 compatibility namespace if you are working in a code base that has not been fully upgraded to pydantic 2 yet. from pydantic.v1 import BaseModel

from binary_score_models import GradeAnswer,GradeDocuments,GradeHallucinations
C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_nvidia_ai_endpoints\_common.py:212: UserWarning: Found nvidia/llama-3.3-nemotron-super-49b-v1.5 in available_models, but type is unknown and inference may fail.
warnings.warn(
C:\Users\user\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_nvidia_ai_endpoints\chat_models.py:814: UserWarning: Model 'nvidia/llama-3.3-nemotron-super-49b-v1.5' is not known to support structured output. Your output may fail at inference time.
warnings.warn(
—RETRIEVE—
Traceback (most recent call last):"

I don't want to paste the entire stack trace, but it ends with the following output lines:

"Exception: [401] Unauthorized
Authentication failed
Please check or regenerate your API key.
During task with name ‘retrieve’ and id ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’ "

I've tried to retrieve an API key from NVIDIA, but I may have done it incorrectly. If someone has a link, it would be useful.

And if someone can explain the LangChain warnings at the beginning of the output, that would also be useful.

I’m eagerly awaiting your response.

BR,
Ebbe Jensen

Hi @jensen.ebbe – you can get the API key here: Try NVIDIA NIM APIs
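Once you have the key, the quick-start expects it in the NVIDIA_API_KEY environment variable (on Windows: `set NVIDIA_API_KEY=nvapi-...` before running the script). As a quick sanity check, something like the sketch below can confirm the key is visible to Python before you run example.py. Note that `check_nvidia_key` is a hypothetical helper written for this post, not part of the repo:

```python
import os

def check_nvidia_key(env) -> str:
    """Rough sanity check for the NVIDIA_API_KEY environment variable.

    The langchain-nvidia-ai-endpoints client reads the key from this
    variable; a missing or invalid key is the usual cause of the
    "[401] Unauthorized" error during the retrieve step.
    """
    key = env.get("NVIDIA_API_KEY", "")
    if not key:
        return "missing"
    # Keys generated on build.nvidia.com typically start with "nvapi-".
    if not key.startswith("nvapi-"):
        return "unexpected format"
    return "ok"

if __name__ == "__main__":
    print(check_nvidia_key(os.environ))
```

If this prints "missing", set the variable in the same terminal session you run example.py from, since `set` only affects the current console.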

We’ll check out the langchain warning and see if it’s causing any other issues.
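In the meantime: the LangChainDeprecationWarning itself is harmless and is not what caused the failure. It only means utils.py still imports its pydantic models through the old langchain_core.pydantic_v1 compatibility shim. The change the warning suggests looks like this sketch (GradeDocuments and its binary_score field are illustrative here; match the actual class definitions in the repo):

```python
# Before (triggers LangChainDeprecationWarning under langchain-core >= 0.3):
# from langchain_core.pydantic_v1 import BaseModel, Field

# After - import pydantic v2 directly, as the warning suggests:
from pydantic import BaseModel, Field

class GradeDocuments(BaseModel):
    """Binary relevance grade for a retrieved document (illustrative)."""
    binary_score: str = Field(description="Is the document relevant? 'yes' or 'no'")

grade = GradeDocuments(binary_score="yes")
print(grade.binary_score)
```

The actual error in your run is the [401] during the retrieve step, which the API key above should resolve.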