ChatNVIDIA: Exception: [403] Forbidden Invalid UAM response

I just finished the DLI course "Building RAG Agents with LLMs". Now I want to test some code on my own PC.

I installed exactly the same versions of langchain (0.2.14) and langchain-nvidia-ai-endpoints (0.2.1) as in the DLI course.

Then I tested some simple code, such as listing models and invoking the LLM:

import os
os.environ["NVIDIA_API_KEY"] = "nvapi-sjfsjfksjkfjsdfjsdjflksfklskjflsjkf"  # my api_key

from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

ChatNVIDIA.get_available_models()  # this runs successfully

chat_llm = ChatNVIDIA(model="meta/llama3-8b-instruct")
prompt = ChatPromptTemplate.from_messages([
    ("system", "Only respond in rhymes"),
    ("user", "{input}"),
])
rhyme_chain = prompt | chat_llm | StrOutputParser()
print(rhyme_chain.invoke({"input": "Tell me about birds!"}))

Then I got this error:

Exception: [403] Forbidden
Invalid UAM response

I am certain that my API key is correct, because when I replace my API key with some random letters, I get a different error:

Exception: [401] Unauthorized
Authentication failed
Please check or regenerate your API key.

I hope someone can help me solve this problem. Thanks a lot!


I got the same error. I generated an API key for the model I am using by going to Try NVIDIA NIM APIs and selecting the corresponding model. The error is gone now. Hope it helps!
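
If you want to double-check the new key outside of LangChain, here is a minimal sketch using the requests library against the public integrate.api.nvidia.com endpoint. The URL and payload shape are my assumptions based on the OpenAI-compatible NIM API; check your model's "Try NVIDIA NIM APIs" page if yours differ:

import os
import requests

# Sketch: call the NVIDIA NIM chat completions endpoint directly so you
# can see the raw HTTP status the key produces for this specific model.
url = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed endpoint
headers = {"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"}
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 16,
}

response = requests.post(url, headers=headers, json=payload)
print(response.status_code)  # 200 means the key works for this model
print(response.text)         # a 403 body should show the UAM message

A 401 here means the key itself is bad; a 403 means the key is valid but not authorized for that model, which matches the behavior described above.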

Hi! What did you do to fix this issue? I am getting the 403 Invalid UAM response error. I tried creating a new API key and replacing the former one, but I still got the same issue! Please help.

I generated a new API key for the model (in your case meta/llama3-8b-instruct) and used that.

Same error for me with model="meta/llama-3.3-70b-instruct". I fixed it by generating a new key.
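
One thing that helped me rule out a stale environment variable: pass the freshly generated key directly to the constructor instead of relying on os.environ. I believe the parameter is named nvidia_api_key in langchain-nvidia-ai-endpoints; the "nvapi-..." value below is a placeholder for your real key:

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Sketch: pass the new key explicitly so an old NVIDIA_API_KEY
# environment variable cannot shadow it.
chat_llm = ChatNVIDIA(
    model="meta/llama-3.3-70b-instruct",
    nvidia_api_key="nvapi-...",  # placeholder: the new key from the model's NIM page
)
print(chat_llm.invoke("Say hello").content)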

I think something is flaky on their end: sometimes it works, and other times it doesn't.


True. I just send the request again and sometimes it works, and sometimes it doesn't.
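
If the failures are intermittent, an automatic retry is a workaround (not a fix). Here is a minimal sketch that wraps the rhyme_chain from the original post; invoke_with_retries is a made-up helper, and the attempt count and backoff delays are arbitrary choices:

import time

# Sketch: retry a LangChain chain invocation a few times before giving up.
def invoke_with_retries(chain, inputs, attempts=3, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            return chain.invoke(inputs)
        except Exception as exc:
            if attempt == attempts:
                raise  # out of retries, re-raise the last error
            print(f"Attempt {attempt} failed ({exc}), retrying...")
            time.sleep(delay * attempt)  # back off a little more each time

print(invoke_with_retries(rhyme_chain, {"input": "Tell me about birds!"}))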

Sorry to hear this. We have passed this information on to the right team and are working to ensure a seamless experience in the future.