I just finished the DLI course "Building RAG Agents with LLMs". Now I want to test some code on my own PC.
I installed exactly the same versions of langchain (0.2.14) and langchain-nvidia-ai-endpoints (0.2.1) as in the DLI course.
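For reference, I pinned the versions with pip like this (the exact command may differ depending on your Python environment, e.g. inside a virtualenv or conda env):

pip install langchain==0.2.14 langchain-nvidia-ai-endpoints==0.2.1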
Then I tested some simple code, such as listing the available models and invoking the LLM:
import os
os.environ["NVIDIA_API_KEY"] = "nvapi-sjfsjfksjkfjsdfjsdjflksfklskjflsjkf"  # my api_key

from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

ChatNVIDIA.get_available_models()  # this runs successfully

chat_llm = ChatNVIDIA(model="meta/llama3-8b-instruct")
prompt = ChatPromptTemplate.from_messages([
    ("system", "Only respond in rhymes"),
    ("user", "{input}"),
])
rhyme_chain = prompt | chat_llm | StrOutputParser()
print(rhyme_chain.invoke({"input": "Tell me about birds!"}))
Then I got this error:
Exception: [403] Forbidden
Invalid UAM response
I am certain that my api_key is correct, because when I replace it with some random letters I get a different error:
Exception: [401] Unauthorized
Authentication failed
Please check or regenerate your API key.
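As a further check, I can call the REST endpoint directly with requests. This is a minimal sketch; I am assuming the default integrate.api.nvidia.com chat completions endpoint that langchain-nvidia-ai-endpoints targets, and the status code should show whether the key itself or something else is being rejected:

import os
import requests

# Direct call to the NVIDIA API catalog chat endpoint (assumed default URL)
resp = requests.post(
    "https://integrate.api.nvidia.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
    json={
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.status_code)  # 200 = key accepted, 401 = bad key, 403 = key valid but access denied
print(resp.text)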
I hope someone can help me solve this problem. Thanks a lot!