ChatNVIDIA - HTTPError: 404 Client Error: Not Found

Hello,

I am working through the "Talk to your data in your native language" DLI course and am getting a 404 error:

HTTPError: 404 Client Error: Not Found for url: https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/2fddadfb-7e76-4c8a-9b82-f7d3fab94471

Can anyone see what I am doing wrong?

I'm a novice, and it's probably something very simple, but I can't see it.

I have copied my code below; most of it is identical to the example here:

#START

from langchain_nvidia_ai_endpoints import ChatNVIDIA

pload = {
    "model": "ai-llama2-70b",
    "nvidia_api_key": "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",
    "temperature": 0.2,
    "top_p": 0.7,
    "max_tokens": 1024,
}

llm = ChatNVIDIA(**pload)

LLAMA_PROMPT_TEMPLATE = (
    "<s>[INST] <<SYS>>"
    "{system_prompt}"
    "<</SYS>>"
    "[/INST] {context} </s><s>[INST] {question} [/INST]"
)
system_prompt = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Please ensure that your responses are positive in nature."
context = ""
question = "What is the fastest land animal?"
prompt = LLAMA_PROMPT_TEMPLATE.format(system_prompt=system_prompt, context=context, question=question)

# Stream the response; each chunk exposes its text via .content
tokens_generated = 0
for val in llm.stream(prompt):
    tokens_generated += 1
    print(val.content, end="", flush=True)

#END

Hi,

The model name is meta/llama2-70b, not ai-llama2-70b.
You have also exposed your API key in this post, so you will need to change it.
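
Here is a minimal corrected sketch of the relevant part. It assumes the meta/llama2-70b identifier mentioned above, that the chunks yielded by stream() expose their text via a .content attribute, and that the regenerated key is read from an NVIDIA_API_KEY environment variable rather than hard-coded (the variable name is an assumption; use whatever your setup provides):

import os

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Read the (regenerated) key from the environment instead of embedding it in the script.
llm = ChatNVIDIA(
    model="meta/llama2-70b",
    nvidia_api_key=os.environ["NVIDIA_API_KEY"],
    temperature=0.2,
    top_p=0.7,
    max_tokens=1024,
)

tokens_generated = 0
for chunk in llm.stream("What is the fastest land animal?"):
    tokens_generated += 1
    print(chunk.content, end="", flush=True)

If you are not sure which model names your account can call, recent versions of langchain_nvidia_ai_endpoints can list them from the client (for example via an available_models property), which is a quick way to confirm the exact identifier before streaming.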