The model llama3 does not exist when calling from the langchain ChatNVIDIA class

I am exploring the models available through langchain, based on what is detailed here.

I get an error when trying to use the llama3 models. The error message reads:

The model `meta/llama3-8b` does not exist.
The model `meta/llama3-70b` does not exist.

I can reproduce it easily, and I have validated that the associated models appear in the list of available ones:

 Model(id='ai-llama2-70b', model_type='chat', api_type=None, model_name='meta/llama2-70b', client='ChatNVIDIA', path='2fddadfb-7e76-4c8a-9b82-f7d3fab94471'),
 Model(id='ai-llama3-70b', model_type='chat', api_type=None, model_name='meta/llama3-70b', client='ChatNVIDIA', path='a88f115a-4a47-4381-ad62-ca25dc33dc1b'),
 Model(id='ai-llama3-8b', model_type='chat', api_type=None, model_name='meta/llama3-8b', client='ChatNVIDIA', path='a5a3ad64-ec2c-4bfc-8ef7-5636f26630fe'),
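To double-check that both llama3 ids really are in the listing, here is a small self-contained sketch; note that the `Model` dataclass below is a local stand-in mirroring the entries above, not the library's actual class:

```python
from dataclasses import dataclass
from typing import Optional

# Local stand-in for the Model entries shown in the listing above.
@dataclass
class Model:
    id: str
    model_type: Optional[str]
    model_name: Optional[str]
    client: str

# The relevant entries from the available-models listing.
available = [
    Model(id="ai-llama2-70b", model_type="chat", model_name="meta/llama2-70b", client="ChatNVIDIA"),
    Model(id="ai-llama3-70b", model_type="chat", model_name="meta/llama3-70b", client="ChatNVIDIA"),
    Model(id="ai-llama3-8b", model_type="chat", model_name="meta/llama3-8b", client="ChatNVIDIA"),
]

def is_known(model_id: str) -> bool:
    """Return True if the id appears in the available-models listing."""
    return any(m.id == model_id for m in available)

print(is_known("ai-llama3-8b"))   # True
print(is_known("ai-llama3-70b"))  # True
```

Both ids are present, which is why the "does not exist" error was surprising.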

llama2-70b works fine but llama3-70b/llama3-8b don’t.

The issue can be reproduced with the following code and a valid NVIDIA_API_KEY environment variable:

%pip install --upgrade --quiet langchain-nvidia-ai-endpoints
from langchain_nvidia_ai_endpoints import ChatNVIDIA

# Uncomment one model at a time:
model_name = "ai-llama3-8b"  # Error
# model_name = "ai-llama3-70b"  # Error
# model_name = "ai-llama2-70b"  # OK

llm = ChatNVIDIA(model=model_name, verbose=True, streaming=False, temperature=0.01, max_tokens=50)
result = llm.invoke("Who are you?")

Thanks in advance.

I updated langchain-nvidia-ai-endpoints to v0.0.12 and now both llama3 options are working!

Both models now show as ACTIVE in the listing:

{'name': 'ai-llama3-8b',
 'status': 'ACTIVE',
 'ownedByDifferentAccount': True,
 'apiBodyFormat': 'CUSTOM',
 'healthUri': '/v1/models',
 'createdAt': '2024-05-01T20:33:23.203Z'},
{'name': 'ai-llama3-70b',
 'status': 'ACTIVE',
 'ownedByDifferentAccount': True,
 'apiBodyFormat': 'CUSTOM',
 'healthUri': '/v1/models',
 'createdAt': '2024-05-01T20:33:22.955Z'},
