Hi, I'm new to the NVIDIA API and confused about how it uses the OpenAI API, which I thought was meant for GPT models rather than models like Llama. Can someone explain how this works?
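For context, here is roughly what the call looks like from my side. NVIDIA's hosted endpoints appear to speak the same chat-completions wire format as OpenAI, so only the base URL, API key, and model name change. This is a minimal sketch using only the standard library; the endpoint URL and model name are taken from NVIDIA's public catalog and may not match your setup:

```python
import json
import os
import urllib.request

# Assumption: NVIDIA's OpenAI-compatible endpoint (check your own docs/key).
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completions request in the OpenAI wire format."""
    payload = {
        "model": model,  # e.g. "meta/llama-3.1-405b-instruct"
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Requires an NVIDIA API key; the request body is identical to what
    # you would send to OpenAI -- only the host and model name differ.
    req = build_chat_request(
        "meta/llama-3.1-405b-instruct",
        "Hello!",
        os.environ["NVIDIA_API_KEY"],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

So the OpenAI SDK (or any OpenAI-compatible client) is just a convenient way to produce this request shape; is that the right mental model?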