Hello Devs,
I have set up the NVIDIA inference endpoint https://integrate.api.nvidia.com/v1 for AI agents using the OpenAI Agents SDK, but it consistently fails, while the same code works perfectly against the OpenAI endpoint https://api.openai.com/v1:
from openai import AsyncOpenAI
from agents import Agent, ModelSettings, OpenAIChatCompletionsModel, function_tool

# Initialize the OpenAI client with the NVIDIA base URL and credentials
client = AsyncOpenAI(
    api_key="API_KEY",
    base_url="https://integrate.api.nvidia.com/v1",
)

# Initialize the agent (ExecutionContext is my own context type)
my_agent = Agent[ExecutionContext](
    name="simple_agent",
    handoff_description="...",
    instructions=("Instructions Here ...."),
    model_settings=ModelSettings(tool_choice="required", temperature=0, top_p=0.9),
    model=OpenAIChatCompletionsModel(model="openai/gpt-oss-120b", openai_client=client),
    tools=[
        function_tool(evaluate_scores),
        function_tool(get_records_data_context),
    ],
)
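To check whether the endpoint itself rejects tool_choice="required" (rather than the SDK mishandling the response), a raw request that bypasses the SDK makes the error body visible. This is just a diagnostic sketch; the get_weather tool schema is a hypothetical placeholder, not one of my actual tools:

```python
import json
import urllib.error
import urllib.request

def build_payload(tool_choice: str) -> dict:
    # Mirror the chat-completions request the SDK would send
    return {
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool, just for the probe
                "description": "Get the weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": tool_choice,
    }

def probe(api_key: str, tool_choice: str) -> str:
    req = urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=json.dumps(build_payload(tool_choice)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        # A 4xx here, or a 200 body with no "choices" key, would point at
        # the endpoint rather than the Agents SDK.
        return f"HTTP {e.code}: {e.read().decode()}"

# print(probe("API_KEY", "required"))  # compare against probe("API_KEY", "auto")
```

Comparing the raw bodies for "required" vs "auto" should show exactly what the server sends back in the failing case.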
OUTPUT ERROR:
run_result = await Runner.run(
^^^^^^^^^^^^^^^^^
File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\agents\run.py", line 199, in run
return await runner.run(
^^^^^^^^^^^^^^^^^
File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\agents\run.py", line 417, in run
turn_result = await self._run_single_turn(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\agents\run.py", line 905, in _run_single_turn
new_response = await cls._get_new_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\agents\run.py", line 1066, in _get_new_response
new_response = await model.get_response(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\dom\anaconda3\envs\genai\Lib\site-packages\agents\models\openai_chatcompletions.py", line 77, in get_response
first_choice = response.choices[0]
~~~~~~~~~~~~~~~~^^^
TypeError: 'NoneType' object is not subscriptable
Judging by the traceback, response.choices comes back as None, i.e. the NVIDIA endpoint returns a body without choices instead of a proper completion. It seems the endpoint has an issue with tool_choice='required': removing that setting makes the error disappear, but then tool calling fails.
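As a stopgap, I'm considering selecting tool_choice per endpoint and falling back to "auto" (plus stronger instructions) where "required" breaks. A minimal sketch, assuming the NVIDIA endpoint is the only offender; the helper name is mine, not part of the SDK:

```python
# Endpoints observed to fail when tool_choice="required" is sent
REQUIRED_UNSUPPORTED = ("integrate.api.nvidia.com",)

def pick_tool_choice(base_url: str) -> str:
    """Return a tool_choice value the given endpoint can handle."""
    if any(host in base_url for host in REQUIRED_UNSUPPORTED):
        return "auto"  # rely on prompt instructions to force tool use
    return "required"

# Usage with the Agents SDK would then look like:
# model_settings = ModelSettings(
#     tool_choice=pick_tool_choice("https://integrate.api.nvidia.com/v1"),
#     temperature=0,
#     top_p=0.9,
# )
```

This keeps "required" for endpoints that support it, but of course "auto" only makes tool calls best-effort, so it is a workaround rather than a fix.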