Cannot enable --enable-auto-tool-choice and --tool-call-parser

Hello, I am going to use the tools/function-calling feature of an LLM in my project. However, it shows this error:

openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': '"auto" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set', 'type': 'BadRequestError', 'param': None, 'code': 400}

After that, I checked the NIM documentation and re-created the container with NIM_ENABLE_AUTO_TOOL_CHOICE=1 and NIM_TOOL_CALL_PARSER=llama3_json, like below:

docker run -d --name nim \
  --gpus all --restart unless-stopped \
  -p 8000:8000 \
  -e NGC_API_KEY=xxxxxxxxxxxxxxxxxxx \
  -e NIM_ENABLE_AUTO_TOOL_CHOICE=1 \
  -e NIM_TOOL_CALL_PARSER=llama3_json \
  nvcr.io/nim/meta/llama-3.1-8b-instruct-dgx-spark:latest

Unfortunately, it still gets the same error:

openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': '"auto" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set', 'type': 'BadRequestError', 'param': None, 'code': 400}

@sam.yau You are encountering a vLLM error because the backend was not actually started with --enable-auto-tool-choice and --tool-call-parser, despite the NIM environment variables being set. In LLM-specific NIM images like meta/llama-3.1-8b-instruct-dgx-spark, the image ships with built-in JSON tool-calling support. For this, you should remove NIM_ENABLE_AUTO_TOOL_CHOICE and NIM_TOOL_CALL_PARSER from your docker run command and just use the standard JSON tools with tool_choice="auto" from the client. If you ever need custom parsers and those env vars, that is only supported with the generic LLM NIM images, not the model-specific one you are using.
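As a minimal sketch of what "standard JSON tools with tool_choice='auto' from the client" looks like, the snippet below builds an OpenAI-compatible chat-completions request body. The tool name (get_weather), its schema, and the model/endpoint values are illustrative assumptions, not part of the NIM image; adjust them to your deployment.

```python
import json

# Hypothetical example tool: the function name, description, and
# parameter schema here are illustrative, not provided by the NIM image.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body for the OpenAI-compatible endpoint the container exposes,
# e.g. POST http://localhost:8000/v1/chat/completions (port from the
# docker run command above). Note: no parser env vars are needed.
request_body = {
    "model": "meta/llama-3.1-8b-instruct",  # assumed served model name
    "messages": [
        {"role": "user", "content": "What is the weather in Taipei?"}
    ],
    "tools": [weather_tool],
    "tool_choice": "auto",
}

print(json.dumps(request_body, indent=2))
```

The same body can be sent with the openai Python client by passing tools= and tool_choice= to chat.completions.create() with base_url pointed at the container.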

Hope this answers your question. Thank you so much for experimenting and using our NIMs.

