Model shown as READY but can't be found

When I start Triton Server with some test models, all of them are given "READY" status by Triton, but some of the models then can't be found via the curl localhost:8000/v2/models/<model_name> command.

  1. What is the return content of "curl localhost:8000/v2/models/<model_name>"?
  2. Please verify Triton is running correctly; here is the doc: server/quickstart.md at main · triton-inference-server/server · GitHub (a readiness check from it is sketched below).
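
For reference, the readiness check from that quickstart looks roughly like this (assuming the server's HTTP endpoint is on the default port 8000); a ready server returns HTTP 200:

    # should return "HTTP/1.1 200 OK" when the server and models are ready
    curl -v localhost:8000/v2/health/ready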

The return content is {"error":"Request for unknown model: <model_name> is not found"}
The server seems to be running correctly; only the models with dynamic batches can't be found via curl.
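
For context, an ONNX model whose batch dimension has been made dynamic is normally paired with a config.pbtxt that sets max_batch_size and omits the batch dimension from the dims, roughly like the sketch below. The tensor names and sizes here are assumptions for an MNIST-style model, not taken from the actual files in this thread:

    name: "model"
    platform: "onnxruntime_onnx"
    max_batch_size: 8
    input [
      {
        name: "Input3"            # assumed input tensor name; check your model
        data_type: TYPE_FP32
        dims: [ 1, 28, 28 ]       # batch dimension is implied by max_batch_size
      }
    ]
    output [
      {
        name: "Plus214_Output_0"  # assumed output tensor name; check your model
        data_type: TYPE_FP32
        dims: [ 10 ]
      }
    ]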

After starting tritonserver, can you see that the model is ready? Like this:
I1109 09:02:47.367984 13635 server.cc:629]
+-------------------+---------+--------+
| Model             | Version | Status |
+-------------------+---------+--------+
| Primary_Detector  | 1       | READY  |
+-------------------+---------+--------+
When the model is ready, you can curl the model, like this:
curl localhost:8000/v2/models/Primary_Detector
{"name":"Primary_Detector","versions":["1"],"platform":"tensorrt_plan","inputs":[{"name":"input_1","datatype":"FP32","shape":[-1,3,368,640]}],"outputs":[{"name":"conv2d_bbox","datatype":"FP32","shape":[-1,16,23,40]},{"name":"conv2d_cov/Sigmoid","datatype":"FP32","shape":[-1,4,23,40]}]}

I do get a READY status for the model, but the model can’t be found via curl

  1. Did you use the correct name? For example, with a wrong name:
    curl localhost:8000/v2/models/Primary_Detector1
    {"error":"Request for unknown model: 'Primary_Detector1' is not found"}
  2. You can add "--log-verbose=1" when starting tritonserver; then you can get more information from the server side (see the sketch below).
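
For example, a minimal sketch of the suggested start command, assuming a model repository at /models:

    # verbose logging prints model load details and incoming HTTP requests
    tritonserver --model-repository=/models --log-verbose=1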

Yes, I used the correct name.
And the verbose log shows that the model has been loaded.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing the issue.)

• Requirement details (This is for new requirements. Include the module name - for which plugin or for which sample application - and the function description.)

Hardware - RTX 3070
Triton/TRT Version - 20.12-py Docker
Driver - 460.91.03
Issue Type - Bug?
Reproduction of bug: models/vision/classification/mnist/model at main · onnx/models · GitHub
The MNIST-7 .onnx model works as expected and can be inferenced from, but when the .onnx is modified with dynamic batches, the model shows up as READY but can't be curled.
Both config files and .onnx files are attached
model.onnx (25.8 KB)

config.pbtxt (295 Bytes)
config.pbtxt (296 Bytes)
model.onnx (26.0 KB)

Using your files, I can't reproduce the issue you described.

Please check the server's logs; when you curl once, there will be a "/v2/models/model" line printed on the server side.
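
As a rough way to confirm which server instance a request actually reaches, curl's verbose output shows the connected address and the request line (the port below assumes the default --http-port of 8000; adjust it to whatever the server was started with):

    # -v shows "Connected to localhost ... port 8000" and the "GET /v2/models/model HTTP/1.1" request line
    curl -v localhost:8000/v2/models/model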

What exactly did you use to run my files? Because I'm still getting this issue.

  1. I did modify your files; here is the file deployment:
    ll triton_model_repo_new/model/
    drwxrwxr-x 2 1023 1023 4096 Nov 10 06:39 1/
    -rw-rw-r-- 1 1023 1023 292 Nov 10 06:39 config.pbtxt
    ll triton_model_repo_new/model/1/
    -rw-rw-r-- 1 1023 1023 26617 Nov 10 06:23 model.onnx

  2. Start tritonserver; here is the command: tritonserver --model-store=triton_model_repo_new --log-verbose 1 --http-port 9000 --grpc-port 9001 --metrics-port 9002

  3. Here is the curl command, run in the same Docker container: curl localhost:9000/v2/models/model

Could you send the modified files then?

sure, 1.zip (23.8 KB)

I'm still running into this issue. Have you tried it with Docker?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Yes, I am testing in the DeepStream 6.1.1 Docker container. Did tritonserver print "/v2/models/model" when you ran curl on the client? I suspect you have multiple tritonserver instances running.
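
One rough way to check for multiple server instances (the ports below are the ones used in this thread; adjust as needed):

    # list running containers to see whether more than one Triton container is up
    docker ps
    # show which process is listening on the HTTP port you are curling
    ss -ltnp | grep -E ':(8000|9000)'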
