Inference using Triton Server

Hi all,
I was able to launch the Inception SSD network plan file via tritonserver and write a Python client to query it. However, while I get the correct output from NMS for the bounding boxes, NMS_1 (keep_count) always returns zero. How is this possible? The strange thing is that if I run the TensorRT example script, which doesn't use Triton Server, I get the correct output of 100. Thanks in advance.

Can you please provide the setup info, as in the other topic?
And could you share the repo? I suspect this may not be caused by tritonserver itself.

Certainly: the installed version of TensorRT is 7.2.1.6 (GA), from https://developer.nvidia.com/nvidia-tensorrt-7x-download , and Triton is 20.11 (GitHub - triton-inference-server/server at r20.11). In the meantime I have successfully written the client in C++, and with it I do not encounter this problem: NMS_1 = 100. So could it be a problem in the Python client libraries?
Thanks in advance
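As a sanity check, one thing worth comparing between the two clients is what the server itself reports for the model's output tensors; a minimal sketch with the HTTP Python client (the model name `inception_ssd` is again a placeholder):

```python
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Ask Triton for the model's metadata and print the output tensors it
# exposes; a mismatch in output name or datatype (NMS_1 is typically INT32
# for the TensorRT NMS plugin) between what the Python client reads and
# what the server reports would show up here.
metadata = client.get_model_metadata("inception_ssd")
for out in metadata["outputs"]:
    print(out["name"], out["datatype"], out["shape"])
```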

Maybe… sorry! Without a repo to triage, it's hard to say.
