Triton Container: 22.07-py3
Deepstream Container: 6.1-triton
Hardware: Tesla P100 and A4000
We have a standalone Triton server with the following models:
- PreProcess (Dali Backend)
- ObjectDetector (TensorRT Backend)
- PostProcess (Python Backend)
We are using DeepStream as a client to run inference on multiple streams. We printed the raw model output in the PostProcess (Python backend) model code and observed that the first frame of the batch always gives zeros as the raw output; the rest of the frames give correct outputs.
Because the first frame of the batch returns zeros, the bounding boxes of a random stream drop out in the tiled output.
Also, we do not see this issue with 1 or 2 streams.
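To illustrate the diagnostic described above: a minimal, self-contained check (plain NumPy; the function name and shapes are hypothetical, not from the actual PostProcess backend) that reports which batch slots came back entirely zero:

```python
import numpy as np

def find_zero_batch_slots(batch_output: np.ndarray) -> list:
    """Return indices of batch slots whose raw output is all zeros.

    batch_output: detector output of shape (batch, ...), e.g. (N, num_boxes, 7).
    """
    flat = batch_output.reshape(batch_output.shape[0], -1)
    return [i for i in range(flat.shape[0]) if not np.any(flat[i])]

# Simulate the reported symptom: a batch of 4 where slot 0 is all zeros.
batch = np.random.rand(4, 100, 7).astype(np.float32)
batch[0] = 0.0
print(find_zero_batch_slots(batch))  # -> [0]
```

Running a check like this per inference inside the backend makes it easy to confirm whether it is always slot 0 that is zeroed, or a varying slot.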
Kindly let us know whether you have observed this. If yes, what could be a possible solution or workaround?
I have moved this topic from the DeepStream SDK section for better traction.
Thanks, appreciate it. Looking forward to your response.
Can you tell us how to reproduce this issue?
You will need the following models deployed on a standalone Triton server:
- Pre-process (DALI backend)
- Inference (TensorRT backend)
- An ensemble model encapsulating both of the above
Send the raw model output back to DeepStream, attach a Python probe after the nvinferserver plugin, and print the raw model outputs; you will notice that the first array of the batch is always zero.
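The probe step can be sketched roughly as below (pseudocode outline, untested; it assumes the pyds Python bindings and `output_tensor_meta` enabled in the nvinferserver config, so raw tensors are attached as frame user meta):

```
def src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    for frame_meta in iterate(batch_meta.frame_meta_list):
        for user_meta in iterate(frame_meta.frame_user_meta_list):
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                # print the raw layer buffer here; the first frame of the
                # batch is where the all-zero output shows up
    return Gst.PadProbeReturn.OK
```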
Please let me know if you have any questions.
Can you provide the models and source codes? Or can you reproduce the problem with our sample code?
I was able to solve the issue by setting the following property on the RTSP output sink:
# Added below parameter to resolve the issue.
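For reference, in a deepstream-app-style config this would look like the fragment below. The follow-up reply indicates the parameter in question was `qos`; the surrounding keys and the value `qos=0` (disable QoS event handling on the sink) are assumptions on my part:

```
[sink0]
enable=1
# type=4 selects RTSP streaming output
type=4
# Added below parameter to resolve the issue.
qos=0
```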
Can you please give some insight into the qos flag? Thanks.
This is not DeepStream related; please refer to the GStreamer community: Quality Of Service (QoS) (gstreamer.freedesktop.org)
The code seems to work fine with the streammux batch size set to one, but the flickering happens when I provide two or more streams and the streammux batch size is equal to the number of streams. I will share the code with you along with an open-source model.
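For completeness, the setting being varied above is `batch-size` in the `[streammux]` group of a deepstream-app-style config; the values shown are illustrative, not taken from the actual setup:

```
[streammux]
# set equal to the number of input streams, e.g. 4 sources
batch-size=4
# max time (us) to wait before pushing a partially filled batch
batched-push-timeout=40000
```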