In DeepStream, how can I create a multi-batch secondary model when the number of primary instances is unknown?

I have a primary-gie and a secondary-gie for inference, so when multiple primary instances are found, the same number of secondary inferences is triggered. As a result, the secondary-gie takes significantly more time than the primary-gie, even though it uses a smaller model.

I tried creating a secondary model with batch-size 2, but it ended up producing very bad results. So how can I improve the latency of the secondary-gie?
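
For context, here is a minimal sketch of the kind of secondary nvinfer configuration in question; the model paths, unique IDs, and class filter below are placeholders, not my actual files:

```
[property]
gpu-id=0
# Placeholder paths; substitute the actual classifier model files
model-engine-file=traffic_light_classifier_b2_gpu0_fp16.engine
labelfile-path=labels_traffic_light.txt
# Run as a secondary (object-level) classifier
process-mode=2
network-type=1
# Only classify objects from the primary detector's traffic-light
# class (gie-id and class-id values are placeholders)
operate-on-gie-id=1
operate-on-class-ids=0
gie-unique-id=2
# The batch size I attempted; nvinfer groups detected objects into
# one TensorRT execution of up to this many crops
batch-size=2
```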

My setup is the following:

Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA 10.2

Can you share your use case? Why do you have this kind of requirement?

Sure. We use DS for traffic light recognition, i.e. the primary-gie for detection, and the secondary-gie for getting the semantic information of each detected traffic light. When multiple traffic lights are detected, the secondary-gie consumes significantly more time.

Please upgrade to DS 6.0. The SGIE should also use batching for inference. nvinfer is open source; can you take a look and check?
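
For example, a minimal sketch of the SGIE config change to try after upgrading; the batch size of 4 is an assumption, so tune it to your typical object count:

```
[property]
# Raise the secondary batch size so several detected traffic lights
# are classified in a single TensorRT execution instead of one by one.
# nvinfer rebuilds the TensorRT engine when the configured batch size
# does not match the cached engine file.
batch-size=4
```

The gst-nvinfer plugin source ships under /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/, so you can verify there how detected objects are grouped into batches before each TensorRT execution.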
