There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
You can test the model performance with the “trtexec” tool first. If the model itself is too heavy to run fast enough, you can skip inference on some frames by setting the “interval” parameter of nvinfer to achieve higher throughput: Gst-nvinfer — DeepStream 6.3 Release documentation
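As a rough sketch (the value 4 is only an example; tune it for your pipeline), the frame-skipping behavior is set with the `interval` key in the `[property]` group of the nvinfer configuration file:

```
[property]
# ... your existing model/engine settings ...

# Illustrative value: skip 4 consecutive batches between inference calls,
# i.e. run inference on roughly every 5th frame. Objects detected on
# inferred frames are typically carried across skipped frames by the tracker.
interval=4
```

A larger interval raises pipeline FPS at the cost of detection latency on new objects, so pair it with a tracker if you rely on per-frame bounding boxes.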