TAO-trained custom model (DetectNet_v2 + ResNet10) deployed in DeepStream has low FPS

Hi Everyone,
I am working on a Jetson Xavier NX. I trained a model with the TAO Toolkit on custom data and successfully generated it after pruning, but when I run it in the DeepStream deepstream-test3 sample I get low speed (4 FPS) once the 5th video input is added. (deepstream-test3 can handle a maximum of 16 video inputs at a time.)
Model size before pruning: 43 MB; after pruning: 16 MB.

1. I generated the calibration cache and the .etlt model on my PC and copied them to the Xavier; I never did quantization after pruning. (A sketch of this export step is shown after this list.)
2. The training set is around 1300 images.
3. TensorRT is 8.2 (on the Xavier).
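
For reference, a minimal sketch of that export step, assuming the standard TAO detectnet_v2 commands from the notebook (the spec file, model names, key variable, and batch settings below are placeholders, not the exact values used here):

tao detectnet_v2 calibration_tensorfile -e detectnet_v2_train_resnet10.txt -m 10 -o calibration.tensor
tao detectnet_v2 export -m resnet10_detector_pruned_retrained.tlt -k $KEY -o resnet10_detector.etlt \
  --data_type int8 --batches 10 --batch_size 8 \
  --cal_data_file calibration.tensor --cal_cache_file calibration.bin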

What could be the possible reasons for the low FPS? I used the same notebook (detectnet_v2) with the ResNet10 pretrained model. Kindly help me.

Thanks in advance

Hi Everyone

Could you kindly reply to the above query?

Are you working with DeepStream 6.1.1 on your NX board?

Have you tested the model and engine performance with the trtexec tool?
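
For example, a standalone check of the generated TensorRT engine on the Xavier could look like this (the engine file name is a placeholder; trtexec is shipped with TensorRT under /usr/src/tensorrt/bin on Jetson):

/usr/src/tensorrt/bin/trtexec --loadEngine=resnet10_detector_b16_int8.engine --iterations=100 --avgRuns=10

trtexec reports latency and throughput, which helps separate a slow engine from a slow pipeline.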

Have you run “export NVDS_TEST3_PERF_MODE=1” before running the deepstream-test3 sample for the performance test?
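
As a sketch (the file URIs below are placeholders), the performance run would look roughly like:

export NVDS_TEST3_PERF_MODE=1
./deepstream-test3-app file:///path/to/video1.mp4 file:///path/to/video2.mp4

with one URI per input stream, so the FPS can be compared as more sources are added.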

Is your board set to the max power mode before you run the performance test? Performance — DeepStream 6.1.1 Release documentation
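
For example (the exact mode ID depends on the board and JetPack version; the available modes are listed in /etc/nvpmodel.conf):

sudo nvpmodel -m <max-power-mode-id>
sudo jetson_clocks

Locking the clocks this way rules out power/clock scaling as the cause of a throughput drop.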

Hi,
I have done everything you mentioned. Can you please explain how to set the number of batches and the batch size when generating the calibration file, if I have 8500 images?
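
As a rough illustration only, not an official recommendation: the calibrator consumes batches × batch_size images, so that product should not exceed the number of available calibration images. With 8500 images one could, for example, pass --batch_size 8 --batches 1000 to the export step, which would use 8000 of the 8500 images for INT8 calibration.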

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Are you generating the calibration file with TensorRT? If so, please raise a topic in the TensorRT forum. Latest Deep Learning (Training & Inference)/TensorRT topics - NVIDIA Developer Forums
