DeepStream Python app fails at NMS plugin for SSD

We are using SSD ResNet-18 from TLT. The model was trained on a T4 and deployed in a container with DeepStream 5.1 and TRT-OSS, where it executed properly.

After loading the same container on a V100 machine, we are facing the error below.

The non-TLT-based DeepStream Python apps execute properly; the error occurs only with the app that uses the NMS plugin.

==========================================
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Assertion failed: status == STATUS_SUCCESS
/home/ubuntu/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp:230
Aborting…

Aborted

========================================

Are models specific to the GPU type, and do they need to be retrained?

• V100 GPU
• DeepStream Version 5.1
• TensorRT 7.2.3
• NVIDIA GPU Driver Version 460.32

Edited:

Could this be an issue because the TensorRT lib was built for the T4 architecture?
Any thoughts on how to proceed?

You can refer to deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to find your GPU arch and build the TRT OSS accordingly; a quick way to check the arch is sketched below.
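As a reference, here is a minimal sketch (assuming a Python environment with pycuda installed, which is common alongside TensorRT) that prints the compute capability of each visible GPU, which is the value used as GPU_ARCHS when rebuilding the TRT OSS plugin library (T4 is SM 75, V100 is SM 70):

```python
# Minimal sketch, assuming pycuda is available (common in TensorRT Python setups).
# Prints the compute capability of each visible GPU so you know which
# GPU_ARCHS value to use when rebuilding the TRT OSS plugins
# (e.g. T4 -> 75, V100 -> 70).
import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"GPU {i}: {dev.name()} -> compute capability {major}.{minor} "
          f"(GPU_ARCHS={major}{minor})")
```

A plugin library built only for the T4's arch (75) would likely need to be rebuilt with GPU_ARCHS=70 to run on the V100.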
