We are using SSD ResNet-18 from TLT. We trained the model on a T4 and deployed it in a container with DeepStream 5.1 and TRT-OSS, where it ran correctly.
After loading the same container on a V100 machine, we are facing the error below.
The non-TLT-based DeepStream Python apps run correctly; only the app that uses the NMS plugin fails.
==========================================
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Assertion failed: status == STATUS_SUCCESS
/home/ubuntu/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp:230
Aborting…
Aborted
========================================
Are models specific to the GPU type, and do they need to be retrained?
• V100 GPU
• DeepStream Version 5.1
• TensorRT 7.2.3
• NVIDIA GPU Driver Version 460.32
Edited:
Could this be an issue because the TensorRT lib was built for the T4 architecture?
Any thoughts on how to proceed?
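For context on what we have tried: TensorRT serializes engines per GPU architecture, so a sketch of the workaround we are considering is to delete the cached engine and let nvinfer regenerate it on the V100. The engine filename below is a hypothetical example, not our actual path:

```shell
# TensorRT engines are serialized per GPU architecture (SM version), so an
# engine built on a T4 (sm_75) will not deserialize on a V100 (sm_70).
# Removing the cached engine lets the nvinfer element rebuild it from the
# TLT model on the next run. ENGINE is a hypothetical placeholder path.
ENGINE="${ENGINE:-./ssd_resnet18.etlt_b1_gpu0_fp16.engine}"
rm -f "$ENGINE"
echo "Removed stale engine (if present): $ENGINE"
# Rerun the DeepStream app afterwards; the first run is slower while the
# engine is rebuilt for the current GPU.
```

If the nvinfer config pins `model-engine-file` to the old T4 engine, that entry would presumably also need to point at (or be regenerated as) an engine built on the V100.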