Compatibility of a TensorRT-optimized engine with deepstream-app

I’ve successfully run the deepstream-app at “deepstream_sdk_v4.0.1_jetson/sources/objectDetector_SSD” with the default SSD UFF file. (The example appears to use the ssd_inception_v2_coco TensorFlow frozen graph.)
I also have a TensorRT-optimized engine file for ssd_mobilenet_v2, built by following Demo #3 (SSD) in the jkjung-avt/tensorrt_demos repository (https://github.com/jkjung-avt/tensorrt_demos).
Basically, I want to use an SSD model in deepstream-app that has a different backbone than Inception and has been fine-tuned on my own dataset through transfer learning.
However, I got the following error message when I tried to use the engine with deepstream-app (by editing its config file):

```
deepstream-app: nvdsiplugin_ssd.cpp:72: FlattenConcat::FlattenConcat(const void*, size_t): Assertion `mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3' failed.
Aborted (core dumped)
```
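For reference, the only change I made was in the sample config, roughly as follows (an excerpt from config_infer_primary_ssd.txt; the engine file name is a placeholder for my own file):

```
[property]
# point deepstream-app at the pre-built engine instead of the default UFF model
model-engine-file=ssd_mobilenet_v2.engine
# the FlattenConcat plugin and the bbox parser still come from the sample library
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```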

I’m not sure whether I need to customize the plugin library provided in deepstream_sdk_v4.0.1_jetson/sources/objectDetector_SSD when I want to use a TensorRT-optimized engine file built from an inference model other than ssd_inception_v2_coco.
I’m not familiar with the DeepStream plugin structure, so I’d appreciate an explanation of the main cause of this problem and of what I should do to use an arbitrary TensorRT-optimized engine in the DeepStream pipeline.

Hi,
Please check if the following GitHub code helps:
https://github.com/AastaNV/TRT_object_detection

I succeeded in building a TRT engine file for ssd_mobilenet_v2 using main.py from the AastaNV/TRT_object_detection repository (https://github.com/AastaNV/TRT_object_detection).
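For context, main.py builds and serializes the engine roughly like this (a condensed sketch of the TRT 6-era UFF workflow used in that repository; the file names are placeholders). Note the ctypes line: the engine gets serialized against that standalone FlattenConcat build.

```python
import ctypes
import tensorrt as trt

# The standalone FlattenConcat plugin is loaded before building, so the
# serialized engine depends on this particular plugin implementation.
ctypes.CDLL('./libflattenconcat.so')

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28
    builder.max_batch_size = 1
    parser.register_input('Input', (3, 300, 300))
    parser.register_output('MarkOutput_0')
    parser.parse('ssd_mobilenet_v2.uff', network)  # placeholder file name
    engine = builder.build_cuda_engine(network)

# Serialize the engine to a standalone file for later deployment.
with open('ssd_mobilenet_v2.engine', 'wb') as f:   # placeholder file name
    f.write(engine.serialize())
```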

However, when I used that TRT engine file with the deepstream-app, the same error occurred:

```
deepstream-app: nvdsiplugin_ssd.cpp:72: FlattenConcat::FlattenConcat(const void*, size_t): Assertion `mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3' failed.
Aborted (core dumped)
```

Note that I did succeed in running deepstream-app with ssd_mobilenet_v2 by following the procedure in https://devtalk.nvidia.com/default/topic/1066088/deepstream-sdk/how-to-use-ssd_mobilenet_v2/post/5399649/#5399649.
I’m very curious what the difference is between the TRT engine file made by AastaNV/TRT_object_detection and the one made by the procedure in https://devtalk.nvidia.com/default/topic/1066088/deepstream-sdk/how-to-use-ssd_mobilenet_v2/post/5399649/#5399649.
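My current guess, purely as an illustration and not DeepStream’s actual code: the two procedures build the engine against different FlattenConcat implementations, so when deepstream-app deserializes my engine with its own FlattenConcat (the one in nvdsiplugin_ssd.cpp), it reads the wrong bytes for the concat axis. Something like:

```python
# Hypothetical sketch of a serialization-layout mismatch between two plugin builds.
import struct

def serialize_plugin_build_a(num_inputs, concat_axis):
    # Build A writes its fields as [num_inputs][concat_axis].
    return struct.pack('ii', num_inputs, concat_axis)

def deserialize_axis_plugin_build_b(blob):
    # Build B expects [concat_axis][num_inputs], so it reads the wrong field.
    concat_axis, _num_inputs = struct.unpack('ii', blob)
    return concat_axis

blob = serialize_plugin_build_a(num_inputs=6, concat_axis=1)
axis = deserialize_axis_plugin_build_b(blob)
# Mirrors the assertion in nvdsiplugin_ssd.cpp; fires here because axis == 6.
assert axis in (1, 2, 3), f"mConcatAxisID == {axis}"
```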

Hi,

Do you mean you followed this tutorial and still hit the axis error?
https://devtalk.nvidia.com/default/topic/1066088/deepstream-sdk/how-to-use-ssd_mobilenet_v2/post/5399649/#5399649

If your model is customized, you may need to update the class number.
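For example, in the objectDetector_SSD sample config that would be roughly (an excerpt; the values must match what your model actually outputs, and the label file name is a placeholder):

```
[property]
# total classes the detector outputs; COCO-trained SSD uses 91
# (including the background class) -- set this to your own class count
num-detected-classes=91
# label file with one label per line, matching your classes
labelfile-path=labels.txt
```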
Thanks.