I’ve successfully run the deepstream-app sample at “deepstream_sdk_v4.0.1_jetson/sources/objectDetector_SSD” with the default SSD UFF file. (It seems the example uses the ssd_inception_v2_coco TensorFlow frozen graph.)
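For reference, I ran the sample roughly like this (using the config file that ships with the sample, if I remember its name correctly):

```
cd deepstream_sdk_v4.0.1_jetson/sources/objectDetector_SSD
deepstream-app -c deepstream_app_config_ssd.txt
```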
I also have another TensorRT-optimized engine file for ssd_mobilenet_v2. (It was built through Demo #3: SSD in GitHub - jkjung-avt/tensorrt_demos: TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet.)
Basically, I want to use an SSD model in deepstream-app that has a different backbone than Inception and is fine-tuned on my own dataset through transfer learning.
However, I got the following error message when I tried to use that engine with deepstream-app (by editing its config file):
deepstream-app: nvdsiplugin_ssd.cpp:72: FlattenConcat::FlattenConcat(const void*, size_t): Assertion `mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3’ failed.
Aborted (core dumped)
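In case it helps, this is roughly the change I made to the nvinfer config of the sample (I believe the file is config_infer_primary_ssd.txt); the engine path below is just a placeholder for my actual file:

```
[property]
# ...other keys left as in the shipped config...
# point to my own ssd_mobilenet_v2 engine instead of the default UFF model
model-engine-file=/path/to/my_ssd_mobilenet_v2.engine
# the custom parser and plugin library from the sample are still used as-is
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```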
I’m not sure whether I need to customize the plugin library provided in deepstream_sdk_v4.0.1_jetson/sources/objectDetector_SSD when I want to use a TensorRT-optimized engine file that comes from an inference model other than ssd_inception_v2_coco.
I’m not familiar with the deepstream-app plugin structure, so I hope to get an explanation of the main cause of this problem and of what I should do to use an arbitrary TensorRT-optimized engine in the DeepStream pipeline.