MobileNetV2_SSD

So I’m trying to use TensorRT-converted detection models in a GStreamer pipeline via the gst-nvinfer plugin.

I’ve followed the steps in GitHub - AastaNV/TRT_object_detection to generate the UFF file for an ssd_mobilenet_v2_coco_2018_03_29 model.
That part works.

Now I want to use it with gst-nvinfer so I can run the deepstream-app with this model and see what’s what.
But there are so many parameters I just don’t get, for example:

net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
...
output-blob-names=NMS
#parse-bbox-func-name=NvDsInferParseCustomSSD
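
For context, here is the full config I’m experimenting with so far, pieced together from the objectDetector_SSD sample that ships with DeepStream 4. The file names and the custom-lib-path are just placeholders for my own setup, and some keys may well be wrong:

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
uff-file=frozen_inference_graph.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=91
interval=0
gie-unique-id=1
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=libnvdsinfer_custom_impl_ssd.so

Does that look roughly right, or am I mixing up keys from different DeepStream versions?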

How am I supposed to figure out the output-blob-names?
Should I provide a parse-bbox-func as well? I guess I should, so nvinfer can read the bounding boxes and put them into its metadata, right?
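
From reading nvdsinfer_custom_impl.h and the objectDetector_SSD sample, I think the parser I’d have to write looks roughly like the sketch below. The prototype comes from the header, but the layer name (“NMS”), the 7-floats-per-detection layout, and the hard-coded keepTopK and threshold are just my assumptions about the NMS plugin output, so please correct me if that’s off:

#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Sketch of a custom SSD bbox parser for gst-nvinfer.
 * Assumption: the "NMS" layer holds keepTopK detections of 7 floats each,
 * [imageId, classId, confidence, x1, y1, x2, y2], coordinates normalized to [0,1]. */
extern "C" bool NvDsInferParseCustomSSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    /* Find the detection output layer by name (the name is my assumption). */
    const NvDsInferLayerInfo *nmsLayer = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (layer.layerName && std::strcmp(layer.layerName, "NMS") == 0) {
            nmsLayer = &layer;
            break;
        }
    }
    if (!nmsLayer)
        return false;

    const float *det = static_cast<const float *>(nmsLayer->buffer);
    const int keepTopK = 100;       /* assuming the NMS plugin was built with keepTopK=100 */
    const float threshold = 0.3f;   /* arbitrary; should probably come from detectionParams */

    for (int i = 0; i < keepTopK; ++i, det += 7) {
        if (det[0] == -1.0f)        /* the NMS plugin seems to pad unused slots with -1 */
            break;
        if (det[2] < threshold)
            continue;

        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<unsigned int>(det[1]);
        obj.detectionConfidence = det[2];
        obj.left   = det[3] * networkInfo.width;
        obj.top    = det[4] * networkInfo.height;
        obj.width  = (det[5] - det[3]) * networkInfo.width;
        obj.height = (det[6] - det[4]) * networkInfo.height;
        objectList.push_back(obj);
    }
    return true;
}

If that’s roughly right, I suppose it gets compiled into the library pointed to by custom-lib-path, the way the sample builds libnvdsinfer_custom_impl_ssd.so?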

But the libflattenconcat.so from AastaNV/TRT_object_detection does not contain NvDsInferParseCustomSSD; only the sampleUffSSD has it. And the FlattenConcat code in https://github.com/dusty-nv/jetson-inference/blob/master/plugins/FlattenConcat.cpp is not the same as the one in sampleUffSSD, so if I convert my model with sampleUffSSD I run into the

assert(mConcatAxisID == 1 || mConcatAxisID == 2 || mConcatAxisID == 3)

issue.

I mean, I understand it’s not that clear, but that’s actually the issue… it’s all over the place: TensorRT plugins with the same name but different code, different versions of the same model, differences between the DeepStream 4 examples and the jetson-inference repo.

All I’m trying to do is use gst-nvinfer with my trained MobileNetV2-SSD network… and it just seems unachievable :/