Detectron2 Deepstream

I followed this GitHub link and was able to generate the converted ONNX file and the TensorRT engine file.
Even the inference works. (Converted ONNX and TensorRT engine for a Detectron2 model)

Now I wanted to try DeepStream, so I downloaded the deepstream:6.2-triton container, where the sample tests work.

The moment I try to use my converted ONNX model, this error is displayed.

I have cloned deepstream-python-apps inside my container, and the samples (deepstream-test1, deepstream-test2, deepstream-test1-usbcam, etc.) are all working fine.

I even modified my deepstream-test2 so that it works with usbcam.

Now I want to run my Detectron2 Mask R-CNN model with DeepStream.
Since I am very new to all this, I would appreciate any help or tips.

I will attach the converted ONNX file and also the config files, since those could be wrong too.

Thank You.

deepstream_test2_config_files.zip (6.7 KB)
converted.onnx

• Hardware Platform : deepstream:6.2-triton docker container (x86_64 machine, Ubuntu 20.04)
• DeepStream Version : 6.2
• TensorRT Version : 8.6.1

In your pgie config file, the setting for “proto-file” should be “model-engine-file”:

onnx-file=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted.onnx
proto-file=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted_b1_gpu0_fp16.engine
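
In other words, keeping the paths from your config, those two lines should read:

onnx-file=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted_b1_gpu0_fp16.engine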

Oh yes, I did that by mistake; it was model-engine-file before.

Even then, the error is the same. The error occurs while the TensorRT engine is being built.

If you succeed in generating an engine with trtexec, you can set model-engine-file to that engine path; DeepStream will then load the engine directly.
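
For example, something along these lines (the paths reuse the ones from the config above; --fp16 is only needed if you want an FP16 engine):

trtexec --onnx=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted.onnx \
        --saveEngine=/opt/nvidia/deepstream/deepstream-6.2/sources/detectron2/converted_b1_gpu0_fp16.engine \
        --fp16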

I was able to generate the TensorRT engine using trtexec, but it didn’t work.
I will try again.

It kind of worked but there is a new error now.

While I was searching about it, I came across this link.
There, the person solved it by using this:

parse-bbox-func-name=NvDsInferParseCustomONNX
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so

Now, since I am not using the ONNX file any longer, do I need to create some function manually, or...?

Any sort of advice is appreciated.

Your model is ONNX type. You need to implement parse-bbox-func-name because of “Name of the custom bounding box parsing function. If not specified, Gst-nvinfer uses the internal function for the resnet model provided by the SDK”.
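
For reference, the function named in parse-bbox-func-name has to match the NvDsInferParseCustomFunc prototype declared in nvdsinfer_custom_impl.h (shown here as it appears in DeepStream 6.2; check your local header):

typedef bool (* NvDsInferParseCustomFunc) (
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

The implementation is compiled into a shared library that custom-lib-path points to.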

I see.

I am assuming you mean this

I do not see NvDsInferParseCustomONNX, so maybe it is not available in the latest release.

Since I am working with Detectron2, my output is in this format:
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 6
0 INPUT kFLOAT input_tensor 3x800x800
1 OUTPUT kINT32 num_detections_box_outputs 1
2 OUTPUT kFLOAT detection_boxes_box_outputs 100x4
3 OUTPUT kFLOAT detection_scores_box_outputs 100
4 OUTPUT kINT32 detection_classes_box_outputs 100
5 OUTPUT kFLOAT detection_masks 100x28x28

So I guess I have to create a custom function and try it, right?

Yes. Please understand the meaning of the output layers first, then implement a corresponding parsing function. Please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp.
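
As a starting point, here is a minimal sketch of what such a parser could look like for the output layout printed above. The layer names come from the engine info in this thread; the box coordinate order ([x1, y1, x2, y2] in network-input pixels), the fixed 0.5 confidence threshold, and the function name itself are assumptions you will need to verify and adapt. The detection_masks output is ignored here, since a bbox parser only fills the object list.

#include <cstring>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomDetectron2(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    (void) networkInfo;      // unused: boxes are assumed to already be in input-pixel coordinates
    (void) detectionParams;  // per-class thresholds could be read from here instead of the fixed value below

    // Locate the output layers by the names shown in the engine info.
    const NvDsInferLayerInfo *numDet = nullptr, *boxes = nullptr,
                             *scores = nullptr, *classes = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, "num_detections_box_outputs")) numDet = &layer;
        else if (!strcmp(layer.layerName, "detection_boxes_box_outputs")) boxes = &layer;
        else if (!strcmp(layer.layerName, "detection_scores_box_outputs")) scores = &layer;
        else if (!strcmp(layer.layerName, "detection_classes_box_outputs")) classes = &layer;
    }
    if (!numDet || !boxes || !scores || !classes)
        return false;

    const int n = static_cast<const int *>(numDet->buffer)[0];
    const float *box = static_cast<const float *>(boxes->buffer);
    const float *score = static_cast<const float *>(scores->buffer);
    const int *cls = static_cast<const int *>(classes->buffer);

    for (int i = 0; i < n; ++i) {
        // Example fixed threshold; adjust or replace with per-class thresholds.
        if (score[i] < 0.5f)
            continue;

        // Assumed box layout: [x1, y1, x2, y2] in network-input pixel coordinates
        // (800x800 here); verify against the exporter you used.
        const float x1 = box[i * 4 + 0];
        const float y1 = box[i * 4 + 1];
        const float x2 = box[i * 4 + 2];
        const float y2 = box[i * 4 + 3];

        NvDsInferObjectDetectionInfo obj{};
        obj.classId = static_cast<unsigned int>(cls[i]);
        obj.detectionConfidence = score[i];
        obj.left = x1;
        obj.top = y1;
        obj.width = x2 - x1;
        obj.height = y2 - y1;
        objectList.push_back(obj);
    }
    return true;
}

// Checks at compile time that the function matches the expected parser prototype.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomDetectron2);

Once this compiles into a shared library, point the pgie config at it with parse-bbox-func-name=NvDsInferParseCustomDetectron2 and custom-lib-path=<path to the .so>, similar to the snippet quoted earlier in the thread.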

OK, I will look into it and then try to create the custom parser function.

Thank You.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.