I followed this GitHub link and was able to generate the converted ONNX file and the TensorRT engine file.
Even the inference works. (Converted ONNX and TensorRT engine for a detectron2 model.)
Now I wanted to try DeepStream, so I downloaded the deepstream:6.2-triton container, where the sample tests work.
The moment I try to use my converted ONNX model, this error is displayed.
I have cloned deepstream-python-apps inside my container, and the samples, i.e., deepstream-test1, deepstream-test2, deepstream-test1-usbcam, etc., are all working fine.
I even modified my deepstream-test2 so that it works with a USB camera.
Now I want to run my detectron2 mask_rcnn model with DeepStream.
Since I am very new to all this, I would appreciate any help or tips.
I will attach the converted ONNX files and also the config files, since those could be wrong too.
Your model is an ONNX model. You need to implement a custom parser and set parse-bbox-func-name, because, as the documentation says: “Name of the custom bounding box parsing function. If not specified, Gst-nvinfer uses the internal function for the resnet model provided by the SDK”.
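For example, here is a minimal sketch of the relevant keys in the nvinfer config’s [property] group. The model file names, class count, parsing function name, and custom library path below are placeholders; they must match your own model and the parser library you build:

[property]
# placeholder file names; point these at your converted model
onnx-file=converted_model.onnx
model-engine-file=converted_model.onnx_b1_gpu0_fp16.engine
# 0 = detector
network-type=0
# set to the number of classes your detectron2 model was trained on
num-detected-classes=80
# without the next two keys, Gst-nvinfer falls back to its built-in resnet parser
parse-bbox-func-name=NvDsInferParseCustomDetectron2
custom-lib-path=/path/to/libnvds_infercustomparser_detectron2.so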
I do not see NvDsInferParseCustomONNX, so maybe it is not available in the latest release.
Since I am working with detectron2, my output is in this format:
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 6
0 INPUT kFLOAT input_tensor 3x800x800
1 OUTPUT kINT32 num_detections_box_outputs 1
2 OUTPUT kFLOAT detection_boxes_box_outputs 100x4
3 OUTPUT kFLOAT detection_scores_box_outputs 100
4 OUTPUT kINT32 detection_classes_box_outputs 100
5 OUTPUT kFLOAT detection_masks 100x28x28
So I guess I have to create a custom parsing function and try it, right?
Yes. Please understand the meaning of the output layers first, then implement a corresponding parsing function. Please refer to /opt/nvidia/deepstream/deepstream-6.2/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp.
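For illustration, here is a minimal sketch of such a parser for the output layers listed above. The function name NvDsInferParseCustomDetectron2 is a placeholder, the 0.5 confidence threshold and the [x1, y1, x2, y2] box ordering in input-image pixels are assumptions you need to verify against your exporter, and the detection_masks layer is ignored here:

#include <algorithm>
#include <cstring>
#include "nvdsinfer_custom_impl.h"

/* Parses num_detections_box_outputs, detection_boxes_box_outputs,
 * detection_scores_box_outputs and detection_classes_box_outputs into
 * DeepStream object metadata. detection_masks is not handled here. */
extern "C" bool NvDsInferParseCustomDetectron2(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    /* Find each output layer by name instead of relying on its order. */
    const NvDsInferLayerInfo *numDets = nullptr, *boxes = nullptr,
                             *scores = nullptr, *classes = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, "num_detections_box_outputs")) numDets = &layer;
        else if (!strcmp(layer.layerName, "detection_boxes_box_outputs")) boxes = &layer;
        else if (!strcmp(layer.layerName, "detection_scores_box_outputs")) scores = &layer;
        else if (!strcmp(layer.layerName, "detection_classes_box_outputs")) classes = &layer;
    }
    if (!numDets || !boxes || !scores || !classes)
        return false;

    int n = *reinterpret_cast<const int *>(numDets->buffer);
    const float *boxData = reinterpret_cast<const float *>(boxes->buffer);
    const float *scoreData = reinterpret_cast<const float *>(scores->buffer);
    const int *classData = reinterpret_cast<const int *>(classes->buffer);

    for (int i = 0; i < n; i++) {
        if (scoreData[i] < 0.5f) /* assumed threshold; detectionParams also carries per-class thresholds */
            continue;

        /* Assumed [x1, y1, x2, y2] in pixels of the 800x800 network input;
         * swap or rescale here if your export uses a different convention. */
        float x1 = boxData[i * 4 + 0];
        float y1 = boxData[i * 4 + 1];
        float x2 = boxData[i * 4 + 2];
        float y2 = boxData[i * 4 + 3];

        NvDsInferObjectDetectionInfo obj;
        obj.classId = static_cast<unsigned int>(classData[i]);
        obj.detectionConfidence = scoreData[i];
        obj.left = std::max(0.0f, x1);
        obj.top = std::max(0.0f, y1);
        obj.width = std::max(0.0f, x2 - x1);
        obj.height = std::max(0.0f, y2 - y1);
        objectList.push_back(obj);
    }
    return true;
}

/* Verify the function matches the prototype nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomDetectron2);

You would compile this into a shared library, then point custom-lib-path at the library and parse-bbox-func-name at the function. Note that this only produces the boxes; to also get the 100x28x28 instance masks into the metadata, you would need nvinfer’s instance-mask parsing path, which uses a different function prototype and config key and is beyond this sketch.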