DeepStream Mask R-CNN bad performance

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1.6

I have trained a Mask R-CNN model with this Matterport implementation. The model configuration was modified to detect 3 classes (background + 2 "real" classes) and to infer over 448x448 px images.
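Roughly, the configuration changes look like this (a minimal sketch based on Matterport's mrcnn.config.Config class; the attribute names come from that repo and the values mirror what I described, not my exact training file):

from mrcnn.config import Config

class CustomConfig(Config):
    NAME = "custom"
    # 3 classes total: background + 2 "real" classes
    NUM_CLASSES = 1 + 2
    # infer over 448x448 px images
    IMAGE_MIN_DIM = 448
    IMAGE_MAX_DIM = 448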

I converted this model to a .uff model following these instructions on a Google Colab. (In fact, I used a different commit of this repo, but I guess it does not matter.) The conversion was done by modifying some parameters so as to be consistent with the number of classes and the image shape.

The problem is that when I run the .uff file (with my custom DeepStream pipeline), the detection results are very poor.

This is the expected output of the model (this is what I get on my host PC with the .h5 model):

This is what I get with my custom DeepStream pipeline and the .uff model:

Could you give me any kind of hint on how to make a correct model conversion?

How could I debug the model conversion?

If needed, I can share my models, my configs, my Google Colab notebooks for the conversion, or whatever else is useful.

Thanks!

Hi @aurelm95
I think you can debug with the steps below:

1. Check if the .uff model generates the same output as the .h5 model given the same input.

1.1 Dump the input and output of the .h5 inference.
1.2 Feed that same input to trtexec with the .uff model (see the options below, and the sketch after this step) and check whether it produces almost the same output as the .h5 inference.

/usr/src/tensorrt/bin/trtexec
...
  --uffInput=<name>,X,Y,Z     Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
  --uffNHWC                   Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)
...
  --loadInputs=spec           Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
                              Input values spec ::= Ival[","spec]
                                           Ival ::= name":"file
...
  --dumpOutput                Print the output tensor(s) of the last inference iteration (default = disabled)
...

If this is confirmed, it indicates the uff model you exported is good.
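
For example, a rough sketch of steps 1.1 and 1.2 (assuming Matterport's Python API; the file names, node names, and exact trtexec options are placeholders you will need to adapt to how your .uff was exported):

import numpy as np
# assumes `model` is a loaded Matterport Mask R-CNN in inference mode
# and `image` is a 448x448 RGB numpy array

# 1.1: dump the exact preprocessed input and the .h5 outputs
molded, image_metas, windows = model.mold_inputs([image])  # Matterport helper
molded.astype(np.float32).tofile("input.bin")
results = model.detect([image], verbose=0)[0]
np.save("h5_rois.npy", results["rois"])
np.save("h5_masks.npy", results["masks"])

# 1.2: feed the same binary input to trtexec + the .uff model, e.g.:
#   /usr/src/tensorrt/bin/trtexec --uff=mrcnn.uff \
#       --uffInput=input_image,3,448,448 --output=<output_node_name> \
#       --loadInputs='input_image:input.bin' --dumpOutput
# (input/output node names are placeholders; add --uffNHWC if the
#  exported graph keeps NHWC inputs)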

2. Debug the DeepStream + UFF accuracy, referring to DeepStream SDK FAQ - #21 by mchi.

Thanks for the reply. I am able to dump the output of the .uff model given specific binary data. However, I don't really know how to do this with the model from Matterport's implementation. I'll keep working on this.

Despite that, based on the videos I shared in the original question, I strongly believe that the outputs will not be similar. I guess this is because the conversion from .h5 to .uff was not done correctly.

Is there any documentation about what I should modify in these files in order to convert a model different from the one trained on the COCO dataset?

If there is no such documentation, how should I start figuring out what I need to modify?

Thanks!

Actually, TensorRT is deprecating UFF and Caffe, as noted in the Release Notes :: NVIDIA Deep Learning TensorRT Documentation.
I would recommend exporting it to ONNX instead of UFF. There is a public API to export to an ONNX model.
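
For example, a rough sketch with tf2onnx (assuming the .h5 graph is first frozen to a TensorFlow GraphDef; the node names below are placeholders, not the actual ones in your model):

import tensorflow as tf
import tf2onnx

with tf.io.gfile.GFile("mask_rcnn_frozen.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# convert the frozen graph to ONNX; adapt input/output names to your model
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input_image:0"],                     # placeholder
    output_names=["mrcnn_detection:0", "mrcnn_mask:0"],  # placeholders
    opset=13,
    output_path="mask_rcnn.onnx",
)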

Okay, I will try that!

By the way, if I now build the .engine model from the .onnx file instead of the .uff one, will my custom mask parser function work, or will I need to re-program it?

For an ONNX model exported from the same source model as the UFF one, the parser can be leveraged.


Since the UFF parser is deprecated, will NVIDIA release an updated method for converting Matterport's Mask R-CNN model to a .engine file?

Thanks!

As I said, convert to ONNX; TensorRT can build an engine from ONNX if all the ops are supported by TensorRT.
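
For example, a rough sketch with the TensorRT Python API (paths are placeholders; this only checks that the engine builds, it is not your exact pipeline):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="mask_rcnn.onnx", engine_path="mask_rcnn.engine"):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB workspace
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)

Alternatively, trtexec --onnx=<model.onnx> does the same check from the command line, and in DeepStream the gst-nvinfer onnx-file property lets the plugin build the engine itself.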

