Preprocessing for MaskRCNN

• Hardware (RTX2070)
• Network Type (Mask_rcnn)
• TLT Version (TAO v3.21.08-py3)

Hello,

I’ve been able to successfully use the TAO Toolkit to train MaskRCNN on our own data. However, at the moment we fail to get good results with the exported TensorRT engine. The tao mask_rcnn inference command creates correct output with good detections, but the same engine loaded directly with the TensorRT C++ SDK doesn’t seem to give any correct results. This is not a problem with TensorRT itself; in another project we previously used the C++ SDK successfully with an engine created from an ONNX model.

I’ve seen in this example of DeepStream integration, Training Instance Segmentation Models Using Mask R-CNN on the NVIDIA TAO Toolkit | NVIDIA Technical Blog, that there are some values in the config files corresponding to what seems to be a normalization; however, there is no explanation on this subject.

net-scale-factor=0.017507
offsets=123.675;116.280;103.53
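If I’m reading these right, they would correspond to a per-channel normalization of the form y = net-scale-factor * (x - offset): the offsets match the usual ImageNet channel means scaled to [0, 255] (0.485, 0.456, 0.406 × 255 = 123.675, 116.28, 103.53), and 1 / 0.017507 ≈ 57.12, which is one of the ImageNet channel stds on the same scale. But I could not find this confirmed anywhere for Mask R-CNN.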

So, is there any documentation about the preprocessing occurring in the TAO Toolkit for Mask R-CNN? As the TAO container only contains .pyc files, it is not even possible to search the code for what happens during the preprocessing step. And that makes it impossible to integrate the generated TensorRT engine.

Please refer to Interpreting output of MaskRCNN from TLT to TRT - #7 by Morganh

Implementing this preprocessing seems to have solved the issue. However, that kind of processing needs to be documented somewhere. Not having this kind of info openly available is a huge drawback for anyone wanting to make use of TAO, and potentially shows it as not mature enough for industrial use cases.
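For reference, here is a minimal sketch of the normalization I ended up implementing before feeding the engine (C++; it assumes the engine expects planar RGB float input, that the image has already been resized to the network input resolution, and that the offsets are given in R;G;B order — buffer names are placeholders):

#include <cstdint>
#include <vector>

// y = net-scale-factor * (x - offset), applied per channel.
// Values taken from the config snippet above; kOffsets assumed to be in R,G,B order.
std::vector<float> preprocess(const uint8_t* bgr, int width, int height)
{
    constexpr float kScale = 0.017507f;                          // net-scale-factor
    constexpr float kOffsets[3] = {123.675f, 116.280f, 103.53f}; // per-channel means
    std::vector<float> chw(3 * width * height);                  // planar RGB output
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* px = &bgr[(y * width + x) * 3];       // interleaved BGR pixel
            for (int c = 0; c < 3; ++c) {
                // px[2 - c] swaps BGR -> RGB while filling the planar layout.
                chw[c * width * height + y * width + x] =
                    kScale * (static_cast<float>(px[2 - c]) - kOffsets[c]);
            }
        }
    }
    return chw;
}

Note that this only covers the normalization; the resize/pad to the network input resolution still has to match whatever the exporter baked into the engine.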

The preprocessing is described in the DeepStream config files. For example,
deepstream_tao_apps/pgie_peopleSegNetv2_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)

Yes, we are planning to expose this in triton apps.
Currently, detectnet_v2 and classification are available in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton (from Integrating TAO CV Models with Triton Inference Server — TAO Toolkit 3.22.05 documentation)
LPRnet is available in NVIDIA-AI-IOT/tao-toolkit-triton-apps at dev-morgan/add-lprnet-triton (github.com)

