• Hardware (RTX2070)
• Network Type (Mask_rcnn)
• TLT Version (TAO v3.21.08-py3)
I’ve been able to successfully use the TAO Toolkit to train MaskRCNN on our own data. However, at the moment we fail to get good results with the exported TensorRT engine. The `tao mask_rcnn inference` command produces correct output with good detections, but the same engine loaded directly with the TensorRT C++ SDK doesn’t seem to give any correct results. This is not a problem with TensorRT itself: in another project we previously used the C++ SDK successfully with an engine created from an ONNX model.
I’ve seen in this example of DeepStream integration https://developer.nvidia.com/blog/training-instance-segmentation-models-using-maskrcnn-on-tao-toolkit/ that there are some values in the config files that appear to correspond to a normalization step, but there is no explanation of them.
So, is there any documentation about the preprocessing that occurs in the TAO Toolkit for Mask R-CNN? Since the TAO container only contains .pyc files, it is not even possible to search the code to see what happens during the preprocessing step, and that makes it impossible to integrate the generated TensorRT engine.