tao-converter failed to export the MaskRCNN model to TensorRT

• Hardware: RTX 2080 Ti
• Network Type: Mask_rcnn

I am training the model on an RTX 3090 and want to deploy the TensorRT engine on an RTX 2080 Ti.

Command:

./tao-converter -k nvidia_tlt  -d 3,1344,1344 -o generate_detections,mask_fcn_logits/BiasAdd -e ./trt.fp16.engine -t fp16 -i nchw -m 8 ./model.step-250000.etlt

Error:

[INFO] [MemUsageChange] Init CUDA: CPU +322, GPU +0, now: CPU 333, GPU 3243 (MiB)
[INFO] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 333 MiB, GPU 3243 MiB
[INFO] [MemUsageSnapshot] End constructing builder kernel library: CPU 468 MiB, GPU 3277 MiB
[ERROR] UffParser: Validator error: pyramid_crop_and_resize_box: Unsupported operation _MultilevelCropAndResize_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] 4: [network.cpp::validate::2633] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine

I am sure that the key is correct and that tao-converter is installed correctly, because I have successfully converted other models with it.

This is caused by a missing plugin.
See the MaskRCNN — TAO Toolkit 3.22.05 documentation: MaskRCNN requires the generateDetectionPlugin, multilevelCropAndResizePlugin, resizeNearestPlugin, and multilevelProposeROI plugins, which are available in the TensorRT open source (OSS) repo.
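As a quick sanity check (a sketch, not an official procedure), you can look for the plugin named in the error, MultilevelCropAndResize_TRT, inside the libnvinfer_plugin.so that tao-converter actually loads. The library path below assumes a default x86 Ubuntu package install and may differ on your system:

# Find which libnvinfer_plugin.so the dynamic loader resolves
ldconfig -p | grep libnvinfer_plugin
# Look for the TAO-specific plugin inside it (path is an assumption; adjust for your install)
strings /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so | grep MultilevelCropAndResize

If the second command returns nothing, you are linking against a stock library that does not include the TAO-specific plugins, and you need the OSS build described below.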

Please refer to deepstream_tao_apps/TRT-OSS/x86 at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub to build the new plugin library (libnvinfer_plugin.so), as sketched below.
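For reference, a minimal sketch of that build on x86. The branch name, GPU_ARCHS value, version suffix (8.2.1), and paths below are assumptions; they must match your installed TensorRT version, so follow the exact values given in the README linked above:

# Clone TensorRT OSS at the branch matching your installed TensorRT (branch is an assumption)
git clone -b release/8.2 https://github.com/NVIDIA/TensorRT.git
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS=75 targets Turing, i.e. the RTX 2080 Ti
cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)
# Back up the stock library, install the rebuilt one, and refresh the linker cache
# (the .so version suffix is an assumption; use the one your TensorRT ships)
sudo cp /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.1 ${HOME}/libnvinfer_plugin.so.8.2.1.bak
sudo cp out/libnvinfer_plugin.so.8.2.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8.2.1
sudo ldconfig

After that, re-run the tao-converter command from the first post; the UFF parser should then find MultilevelCropAndResize_TRT in the plugin registry and the engine build should proceed.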
