MaskRCNN conversion to a TRT engine fails with TRT 8.0.1 when using the tao-converter binary for Jetson JetPack 4.6

• Hardware (Jetson AGX – JetPack 4.6 → NVIDIA Volta → Ubuntu 18.04)
• Network Type (MaskRCNN)
• TLT Version: TAO Toolkit 3.0-21.11
• How to reproduce the issue?

I am using JetPack 4.6, which ships with TensorRT 8.0.1. A tao-converter binary is available for this release, but it fails when I try to convert MaskRCNN.

I downloaded the tao-converter binary for JetPack 4.6 from this link: https://developer.nvidia.com/jp46-20210820t231431z-001zip

I ran the binary with the following parameters:

./tao-converter -k nvidia_tlt \
  -d 3,448,832 \
  -i nchw \
  -t fp16 \
  -b 1 \
  -m 1 \
  -o generate_detections,mask_fcn_logits/BiasAdd \
  -e ../trtis_model_repo_sample_1/maskrcnn/1/maskrcnn_model.plan \
  ./models/maskrcnn/model.step-10200.etlt

It crashes with this error:

[INFO] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 371, GPU 14886 (MiB)
[ERROR] UffParser: Validator error: pyramid_crop_and_resize_mask: Unsupported operation _MultilevelCropAndResize_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] 4: [network.cpp::validate::2411] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)

The same model converts successfully with TensorRT 7, so it seems the plugins required for MaskRCNN are only available in the TensorRT 7 builds.
Am I missing some step to convert a MaskRCNN TAO model with TensorRT 8 on JetPack 4.6?
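One way to narrow this down is to check whether the installed plugin library actually contains the MaskRCNN plugins. This is a hypothetical check using `strings`/`grep` (the library path is taken from the workaround below; the plugin names `MultilevelCropAndResize` and `GenerateDetection` are assumptions based on the parser error above):

```shell
# Library path as shipped by JetPack 4.6 on Jetson (aarch64).
PLUGIN_LIB=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1

# Search the library's strings for the MaskRCNN plugin names; if nothing
# matches, the installed build lacks the TAO MaskRCNN plugins.
if [ -f "$PLUGIN_LIB" ]; then
    strings "$PLUGIN_LIB" | grep -E 'MultilevelCropAndResize|GenerateDetection' \
        || echo "MaskRCNN plugins not found in $PLUGIN_LIB"
fi
```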

Thank you!

Please back up your /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1 and then try to use the libnvinfer_plugin.so.8.0.1 mentioned in
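As a sketch of that workaround (the library path comes from the reply above; that the replacement libnvinfer_plugin.so.8.0.1 sits in the current directory is an assumption), run on the Jetson:

```shell
# Stock plugin library shipped with JetPack 4.6 (path from the reply above).
PLUGIN=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
BACKUP="$PLUGIN.bak"

if [ -f "$PLUGIN" ]; then
    sudo cp "$PLUGIN" "$BACKUP"                     # keep the stock JetPack copy
    sudo cp ./libnvinfer_plugin.so.8.0.1 "$PLUGIN"  # assumed download location of the replacement build
    sudo ldconfig                                   # refresh the dynamic linker cache
fi
```

After replacing the library, re-run the tao-converter command; the replacement build is expected to register the MaskRCNN plugins the UFF parser was missing.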

