Please provide the following information when requesting support.
• Hardware (T4/V100/Xavier/Nano/etc): 1x Nvidia A5000 GPU
• Network Type: Mask-RCNN
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here): tao 5.0
The Mask R-CNN export function does not create an ONNX file, even though the mask_rcnn.ipynb example notebook states that it should. I only get a .uff file as output. For my project I need to run inference via ONNX, and since ONNX export was a big part of the TAO 5.0 marketing, it was the reason I tried TAO 5.0.
Is there an option I need to enable for Mask R-CNN ONNX export? Thanks!
It is still a .uff model. Use the .uff model and then follow TRTEXEC with Mask RCNN - NVIDIA Docs to generate a TensorRT engine.
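As a rough sketch, building a TensorRT engine from the exported .uff file with trtexec looks like the following. The tensor names (`Input`, `generate_detections`, `mask_fcn_logits/BiasAdd`), the 3,576,960 input shape, and the file names are assumptions typical of a TAO Mask R-CNN export, not taken from this thread; verify them against the linked docs and your own model.

```shell
# Build a TensorRT engine from the exported .uff Mask R-CNN model.
# NOTE: tensor names and the input shape below are assumptions for a
# typical TAO Mask R-CNN export -- check them against your model/docs.
trtexec --uff=model.uff \
        --uffInput=Input,3,576,960 \
        --output=generate_detections \
        --output=mask_fcn_logits/BiasAdd \
        --saveEngine=model.engine \
        --fp16   # optional: build a half-precision engine
```

The resulting `model.engine` can then be loaded by the TensorRT runtime for GPU inference.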
Ok, our software needs the model in ONNX format so that we can decide whether inference runs on CPU or GPU. Is there any way to convert it from .uff to ONNX? Thanks
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
Currently, Mask_rcnn does not support export to an ONNX file yet. We suggest using trtexec to generate a TensorRT engine for running inference on GPU.