How to convert a UFF model to ONNX format

Description

Looking for a way to convert a UFF model to ONNX format.

Environment

TensorRT Version: 7.2.0
GPU Type: V100
Nvidia Driver Version:
CUDA Version: 12.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.10
TensorFlow Version (if applicable): 2.0
PyTorch Version (if applicable): 1.10
Baremetal or Container (if container which image + tag):

Hi, I am trying to convert a UFF model to ONNX format. Can you let me know the process or procedure? I do not have the base TensorFlow / PyTorch model with me. The model architecture is a customized object detection model.

Thanks in advance

Load the model in PyTorch, then use the following export call:

import torch

# Dummy input matching the model's expected input shape.
arr = torch.rand((200, 200)).cuda()
torch.onnx.export(model,                 # model must already be loaded and on the GPU
                  args=(arr,),           # inputs must be passed as a tuple
                  f=PATH_OUTPUT_ONNX,    # destination path for the .onnx file
                  input_names=['arrInput'],
                  output_names=['arrOut'],
                  export_params=True,
                  opset_version=16)
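
After exporting, a quick sanity check with the onnx package (assuming it is installed) can confirm the exported graph is well formed:

import onnx

# Load the exported model and run the structural checker.
onnx_model = onnx.load(PATH_OUTPUT_ONNX)
onnx.checker.check_model(onnx_model)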

Hi,

As mentioned previously, I do not have the PyTorch model with me. I only have the UFF model, and I want to convert UFF → ONNX.

Hi,
The UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, hence we request you to try the ONNX parser.
Please check the below link for the same.
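
For reference, a minimal sketch of loading an ONNX model with TensorRT's ONNX parser in Python (the file name model.onnx is a placeholder):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# The ONNX parser requires an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        # Print parser errors if the model fails to load.
        for i in range(parser.num_errors):
            print(parser.get_error(i))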

Thanks!

But this example is for when I already have an ONNX model. I only have a UFF model, so either I have to convert it to TensorRT and then to ONNX, or directly to ONNX. Is there any example you can share which performs this process?

Hi,

Please try tf2onnx in case it helps you (UFF → .pb → ONNX).
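
If you can recover a frozen TensorFlow GraphDef (.pb) from the model, tf2onnx can convert it from the command line. A sketch, where model.pb, model.onnx, and the tensor names input:0 / output:0 are placeholders for your own files and graph tensors:

# Convert a frozen TensorFlow GraphDef to ONNX with tf2onnx.
python -m tf2onnx.convert --graphdef model.pb \
                          --output model.onnx \
                          --inputs input:0 \
                          --outputs output:0 \
                          --opset 13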

If you still need further assistance, please reach out to the ONNX-related platform for better help.

Thank you.