How to convert a PyTorch model with multiple parallel inputs to TensorRT?

Description

At present, I have only found single-input examples, e.g. GitHub - NVIDIA-AI-IOT/torch2trt: An easy to use PyTorch to TensorRT converter.
However, the network I need to convert (https://github.com/autonomousvision/transfuser) has multiple parallel inputs, and converting it with the conventional method failed. The network structure is shown in the figure below. How should I convert it?

Environment

TensorRT Version: 8.4
GPU Type: Jetson NX
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Ubuntu 20.2
Python Version (if applicable): python3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. validating your model with the below snippet

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. running your model with the trtexec command
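For a model with multiple inputs, trtexec lets you specify a shape per input by name. A hypothetical invocation, assuming the ONNX model was exported with inputs named "image" and "lidar" (adjust the file name, input names, and shapes to match your model):

```shell
trtexec --onnx=two_input_net.onnx \
        --shapes=image:1x3x64x64,lidar:1x1x64x64 \
        --saveEngine=two_input_net.engine \
        --verbose
```

The --shapes flag takes a comma-separated list of name:shape pairs, so each parallel input gets its own shape.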

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!