Cross-compiling onnx2tensorrt to use plugins with DriveOS 5.2.0’s TensorRT 6.3.1

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version

Target Operating System

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)

SDK Manager Version

Host Machine Version
native Ubuntu 18.04


I am working on deploying an ONNX model to a Xavier machine. Since our original model contains some custom operators, we need to implement plugins to run it. We have tested this in an x86 environment with the TensorRT 6.0 open-source software: by implementing the plugins in the ONNX parser (onnx2tensorrt) and recompiling it, we managed to convert our ONNX model to an x86 TensorRT engine.

However, we can’t find a suitable onnx2tensorrt for the arm64 TensorRT 6.3.1 shipped with DriveOS 5.2.0. We are working on modifying the open-source onnx2tensorrt and cross-compiling it for use on the Xavier machine, but we have run into some problems.
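For reference, our cross-compile attempt looks roughly like the sketch below. The toolchain path, TensorRT install path, and branch name are placeholders/assumptions, not the actual DRIVE OS layout; `TENSORRT_ROOT` is the hint variable the onnx-tensorrt CMake build uses to locate headers and libraries.

```shell
# Sketch: cross-compiling the open-source onnx-tensorrt parser for aarch64.
# All paths below are placeholders -- adjust them to your installation.

TOOLCHAIN=/usr/bin/aarch64-linux-gnu-g++        # aarch64 cross compiler (assumed location)
TRT_ROOT=/path/to/drive-os/tensorrt-6.3.1       # target (arm64) TensorRT headers + libs

# Branch name is from memory; check the repo for the matching release branch.
git clone --branch 6.0 https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt && mkdir build && cd build

cmake .. \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
  -DCMAKE_CXX_COMPILER="${TOOLCHAIN}" \
  -DTENSORRT_ROOT="${TRT_ROOT}"
make -j"$(nproc)"
```

This is only a build-configuration sketch; linking against the arm64 TensorRT 6.3.1 libraries from the DRIVE OS SDK is the part where we hit problems.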

I’m wondering: is there an onnx2tensorrt open-source project for TensorRT 6.3.1? Is there a more elegant way to deal with custom ops in an ONNX model?

Thank you.

Can anyone help?

Dear @user42507,
There is no open-source TRT 6.3.1 release. Could you share which layers needed a plugin implementation?

There are two kinds of operations we want to implement as plugins:

  1. A bilinear upsample layer with specialized quantization at every step
  2. Basic post-processing for detection (e.g. generating bboxes with CUDA), keypoint detection (e.g. generating keypoints from heatmaps with CUDA), and segmentation (e.g. a max operation along the channel axis)

Can we modify the open-source ONNX parser for TensorRT 6.0.1 to work with TensorRT 6.3.1?

Dear @user42507,
Yes, please try that and let us know if you see any issues.