DenseFusion: unable to convert exported PyTorch model to TensorRT

I am currently trying to run the DenseFusion model on my Jetson AGX Xavier with TensorRT in order to improve inference time. Here is the git repo for your reference.

The only remaining task is to convert PoseNet, the PyTorch sub-model of DenseFusion, to TensorRT to speed up inference on my Xavier. Here is my conversion code for your reference.
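
In outline, my export step does something like the following. This is only a simplified sketch, not my exact script: the weight path, input shapes, and output names below are placeholders, and the opset matches the value (11) reported by trtexec later in this thread.

import torch
from lib.network import PoseNet  # from the DenseFusion repo

num_points, num_objects = 500, 13          # placeholder values
estimator = PoseNet(num_points=num_points, num_obj=num_objects)
estimator.load_state_dict(torch.load('pose_model.pth', map_location='cpu'))
estimator.eval()

# PoseNet.forward takes the cropped RGB patch, the masked point cloud,
# the chosen pixel indices, and the object class index.
img    = torch.randn(1, 3, 80, 80)                          # placeholder crop size
points = torch.randn(1, num_points, 3)
choose = torch.zeros(1, 1, num_points, dtype=torch.int64)
idx    = torch.zeros(1, 1, dtype=torch.int64)

torch.onnx.export(
    estimator,
    (img, points, choose, idx),
    'dense-estimator.onnx',
    opset_version=11,
    input_names=['img', 'points', 'choose', 'idx'],
    output_names=['rot', 'trans', 'conf', 'emb'],   # placeholder names
)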

I was able to export the ONNX file successfully; however, when I tried to convert from ONNX to TensorRT, it showed an error:
RuntimeError: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow.

Does anyone have an idea how to overcome this issue?

Hi,

Have you tried running the ONNX model with trtexec, like below?

/usr/src/tensorrt/bin/trtexec --onnx=[file]

Thanks

Hi,
Thanks for your reply.
I did try running trtexec, but it also failed. Here is the error output:

&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/tomo/Desktop/thinh/pose-net/densefusion_training/Conversion/dense-estimator.onnx
[11/25/2020-13:48:36] [I] === Model Options ===
[11/25/2020-13:48:36] [I] Format: ONNX
[11/25/2020-13:48:36] [I] Model: /home/tomo/Desktop/thinh/pose-net/densefusion_training/Conversion/dense-estimator.onnx
[11/25/2020-13:48:36] [I] Output:
[11/25/2020-13:48:36] [I] === Build Options ===
[11/25/2020-13:48:36] [I] Max batch: 1
[11/25/2020-13:48:36] [I] Workspace: 16 MB
[11/25/2020-13:48:36] [I] minTiming: 1
[11/25/2020-13:48:36] [I] avgTiming: 8
[11/25/2020-13:48:36] [I] Precision: FP32
[11/25/2020-13:48:36] [I] Calibration: 
[11/25/2020-13:48:36] [I] Safe mode: Disabled
[11/25/2020-13:48:36] [I] Save engine: 
[11/25/2020-13:48:36] [I] Load engine: 
[11/25/2020-13:48:36] [I] Builder Cache: Enabled
[11/25/2020-13:48:36] [I] NVTX verbosity: 0
[11/25/2020-13:48:36] [I] Inputs format: fp32:CHW
[11/25/2020-13:48:36] [I] Outputs format: fp32:CHW
[11/25/2020-13:48:36] [I] Input build shapes: model
[11/25/2020-13:48:36] [I] Input calibration shapes: model
[11/25/2020-13:48:36] [I] === System Options ===
[11/25/2020-13:48:36] [I] Device: 0
[11/25/2020-13:48:36] [I] DLACore: 
[11/25/2020-13:48:36] [I] Plugins:
[11/25/2020-13:48:36] [I] === Inference Options ===
[11/25/2020-13:48:36] [I] Batch: 1
[11/25/2020-13:48:36] [I] Input inference shapes: model
[11/25/2020-13:48:36] [I] Iterations: 10
[11/25/2020-13:48:36] [I] Duration: 3s (+ 200ms warm up)
[11/25/2020-13:48:36] [I] Sleep time: 0ms
[11/25/2020-13:48:36] [I] Streams: 1
[11/25/2020-13:48:36] [I] ExposeDMA: Disabled
[11/25/2020-13:48:36] [I] Spin-wait: Disabled
[11/25/2020-13:48:36] [I] Multithreading: Disabled
[11/25/2020-13:48:36] [I] CUDA Graph: Disabled
[11/25/2020-13:48:36] [I] Skip inference: Disabled
[11/25/2020-13:48:36] [I] Inputs:
[11/25/2020-13:48:36] [I] === Reporting Options ===
[11/25/2020-13:48:36] [I] Verbose: Disabled
[11/25/2020-13:48:36] [I] Averages: 10 inferences
[11/25/2020-13:48:36] [I] Percentile: 99
[11/25/2020-13:48:36] [I] Dump output: Disabled
[11/25/2020-13:48:36] [I] Profile: Disabled
[11/25/2020-13:48:36] [I] Export timing to JSON file: 
[11/25/2020-13:48:36] [I] Export output to JSON file: 
[11/25/2020-13:48:36] [I] Export profile to JSON file: 
[11/25/2020-13:48:36] [I] 
----------------------------------------------------------------
Input filename:   /home/tomo/Desktop/thinh/pose-net/densefusion_training/Conversion/dense-estimator.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.7
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[11/25/2020-13:48:38] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[11/25/2020-13:48:38] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: GatherElements. Attempting to import as plugin.
[11/25/2020-13:48:38] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: GatherElements, plugin_version: 1, plugin_namespace: 
[11/25/2020-13:48:38] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin GatherElements version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[11/25/2020-13:48:38] [E] Failed to parse onnx file
[11/25/2020-13:48:38] [E] Parsing model failed
[11/25/2020-13:48:38] [E] Engine creation failed
[11/25/2020-13:48:38] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/tomo/Desktop/thinh/pose-net/densefusion_training/Conversion/dense-estimator.onnx

Hi,

The error is caused by the unsupported GatherElements layer.
Based on the support matrix below, we only support the Gather layer, not GatherElements or GatherND:
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
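
A possible workaround (not verified on our side) is to rewrite the torch.gather call in DenseFusion's lib/network.py as a torch.index_select, which exports to the plain Gather op instead of GatherElements. The sketch below only illustrates that the two are equivalent when the batch size is 1 and the same indices are gathered for every channel, which matches how the repeated choose tensor is used in that network; the shapes are placeholders.

import torch

# Placeholder shapes: emb is (1, channels, H*W), choose holds num_points pixel indices.
channels, hw, num_points = 32, 6400, 500
emb = torch.randn(1, channels, hw)
choose = torch.randint(0, hw, (1, 1, num_points))

# Current pattern -- torch.gather exports to ONNX GatherElements (unsupported):
gathered = torch.gather(emb, 2, choose.repeat(1, channels, 1))

# Equivalent for batch size 1 -- torch.index_select exports to plain Gather (supported):
selected = torch.index_select(emb, 2, choose.view(-1))

assert torch.equal(gathered, selected)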

Thanks.