Can't convert torch-exported ONNX model to TensorRT

Hi,

I am trying to convert a torch model to TensorRT via ONNX in two steps:

  1. Convert torch to ONNX using the detectron2 export script.
  2. Convert ONNX to TensorRT using /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt

Step 1 looks OK, but in step 2 I get the following output:
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt
[09/22/2020-20:58:12] [I] === Model Options ===
[09/22/2020-20:58:12] [I] Format: ONNX
[09/22/2020-20:58:12] [I] Model: model.onnx
[09/22/2020-20:58:12] [I] Output:
[09/22/2020-20:58:12] [I] === Build Options ===
[09/22/2020-20:58:12] [I] Max batch: 1
[09/22/2020-20:58:12] [I] Workspace: 16 MB
[09/22/2020-20:58:12] [I] minTiming: 1
[09/22/2020-20:58:12] [I] avgTiming: 8
[09/22/2020-20:58:12] [I] Precision: FP32
[09/22/2020-20:58:12] [I] Calibration: 
[09/22/2020-20:58:12] [I] Safe mode: Disabled
[09/22/2020-20:58:12] [I] Save engine: alexnet.trt
[09/22/2020-20:58:12] [I] Load engine: 
[09/22/2020-20:58:12] [I] Builder Cache: Enabled
[09/22/2020-20:58:12] [I] NVTX verbosity: 0
[09/22/2020-20:58:12] [I] Inputs format: fp32:CHW
[09/22/2020-20:58:12] [I] Outputs format: fp32:CHW
[09/22/2020-20:58:12] [I] Input build shapes: model
[09/22/2020-20:58:12] [I] Input calibration shapes: model
[09/22/2020-20:58:12] [I] === System Options ===
[09/22/2020-20:58:12] [I] Device: 0
[09/22/2020-20:58:12] [I] DLACore: 
[09/22/2020-20:58:12] [I] Plugins:
[09/22/2020-20:58:12] [I] === Inference Options ===
[09/22/2020-20:58:12] [I] Batch: 1
[09/22/2020-20:58:12] [I] Input inference shapes: model
[09/22/2020-20:58:12] [I] Iterations: 10
[09/22/2020-20:58:12] [I] Duration: 3s (+ 200ms warm up)
[09/22/2020-20:58:12] [I] Sleep time: 0ms
[09/22/2020-20:58:12] [I] Streams: 1
[09/22/2020-20:58:12] [I] ExposeDMA: Disabled
[09/22/2020-20:58:12] [I] Spin-wait: Disabled
[09/22/2020-20:58:12] [I] Multithreading: Disabled
[09/22/2020-20:58:12] [I] CUDA Graph: Disabled
[09/22/2020-20:58:12] [I] Skip inference: Disabled
[09/22/2020-20:58:12] [I] Inputs:
[09/22/2020-20:58:12] [I] === Reporting Options ===
[09/22/2020-20:58:12] [I] Verbose: Disabled
[09/22/2020-20:58:12] [I] Averages: 10 inferences
[09/22/2020-20:58:12] [I] Percentile: 99
[09/22/2020-20:58:12] [I] Dump output: Disabled
[09/22/2020-20:58:12] [I] Profile: Disabled
[09/22/2020-20:58:12] [I] Export timing to JSON file: 
[09/22/2020-20:58:12] [I] Export output to JSON file: 
[09/22/2020-20:58:12] [I] Export profile to JSON file: 
[09/22/2020-20:58:12] [I] 
----------------------------------------------------------------
Input filename:   model.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.6
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[09/22/2020-20:58:15] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: AliasWithName. Attempting to import as plugin.
[09/22/2020-20:58:15] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: AliasWithName, plugin_version: 1, plugin_namespace: 
[09/22/2020-20:58:15] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin AliasWithName version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[09/22/2020-20:58:15] [E] Failed to parse onnx file
[09/22/2020-20:58:15] [E] Parsing model failed
[09/22/2020-20:58:15] [E] Engine creation failed
[09/22/2020-20:58:15] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt

Can you please help? Thanks

Hi,

The error indicates there is an unsupported AliasWithName operation in the model.
If this is a training-stage operation, you can try to remove it directly.

Thanks.
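(For anyone hitting the same error: since AliasWithName is effectively a pass-through node, one option is to strip it from the exported graph before running trtexec. The sketch below shows the rewiring logic over a plain list of node dicts so it is easy to follow; in a real script you would apply the same idea to `model.graph.node` after `onnx.load`, which is assumed here, not shown. It also assumes the op is a pure alias: one input, one output, no side effects.)

```python
def remove_passthrough_nodes(nodes, op_type="AliasWithName"):
    """Drop every node of `op_type`, rewiring each consumer of the
    node's output to read the node's input instead."""
    # Map each removed node's output name back to its input name.
    alias = {}
    kept = []
    for node in nodes:
        if node["op_type"] == op_type:
            alias[node["outputs"][0]] = node["inputs"][0]
        else:
            kept.append(node)

    # Follow chains of aliases in case two pass-through nodes were stacked.
    def resolve(name):
        while name in alias:
            name = alias[name]
        return name

    # Rewire the surviving nodes' inputs.
    for node in kept:
        node["inputs"] = [resolve(i) for i in node["inputs"]]
    return kept


# Usage: Conv -> AliasWithName -> Relu becomes Conv -> Relu.
nodes = [
    {"op_type": "Conv", "inputs": ["x"], "outputs": ["conv_out"]},
    {"op_type": "AliasWithName", "inputs": ["conv_out"], "outputs": ["aliased"]},
    {"op_type": "Relu", "inputs": ["aliased"], "outputs": ["y"]},
]
cleaned = remove_passthrough_nodes(nodes)
```

After rewiring, remember to re-save and re-validate the model (e.g. `onnx.checker.check_model`) before handing it to trtexec again.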

@AastaLLL, thanks, and sorry for the late reply. I was trying to optimize via another route, with no success yet.
I also tried torch → ONNX → TensorRT for the object-detection model (discarding my keypoint head) and hit the same issue. Can you please give me an idea of how to remove the operation during training? Thanks.

Hi,

Sorry for the late update.

We are working on deploying the Detectron2 model with TensorRT.
For further progress, please follow this topic:

Thanks.

Hi @cogbot, @AastaLLL

I’m also facing problems deploying a detectron2-trained model (FasterRCNN-Resnet-50-FPN) on Jetson AGX Xavier.
Could you please share whether you found any solutions or workarounds so far?

Thanks.

Hi ani.karapetyan,

Please open a new topic for your issue. Thanks.