importConvTranspose assertion error for 3D transpose conv

My workflow looks like this: pytorch -> onnx -> tensorrt.

I use transposed 3D convolutions in my model, and they seem to cause a problem during conversion. This is the error:

ERROR: /home/.../onnx-tensorrt/builtin_op_importers.cpp:501 In function importConvTranspose:
[8] Assertion failed: output_padding.nbDims == 2 || (output_padding.nbDims == 3 && output_padding.d[0] == 0)

System info:

  1. Ubuntu 18.04
  2. GeForce gtx 1070
  3. Driver Version: 440.48.02
  4. TensorRT version 6.0.1
  5. Cuda version 10.0
  6. Pytorch version 1.2.0+cu92

This is the call to the pytorch layer:

nn.ConvTranspose3d(in_planes, in_planes, kernel_size=3, padding=1, output_padding=1, stride=2, bias=False)
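Reading the assertion, it seems to reject any non-zero output padding on the leading (depth) dimension. Here is a tiny sketch of my reading of that condition (illustration only, not TensorRT code):

```python
# Mirrors the TensorRT 6 importConvTranspose assertion from the error above:
#   output_padding.nbDims == 2 || (nbDims == 3 && d[0] == 0)
def trt6_accepts_output_padding(output_padding):
    nb = len(output_padding)
    return nb == 2 or (nb == 3 and output_padding[0] == 0)

# 2D transpose conv with output_padding=1 -> (1, 1): accepted
assert trt6_accepts_output_padding((1, 1))
# 3D transpose conv with output_padding=1 -> (1, 1, 1): rejected,
# because the depth padding is non-zero -- exactly this layer's case
assert not trt6_accepts_output_padding((1, 1, 1))
```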

Hi,

Can you try parsing your model with TensorRT 7 for more up to date ONNX op support?

You can use our NGC container for simple testing if you don’t want to change the environment on your host machine: https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt

Something like these commands would test parsing your ONNX model with TensorRT 7, assuming model.onnx is in your current directory and you have docker/nvidia-docker installed:

nvidia-docker run -it -v $PWD:/mnt --workdir=/mnt nvcr.io/nvidia/tensorrt:20.01-py3
trtexec --explicitBatch --onnx=model.onnx

Thank you for your quick reply. I upgraded to TensorRT 7 on my host machine, and now it fails at the initial convolution:

./trtexec --explicitBatch --onnx=/mypath/checkpoint_1.onnx
&&&& RUNNING TensorRT.trtexec # ./trtexec --explicitBatch --onnx=/mypath/checkpoint_1.onnx
[02/27/2020-15:40:53] [I] === Model Options ===
[02/27/2020-15:40:53] [I] Format: ONNX
[02/27/2020-15:40:53] [I] Model: /mypath/checkpoint_1.onnx
[02/27/2020-15:40:53] [I] Output:
[02/27/2020-15:40:53] [I] === Build Options ===
[02/27/2020-15:40:53] [I] Max batch: explicit
[02/27/2020-15:40:53] [I] Workspace: 16 MB
[02/27/2020-15:40:53] [I] minTiming: 1
[02/27/2020-15:40:53] [I] avgTiming: 8
[02/27/2020-15:40:53] [I] Precision: FP32
[02/27/2020-15:40:53] [I] Calibration: 
[02/27/2020-15:40:53] [I] Safe mode: Disabled
[02/27/2020-15:40:53] [I] Save engine: 
[02/27/2020-15:40:53] [I] Load engine: 
[02/27/2020-15:40:53] [I] Inputs format: fp32:CHW
[02/27/2020-15:40:53] [I] Outputs format: fp32:CHW
[02/27/2020-15:40:53] [I] Input build shapes: model
[02/27/2020-15:40:53] [I] === System Options ===
[02/27/2020-15:40:53] [I] Device: 0
[02/27/2020-15:40:53] [I] DLACore: 
[02/27/2020-15:40:53] [I] Plugins:
[02/27/2020-15:40:53] [I] === Inference Options ===
[02/27/2020-15:40:53] [I] Batch: Explicit
[02/27/2020-15:40:53] [I] Iterations: 10
[02/27/2020-15:40:53] [I] Duration: 3s (+ 200ms warm up)
[02/27/2020-15:40:53] [I] Sleep time: 0ms
[02/27/2020-15:40:53] [I] Streams: 1
[02/27/2020-15:40:53] [I] ExposeDMA: Disabled
[02/27/2020-15:40:53] [I] Spin-wait: Disabled
[02/27/2020-15:40:53] [I] Multithreading: Disabled
[02/27/2020-15:40:53] [I] CUDA Graph: Disabled
[02/27/2020-15:40:53] [I] Skip inference: Disabled
[02/27/2020-15:40:53] [I] Inputs:
[02/27/2020-15:40:53] [I] === Reporting Options ===
[02/27/2020-15:40:53] [I] Verbose: Disabled
[02/27/2020-15:40:53] [I] Averages: 10 inferences
[02/27/2020-15:40:53] [I] Percentile: 99
[02/27/2020-15:40:53] [I] Dump output: Disabled
[02/27/2020-15:40:53] [I] Profile: Disabled
[02/27/2020-15:40:53] [I] Export timing to JSON file: 
[02/27/2020-15:40:53] [I] Export output to JSON file: 
[02/27/2020-15:40:53] [I] Export profile to JSON file: 
[02/27/2020-15:40:53] [I] 
----------------------------------------------------------------
Input filename:   /mypath/checkpoint_1.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.2
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[02/27/2020-15:40:54] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
(warning repeated many times; truncated)
While parsing node number 1 [Conv]:
ERROR: builtin_op_importers.cpp:422 In function importConv:
[8] Assertion failed: inputs.at(0).is_tensor()
[02/27/2020-15:40:54] [E] Failed to parse onnx file
[02/27/2020-15:40:54] [E] Parsing model failed
[02/27/2020-15:40:54] [E] Engine creation failed
[02/27/2020-15:40:54] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --explicitBatch --onnx=/mypath/checkpoint_1.onnx

EDIT: Note that this is a multi-input model.

Hm, I don’t see this error very often:

“[8] Assertion failed: inputs.at(0).is_tensor()”

That’s usually from a bad ONNX model.

Just curious, what happens if you export this model with PyTorch 1.4 and ONNX opset 11, and then try parsing again with TensorRT 7 the same way?

You can pip install torch==1.4 onnx==1.6 inside the TensorRT Docker container mentioned above for convenience if you want to avoid altering your host env, or just use a Python Virtual Environment.

If for some reason you don’t have a .pt checkpoint to load the weights of your model, random weights for the model should be fine for the sake of just testing the ONNX->TensorRT parser.

NVES_R, thank you very much for your help. It is much appreciated.

I tried your suggestion, but I get the same error. For reference, this is the model I’m trying to convert: https://github.com/JiaRenChang/PSMNet

The link to my pytorch model: https://drive.google.com/open?id=1sAmLJ1wQ2mLz8aVtHz2EH3JgFEiHJl59

Converting that model to ONNX produces this file: https://drive.google.com/open?id=1-FFTYeFDB0Oqj6QXin0uphLhjVJyGaJ1

Running

onnx.checker.check_model(original_model)

seems to be working.

This is the output:

./trtexec --explicitBatch  --onnx=/mypath/checkpoint_1.onnx
&&&& RUNNING TensorRT.trtexec # ./trtexec --explicitBatch --onnx=/mypath/checkpoint_1.onnx
[02/28/2020-08:06:01] [I] === Model Options ===
[02/28/2020-08:06:01] [I] Format: ONNX
[02/28/2020-08:06:01] [I] Model: /mypath/checkpoint_1.onnx
[02/28/2020-08:06:01] [I] Output:
[02/28/2020-08:06:01] [I] === Build Options ===
[02/28/2020-08:06:01] [I] Max batch: explicit
[02/28/2020-08:06:01] [I] Workspace: 16 MB
[02/28/2020-08:06:01] [I] minTiming: 1
[02/28/2020-08:06:01] [I] avgTiming: 8
[02/28/2020-08:06:01] [I] Precision: FP32
[02/28/2020-08:06:01] [I] Calibration: 
[02/28/2020-08:06:01] [I] Safe mode: Disabled
[02/28/2020-08:06:01] [I] Save engine: 
[02/28/2020-08:06:01] [I] Load engine: 
[02/28/2020-08:06:01] [I] Inputs format: fp32:CHW
[02/28/2020-08:06:01] [I] Outputs format: fp32:CHW
[02/28/2020-08:06:01] [I] Input build shapes: model
[02/28/2020-08:06:01] [I] === System Options ===
[02/28/2020-08:06:01] [I] Device: 0
[02/28/2020-08:06:01] [I] DLACore: 
[02/28/2020-08:06:01] [I] Plugins:
[02/28/2020-08:06:01] [I] === Inference Options ===
[02/28/2020-08:06:01] [I] Batch: Explicit
[02/28/2020-08:06:01] [I] Iterations: 10
[02/28/2020-08:06:01] [I] Duration: 3s (+ 200ms warm up)
[02/28/2020-08:06:01] [I] Sleep time: 0ms
[02/28/2020-08:06:01] [I] Streams: 1
[02/28/2020-08:06:01] [I] ExposeDMA: Disabled
[02/28/2020-08:06:01] [I] Spin-wait: Disabled
[02/28/2020-08:06:01] [I] Multithreading: Disabled
[02/28/2020-08:06:01] [I] CUDA Graph: Disabled
[02/28/2020-08:06:01] [I] Skip inference: Disabled
[02/28/2020-08:06:01] [I] Inputs:
[02/28/2020-08:06:01] [I] === Reporting Options ===
[02/28/2020-08:06:01] [I] Verbose: Disabled
[02/28/2020-08:06:01] [I] Averages: 10 inferences
[02/28/2020-08:06:01] [I] Percentile: 99
[02/28/2020-08:06:01] [I] Dump output: Disabled
[02/28/2020-08:06:01] [I] Profile: Disabled
[02/28/2020-08:06:01] [I] Export timing to JSON file: 
[02/28/2020-08:06:01] [I] Export output to JSON file: 
[02/28/2020-08:06:01] [I] Export profile to JSON file: 
[02/28/2020-08:06:01] [I] 
----------------------------------------------------------------
Input filename:   /mypath/checkpoint_1.onnx
ONNX IR version:  0.0.4
Opset version:    11
Producer name:    pytorch
Producer version: 1.3
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/28/2020-08:06:01] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 1 [Conv]:
ERROR: builtin_op_importers.cpp:422 In function importConv:
[8] Assertion failed: inputs.at(0).is_tensor()
[02/28/2020-08:06:01] [E] Failed to parse onnx file
[02/28/2020-08:06:01] [E] Parsing model failed
[02/28/2020-08:06:01] [E] Engine creation failed
[02/28/2020-08:06:01] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --explicitBatch --onnx=/mypath/checkpoint_1.onnx

I’m not sure why it says producer version: 1.3. I have checked the model saving and ONNX conversion code, which both use PyTorch 1.4.

EDIT:

It seems like it’s this part of the PyTorch code that messes with TensorRT:

# matching
cost = Variable(
    torch.FloatTensor(
        refimg_fea.size()[0],
        refimg_fea.size()[1] * 2,
        self.maxdisp // 4,  # integer division; '/' yields a float in Python 3
        refimg_fea.size()[2],
        refimg_fea.size()[3]).zero_()
).cuda()

for i in range(self.maxdisp // 4):
    if i > 0:
        cost[:, :refimg_fea.size()[1], i, :, i:] = refimg_fea[:, :, :, i:]
        cost[:, refimg_fea.size()[1]:, i, :, i:] = targetimg_fea[:, :, :, :-i]
    else:
        cost[:, :refimg_fea.size()[1], i, :, :] = refimg_fea
        cost[:, refimg_fea.size()[1]:, i, :, :] = targetimg_fea
cost = cost.contiguous()

This is being exported as an empty constant tensor in ONNX:

%516 : Float(1, 64, 20, 128, 128) = onnx::Constant[value=<Tensor>]()

EDIT 2:

So I’ve changed the code to use PyTorch `repeat` and `cat` operations. The ONNX parser now successfully parses the entire graph, but TensorRT throws this error:

terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: pads

Seems to be related to this issue: https://github.com/onnx/onnx-tensorrt/issues/378
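For anyone hitting the same empty-Constant issue: one way to express the cost-volume logic above without in-place slice assignment (which ONNX tends to fold into an opaque constant) is to zero-pad each disparity level and concatenate. This is a minimal sketch of that idea, written with NumPy for clarity rather than the exact `repeat`/`cat` rewrite I used; the function and argument names are illustrative:

```python
import numpy as np

def build_cost_volume(ref, tgt, max_disp):
    """Build an (N, 2C, D, H, W) cost volume via pad-and-concat
    instead of writing slices into a preallocated zero tensor."""
    n, c, h, w = ref.shape
    levels = []
    for i in range(max_disp):
        if i > 0:
            # Zero-fill the first i columns, as the slice assignment did.
            pad = np.zeros((n, c, h, i), dtype=ref.dtype)
            ref_i = np.concatenate([pad, ref[:, :, :, i:]], axis=3)
            tgt_i = np.concatenate([pad, tgt[:, :, :, :-i]], axis=3)
            level = np.concatenate([ref_i, tgt_i], axis=1)
        else:
            level = np.concatenate([ref, tgt], axis=1)
        levels.append(level[:, :, np.newaxis])  # add the disparity axis
    return np.concatenate(levels, axis=2)
```

The same shifting pattern can be written with `torch.nn.functional.pad` and `torch.cat`, which export as plain `Pad`/`Concat` ONNX ops.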

Hi @copah,

Yes, that `Attribute not found: pads` error comes from a change in the ONNX representation of the Pad op in opset 11. This has been fixed upstream in the OSS ONNX parser, so you can build it and should be able to parse your model afterwards: https://github.com/onnx/onnx-tensorrt/issues/378#issuecomment-593786957

If using a Docker container, you should be able to build the master branch of the parser easily with this script: https://github.com/rmccorm4/tensorrt-utils/tree/master/OSS

Thank you.

What is the difference between the onnx-tensorrt and trtexec software, and which should I use?

onnx-tensorrt is the backend for the ONNX parser. If you rebuild trtexec along with the ONNX parser (which the script linked above does by default), the newly built trtexec will include the updated ONNX parser changes.

If you don’t rebuild trtexec, it will still use the ONNX parser backend it was previously built with.

In that case, you should still be able to use the TensorRT C++/Python APIs with the newly built ONNX parser library instead.

onnx2trt is similar to trtexec but has fewer features and isn’t as closely maintained, so trtexec is generally preferred.