Converting ONNX to engine format: model dimensions error

I've downloaded a ddd_3dop.pth model trained with PyTorch; I'm sure it used a DCNv2 operation during training. I converted it to ONNX format and renamed its plugin nodes to "DCNv2" so that they are compatible with my own plugins written in C++. My environment is a Jetson AGX Xavier, TensorRT is 7.1.3.0, PyTorch is 1.6.0, and CUDA is available. Here are the details of my Xavier:

I've already done the initLibNvInferPlugins registration in my program, and the program recognises my "DCNv2" plugin as expected. However, the following errors occurred when I tried to convert the ONNX model to a ".engine" model:

[08/10/2022-16:56:49] [E] [TRT] Add_139: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [1,256,2,2] and [1,256,24,80]).
[08/10/2022-16:56:49] [E] [TRT] Add_139: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [1,256,2,2] and [1,256,24,80]).
[08/10/2022-16:56:49] [E] [TRT] Add_139: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [1,256,2,2] and [1,256,24,80]).

I have no idea why this happened! I ran the same program on my server, which has TensorRT 8.2 GA installed and the CUDA build of PyTorch 1.7.0, and there the model converts to ".engine" successfully. It is weird…
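For reference, TensorRT's elementwise layers follow NumPy-style broadcasting: trailing dimensions must either match or be 1. The shapes in the error suggest one input to Add_139 (likely the DCNv2 plugin output) came back at the wrong spatial resolution (2×2 instead of 24×80), so the broadcast check fails. A minimal sketch of the rule, using the exact shapes from the log:

```python
# NumPy-style broadcast rule, as TensorRT applies it to elementwise ops:
# trailing dimensions must be equal, or one of them must be 1.
import numpy as np

a = np.zeros((1, 256, 2, 2))    # shapes taken from the Add_139 error
b = np.zeros((1, 256, 24, 80))

try:
    _ = a + b                   # same check TensorRT's Add layer performs
except ValueError as e:
    print("cannot broadcast:", e)

# By contrast, a dimension of 1 does broadcast:
c = np.zeros((1, 256, 1, 1))
print((c + b).shape)            # -> (1, 256, 24, 80)
```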

By the way, I noticed a related topic that looks similar to my question, in which your NVIDIA team suggested upgrading TensorRT 7.1 to TensorRT 8.0. I'm not sure whether that is the only feasible way to solve the problem, but I'd rather not do it because lots of programs in my system would be impacted…

I'm looking forward to your reply. Yours sincerely.

Hi,

Would you mind sharing the ONNX model with us?
Then we can check whether this issue is solved in TensorRT v8 for you.

Thanks.

@NVIDIA No problem, I've created a link to the ONNX file. Please check it, thanks a lot.

Hi,

We tried to download the model but don't have the correct permission.
Could you help enable it?

Thanks.

ddd_3dop.onnx - Google Drive, thank you very much.

Hello, I've enabled the link for download, thanks a lot. If you have any news, please let me know.

@AastaLLL Hello, any news about my ddd_3dop.onnx? Has the issue been solved in the later TensorRT version?

Hi,

Sorry for the late update.
We tested your model in a TensorRT 8 environment, and it fails because the DCNv2 plugin is missing:

$ /usr/src/tensorrt/bin/trtexec --onnx=./ddd_3dop.onnx 
&&&& RUNNING TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=./ddd_3dop.onnx
...
[04/24/2022-17:55:47] [I] Start parsing network model
[04/24/2022-17:55:47] [I] [TRT] ----------------------------------------------------------------
[04/24/2022-17:55:47] [I] [TRT] Input filename:   ./ddd_3dop.onnx
[04/24/2022-17:55:47] [I] [TRT] ONNX IR version:  0.0.6
[04/24/2022-17:55:47] [I] [TRT] Opset version:    9
[04/24/2022-17:55:47] [I] [TRT] Producer name:    pytorch
[04/24/2022-17:55:47] [I] [TRT] Producer version: 1.6
[04/24/2022-17:55:47] [I] [TRT] Domain:           
[04/24/2022-17:55:47] [I] [TRT] Model version:    0
[04/24/2022-17:55:47] [I] [TRT] Doc string:       
[04/24/2022-17:55:47] [I] [TRT] ----------------------------------------------------------------
[04/24/2022-17:55:48] [I] [TRT] No importer registered for op: DCNv2. Attempting to import as plugin.
[04/24/2022-17:55:48] [I] [TRT] Searching for plugin: DCNv2, plugin_version: 1, plugin_namespace: 
[04/24/2022-17:55:48] [E] [TRT] ModelImporter.cpp:773: While parsing node number 135 [DCNv2 -> "550"]:
[04/24/2022-17:55:48] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[04/24/2022-17:55:48] [E] [TRT] ModelImporter.cpp:775: input: "543"
input: "548"
input: "549"
input: "dla_up.ida_0.proj_1.conv.weight"
input: "dla_up.ida_0.proj_1.conv.bias"
output: "550"
name: "DCNv2_135"
op_type: "DCNv2"
attribute {
  name: "deformable_groups"
  i: 1
  type: INT
}
attribute {
  name: "dilation"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "padding"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "stride"
  ints: 1
  ints: 1
  type: INTS
}

[04/24/2022-17:55:48] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[04/24/2022-17:55:48] [E] [TRT] ModelImporter.cpp:778: ERROR: builtin_op_importers.cpp:4890 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[04/24/2022-17:55:48] [E] Failed to parse onnx file
[04/24/2022-17:55:48] [I] Finish parsing network model
[04/24/2022-17:55:48] [E] Parsing model failed
[04/24/2022-17:55:48] [E] Failed to create engine from model or file.
[04/24/2022-17:55:48] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=./ddd_3dop.onnx
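For context on the assertion above: the ONNX parser's fallback plugin importer looks the op up in the plugin registry by the exact plugin name, version, and namespace, so the custom DCNv2 plugin library must be loaded and registered in the same process before parsing (recent trtexec builds accept a --plugins flag to load a .so for this). A toy, non-TensorRT sketch of that lookup, with hypothetical creator placeholders:

```python
# Toy sketch (NOT the TensorRT API) of the fallback importer's plugin
# resolution: it queries the registry by the exact (name, version,
# namespace) triple; any mismatch produces the "Plugin not found"
# assertion seen in the log above.
registry = {
    # creators registered by initLibNvInferPlugins (placeholder values)
    ("GridAnchor_TRT", "1", ""): "<creator>",
    ("NMS_TRT", "1", ""): "<creator>",
}

def find_creator(name, version="1", namespace=""):
    """Return the creator for an op, or None if nothing matches exactly."""
    return registry.get((name, version, namespace))

# The model's DCNv2 node fails the lookup in this environment because
# no creator with that exact name was ever registered:
assert find_creator("DCNv2") is None

# After a matching creator is registered, the same lookup succeeds:
registry[("DCNv2", "1", "")] = "<dcnv2 creator>"
assert find_creator("DCNv2") == "<dcnv2 creator>"
```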

Would you mind upgrading the device to JetPack 4.6.2 or 5.0.2 and giving it a try?
Thanks.

Hello, I've tried converting this model with TensorRT 8.0 GA on a Jetson NX running JetPack 4.6.1, and it works well. Thank you very much; it was probably a TensorRT version incompatibility issue.
