TF2 SSD MobileNet V2 model to TensorRT engine file


We’re trying to use a TF2 SSD MobileNet V2 model in a DS pipe but conversion fails. We are using the Jetson Xavier NX eMMC version with L4T R32.6.1.

The model was prepared as follows: transfer learning was done using the pipeline.config in the attachment, and the ONNX model was exported with opset 11.
pipeline.config (4.6 KB)

We have the following error when running the DS pipe:

WARNING: [TRT]: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: ModelImporter.cpp:720: While parsing node number 264 [TopK -> "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"]:
ERROR: [TRT]: ModelImporter.cpp:721: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:722: input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Select:0"
input: "Unsqueeze__600:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:1"
name: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}

ERROR: [TRT]: ModelImporter.cpp:723: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4292 In function importTopK:
[8] Assertion failed: ( && "This version of TensorRT only supports input K as an initializer."
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.

What could be wrong?

Kind regards,


[8] Assertion failed: ( && "This version of TensorRT only supports input K as an initializer."

onnx2trt hits an error when converting the TopK layer.
Based on the log, the value of K must be supplied as a constant (an initializer) rather than as a runtime tensor.

You can modify the ONNX model with our graphsurgeon tool below:

