I am trying to convert an ONNX SSD MobileNet V3 model into a TensorRT engine. I am getting the below error.

I converted the TF SSD MobileNet V3 frozen graph into an ONNX model on Jetson Xavier. The export works well, but converting the ONNX model into a TensorRT engine fails.
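For reference, the frozen-graph-to-ONNX export went through tf2onnx (the "tf2onnx__52" subgraph name in the logs below confirms this). A minimal sketch of that step, assuming tf2onnx's Python API; the output tensor names are the usual TF Object Detection API ones and are assumptions here:

```
import tensorflow as tf
import tf2onnx

# Load the TF 1.x frozen graph (path is a placeholder)
graph_def = tf.compat.v1.GraphDef()
with open("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# "image_tensor:0" matches the input name that appears later in this thread
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=["detection_boxes:0", "detection_scores:0",
                  "detection_classes:0", "num_detections:0"],
    output_path="model.onnx")
```

The TensorRT conversion then fails with the error below: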

Building an engine.  This would take a while...
(Use "-v" or "--verbose" to enable verbose logging.)
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
ERROR: Failed to parse the ONNX file.
In node -1 (importResize): UNSUPPORTED_NODE: Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
ERROR: failed to build the TensorRT engine!
  • I used this script for conversion (a rough sketch of its core flow follows this list):
    tensorrt_demos/onnx_to_tensorrt.py at master · jkjung-avt/tensorrt_demos · GitHub

  • I upgraded JetPack to 4.6, but the model conversion still fails.

  • I also tried converting the TF model into UFF. I am able to convert the TF frozen graph into UFF, but the UFF-to-TensorRT conversion does not work. I saw somewhere on the NVIDIA forum that UFF to TensorRT won’t work on Jetson Xavier.
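For context, that script follows the standard TensorRT Python ONNX-parsing flow. A minimal sketch of the relevant part (not the exact script; the workspace size and error handling are simplified):

```
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX models must be parsed into an explicit-batch network
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))  # the Resize assertion surfaces here
            return None
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; adjust for your board
    return builder.build_engine(network, config)
```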

Hi,

Which ONNX opset version do you use?

It’s expected to be opset=13 for TensorRT 8.0.
If you are using a different version, could you give opset 13 a try?
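If you are not sure which opset the model was exported with, you can check and convert it with the onnx Python package. A minimal sketch (file names are placeholders; the converter may fail on some ops):

```
import onnx
from onnx import version_converter

model = onnx.load("model.onnx")
print(model.opset_import[0].version)  # current opset

converted = version_converter.convert_version(model, 13)
onnx.save(converted, "model_opset13.onnx")
```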

Thanks.

Hi,

I will upgrade the ONNX opset version to 13 and try the conversion.

Thanks,

I am getting some errors while converting the model to another opset version. I have contacted the ONNX support team and am waiting for a reply.

@AastaLLL, could you suggest another way to convert the TF model into a TensorRT engine?

Hi,

We want to reproduce this issue internally.
Would you mind sharing the ONNX model as well as the TensorFlow model with us?

Thanks.

Could you share the support email ID? I will share the files with you.

Hi,

Would you mind sharing it via a private message directly?
Thanks.

I have sent the files. Please check them and let me know.

Thanks.

We have received the files and are checking the model internally.
We will share more information with you later.

Thank you so much.

Hi,

We have tested your model with JetPack 4.6, and below is the error we encountered:

Unsupported ONNX data type: UINT8 (2)
[12/21/2021-01:53:55] [E] [TRT] ModelImporter.cpp:726: ERROR: image_tensor:0:230 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype) && "Failed to convert ONNX date type to TensorRT data type."

This is a known issue: TensorRT expects the network input to be float, while this ONNX model declares it as UINT8.
Please find the following comment for the workaround with our ONNX GraphSurgeon tool:
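For reference, a minimal sketch of that workaround with onnx-graphsurgeon (file names are placeholders): retype the UINT8 graph input to float32 and re-export the model.

```
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))
for inp in graph.inputs:
    inp.dtype = np.float32  # retype the UINT8 image input to float
onnx.save(gs.export_onnx(graph), "model_float.onnx")
```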

Thanks.

I will try it.

I converted the ONNX model's input to float type. When I ran the below command, I got the below error.

/usr/src/tensorrt/bin/trtexec --onnx=/home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.onnx --saveEngine=//home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.engine

&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=/home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.onnx --saveEngine=//home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.engine
[12/23/2021-14:31:02] [I] === Model Options ===
[12/23/2021-14:31:02] [I] Format: ONNX
[12/23/2021-14:31:02] [I] Model: /home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.onnx
[12/23/2021-14:31:02] [I] Output:
[12/23/2021-14:31:02] [I] === Build Options ===
[12/23/2021-14:31:02] [I] Max batch: explicit
[12/23/2021-14:31:02] [I] Workspace: 16 MiB
[12/23/2021-14:31:02] [I] minTiming: 1
[12/23/2021-14:31:02] [I] avgTiming: 8
[12/23/2021-14:31:02] [I] Precision: FP32
[12/23/2021-14:31:02] [I] Calibration:
[12/23/2021-14:31:02] [I] Refit: Disabled
[12/23/2021-14:31:02] [I] Sparsity: Disabled
[12/23/2021-14:31:02] [I] Safe mode: Disabled
[12/23/2021-14:31:02] [I] Restricted mode: Disabled
[12/23/2021-14:31:02] [I] Save engine: //home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.engine
[12/23/2021-14:31:02] [I] Load engine:
[12/23/2021-14:31:02] [I] NVTX verbosity: 0
[12/23/2021-14:31:02] [I] Tactic sources: Using default tactic sources
[12/23/2021-14:31:02] [I] timingCacheMode: local
[12/23/2021-14:31:02] [I] timingCacheFile:
[12/23/2021-14:31:02] [I] Input(s)s format: fp32:CHW
[12/23/2021-14:31:02] [I] Output(s)s format: fp32:CHW
[12/23/2021-14:31:02] [I] Input build shapes: model
[12/23/2021-14:31:02] [I] Input calibration shapes: model
[12/23/2021-14:31:02] [I] === System Options ===
[12/23/2021-14:31:02] [I] Device: 0
[12/23/2021-14:31:02] [I] DLACore:
[12/23/2021-14:31:02] [I] Plugins:
[12/23/2021-14:31:02] [I] === Inference Options ===
[12/23/2021-14:31:02] [I] Batch: Explicit
[12/23/2021-14:31:02] [I] Input inference shapes: model
[12/23/2021-14:31:02] [I] Iterations: 10
[12/23/2021-14:31:02] [I] Duration: 3s (+ 200ms warm up)
[12/23/2021-14:31:02] [I] Sleep time: 0ms
[12/23/2021-14:31:02] [I] Streams: 1
[12/23/2021-14:31:02] [I] ExposeDMA: Disabled
[12/23/2021-14:31:02] [I] Data transfers: Enabled
[12/23/2021-14:31:02] [I] Spin-wait: Disabled
[12/23/2021-14:31:02] [I] Multithreading: Disabled
[12/23/2021-14:31:02] [I] CUDA Graph: Disabled
[12/23/2021-14:31:02] [I] Separate profiling: Disabled
[12/23/2021-14:31:02] [I] Time Deserialize: Disabled
[12/23/2021-14:31:02] [I] Time Refit: Disabled
[12/23/2021-14:31:02] [I] Skip inference: Disabled
[12/23/2021-14:31:02] [I] Inputs:
[12/23/2021-14:31:02] [I] === Reporting Options ===
[12/23/2021-14:31:02] [I] Verbose: Disabled
[12/23/2021-14:31:02] [I] Averages: 10 inferences
[12/23/2021-14:31:02] [I] Percentile: 99
[12/23/2021-14:31:02] [I] Dump refittable layers:Disabled
[12/23/2021-14:31:02] [I] Dump output: Disabled
[12/23/2021-14:31:02] [I] Profile: Disabled
[12/23/2021-14:31:02] [I] Export timing to JSON file:
[12/23/2021-14:31:02] [I] Export output to JSON file:
[12/23/2021-14:31:02] [I] Export profile to JSON file:
[12/23/2021-14:31:02] [I]
[12/23/2021-14:31:02] [I] === Device Information ===
[12/23/2021-14:31:02] [I] Selected Device: Xavier
[12/23/2021-14:31:02] [I] Compute Capability: 7.2
[12/23/2021-14:31:02] [I] SMs: 6
[12/23/2021-14:31:02] [I] Compute Clock Rate: 1.109 GHz
[12/23/2021-14:31:02] [I] Device Global Memory: 7773 MiB
[12/23/2021-14:31:02] [I] Shared Memory per SM: 96 KiB
[12/23/2021-14:31:02] [I] Memory Bus Width: 256 bits (ECC disabled)
[12/23/2021-14:31:02] [I] Memory Clock Rate: 1.109 GHz
[12/23/2021-14:31:02] [I]
[12/23/2021-14:31:02] [I] TensorRT version: 8001
[12/23/2021-14:31:05] [I] [TRT] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 371, GPU 5209 (MiB)
[12/23/2021-14:31:05] [I] Start parsing network model
[12/23/2021-14:31:05] [I] [TRT] ----------------------------------------------------------------
[12/23/2021-14:31:05] [I] [TRT] Input filename:   /home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.onnx
[12/23/2021-14:31:05] [I] [TRT] ONNX IR version:  0.0.8
[12/23/2021-14:31:05] [I] [TRT] Opset version:    10
[12/23/2021-14:31:05] [I] [TRT] Producer name:    
[12/23/2021-14:31:05] [I] [TRT] Producer version:
[12/23/2021-14:31:05] [I] [TRT] Domain:          
[12/23/2021-14:31:05] [I] [TRT] Model version:    0
[12/23/2021-14:31:05] [I] [TRT] Doc string:      
[12/23/2021-14:31:05] [I] [TRT] ----------------------------------------------------------------
[12/23/2021-14:31:06] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/23/2021-14:31:06] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[12/23/2021-14:31:06] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[12/23/2021-14:31:06] [E] [TRT] ModelImporter.cpp:720: While parsing node number 10 [Loop -> "unused_loop_output___72"]:
[12/23/2021-14:31:06] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[12/23/2021-14:31:06] [E] [TRT] ModelImporter.cpp:722: input: "trip_count__49"
input: "copy__51/Preprocessor/map/while/LogicalAnd:0"
input: "Preprocessor/map/while/iteration_counter:0"
input: "Preprocessor/map/while/iteration_counter:0"
output: "unused_loop_output___72"
output: "unused_loop_output___73"
output: "Preprocessor/map/TensorArrayStack/TensorArrayGatherV3:0"
output: "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3:0"
name: "generic_loop_Loop__75"
op_type: "Loop"
attribute {
  name: "body"
  g {
    node {
      input: "Preprocessor/map/while/ResizeImage/stack_1:0"
      output: "sub_graph_ending_node_Identity__57:0"
      name: "sub_graph_ending_node_Identity__57"
      op_type: "Identity"
    }
    node {
      input: "Preprocessor/map/strided_slice__80:0"
      output: "Preprocessor/map/while/Less__88:0"
      name: "Preprocessor/map/while/Less__88"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
    }
    node {
      input: "Preprocessor/map/strided_slice__80:0"
      output: "Preprocessor/map/while/Less_1__86:0"
      name: "Preprocessor/map/while/Less_1__86"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
    }
    node {
      input: "Preprocessor/map/while/Identity_1:0"
      output: "Unsqueeze__64:0"
      name: "Unsqueeze__64"
      op_type: "Unsqueeze"
      attribute {
        name: "axes"
        ints: 0
        type: INTS
      }
    }
    node {
      input: "Preprocessor/map/while/Identity_1:0"
      input: "Preprocessor/map/while/add_1/y:0"
      output: "sub_graph_ending_node_Identity__55:0"
      name: "Preprocessor/map/while/add_1"
      op_type: "Add"
    }
    node {
      input: "sub_graph_ending_node_Identity__55:0"
      output: "Preprocessor/map/while/Less_1__85:0"
      name: "Preprocessor/map/while/Less_1__85"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
    }
    node {
      input: "Preprocessor/map/while/Less_1__85:0"
      input: "Preprocessor/map/while/Less_1__86:0"
      output: "Preprocessor/map/while/Less_1:0"
      name: "Preprocessor/map/while/Less_1"
      op_type: "Less"
    }
    node {
      input: "Preprocessor/map/while/Identity:0"
      input: "Preprocessor/map/while/add_1/y:0"
      output: "sub_graph_ending_node_Identity__54:0"
      name: "Preprocessor/map/while/add"
      op_type: "Add"
    }
    node {
      input: "sub_graph_ending_node_Identity__54:0"
      output: "Preprocessor/map/while/Less__87:0"
      name: "Preprocessor/map/while/Less__87"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
    }
    node {
      input: "Preprocessor/map/while/Less__87:0"
      input: "Preprocessor/map/while/Less__88:0"
      output: "Preprocessor/map/while/Less:0"
      name: "Preprocessor/map/while/Less"
      op_type: "Less"
    }
    node {
      input: "Preprocessor/map/while/Less:0"
      input: "Preprocessor/map/while/Less_1:0"
      output: "sub_graph_ending_node_Identity__53:0"
      name: "Preprocessor/map/while/LogicalAnd"
      op_type: "And"
    }
    node {
      input: "Preprocessor/sub:0"
      input: "Unsqueeze__64:0"
      output: "Gather__65:0"
      name: "Gather__65"
      op_type: "Gather"
    }
    node {
      input: "Gather__65:0"
      output: "Transpose__89:0"
      name: "Transpose__89"
      op_type: "Transpose"
      attribute {
        name: "perm"
        ints: 0
        ints: 3
        ints: 1
        ints: 2
        type: INTS
      }
    }
    node {
      input: "Gather__65:0"
      output: "Shape__92:0"
      name: "Shape__92"
      op_type: "Shape"
    }
    node {
      input: "Shape__92:0"
      input: "const_starts__95"
      input: "const_ends__96"
      input: "const_axes__97"
      output: "Slice__98:0"
      name: "Slice__98"
      op_type: "Slice"
    }
    node {
      input: "Slice__98:0"
      output: "Cast__99:0"
      name: "Cast__99"
      op_type: "Cast"
      attribute {
        name: "to"
        i: 1
        type: INT
      }
    }
    node {
      input: "const_fold_opt__6083"
      input: "Cast__99:0"
      output: "Div__101:0"
      name: "Div__101"
      op_type: "Div"
    }
    node {
      input: "one__102"
      input: "Div__101:0"
      output: "Concat__103:0"
      name: "Concat__103"
      op_type: "Concat"
      attribute {
        name: "axis"
        i: 0
        type: INT
      }
    }
    node {
      input: "Transpose__89:0"
      input: "Concat__103:0"
      output: "Resize__104:0"
      name: "Resize__104"
      op_type: "Resize"
      attribute {
        name: "mode"
        s: "linear"
        type: STRING
      }
    }
    node {
      input: "Resize__104:0"
      output: "Preprocessor/map/while/ResizeImage/resize/Squeeze:0"
      name: "Preprocessor/map/while/ResizeImage/resize/Squeeze"
      op_type: "Squeeze"
      attribute {
        name: "axes"
        ints: 0
        type: INTS
      }
    }
    name: "tf2onnx__52"
    initializer {
      dims: 3
      data_type: 6
      name: "Preprocessor/map/while/ResizeImage/stack_1:0"
      raw_data: "@\001\000\000@\001\000\000\003\000\000\000"
    }
    initializer {
      data_type: 6
      name: "Preprocessor/map/while/add_1/y:0"
      raw_data: "\001\000\000\000"
    }
    initializer {
      dims: 1
      data_type: 7
      name: "const_starts__95"
      raw_data: "\001\000\000\000\000\000\000\000"
    }
    initializer {
      dims: 1
      data_type: 7
      name: "const_ends__96"
      raw_data: "\003\000\000\000\000\000\000\000"
    }
    initializer {
      dims: 1
      data_type: 7
      name: "const_axes__97"
      raw_data: "\000\000\000\000\000\000\000\000"
    }
    initializer {
      dims: 2
      data_type: 1
      name: "const_fold_opt__6083"
      raw_data: "\000\000\240C\000\000\240C"
    }
    initializer {
      dims: 2
      data_type: 1
      name: "one__102"
      raw_data: "\000\000\200?\000\000\200?"
    }
    doc_string: "graph for generic_loop_Loop__75 body"
    input {
      name: "i__58"
      type {
        tensor_type {
          elem_type: 7
          shape {
          }
        }
      }
    }
    input {
      name: "cond__60"
      type {
        tensor_type {
          elem_type: 9
          shape {
          }
        }
      }
    }
    input {
      name: "Preprocessor/map/while/Identity:0"
      type {
        tensor_type {
          elem_type: 6
          shape {
          }
        }
      }
    }
    input {
      name: "Preprocessor/map/while/Identity_1:0"
      type {
        tensor_type {
          elem_type: 6
          shape {
          }
        }
      }
    }
    output {
      name: "sub_graph_ending_node_Identity__53:0"
      type {
        tensor_type {
          elem_type: 9
          shape {
          }
        }
      }
    }
    output {
      name: "sub_graph_ending_node_Identity__54:0"
      type {
        tensor_type {
          elem_type: 6
          shape {
          }
        }
      }
    }
    output {
      name: "sub_graph_ending_node_Identity__55:0"
      type {
        tensor_type {
          elem_type: 6
          shape {
          }
        }
      }
    }
    output {
      name: "Preprocessor/map/while/ResizeImage/resize/Squeeze:0"
      type {
        tensor_type {
          elem_type: 1
          shape {
            dim {
              dim_param: "unk__6629"
            }
            dim {
              dim_param: "unk__6630"
            }
            dim {
              dim_param: "unk__6631"
            }
          }
        }
      }
    }
    output {
      name: "sub_graph_ending_node_Identity__57:0"
      type {
        tensor_type {
          elem_type: 6
          shape {
            dim {
              dim_param: "unk__6632"
            }
          }
        }
      }
    }
  }
  type: GRAPH
}

[12/23/2021-14:31:06] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[12/23/2021-14:31:06] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:3422 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
[12/23/2021-14:31:06] [E] Failed to parse onnx file
[12/23/2021-14:31:06] [I] Finish parsing network model
[12/23/2021-14:31:06] [E] Parsing model failed
[12/23/2021-14:31:06] [E] Engine creation failed
[12/23/2021-14:31:06] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=/home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.onnx --saveEngine=//home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen_Datatype_Converted.engine


I converted the ONNX model to opset 13. Now I am getting the below error.

```
&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=/home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.onnx --saveEngine=//home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.engine
[12/24/2021-11:37:06] [I] === Model Options ===
[12/24/2021-11:37:06] [I] Format: ONNX
[12/24/2021-11:37:06] [I] Model: /home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.onnx
[12/24/2021-11:37:06] [I] Output:
[12/24/2021-11:37:06] [I] === Build Options ===
[12/24/2021-11:37:06] [I] Max batch: explicit
[12/24/2021-11:37:06] [I] Workspace: 16 MiB
[12/24/2021-11:37:06] [I] minTiming: 1
[12/24/2021-11:37:06] [I] avgTiming: 8
[12/24/2021-11:37:06] [I] Precision: FP32
[12/24/2021-11:37:06] [I] Calibration:
[12/24/2021-11:37:06] [I] Refit: Disabled
[12/24/2021-11:37:06] [I] Sparsity: Disabled
[12/24/2021-11:37:06] [I] Safe mode: Disabled
[12/24/2021-11:37:06] [I] Restricted mode: Disabled
[12/24/2021-11:37:06] [I] Save engine: //home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.engine
[12/24/2021-11:37:06] [I] Load engine:
[12/24/2021-11:37:06] [I] NVTX verbosity: 0
[12/24/2021-11:37:06] [I] Tactic sources: Using default tactic sources
[12/24/2021-11:37:06] [I] timingCacheMode: local
[12/24/2021-11:37:06] [I] timingCacheFile:
[12/24/2021-11:37:06] [I] Input(s)s format: fp32:CHW
[12/24/2021-11:37:06] [I] Output(s)s format: fp32:CHW
[12/24/2021-11:37:06] [I] Input build shapes: model
[12/24/2021-11:37:06] [I] Input calibration shapes: model
[12/24/2021-11:37:06] [I] === System Options ===
[12/24/2021-11:37:06] [I] Device: 0
[12/24/2021-11:37:06] [I] DLACore:
[12/24/2021-11:37:06] [I] Plugins:
[12/24/2021-11:37:06] [I] === Inference Options ===
[12/24/2021-11:37:06] [I] Batch: Explicit
[12/24/2021-11:37:06] [I] Input inference shapes: model
[12/24/2021-11:37:06] [I] Iterations: 10
[12/24/2021-11:37:06] [I] Duration: 3s (+ 200ms warm up)
[12/24/2021-11:37:06] [I] Sleep time: 0ms
[12/24/2021-11:37:06] [I] Streams: 1
[12/24/2021-11:37:06] [I] ExposeDMA: Disabled
[12/24/2021-11:37:06] [I] Data transfers: Enabled
[12/24/2021-11:37:06] [I] Spin-wait: Disabled
[12/24/2021-11:37:06] [I] Multithreading: Disabled
[12/24/2021-11:37:06] [I] CUDA Graph: Disabled
[12/24/2021-11:37:06] [I] Separate profiling: Disabled
[12/24/2021-11:37:06] [I] Time Deserialize: Disabled
[12/24/2021-11:37:06] [I] Time Refit: Disabled
[12/24/2021-11:37:06] [I] Skip inference: Disabled
[12/24/2021-11:37:06] [I] Inputs:
[12/24/2021-11:37:06] [I] === Reporting Options ===
[12/24/2021-11:37:06] [I] Verbose: Disabled
[12/24/2021-11:37:06] [I] Averages: 10 inferences
[12/24/2021-11:37:06] [I] Percentile: 99
[12/24/2021-11:37:06] [I] Dump refittable layers:Disabled
[12/24/2021-11:37:06] [I] Dump output: Disabled
[12/24/2021-11:37:06] [I] Profile: Disabled
[12/24/2021-11:37:06] [I] Export timing to JSON file:
[12/24/2021-11:37:06] [I] Export output to JSON file:
[12/24/2021-11:37:06] [I] Export profile to JSON file:
[12/24/2021-11:37:06] [I]
[12/24/2021-11:37:06] [I] === Device Information ===
[12/24/2021-11:37:06] [I] Selected Device: Xavier
[12/24/2021-11:37:06] [I] Compute Capability: 7.2
[12/24/2021-11:37:06] [I] SMs: 6
[12/24/2021-11:37:06] [I] Compute Clock Rate: 1.109 GHz
[12/24/2021-11:37:06] [I] Device Global Memory: 7773 MiB
[12/24/2021-11:37:06] [I] Shared Memory per SM: 96 KiB
[12/24/2021-11:37:06] [I] Memory Bus Width: 256 bits (ECC disabled)
[12/24/2021-11:37:06] [I] Memory Clock Rate: 1.109 GHz
[12/24/2021-11:37:06] [I]
[12/24/2021-11:37:06] [I] TensorRT version: 8001
[12/24/2021-11:37:08] [I] [TRT] [MemUsageChange] Init CUDA: CPU +354, GPU +0, now: CPU 372, GPU 4662 (MiB)
[12/24/2021-11:37:08] [I] Start parsing network model
[12/24/2021-11:37:08] [I] [TRT] ----------------------------------------------------------------
[12/24/2021-11:37:08] [I] [TRT] Input filename: /home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.onnx
[12/24/2021-11:37:08] [I] [TRT] ONNX IR version: 0.0.8
[12/24/2021-11:37:08] [I] [TRT] Opset version: 13
[12/24/2021-11:37:08] [I] [TRT] Producer name:
[12/24/2021-11:37:08] [I] [TRT] Producer version:
[12/24/2021-11:37:08] [I] [TRT] Domain:
[12/24/2021-11:37:08] [I] [TRT] Model version: 0
[12/24/2021-11:37:08] [I] [TRT] Doc string:
[12/24/2021-11:37:08] [I] [TRT] ----------------------------------------------------------------
[12/24/2021-11:37:08] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/24/2021-11:37:08] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[12/24/2021-11:37:08] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[12/24/2021-11:37:09] [E] Error[9]: [graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Postprocessor/Reshape: -1 wildcard solution does not fit in int32_t
)
[12/24/2021-11:37:09] [E] [TRT] ModelImporter.cpp:720: While parsing node number 366 [Reshape -> "Postprocessor/Reshape:0"]:
[12/24/2021-11:37:09] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[12/24/2021-11:37:09] [E] [TRT] ModelImporter.cpp:722: input: "Postprocessor/Tile:0"
input: "const_fold_opt__6647"
output: "Postprocessor/Reshape:0"
name: "Postprocessor/Reshape"
op_type: "Reshape"

[12/24/2021-11:37:09] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[12/24/2021-11:37:09] [E] [TRT] ModelImporter.cpp:726: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Postprocessor/Reshape
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Postprocessor/Reshape: -1 wildcard solution does not fit in int32_t
)
[12/24/2021-11:37:09] [E] Failed to parse onnx file
[12/24/2021-11:37:09] [I] Finish parsing network model
[12/24/2021-11:37:09] [E] Parsing model failed
[12/24/2021-11:37:09] [E] Engine creation failed
[12/24/2021-11:37:09] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --onnx=/home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.onnx --saveEngine=//home/rachel/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozenNew_NewDatatype_opset_13.engine
```

Could you help with fixing the reshape issue?

ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Postprocessor/Reshape
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Postprocessor/Reshape: -1 wildcard solution does not fit in int32_t
)

This is a visualization of the reshape layer.
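A minimal sketch for inspecting the Reshape's target shape with onnx-graphsurgeon (the node and initializer names are taken from the log above; this assumes the shape input is a constant initializer, as the node dump suggests):

```
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(
    onnx.load("MODEL_frozenNew_NewDatatype_opset_13.onnx"))
for node in graph.nodes:
    if node.name == "Postprocessor/Reshape":
        # The second input, "const_fold_opt__6647", holds the target shape;
        # a -1 entry is the wildcard that TensorRT could not resolve in int32.
        print(node.inputs[1].values)
```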

Hi,

We suppose you are using the TF 2.x Object Detection API.

Since it uses some operation variants, you will need some customization when exporting to ONNX and TensorRT.
We do have an example for this. Do you have the complete model files, laid out as below?

[model/name]
├── checkpoint
│   ├── ckpt-0.data-00000-of-00001
│   └── ckpt-0.index
├── pipeline.config
└── saved_model
    └── saved_model.pb

If yes, would you mind following the steps in the sample below to see if it works?

Thanks.

@AastaLLL I am using the TF 1.15.3 Object Detection API, and I have the complete model files. Do you have documentation for the TF 1.x Object Detection API?

I checked the supported-model list; SSD MobileNet V3 conversion is not supported.

Could you help with fixing the reshape issue?

ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Postprocessor/Reshape
[graphShapeAnalyzer.cpp::throwIfError::1306] Error Code 9: Internal Error (Postprocessor/Reshape: -1 wildcard solution does not fit in int32_t
)