Failed to generate TRT engine from ONNX model ( Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered in te)

I’m using the TensorRT tooling to generate a TRT engine from a trained SSD model in ONNX format.
Running /usr/src/tensorrt/bin/trtexec --onnx=~/modifed_SSD_model_ops13.onnx raised the error below. I was wondering if you could help me fix it.

[11/19/2020-16:44:47] [I] === Model Options ===
[11/19/2020-16:44:47] [I] Format: ONNX
[11/19/2020-16:44:47] [I] Model: /home/bl/Desktop/Workplace/projects/Meat_stag1/SSD/modifed_SSD_model_ops13.onnx
[11/19/2020-16:44:47] [I] Output:
[11/19/2020-16:44:47] [I] === Build Options ===
[11/19/2020-16:44:47] [I] Max batch: 1
[11/19/2020-16:44:47] [I] Workspace: 16 MB
[11/19/2020-16:44:47] [I] minTiming: 1
[11/19/2020-16:44:47] [I] avgTiming: 8
[11/19/2020-16:44:47] [I] Precision: FP32
[11/19/2020-16:44:47] [I] Calibration:
[11/19/2020-16:44:47] [I] Safe mode: Disabled
[11/19/2020-16:44:47] [I] Save engine:
[11/19/2020-16:44:47] [I] Load engine:
[11/19/2020-16:44:47] [I] Builder Cache: Enabled
[11/19/2020-16:44:47] [I] NVTX verbosity: 0
[11/19/2020-16:44:47] [I] Inputs format: fp32:CHW
[11/19/2020-16:44:47] [I] Outputs format: fp32:CHW
[11/19/2020-16:44:47] [I] Input build shapes: model
[11/19/2020-16:44:47] [I] Input calibration shapes: model
[11/19/2020-16:44:47] [I] === System Options ===
[11/19/2020-16:44:47] [I] Device: 0
[11/19/2020-16:44:47] [I] DLACore:
[11/19/2020-16:44:47] [I] Plugins:
[11/19/2020-16:44:47] [I] === Inference Options ===
[11/19/2020-16:44:47] [I] Batch: 1
[11/19/2020-16:44:47] [I] Input inference shapes: model
[11/19/2020-16:44:47] [I] Iterations: 10
[11/19/2020-16:44:47] [I] Duration: 3s (+ 200ms warm up)
[11/19/2020-16:44:47] [I] Sleep time: 0ms
[11/19/2020-16:44:47] [I] Streams: 1
[11/19/2020-16:44:47] [I] ExposeDMA: Disabled
[11/19/2020-16:44:47] [I] Spin-wait: Disabled
[11/19/2020-16:44:47] [I] Multithreading: Disabled
[11/19/2020-16:44:47] [I] CUDA Graph: Disabled
[11/19/2020-16:44:47] [I] Skip inference: Disabled
[11/19/2020-16:44:47] [I] Inputs:
[11/19/2020-16:44:47] [I] === Reporting Options ===
[11/19/2020-16:44:47] [I] Verbose: Enabled
[11/19/2020-16:44:47] [I] Averages: 10 inferences
[11/19/2020-16:44:47] [I] Percentile: 99
[11/19/2020-16:44:47] [I] Dump output: Disabled
[11/19/2020-16:44:47] [I] Profile: Disabled
[11/19/2020-16:44:47] [I] Export timing to JSON file:
[11/19/2020-16:44:47] [I] Export output to JSON file:
[11/19/2020-16:44:47] [I] Export profile to JSON file:
[11/19/2020-16:44:47] [I]
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Proposal version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::Split version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[11/19/2020-16:44:47] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:3: Expected identifier, got: :
Failed to parse ONNX model from file/home/bl/Desktop/Workplace/projects/Meat_stag1/SSD/modifed_SSD_model_ops13.onnx
[11/19/2020-16:44:48] [E] [TRT] Network must have at least one output
[11/19/2020-16:44:48] [E] [TRT] Network validation failed.
[11/19/2020-16:44:48] [E] Engine creation failed
[11/19/2020-16:44:48] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/home/bl/Desktop/Workplace/projects/Meat_stag1/SSD/modifed_SSD_model_ops13.onnx --verbose

Thanks

Hi,

Is your model stored in text format rather than binary?
If so, would you mind using the binary ONNX format instead?

Thanks.
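For reference, the "Invalid control characters encountered in text" message above comes from the text-format protobuf parser. A rough, dependency-free heuristic (a sketch, not an official check) to tell whether a file on disk looks like text-format protobuf rather than the binary wire format:

```python
def looks_like_text_proto(path, sample_size=1024):
    """Rough heuristic: text-format protobuf is printable ASCII,
    while the binary wire format contains control bytes."""
    with open(path, "rb") as f:
        chunk = f.read(sample_size)
    if not chunk:
        return False
    # count printable ASCII plus tab/newline/carriage-return
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in chunk)
    return printable / len(chunk) > 0.95
```

If this returns False for the .onnx file, it is already binary and the parse failure likely lies elsewhere.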

Hi @AastaLLL,
For running an object detection model on the Jetson Nano, I’m going to generate a TensorRT engine from a TensorFlow 2 model (trained with the Object Detection API 2). I follow the steps below:
1- Convert the TF2 (.pb) format to ONNX using "tf2onnx"
2- Modify the ONNX model to solve the "Unsupported ONNX data type: UINT8 (2)" issue (Exporting Tensorflow models to Jetson Nano)
3- Generate the TensorRT engine using either the trtexec tool or jetson.inference.detectNet (Hello AI World)
In step 2, I save the modified ONNX model using:
onnx.save(gs.export_onnx(graph), "updated_model.onnx")
How can I save a binary ONNX model?

Thanks

Hi,

Would you mind sharing the tf2onnx output with us for checking?

Thanks.

Thanks @AastaLLL
Please find the ONNX model at the link below.
Looking forward to hearing your feedback.
Thanks

Hi parham.khojasteh,

We cannot access the drive shared above.
Could you double-check it?

Thanks.

Thanks @AastaLLL,

Could you please try this one?

Thanks

Hi,

We have tested your model with TensorRT and found an error from the "If" operation:

While parsing node number 15 [If]:
ERROR: /home/nvidia/TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: If

This layer is not supported in our TensorRT library.
Could you check whether this operation is essential for your model?

Thanks.