parseModel: Failed to parse ONNX model

I am getting an error while converting the .etlt model to an engine file inside a Docker container.

0:00:02.962554975 88 0x562b1dca8160 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 4]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
(the preceding warning is repeated 20 more times)
parseModel: Failed to parse ONNX model
ERROR: tlt/tlt_decode.cpp:389 Failed to build network, error in model parsing.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:04.008469920 88 0x562b1dca8160 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 4]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 4]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::61] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
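
The INT64 warnings above are common and usually harmless: ONNX exporters emit INT64 constants (often for shapes and slice bounds), and TensorRT casts or clamps them to INT32. If you have the underlying plain ONNX export (the .etlt itself is encrypted, so this does not apply to it directly), a minimal Python sketch to list which initializers are INT64 — the filename is a placeholder:

# Hypothetical sketch: list the INT64 initializers, i.e. the tensors that
# trigger the "cast down to INT32" warnings above.
import onnx

model = onnx.load("model.onnx")  # placeholder path
for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        print(init.name, "is INT64, shape", list(init.dims))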

This is the config file that I am using:

[property]
gie-unique-id=4
net-scale-factor=1.0
offsets=103.939;116.779;123.68
infer-dims=3;288;416
tlt-model-key=ZW4wdW10cGc0YXRmNmw1b3B1dWthYTJrcHE6MTQzNzQ1NjktZWI1ZC00Y2NlLTkxMjQtYmU1YzY2ZjY5MGZh
labelfile-path=…/…/…/models/ppe_v1.1/labels.txt
model-engine-file=…/…/…/models/ppe_v1.1/yolov4_resnet18_epoch_080_int8.engine
int8-calib-file=…/…/…/models/ppe_v1.1/cal.bin
tlt-encoded-model=…/…/…/models/ppe_v1.1/yolov4_resnet18_epoch_080.etlt
network-type=0
num-detected-classes=7
model-color-format=1
maintain-aspect-ratio=0
output-tensor-meta=0

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
gie-unique-name=ppe
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/DS_TAO/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
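
Not part of the original post, but before digging into the parser it is worth confirming that every file this config references actually exists inside the container; a missing mount, an empty file, or a wrong tlt-model-key can all produce "Failed to parse ONNX model" from tlt_decode. A minimal sketch; the paths are placeholders for the elided "…" prefixes above:

# Hypothetical sanity check: verify the config's model files are present
# and non-empty inside the container before debugging the parser itself.
import os

paths = [
    "models/ppe_v1.1/labels.txt",                      # labelfile-path
    "models/ppe_v1.1/cal.bin",                         # int8-calib-file
    "models/ppe_v1.1/yolov4_resnet18_epoch_080.etlt",  # tlt-encoded-model
    "/opt/DS_TAO/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so",
]
for p in paths:
    ok = os.path.isfile(p) and os.path.getsize(p) > 0
    print(("OK      " if ok else "MISSING ") + p)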

Hi,

Could you please share the ONNX model that reproduces the issue and the complete trtexec --verbose ... logs so we can debug this further?

Thank you.

I am facing a similar issue with my ONNX model. I have verified my ONNX model using the ONNX checker, and it reported no errors. I have also visualized the ONNX model and verified it against my PyTorch model. Can you please check the same and help me with this?
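
For reference, the checker step described above can be reproduced with the standard onnx package — a minimal sketch, using the poster's filename:

import onnx
from onnx import checker

model = onnx.load("sxxx_multiclass_3.onnx")
checker.check_model(model)  # raises checker.ValidationError if the model is malformed
print("Model is structurally valid; IR version:", model.ir_version)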

/usr/src/tensorrt/bin/trtexec --verbose --onnx=[sxxx_multiclass_3.onnx] --saveEngine=[SxxxT.engine]
&&&& RUNNING TensorRT.trtexec [TensorRT v8203] # /usr/src/tensorrt/bin/trtexec --verbose --onnx=[sxxx_multiclass_3.onnx] --saveEngine=[SxxxT.engine]
[10/04/2023-22:13:17] [I] === Model Options ===
[10/04/2023-22:13:17] [I] Format: ONNX
[10/04/2023-22:13:17] [I] Model: [sxxx_multiclass_3.onnx]
[10/04/2023-22:13:17] [I] Output:
[10/04/2023-22:13:17] [I] === Build Options ===
[10/04/2023-22:13:17] [I] Max batch: explicit batch
[10/04/2023-22:13:17] [I] Workspace: 16 MiB
[10/04/2023-22:13:17] [I] minTiming: 1
[10/04/2023-22:13:17] [I] avgTiming: 8
[10/04/2023-22:13:17] [I] Precision: FP32
[10/04/2023-22:13:17] [I] Calibration:
[10/04/2023-22:13:17] [I] Refit: Disabled
[10/04/2023-22:13:17] [I] Sparsity: Disabled
[10/04/2023-22:13:17] [I] Safe mode: Disabled
[10/04/2023-22:13:17] [I] DirectIO mode: Disabled
[10/04/2023-22:13:17] [I] Restricted mode: Disabled
[10/04/2023-22:13:17] [I] Save engine: [SxxxT.engine]
[10/04/2023-22:13:17] [I] Load engine:
[10/04/2023-22:13:17] [I] Profiling verbosity: 0
[10/04/2023-22:13:17] [I] Tactic sources: Using default tactic sources
[10/04/2023-22:13:17] [I] timingCacheMode: local
[10/04/2023-22:13:17] [I] timingCacheFile:
[10/04/2023-22:13:17] [I] Input(s)s format: fp32:CHW
[10/04/2023-22:13:17] [I] Output(s)s format: fp32:CHW
[10/04/2023-22:13:17] [I] Input build shapes: model
[10/04/2023-22:13:17] [I] Input calibration shapes: model
[10/04/2023-22:13:17] [I] === System Options ===
[10/04/2023-22:13:17] [I] Device: 0
[10/04/2023-22:13:17] [I] DLACore:
[10/04/2023-22:13:17] [I] Plugins:
[10/04/2023-22:13:17] [I] === Inference Options ===
[10/04/2023-22:13:17] [I] Batch: Explicit
[10/04/2023-22:13:17] [I] Input inference shapes: model
[10/04/2023-22:13:17] [I] Iterations: 10
[10/04/2023-22:13:17] [I] Duration: 3s (+ 200ms warm up)
[10/04/2023-22:13:17] [I] Sleep time: 0ms
[10/04/2023-22:13:17] [I] Idle time: 0ms
[10/04/2023-22:13:17] [I] Streams: 1
[10/04/2023-22:13:17] [I] ExposeDMA: Disabled
[10/04/2023-22:13:17] [I] Data transfers: Enabled
[10/04/2023-22:13:17] [I] Spin-wait: Disabled
[10/04/2023-22:13:17] [I] Multithreading: Disabled
[10/04/2023-22:13:17] [I] CUDA Graph: Disabled
[10/04/2023-22:13:17] [I] Separate profiling: Disabled
[10/04/2023-22:13:17] [I] Time Deserialize: Disabled
[10/04/2023-22:13:17] [I] Time Refit: Disabled
[10/04/2023-22:13:17] [I] Skip inference: Disabled
[10/04/2023-22:13:17] [I] Inputs:
[10/04/2023-22:13:17] [I] === Reporting Options ===
[10/04/2023-22:13:17] [I] Verbose: Enabled
[10/04/2023-22:13:17] [I] Averages: 10 inferences
[10/04/2023-22:13:17] [I] Percentile: 99
[10/04/2023-22:13:17] [I] Dump refittable layers:Disabled
[10/04/2023-22:13:17] [I] Dump output: Disabled
[10/04/2023-22:13:17] [I] Profile: Disabled
[10/04/2023-22:13:17] [I] Export timing to JSON file:
[10/04/2023-22:13:17] [I] Export output to JSON file:
[10/04/2023-22:13:17] [I] Export profile to JSON file:
[10/04/2023-22:13:17] [I]
[10/04/2023-22:13:17] [I] === Device Information ===
[10/04/2023-22:13:17] [I] Selected Device: NVIDIA GeForce RTX 3080 Laptop GPU
[10/04/2023-22:13:17] [I] Compute Capability: 8.6
[10/04/2023-22:13:17] [I] SMs: 48
[10/04/2023-22:13:17] [I] Compute Clock Rate: 1.545 GHz
[10/04/2023-22:13:17] [I] Device Global Memory: 7982 MiB
[10/04/2023-22:13:17] [I] Shared Memory per SM: 100 KiB
[10/04/2023-22:13:17] [I] Memory Bus Width: 256 bits (ECC disabled)
[10/04/2023-22:13:17] [I] Memory Clock Rate: 7.001 GHz
[10/04/2023-22:13:17] [I]
[10/04/2023-22:13:17] [I] TensorRT version: 8.2.3
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Proposal version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::Split version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[10/04/2023-22:13:17] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[10/04/2023-22:13:17] [I] [TRT] [MemUsageChange] Init CUDA: CPU +457, GPU +0, now: CPU 469, GPU 3524 (MiB)
[10/04/2023-22:13:18] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 469 MiB, GPU 3524 MiB
[10/04/2023-22:13:18] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 623 MiB, GPU 3568 MiB
[10/04/2023-22:13:18] [I] Start parsing network model
Could not open file [sxxx_multiclass_3.onnx]
Could not open file [sxxx_multiclass_3.onnx]
[10/04/2023-22:13:18] [E] [TRT] ModelImporter.cpp:735: Failed to parse ONNX model from file: [sxxx_multiclass_3.onnx]
[10/04/2023-22:13:18] [E] Failed to parse onnx file
[10/04/2023-22:13:18] [I] Finish parsing network model
[10/04/2023-22:13:18] [E] Parsing model failed
[10/04/2023-22:13:18] [E] Failed to create engine from model.
[10/04/2023-22:13:18] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8203] # /usr/src/tensorrt/bin/trtexec --verbose --onnx=[sxxx_multiclass_3.onnx] --saveEngine=[SxxxT.engine]
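
Note that the failure in this log is a file-open error, not a real parser error: the square brackets in --onnx=[sxxx_multiclass_3.onnx] are documentation placeholders, so trtexec looked for a file literally named [sxxx_multiclass_3.onnx]. A minimal sketch that checks the path and reruns trtexec without the brackets (same paths as above):

import os
import subprocess

onnx_path = "sxxx_multiclass_3.onnx"
assert os.path.isfile(onnx_path), onnx_path + " not found in " + os.getcwd()

# Rerun trtexec with the bare filenames (no brackets).
subprocess.run(
    ["/usr/src/tensorrt/bin/trtexec", "--verbose",
     "--onnx=" + onnx_path, "--saveEngine=SxxxT.engine"],
    check=True,
)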

The issue on my side is solved. I am able to convert the model to ONNX.

Hello, I have the same problem. How did you solve it?