[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered

• Hardware Platform (GPU) - NVIDIA A10
• DeepStream Version - deepstream:6.3

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-6.3/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /root/trafficcamnet/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine open error
0:00:04.344353510  8325      0x35dab60 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/root/trafficcamnet/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine failed
0:00:04.580464756  8325      0x35dab60 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/root/trafficcamnet/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:04.582473253  8325      0x35dab60 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:1: Invalid control characters encountered in text.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:12: Interpreting non ascii codepoint 246.
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format onnx2trt_onnx.ModelProto: 1:12: Message type "onnx2trt_onnx.ModelProto" has no field named "input_1".
ERROR: [TRT]: ModelImporter.cpp:688: Failed to parse ONNX model from file: /root/trafficcamnet/resnet18_trafficcamnet_finetuned.onnx
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:971 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:804 failed to build network.
0:00:08.618267242  8325      0x35dab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
0:00:08.798528764  8325      0x35dab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2108> [UID = 1]: build backend context failed
0:00:08.800609669  8325      0x35dab60 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<infer> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1282> [UID = 1]: generate backend failed, check config file settings
0:00:08.801040409  8325      0x35dab60 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<infer> error: Failed to create NvDsInferContext instance
0:00:08.801086677  8325      0x35dab60 WARN                 nvinfer gstnvinfer.cpp:898:gst_nvinfer_start:<infer> error: Config file path: /root/trafficcamnet/config_infer_primary_trafficcamnet.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(898): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:infer:
Config file path: /root/trafficcamnet/config_infer_primary_trafficcamnet.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
  • I already fine-tuned the pretrained TrafficCamNet weights with the help of this blog: training link
  • After that I converted model.tlt to resnet18_trafficcamnet_finetuned.onnx.
  • In my DeepStream pipeline, if no engine file is given, it tries to create the engine file itself, but the errors above are coming (see the quick check sketched below).
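
For reference, a quick way to check whether the exported file is actually a binary ONNX protobuf, assuming the onnx Python package is installed on the host (the path is the one from the log above):

$ file /root/trafficcamnet/resnet18_trafficcamnet_finetuned.onnx
$ python3 -c "import onnx; m = onnx.load('/root/trafficcamnet/resnet18_trafficcamnet_finetuned.onnx'); onnx.checker.check_model(m)"

If onnx.load itself fails, the file is not a valid ONNX model, which would match the text-format parser errors above.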

Please upgrade to the latest DeepStream version, 7.1. Do you meet the issue when running /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test2 without any changes?

But if I use the pretrained model with the same DeepStream 6.3 build, it works fine; the issue only appears after I fine-tune the weights and use those instead.
Should I update the version? Will that fix my issue?

@Morganh, do you have any suggestions?

Please check if you can open this new onnx file with Netron.
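
If there is no desktop on the A10 server, Netron can also be installed as a Python package and served over HTTP (assuming pip is available on that machine):

$ pip install netron
$ netron /root/trafficcamnet/resnet18_trafficcamnet_finetuned.onnx

Then open the URL that netron prints in a browser. If Netron refuses to open the file, the export did not produce a valid ONNX model.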

The .tlt model cannot be converted to an .onnx file that way.
Which TAO version did you use to fine-tune? With the latest 5.0 or 5.5 TAO, you can export your trained .tlt file into an .onnx file. For example, you can run the "export" command with the 5.0 docker.

I used the unpruned_v1.0 model resnet18_trafficcamnet.tlt and trained it on my custom dataset
according to this blog: train the model

  • I trained the model using this command: detectnet_v2 train -e spec.txt -r /home/trainval/model -k tlt_encode
  • Then in the model folder I got model.tlt, which I converted to ONNX using detectnet_v2 export --model model.tlt --key tlt_encode --output converted_model.onnx
  • This is the ONNX model I am using.

The container I was using for training is nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3,
and I am using the ONNX model in deepstream:6.3.

@Morganh @kesong, please suggest a solution.

Please use the 5.0 docker or 5.5 docker to run the export.
The TAO Toolkit 5.0 containers can directly take in .tlt files and convert them to .onnx during export.
$docker run --runtime=nvidia -it --rm nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then inside the docker, run the command below.
#detectnet_v2 export --model /path/to/model.tlt --key tlt_encode --output /path/to/model.onnx
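
If the export succeeds, one way to sanity-check the new .onnx before going back to DeepStream, assuming trtexec from TensorRT is available (typically under /usr/src/tensorrt/bin in the DeepStream container), is:

$ trtexec --onnx=/path/to/model.onnx --fp16

If trtexec can build an engine from it, point the onnx-file= setting in config_infer_primary_trafficcamnet.txt at the new file and let nvinfer rebuild its engine.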

Thanks @Morganh @kesong, it's working now.
