Deepstream 5.0 peoplenet : ERROR from primary_gie: Configuration file parsing failed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only) N/A
• Issue Type (questions, new requirements, bugs) question

I tried to execute the following command

deepstream-app -c configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt

But it failed with the following error:

Error: Could not parse TLT encoded model file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed

Using winsys: x11 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:00.343254430 31869     0x2ebbbf30 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Configuration file parsing failed
0:00:00.343321879 31869     0x2ebbbf30 WARN                 nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Configuration file parsing failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt
App run failed

I checked the [primary-gie] group in the config file and found that the model engine file is not in the specified folder. Where should I get this file?

[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_peoplenet.txt

The sample config file uses relative paths instead of absolute paths, so either go into the /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models folder and run the command "deepstream-app -c deepstream_app_source1_peoplenet.txt", or modify the config file /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/deepstream_app_source1_peoplenet.txt to change all relative paths in it to absolute paths.
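For example, the first option (using only the paths and command already mentioned in this thread) looks like this:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models
deepstream-app -c deepstream_app_source1_peoplenet.txt

Running from that folder lets the relative paths such as ../../models/... in the sample configs resolve as expected.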


Hi @Fiona.Chen
I tried to run the command deepstream-app -c deepstream_app_source1_peoplenet.txt under /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models

but I still get the same error:

Error: Could not parse TLT encoded model file path
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1242>: failed

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:00.242334378 19299 0x7f30002380 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Configuration file parsing failed
0:00:00.242411047 19299 0x7f30002380 WARN nvinfer gstnvinfer.cpp:766:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Configuration file parsing failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(766): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt
App run failed

Here is a screenshot:

Have you downloaded the PeopleNet etlt file into /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet?

Hi @Fiona.Chen
Yup, but I don't have the resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine file in this folder.
Here is the screenshot:

I also changed tlt-encoded-model in the inference config file from the default /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_peoplenet_pruned_v1.0/resnet34_peoplenet_pruned.etlt to /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt.

Here are the config files that I used to run the sample app:
deepstream_app_source1_peoplenet.txt (3.3 KB) config_infer_primary_peoplenet.txt (2.3 KB)

Please open the file /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/README and read it carefully.

This error means the tlt-encoded-model file cannot be found or read correctly.
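For reference, a minimal sketch of the download step; the NGC URL and version tag below are assumptions on my part, so copy the exact wget line from that README rather than this one:

cd /opt/nvidia/deepstream/deepstream-5.0/samples/models/tlt_pretrained_models/peoplenet
# the URL and version tag are assumptions; use the exact line from the README
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_peoplenet/versions/pruned_v1.0/files/resnet34_peoplenet_pruned.etlt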


Since you are using absolute paths in your config file and the engine file does not exist yet on the first run, you need to comment out the model-engine-file=xxxx setting in your config file.
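In the [primary-gie] group quoted earlier, that change is just the following (a sketch; the same applies if you use an absolute path):

[primary-gie]
enable=1
gpu-id=0
# commented out until the engine has been generated once
#model-engine-file=../../models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
batch-size=1
gie-unique-id=1
config-file=config_infer_primary_peoplenet.txt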

If you want the engine file to be generated, run the app with a root account at least once. After that you can set model-engine-file=xxxx in your config file and run the case again.
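A minimal sketch of that first run (from the sample config folder):

cd /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models
sudo deepstream-app -c deepstream_app_source1_peoplenet.txt

Once the .engine file has been generated (the config above expects it under samples/models/tlt_pretrained_models/peoplenet/), you can re-enable model-engine-file so later runs skip the rebuild.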

Thanks @Fiona.Chen, the problem is solved.