PGIE element could not be created. Exiting

• Hardware Platform (Jetson / GPU): x86 / RTX 3080
• DeepStream Version: 6.3
• NVIDIA GPU Driver Version (valid for GPU only): 530.41.03
I can't run the poseclassificationnet app; it fails with the following errors.

tic@atic-Nuvo-8108GC-Series:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification$ ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1280 hight 720
video file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov
PGIE element could not be created. Exiting.

I have set up all the required models.
The main config YAML is deepstream_pose_classification_config.yaml:

source-list:
   list: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov

streammux:
  width: 1280
  height: 720
  batched-push-timeout: 40000

tracker:
  enable: 1
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml

primary-gie:
  plugin-type: 1
  #config-file-path: ../nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt
  config-file-path: ../triton/peoplenet_tao/config_infer_primary_peoplenet.yml
  #config-file-path: ../triton-grpc/peoplenet_tao/config_infer_primary_peoplenet.yml

secondary-gie0:
  plugin-type: 1
  #config-file-path: ../nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt
  config-file-path: ../triton/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml
  #config-file-path: ../triton-grpc/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml

secondary-preprocess1:
  config-file-path: ../nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt

secondary-gie1:
  plugin-type: 1
  #config-file-path: ../nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt
  config-file-path: ../triton/bodypose_classification_tao/config_infer_third_bodypose_classification.txt
  #config-file-path: ../triton-grpc/bodypose_classification_tao/config_infer_third_bodypose_classification.yml

sink:
  #0 fakesink 
  #1 filesink, generates the out.mp4 file in the current directory
  #2 rtspsink publish at rtsp://localhost:8554/ds-test
  #3 displaysink
  sink-type: 1

The correct PeopleNet model is placed in the correct folder.

The correct bodypose3d model is also in place.

The bodypose_classification model is also in place.

The re-identification model for the tracker is also in place.

All files are located correctly, so why do I get this error?

Could you remove the GStreamer cache, run the demo again, and attach the whole log?

rm ${HOME}/.cache/gstreamer-1.0/*

Still the same.

…/configs/app/deepstream_pose_classification_config.yaml

(gst-plugin-scanner:13480): GStreamer-WARNING **: 15:57:35.755: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:13480): GStreamer-WARNING **: 15:57:35.778: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:13480): GStreamer-WARNING **: 15:57:36.120: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory
width 1280 hight 720
video file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov
PGIE element could not be created. Exiting.

Do I need to convert the models to engine files separately?
I thought the engine files were created when the application runs.

In the configuration file, the default is the nvinferserver (Triton) configuration. If you want to use this plugin, we suggest you use the Docker image.
Alternatively, you can change the config file like below:

primary-gie:
  plugin-type: 0
  config-file-path: ../nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt
  #config-file-path: ../triton/peoplenet_tao/config_infer_primary_peoplenet.yml
  #config-file-path: ../triton-grpc/peoplenet_tao/config_infer_primary_peoplenet.yml

secondary-gie0:
  plugin-type: 0
  config-file-path: ../nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt
  #config-file-path: ../triton/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml
  #config-file-path: ../triton-grpc/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml

secondary-preprocess1:
  config-file-path: ../nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt

secondary-gie1:
  plugin-type: 0
  config-file-path: ../nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt
  #config-file-path: ../triton/bodypose_classification_tao/config_infer_third_bodypose_classification.yml
  #config-file-path: ../triton-grpc/bodypose_classification_tao/config_infer_third_bodypose_classification.yml
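The "PGIE element could not be created" error is consistent with the gst-plugin-scanner warnings above: libnvdsgst_inferserver.so failed to load because libtritonserver.so is missing, so the nvinferserver element (plugin-type: 1) cannot be created. As a quick sketch, assuming gst-inspect-1.0 is on your PATH, you can confirm whether the element exists at all:

```shell
# Check whether the Triton-based nvinferserver element can be created.
# If it is missing, any config with plugin-type: 1 will fail to build the PGIE.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
  if gst-inspect-1.0 nvinferserver >/dev/null 2>&1; then
    echo "nvinferserver available"
  else
    echo "nvinferserver missing - use plugin-type: 0 or the Triton docker"
  fi
else
  echo "gst-inspect-1.0 not found - GStreamer tools not installed"
fi
```

If this reports the element as missing, switching to plugin-type: 0 (nvinfer) or running inside the Triton Docker image are the two ways forward.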

I have updated as follows.

source-list:
   list: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov

streammux:
  width: 1280
  height: 720
  batched-push-timeout: 40000

tracker:
  enable: 1
  ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
  ll-config-file: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml

primary-gie:
  plugin-type: 1
  config-file-path: ../nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt
  #config-file-path: ../triton/peoplenet_tao/config_infer_primary_peoplenet.yml
  #config-file-path: ../triton-grpc/peoplenet_tao/config_infer_primary_peoplenet.yml

secondary-gie0:
  plugin-type: 1
  config-file-path: ../nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt
  #config-file-path: ../triton/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml
  #config-file-path: ../triton-grpc/bodypose3d_tao/config_infer_secondary_bodypose3dnet.yml

secondary-preprocess1:
  config-file-path: ../nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt

secondary-gie1:
  plugin-type: 1
  config-file-path: ../nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt
  #config-file-path: ../triton/bodypose_classification_tao/config_infer_third_bodypose_classification.txt
  #config-file-path: ../triton-grpc/bodypose_classification_tao/config_infer_third_bodypose_classification.yml

sink:
  #0 fakesink 
  #1 filesink, generates the out.mp4 file in the current directory
  #2 rtspsink publish at rtsp://localhost:8554/ds-test
  #3 displaysink
  sink-type: 1

Still the same error.

atic@atic-Nuvo-8108GC-Series:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification$ ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1280 hight 720
video file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov
PGIE element could not be created. Exiting.
atic@atic-Nuvo-8108GC-Series:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification$

Do I need to use Docker?
My DeepStream is the dGPU Ubuntu version.

This Docker image?
docker pull nvcr.io/nvidia/deepstream:6.3-gc-triton-devel

You didn't change plugin-type to 0.

Thanks.
Actually, I'd like to run with a display. When I set sink-type: 3, there is an error.
The error says that loading one of the engine files failed, but all engine files were created.


/deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1280 hight 720
video file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
config_file_path:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt
Unknown or legacy key specified 'is-classifier' for group [property]
Now playing!
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:03.731890105 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x300x34x1      
1   OUTPUT kFLOAT fc_pred         6               

0:00:03.798593839 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
0:00:03.798902455 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<bodypose-classification-nvinference-engine> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt sucessfully
frameSeqLen:300
0:00:03.800656768 10235 0x5567394bd2a0 WARN                 nvinfer gstnvinfer.cpp:887:gst_nvinfer_start:<secondary-nvinference-engine> warning: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:05.686709347 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 9
0   INPUT  kFLOAT input0          3x256x192       min: 1x3x256x192     opt: 8x3x256x192     Max: 8x3x256x192     
1   INPUT  kFLOAT k_inv           3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
2   INPUT  kFLOAT t_form_inv      3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
3   INPUT  kFLOAT scale_normalized_mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
4   INPUT  kFLOAT mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
5   OUTPUT kFLOAT pose25d         34x4            min: 0               opt: 0               Max: 0               
6   OUTPUT kFLOAT pose2d          34x3            min: 0               opt: 0               Max: 0               
7   OUTPUT kFLOAT pose3d          34x3            min: 0               opt: 0               Max: 0               
8   OUTPUT kFLOAT pose2d_org_img  34x3            min: 0               opt: 0               Max: 0               

0:00:05.757270589 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:05.758297337 10235 0x5567394bd2a0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[src/modules/ReID/ReID.cpp, loadTRTEngine() @line 583]: Engine file does not exist
[NvMultiObjectTracker] Load engine failed. Create engine again.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars

!![ERROR] TAO model file does not exist
[NvMultiObjectTracker] De-initialized
An exception occurred. TAO model file does not exist
gstnvtracker: Failed to initialize tracker context!
gstnvtracker:: Failed to create batch context. Shutting down processing.
size:20
Running...



WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:09.603145583 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x300x34x1      
1   OUTPUT kFLOAT fc_pred         6               



0:00:09.669850157 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
0:00:09.670202659 10235 0x556732425d20 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<bodypose-classification-nvinference-engine> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt sucessfully
ERROR from element preprocess-plugin: Configuration file not provided
Error details: gstnvdspreprocess.cpp(442): gst_nvdspreprocess_start (): /GstPipeline:deepstream_pose_classfication_app/GstNvDsPreProcess:preprocess-plugin
Returned, stopping playback
0:00:09.670314090 10235 0x556732425d20 WARN                 nvinfer gstnvinfer.cpp:887:gst_nvinfer_start:<secondary-nvinference-engine> warning: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:11.549200130 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 9
0   INPUT  kFLOAT input0          3x256x192       min: 1x3x256x192     opt: 8x3x256x192     Max: 8x3x256x192     
1   INPUT  kFLOAT k_inv           3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
2   INPUT  kFLOAT t_form_inv      3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
3   INPUT  kFLOAT scale_normalized_mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
4   INPUT  kFLOAT mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
5   OUTPUT kFLOAT pose25d         34x4            min: 0               opt: 0               Max: 0               
6   OUTPUT kFLOAT pose2d          34x3            min: 0               opt: 0               Max: 0               
7   OUTPUT kFLOAT pose3d          34x3            min: 0               opt: 0               Max: 0               
8   OUTPUT kFLOAT pose2d_org_img  34x3            min: 0               opt: 0               Max: 0               

0:00:11.614977505 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:11.615932535 10235 0x556732425d20 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[src/modules/ReID/ReID.cpp, loadTRTEngine() @line 583]: Engine file does not exist
[NvMultiObjectTracker] Load engine failed. Create engine again.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars

!![ERROR] TAO model file does not exist
[NvMultiObjectTracker] De-initialized
An exception occurred. TAO model file does not exist
gstnvtracker: Failed to initialize tracker context!
gstnvtracker:: Failed to create batch context. Shutting down processing.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:15.257842943 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/peoplenet/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

0:00:15.323343797 10235 0x556732425d20 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/peoplenet/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
0:00:15.324065076 10235 0x556732425d20 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: qtdemux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: aacparse0
Decodebin child added: avdec_aac0
Deleting pipeline

The following is from saving to an mp4 file by setting sink-type: 1:

atic@atic-Nuvo-8108GC-Series:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/apps/tao_others/deepstream-pose-classification$ ./deepstream-pose-classification-app ../../../configs/app/deepstream_pose_classification_config.yaml
width 1280 hight 720
video file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_walk.mov
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
config_file_path:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_preprocess_bodypose_classification.txt
Unknown or legacy key specified 'is-classifier' for group [property]
Now playing!
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:03.642707315 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x300x34x1      
1   OUTPUT kFLOAT fc_pred         6               

0:00:03.709218460 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<bodypose-classification-nvinference-engine> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/poseclassificationnet/st-gcn_3dbp_nvidia.etlt_b1_gpu0_fp32.engine
0:00:03.709574862 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<bodypose-classification-nvinference-engine> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose_classification_tao/config_infer_third_bodypose_classification.txt sucessfully
frameSeqLen:300
0:00:03.711229644 10335 0x558edcb2e300 WARN                 nvinfer gstnvinfer.cpp:887:gst_nvinfer_start:<secondary-nvinference-engine> warning: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:05.598225385 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 9
0   INPUT  kFLOAT input0          3x256x192       min: 1x3x256x192     opt: 8x3x256x192     Max: 8x3x256x192     
1   INPUT  kFLOAT k_inv           3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
2   INPUT  kFLOAT t_form_inv      3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3           
3   INPUT  kFLOAT scale_normalized_mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
4   INPUT  kFLOAT mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36            
5   OUTPUT kFLOAT pose25d         34x4            min: 0               opt: 0               Max: 0               
6   OUTPUT kFLOAT pose2d          34x3            min: 0               opt: 0               Max: 0               
7   OUTPUT kFLOAT pose3d          34x3            min: 0               opt: 0               Max: 0               
8   OUTPUT kFLOAT pose2d_org_img  34x3            min: 0               opt: 0               Max: 0               

0:00:05.665984663 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/models/bodypose3dnet/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:05.667616929 10335 0x558edcb2e300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_tao_apps/configs/nvinfer/bodypose3d_tao/config_infer_secondary_bodypose3dnet.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
~~ CLOG[src/modules/ReID/ReID.cpp, loadTRTEngine() @line 583]: Engine file does not exist
[NvMultiObjectTracker] Load engine failed. Create engine again.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars

!![ERROR] TAO model file does not exist
[NvMultiObjectTracker] De-initialized
An exception occurred. TAO model file does not exist
gstnvtracker: Failed to initialize tracker context!
gstnvtracker:: Failed to create batch context. Shutting down processing.
size:20
Running...

^C** ERROR: <_intr_handler:1210>: User Interrupted.. 


(deepstream-pose-classification-app:10335): GLib-CRITICAL **: 19:39:35.991: g_main_loop_quit: assertion 'loop != NULL' failed

Did you run download_models.sh before running this demo? You can check whether the tracker (ReID) model exists.
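The "TAO model file does not exist" error in the logs comes from the tracker's ReID module. As a minimal sketch, assuming the model path below is the one your NvDCF_accuracy tracker config references (verify the exact tltEncodedModel path in your ll-config-file, config_tracker_NvDCF_accuracy.yml):

```shell
# Hypothetical path - adjust to match the tltEncodedModel entry in your
# tracker ll-config-file before relying on this check.
MODEL=/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt
if [ -f "$MODEL" ]; then
  echo "ReID model found"
else
  echo "ReID model missing - run download_models.sh first"
fi
```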

How can I set up multiple RTSP streams for this application in the config file?
Is it inside source-list, like this?

source-list:
  list: [rtsp1, rtsp2]

Is that right?

No. You can refer to other config files, such as bodypose2d_app_config.yml:

source-list:
  list: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4;file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4

OK, thanks a lot for the support. I have 13 CCTVs for one project and am using an RTX 4080 GPU; I need to check how many CCTVs an RTX 4080 can support for the poseclassificationnet app.
Any suggestions for the hardware setup?
For such a poseclassificationnet application, what hardware setup works with 13 CCTVs?
Should I set it up in a modular way, running DeepStream on each edge device with an RTX 4080 GPU? Say one edge device can handle 4 CCTVs; then I'd scale to 4 edge devices to cover 13 CCTVs.

Or should I use the Triton server library?
What is the better hardware setup for a multi-CCTV poseclassificationnet application?

I'd like to achieve 10 fps for each CCTV.

The poseclassificationnet app works well with 1-2 people in the image.
When the video has many people, say 10-12, the application freezes and drops to 0 fps.
I am using a GeForce RTX 3080 GPU.

It froze as shown below.

Maybe the load is too heavy; you can check that with the top and nvidia-smi commands.
Could you attach the video for us? Or you can just send the video to us in a message.
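To watch GPU load while the pipeline runs, one option (assuming nvidia-smi is on PATH; the query flags below are standard nvidia-smi options) is:

```shell
# Print current GPU utilization and memory usage as a single CSV sample.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total \
             --format=csv,noheader
else
  echo "nvidia-smi not found"
fi
```

Adding `-l 1` makes nvidia-smi repeat the sample every second; sustained utilization above ~90% during playback would match the freeze described below.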

I sent a Dropbox link in a private message.
Please let me know whether you can download it.

In nvidia-smi, the volatile GPU-Util reached more than 90% and the freezing happened. Please test on your side. I am using a GeForce RTX 3080.

Can you download my videos?

Yes. I am trying to reproduce this issue on my T4 server. We will reply promptly once we have a conclusion.

Thank you, sir.
Since I need to customize the poseclassificationnet app for my actual application, I am creating a new application based on deepstream-app.
I'll detect several object classes in the first GIE, not only humans. I'll run the 3D-skeleton GIE only when certain criteria are met, so I can save processing: I won't run the 3D skeleton and poseclassificationnet on every human, only on certain persons, so that multiple sources can run faster. Currently the poseclassificationnet app supports only one source:

if (num_sources > 1) {
  g_printerr ("We only support 1 source now. Exiting.\n");
  return -1;
}

Once the issue is solved on your side, I can update mine.

I ran it on my T4 board with your stream, and it worked well. Perhaps your card was overloaded and got stuck. Our next version will support multiple sources; please wait for the update.