Error at running deepstream-bodypose-3d app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Intel CPU & A30 GPU
• DeepStream Version Docker: nvcr.io/nvidia/deepstream:6.3-gc-triton-devel

I am using docker nvcr.io/nvidia/deepstream:6.3-gc-triton-devel. The command used is
./deepstream-pose-estimation-app --input file:///workspace/Nyan/tao_source_codes_v5.0.0/notebooks/tao_launcher_starter_kit/pose_classification_net/data/videos/selfinf_2136.avi --output ../streams/bodypose_3dbp.mp4 --focal 800.0 --width 1280 --height 720 --fps --save-pose /workspace/Nyan/tao_source_codes_v5.0.0/notebooks/tao_launcher_starter_kit/pose_classification_net/data/jasons/selfinf_2136.json

I get the following errors:

Using GPU 0 (NVIDIA A30, 56 SMs, 2048 th/SM max, CC 8.0, ECC on)
In cb_newpad
ENC_CTX(0x7f475400ac20) Error in initializing nvenc context
ERROR from element nv-filesink-encoder: Could not get/set settings from/on resource.
Error details: gstv4l2object.c(3536): gst_v4l2_object_set_format_full (): /GstPipeline:deepstream-bodypose3dnet/GstDsNvVideoEncFilesinkBin:nv-filesink/nvv4l2h265enc:nv-filesink-encoder:
Device is in streaming mode
Returned, stopping playback
[NvMultiObjectTracker] De-initialized
Deleting pipeline

What could be the issue?
The full output from running the command is:

Now playing: file:///workspace/Nyan/tao_source_codes_v5.0.0/notebooks/tao_launcher_starter_kit/pose_classification_net/data/videos/selfinf_2136.avi
0:00:00.916725193   406 0x561220e986d0 WARN                 nvinfer gstnvinfer.cpp:887:gst_nvinfer_start:<secondary-nvinference-engine> warning: NvInfer output-tensor-meta is enabled but init_params auto increase memory (auto-inc-mem) is disabled. The bufferpool will not be automatically resized.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:04.552285043   406 0x561220e986d0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/bodypose3dnet_vdeployable_accuracy_v1.0/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 9
0   INPUT  kFLOAT input0          3x256x192       min: 1x3x256x192     opt: 8x3x256x192     Max: 8x3x256x192
1   INPUT  kFLOAT k_inv           3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3
2   INPUT  kFLOAT t_form_inv      3x3             min: 1x3x3           opt: 8x3x3           Max: 8x3x3
3   INPUT  kFLOAT scale_normalized_mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36
4   INPUT  kFLOAT mean_limb_lengths 36              min: 1x36            opt: 8x36            Max: 8x36
5   OUTPUT kFLOAT pose25d         34x4            min: 0               opt: 0               Max: 0
6   OUTPUT kFLOAT pose2d          34x3            min: 0               opt: 0               Max: 0
7   OUTPUT kFLOAT pose3d          34x3            min: 0               opt: 0               Max: 0
8   OUTPUT kFLOAT pose2d_org_img  34x3            min: 0               opt: 0               Max: 0

0:00:04.659691127   406 0x561220e986d0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/bodypose3dnet_vdeployable_accuracy_v1.0/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:04.664925363   406 0x561220e986d0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary-nvinference-engine> [UID 2]: Load new model:../configs/config_infer_secondary_bodypose3dnet.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
0:00:06.726520791   406 0x561220e986d0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/peoplenet_vdeployable_quantized_v2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60

0:00:06.856775805   406 0x561220e986d0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.3/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/peoplenet_vdeployable_quantized_v2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
0:00:06.859059045   406 0x561220e986d0 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:../configs/config_infer_primary_peoplenet.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...

Using GPU 0 (NVIDIA A30, 56 SMs, 2048 th/SM max, CC 8.0, ECC on)
In cb_newpad
ENC_CTX(0x7f475400ac20) Error in initializing nvenc context
ERROR from element nv-filesink-encoder: Could not get/set settings from/on resource.
Error details: gstv4l2object.c(3536): gst_v4l2_object_set_format_full (): /GstPipeline:deepstream-bodypose3dnet/GstDsNvVideoEncFilesinkBin:nv-filesink/nvv4l2h265enc:nv-filesink-encoder:
Device is in streaming mode
Returned, stopping playback
[NvMultiObjectTracker] De-initialized
Deleting pipeline
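As an aside, the repeated `[TRT] CUDA lazy loading is not enabled` warnings in the log are harmless but easy to silence. A minimal sketch, run in the shell before launching the app (per the `CUDA_MODULE_LOADING` environment variable the warning itself points to):

```shell
# Enable CUDA lazy module loading before starting the app; this removes the
# TRT warning and can significantly reduce device memory usage for inference.
export CUDA_MODULE_LOADING=LAZY
echo "CUDA_MODULE_LOADING=$CUDA_MODULE_LOADING"
```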

The A30 does not have a hardware encoder (NVENC) module. You can refer to the link below: https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new

So I would have to use a GeForce RTX card.

Where can I check which DeepStream apps need the hardware encoder module? I need to use the deepstream-bodypose-3d app and `poseclassificationnet`, so I need to know which GPU to use: whether I can use an A100 or have to use an RTX card.

Could you attach a link to the demo code? Basically, any app that uses nvv4l2h264enc or nvv4l2h265enc as its encoder needs the hardware encoder module. You can check for those elements in your own code.
For which cards support the hardware encoder, you can refer to the link I attached before.
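One way to do the check the moderator describes is to search the sample sources for the NVENC-backed elements. A minimal sketch (the container path below is the stock DeepStream 6.3 install location and is an assumption about your setup):

```shell
# find_hw_encoder_users DIR: list source/config files under DIR that reference
# the V4L2 hardware encoder elements; any app using these needs NVENC,
# which A30/A100 lack.
find_hw_encoder_users() {
  grep -rln "nvv4l2h264enc\|nvv4l2h265enc" "$1"
}

# e.g. inside the DeepStream container:
# find_hw_encoder_users /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps
```

Any file this prints belongs to an app that will hit the same `Error in initializing nvenc context` on a GPU without a hardware encoder.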

I am using this deepstream-bodypose-3d app.

You can try to use fakesink or nveglglessink on the A30.

Which configuration file do I need to change?

You can refer to the --output parameter in the README.

The command is

./deepstream-pose-estimation-app --input file://workspace/data/Activities/videos/selfinf_2120.avi --output fakesink --focal 800.0 --width 1280 --height 720 --fps --save-pose /workspace/data/Activities/jsons/selfinf_2120.json

The error is as follows, and the app gets stuck there. Is it a driver issue?

Error: Could not get cuda device count (cudaErrorInsufficientDriver)
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1319>: failed
Error: Could not get cuda device count (cudaErrorInsufficientDriver)
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:1319>: failed
Now playing: file://workspace/data/Activities/videos/selfinf_2120.avi
Unable to set device in gst_nvstreammux_change_state
Unable to set device in gst_nvstreammux_change_state
Running...
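For reference, `cudaErrorInsufficientDriver` means the host NVIDIA driver is older than what the container's CUDA runtime requires (or the container was started without GPU access, e.g. a missing `--gpus all`). DeepStream 6.3 is built on CUDA 12.x, which on Linux needs roughly a 525-series or newer driver; the exact minimum below is an assumption for illustration, so check the release notes. The check itself is just a numeric, field-by-field version comparison:

```python
# Sketch of the version check behind cudaErrorInsufficientDriver.
# The required version here is illustrative; consult the DeepStream 6.3
# release notes for the real minimum.

def driver_meets_minimum(installed: str, required: str = "525.60.13") -> bool:
    """Compare dotted driver versions numerically, field by field."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)

print(driver_meets_minimum("535.104.05"))  # new enough
print(driver_meets_minimum("470.199.02"))  # too old: would trigger the error
```

Run `nvidia-smi` inside the container: if it fails outright, the container has no GPU access at all; if it works but reports an old driver, upgrade the host driver.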

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes. Please refer to our Guide to configure the related environment.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.