Error when running deepstream-pose-estimation-app

Hello everyone,

I am trying to run the deepstream-pose-estimation-app but I am facing the following error:

deepstream-pose-estimation-app: …/nvdsinfer/nvdsinfer_model_builder.cpp:618: nvdsinfer::TrtModelBuilder::TrtModelBuilder(int, nvinfer1::ILogger&, const std::shared_ptr<nvdsinfer::DlLibHandle>&): Assertion `m_Builder' failed.
Aborted (core dumped)

This is the command that produces the error:
./deepstream-pose-estimation-app --input file:///opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/9-12-53_57sec.mp4 --output /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/9-12-53_57sec_output.mp4 --focal 800.0 --width 1280 --height 720 --fps --save-pose /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/3_16.json

Can anyone suggest what might be causing this error and how to resolve it? I would appreciate any help or insights on this issue. Thank you in advance.

My Environment Setup:
DeepStreamSDK 6.1.0
TensorRT Version: 8.6.0.12-1+cuda12.0
NVIDIA GPU Driver Version: 510.108.03
CUDA Version: 11.6

What is the device model? The library versions are not compatible; please refer to Quickstart Guide — DeepStream 6.2 Release documentation.
If on dGPU, you might use Docker; here is the link: DeepStream | NVIDIA NGC

Thanks for your response.

The models are "bodypose3dnet_vdeployable_accuracy_v1.0" and "peoplenet_vdeployable_quantized_v2.5".

Is the TensorRT version incompatible? Should it be 8.2.5.1?

Here is my environment setup:
GPU platforms: V100
DeepStreamSDK 6.1.0
TensorRT Version: 8.6.0.12-1+cuda12.0
NVIDIA GPU Driver Version: 510.108.03
CUDA Version: 11.6
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
GStreamer Version: 1.16.3

If using DeepStream 6.1 on dGPU, the TensorRT version should be 8.2.5.1; please refer to the link above.

I couldn't find TRT 8.2.5.1 at https://developer.nvidia.com/tensorrt, so I used the lines below to install TRT 8.5.3:

os="ubuntu2004"
tag="8.5.3-cuda-11.8"
sudo dpkg -i ./nv-tensorrt-local-repo-${os}-${tag}_1.0-1_amd64.deb
sudo cp /var/nv-tensorrt-local-repo-${os}-${tag}/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install tensorrt -y

But I still got TensorRT 8.6.0.12 installed. Do you know how to install TRT 8.5.3 or 8.2.5.1?

You might install the current version; please refer to Installation Guide :: NVIDIA Deep Learning TensorRT Documentation.
Here is the method to install TRT 8.2.5.1: Quickstart Guide — DeepStream 6.1 Release documentation
Here is the method to install TRT 8.5.2.2 (it is for DS 6.2): Quickstart Guide — DeepStream 6.2 Release documentation
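A hedged sketch of what version pinning could look like for DeepStream 6.1 (the exact version string, and whether each package exists at that version in your apt repository, are assumptions — confirm the real candidates with `apt-cache madison libnvinfer8` first):

```shell
# Sketch: build the pinned install commands for the TensorRT version that
# DeepStream 6.1 expects, instead of letting `apt-get install tensorrt`
# resolve to the newest release (8.6.x in this case).
# The version string below is an assumption; list the real candidates with:
#   apt-cache madison libnvinfer8
version="8.2.5-1+cuda11.4"

for pkg in libnvinfer8 libnvinfer-plugin8 libnvparsers8 libnvonnxparsers8 \
           libnvinfer-dev libnvinfer-plugin-dev; do
    echo "sudo apt-get install -y ${pkg}=${version}"
done

# After installing, hold the packages so a later `apt-get upgrade`
# cannot silently replace them with 8.6:
echo "sudo apt-mark hold libnvinfer8 libnvinfer-plugin8"
```

Printing the commands first (rather than running them directly) makes it easy to review the pinned versions before touching the system.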

Thanks for your help. I have installed TRT 8.5.2 successfully using Quickstart Guide — DeepStream 6.2 Release documentation.

The above error no longer occurs when running deepstream-pose-estimation-app. However, I still can't get any result from the model. Could there still be other conflicting libraries? Below is the terminal output when running deepstream-pose-estimation-app:

Now playing: file:///opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/streams/9-12-53_57sec.mp4
0:00:16.238838232 7 0x558c74e4dac0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/bodypose3dnet_vdeployable_accuracy_v1.0/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:16.644554372 7 0x558c74e4dac0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/bodypose3dnet_vdeployable_accuracy_v1.0/bodypose3dnet_accuracy.etlt_b8_gpu0_fp16.engine
0:00:16.652055699 7 0x558c74e4dac0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:…/configs/config_infer_secondary_bodypose3dnet.txt sucessfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: (foreignNode) cuBLASLt subversions: compiled against 11.8.1.0 but running against 11.9.2.0.
WARNING: [TRT]: (foreignNode) cuBLASLt subversions: compiled against 11.8.1.0 but running against 11.9.2.0.
WARNING: [TRT]: (foreignNode) cuBLASLt subversions: compiled against 11.8.1.0 but running against 11.9.2.0.
WARNING: [TRT]: (foreignNode) cuBLASLt subversions: compiled against 11.8.1.0 but running against 11.9.2.0.
WARNING: [TRT]: (foreignNode) cuBLASLt subversions: compiled against 11.8.1.0 but running against 11.9.2.0.
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 9
0 INPUT kFLOAT input0 3x256x192 min: 1x3x256x192 opt: 8x3x256x192 Max: 8x3x256x192
1 INPUT kFLOAT k_inv 3x3 min: 1x3x3 opt: 8x3x3 Max: 8x3x3
2 INPUT kFLOAT t_form_inv 3x3 min: 1x3x3 opt: 8x3x3 Max: 8x3x3
3 INPUT kFLOAT scale_normalized_mean_limb_lengths 36 min: 1x36 opt: 8x36 Max: 8x36
4 INPUT kFLOAT mean_limb_lengths 36 min: 1x36 opt: 8x36 Max: 8x36
5 OUTPUT kFLOAT pose25d 34x4 min: 0 opt: 0 Max: 0
6 OUTPUT kFLOAT pose2d 34x3 min: 0 opt: 0 Max: 0
7 OUTPUT kFLOAT pose3d 34x3 min: 0 opt: 0 Max: 0
8 OUTPUT kFLOAT pose2d_org_img 34x3 min: 0 opt: 0 Max: 0

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:22.267961231 7 0x558c74e4dac0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/peoplenet_vdeployable_quantized_v2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
0:00:22.310639278 7 0x558c74e4dac0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream-bodypose-3d/models/peoplenet_vdeployable_quantized_v2.5/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
0:00:22.312621688 7 0x558c74e4dac0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus: [UID 1]: Load new model:…/configs/config_infer_primary_peoplenet.txt sucessfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 3x34x60

Decodebin child added: source
Decodebin child added: decodebin0
Running…
In cb_newpad
Error: Decodebin did not pick nvidia decoder plugin.
ERROR from element qtdemux0: Internal data stream error.
Error details: qtdemux.c(6619): gst_qtdemux_loop (): /GstPipeline:deepstream-bodypose3dnet/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/GstQTDemux:qtdemux0:
streaming stopped, reason not-linked (-1)
Returned, stopping playback
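For the "Decodebin did not pick nvidia decoder plugin" error, one common (but not guaranteed) remedy after swapping TensorRT/CUDA libraries is to clear the GStreamer plugin registry cache so the NVIDIA plugins are re-scanned on the next run. The plugin name `nvv4l2decoder` below is the usual dGPU decoder; treat this as a sketch rather than a confirmed fix:

```shell
# A stale GStreamer registry can keep NVIDIA plugins blacklisted after
# library changes; removing the cache forces a full re-scan on next run.
rm -rf "$HOME/.cache/gstreamer-1.0"

# If the GStreamer tools are installed, check the decoder is visible again:
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 nvv4l2decoder | head -n 5
else
    echo "gst-inspect-1.0 not found; skipping verification"
fi
```

If `gst-inspect-1.0 nvv4l2decoder` reports the plugin as blacklisted even after clearing the cache, the underlying NVIDIA driver/CUDA libraries are still the ones to check.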

duplicate with

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.