Runtime errors when running the human pose estimation application

Dear NVIDIA developers,

I’m having issues running the human pose estimation application. I was following the instructions here. I completed every step up to Step 6, where I hit runtime errors when trying to run the app. In other words, the app compiled successfully, but running it fails.

A side note: while exporting the ONNX model (Step 2 of the guide I linked), I used a Docker image that I had to modify in order to run the export script.

You can find the information about my setup below:

Hardware Platform (Jetson / GPU): Tesla K80 x2 (two Tesla K80 graphics cards)
DeepStream Version: 5.0.1
TensorRT Version: 7.2.1.6
NVIDIA GPU Driver Version (valid for GPU only): 450.36.06
CUDA Version: 11.0
Issue Type (questions, new requirements, bugs): Bugs

Bug description:

This is what happens when I try to run the app I compiled:

ecmazureadmin@sendd-mj-srv02:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-pose-estimation$ ./deepstream-pose-estimation-app SENDD_P15_1_4_3.mp4 .

(gst-plugin-scanner:94726): GStreamer-WARNING **: 13:28:59.749: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:94726): GStreamer-WARNING **: 13:28:59.753: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so': libcudart.so.11.0: cannot open shared object file: No such file or directory
One element could not be created. Exiting.
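
For reference, this is roughly how I checked whether the libraries those two plugins complain about are visible at all (the plugin path is taken from the warnings above; everything else is standard diagnostics, not something from the guide):

# list any copies of the missing libraries that the dynamic linker already knows about
ldconfig -p | grep -E 'libcudart|libtrtserver'
# check which extra directories the runtime is told to search
echo $LD_LIBRARY_PATH
# show exactly which shared objects the failing OSD plugin cannot resolve
ldd /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so | grep 'not found'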

I looked at this thread for potential solutions. Here are the outputs of the commands suggested there:

ecmazureadmin@sendd-mj-srv02:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-pose-estimation$ rm -rf ~/.cache/gstreamer-1.0/
ecmazureadmin@sendd-mj-srv02:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-pose-estimation$ gst-inspect-1.0
tcp:  multisocketsink: Multi socket sink
tcp:  multifdsink: Multi filedescriptor sink
tcp:  tcpserversrc: TCP server source
tcp:  tcpserversink: TCP server sink
tcp:  tcpclientsrc: TCP client source
tcp:  tcpclientsink: TCP client sink
tcp:  socketsrc: socket source
videoscale:  videoscale: Video scaler
volume:  volume: Volume
audioresample:  audioresample: Audio resampler
adder:  adder: Adder
videotestsrc:  videotestsrc: Video test source
rawparse:  rawvideoparse: rawvideoparse
rawparse:  rawaudioparse: rawaudioparse
rawparse:  unalignedvideoparse: unalignedvideoparse
rawparse:  unalignedaudioparse: unalignedaudioparse
videoconvert:  videoconvert: Colorspace converter
playback:  parsebin: Parse Bin
playback:  urisourcebin: URI reader
playback:  uridecodebin3: URI Decoder
playback:  uridecodebin: URI Decoder
playback:  decodebin3: Decoder Bin 3
playback:  decodebin: Decoder Bin
playback:  streamsynchronizer: Stream Synchronizer
playback:  subtitleoverlay: Subtitle Overlay
playback:  playsink: Player Sink
playback:  playbin3: Player Bin 3
playback:  playbin: Player Bin 2
audiorate:  audiorate: Audio rate adjuster
audiomixer:  audiointerleave: AudioInterleave
audiomixer:  liveadder: AudioMixer
audiomixer:  audiomixer: AudioMixer
gio:  giostreamsrc: GIO stream source
gio:  giostreamsink: GIO stream sink
gio:  giosrc: GIO source
gio:  giosink: GIO sink
ximagesink:  ximagesink: Video sink
coretracers:  leaks (GstTracerFactory)
coretracers:  stats (GstTracerFactory)
coretracers:  rusage (GstTracerFactory)
coretracers:  log (GstTracerFactory)
coretracers:  latency (GstTracerFactory)
app:  appsink: AppSink
app:  appsrc: AppSrc
videorate:  videorate: Video rate adjuster
coreelements:  streamiddemux: Streamid Demux
coreelements:  valve: Valve element
coreelements:  multiqueue: MultiQueue
coreelements:  typefind: TypeFind
coreelements:  tee: Tee pipe fitting
coreelements:  filesink: File Sink
coreelements:  queue2: Queue 2
coreelements:  queue: Queue
coreelements:  output-selector: Output selector
coreelements:  input-selector: Input selector
coreelements:  identity: Identity
coreelements:  funnel: Funnel pipe fitting
coreelements:  filesrc: File Source
coreelements:  fdsink: Filedescriptor Sink
coreelements:  fdsrc: Filedescriptor Source
coreelements:  fakesink: Fake Sink
coreelements:  fakesrc: Fake Source
coreelements:  downloadbuffer: DownloadBuffer
coreelements:  dataurisrc: data: URI source element
coreelements:  concat: Concat
coreelements:  capsfilter: CapsFilter
opengl:  glfilterglass: OpenGL glass filter
opengl:  gldeinterlace: OpenGL deinterlacing filter
opengl:  gltestsrc: Video test source
opengl:  glstereosplit: GLStereoSplit
opengl:  glviewconvert: OpenGL Multiview/3D conversion filter
opengl:  glfilterapp: OpenGL application filter
opengl:  glshader: OpenGL fragment shader filter
opengl:  glcolorscale: OpenGL color scale
opengl:  gleffects_laplacian: Laplacian Convolution Demo Effect
opengl:  gleffects_blur: Blur with 9x9 separable convolution Effect
opengl:  gleffects_sobel: Sobel edge detection Effect
opengl:  gleffects_glow: Glow Lighting Effect
opengl:  gleffects_sin: All Grey but Red Effect
opengl:  gleffects_xray: Glowing negative effect
opengl:  gleffects_lumaxpro: Luma Cross Processing Effect
opengl:  gleffects_xpro: Cross Processing Effect
opengl:  gleffects_sepia: Sepia Toning Effect
opengl:  gleffects_heat: Heat Signature Effect
opengl:  gleffects_square: Square Effect
opengl:  gleffects_bulge: Bulge Effect
opengl:  gleffects_twirl: Twirl Effect
opengl:  gleffects_fisheye: FishEye Effect
opengl:  gleffects_tunnel: Light Tunnel Effect
opengl:  gleffects_stretch: Stretch Effect
opengl:  gleffects_squeeze: Squeeze Effect
opengl:  gleffects_mirror: Mirror Effect
opengl:  gleffects_identity: Do nothing Effect
opengl:  gleffects: Gstreamer OpenGL Effects
opengl:  glfiltercube: OpenGL cube filter
opengl:  glsrcbin: GL Src Bin
opengl:  glsinkbin: GL Sink Bin
opengl:  glfilterbin: GL Filter Bin
opengl:  glcolorbalance: Video balance
opengl:  glcolorconvert: OpenGL color converter
opengl:  gldownload: OpenGL downloader
opengl:  glupload: OpenGL uploader
opengl:  glimagesinkelement: OpenGL video sink
opengl:  glimagesink: GL Sink Bin
typefindfunctions: audio/audible: aa, aax
typefindfunctions: audio/x-xi: xi
typefindfunctions: video/x-pva: pva
typefindfunctions: application/x-ssa: ssa, ass
typefindfunctions: application/octet-stream: no extensions
typefindfunctions: image/x-degas: no extensions
typefindfunctions: image/x-icon: no extensions
typefindfunctions: application/x-yuv4mpeg: no extensions
typefindfunctions: image/vnd.wap.wbmp: no extensions
typefindfunctions: image/vnd.adobe.photoshop: psd
typefindfunctions: application/msword: doc
typefindfunctions: application/pdf: pdf
typefindfunctions: audio/x-kss: kss
typefindfunctions: video/x-ivf: ivf
typefindfunctions: audio/x-sap: sap
typefindfunctions: audio/x-vgm: vgm
typefindfunctions: audio/x-gbs: gbs
typefindfunctions: audio/x-ay: ay
typefindfunctions: audio/x-gym: gym
typefindfunctions: audio/x-nsf: nsf
typefindfunctions: video/vivo: viv
typefindfunctions: application/x-mmsh: no extensions
typefindfunctions: multipart/x-mixed-replace: no extensions
typefindfunctions: video/x-dirac: no extensions
typefindfunctions: application/x-ms-dos-executable: dll, exe, ocx, sys, scr, msstyles, cpl
typefindfunctions: application/x-ar: a
typefindfunctions: application/x-tar: tar
typefindfunctions: application/x-rar: rar
typefindfunctions: image/svg+xml: svg
typefindfunctions: application/postscript: ps
typefindfunctions: audio/x-caf: caf
typefindfunctions: audio/x-wavpack-correction: wvc
typefindfunctions: audio/x-wavpack: wv, wvp
typefindfunctions: audio/x-spc: spc
typefindfunctions: audio/aac: aac, adts, adif, loas
typefindfunctions: application/x-executable: no extensions
typefindfunctions: text/x-cmml: no extensions
typefindfunctions: application/x-ogg-skeleton: no extensions
typefindfunctions: audio/x-celt: no extensions
typefindfunctions: audio/x-speex: no extensions
typefindfunctions: application/x-ogm-text: no extensions
typefindfunctions: application/x-ogm-audio: no extensions
typefindfunctions: application/x-ogm-video: no extensions
typefindfunctions: video/x-theora: no extensions
typefindfunctions: audio/x-vorbis: no extensions
typefindfunctions: audio/x-flac: flac
typefindfunctions: application/x-subtitle-vtt: vtt
typefindfunctions: subtitle/x-kate: no extensions
typefindfunctions: application/x-compress: Z
typefindfunctions: application/zip: zip
typefindfunctions: application/x-gzip: gz
typefindfunctions: application/x-bzip: bz2
typefindfunctions: image/x-sun-raster: ras
typefindfunctions: image/x-xpixmap: xpm
typefindfunctions: image/x-jng: jng
typefindfunctions: video/x-mng: mng
typefindfunctions: image/x-xcf: xcf
typefindfunctions: audio/x-sid: sid
typefindfunctions: audio/x-sbc: sbc
typefindfunctions: audio/iLBC-sh: ilbc
typefindfunctions: audio/x-amr-wb-sh: amr
typefindfunctions: audio/x-amr-nb-sh: amr
typefindfunctions: video/x-dv: dv, dif
typefindfunctions: video/x-mve: mve
typefindfunctions: application/mxf: mxf
typefindfunctions: video/x-matroska: mkv, mka, mk3d, webm
typefindfunctions: image/x-portable-pixmap: pnm, ppm, pgm, pbm
typefindfunctions: image/x-exr: exr
typefindfunctions: image/webp: webp
typefindfunctions: image/tiff: tif, tiff
typefindfunctions: image/bmp: bmp
typefindfunctions: image/png: png
typefindfunctions: image/gif: gif
typefindfunctions: image/jpeg: jpg, jpe, jpeg
typefindfunctions: application/x-ape: ape
typefindfunctions: audio/x-shorten: shn
typefindfunctions: audio/x-rf64: rf64
typefindfunctions: audio/x-w64: w64
typefindfunctions: audio/x-ircam: sf
typefindfunctions: audio/x-sds: sds
typefindfunctions: audio/x-voc: voc
typefindfunctions: audio/x-nist: nist
typefindfunctions: audio/x-paris: paf
typefindfunctions: audio/x-svx: iff, svx
typefindfunctions: audio/x-aiff: aiff, aif, aifc
typefindfunctions: audio/x-wav: wav
typefindfunctions: application/xml: xml
typefindfunctions: application/ttml+xml: ttml+xml
typefindfunctions: application/smil: smil
typefindfunctions: application/sdp: sdp
typefindfunctions: application/x-hls: m3u8
typefindfunctions: application/itc: itc
typefindfunctions: text/uri-list: ram
typefindfunctions: text/utf-32: txt
typefindfunctions: text/utf-16: txt
typefindfunctions: text/plain: txt
typefindfunctions: video/x-flv: flv
typefindfunctions: application/vnd.ms-sstr+xml: no extensions
typefindfunctions: application/dash+xml: mpd, MPD
typefindfunctions: application/x-shockwave-flash: swf, swfl
typefindfunctions: application/x-pn-realaudio: ra, ram, rm, rmvb
typefindfunctions: application/vnd.rn-realmedia: ra, ram, rm, rmvb
typefindfunctions: text/html: htm, html
typefindfunctions: video/mj2: mj2
typefindfunctions: image/x-jpc: jpc, j2k
typefindfunctions: image/jp2: jp2
typefindfunctions: image/x-quicktime: qif, qtif, qti
typefindfunctions: video/quicktime: mov, mp4
typefindfunctions: application/x-3gp: 3gp
typefindfunctions: audio/x-m4a: m4a
typefindfunctions: video/x-nuv: nuv
typefindfunctions: video/x-h265: h265, x265, 265
typefindfunctions: video/x-h264: h264, x264, 264
typefindfunctions: video/x-h263: h263, 263
typefindfunctions: video/mpeg4: m4v
typefindfunctions: video/mpeg-elementary: mpv, mpeg, mpg
typefindfunctions: application/ogg: ogg, oga, ogv, ogm, ogx, spx, anx, axa, axv
typefindfunctions: video/mpegts: ts, mts
typefindfunctions: video/mpeg-sys: mpe, mpeg, mpg
typefindfunctions: audio/x-gsm: gsm
typefindfunctions: audio/x-dts: dts
typefindfunctions: audio/x-ac3: ac3, eac3
typefindfunctions: audio/mpeg: mp3, mp2, mp1, mpga
typefindfunctions: audio/x-mod: 669, amf, ams, dbm, digi, dmf, dsm, gdm, far, imf, it, j2b, mdl, med, mod, mt2, mtm, okt, psm, ptm, sam, s3m, stm, stx, ult, umx, xm
typefindfunctions: audio/x-ttafile: tta
typefindfunctions: application/x-apetag: mp3, ape, mpc, wv
typefindfunctions: application/x-id3v1: mp3, mp2, mp1, mpga, ogg, flac, tta
typefindfunctions: application/x-id3v2: mp3, mp2, mp1, mpga, ogg, flac, tta
typefindfunctions: video/x-fli: flc, fli
typefindfunctions: audio/mobile-xmf: mxmf
typefindfunctions: audio/riff-midi: mid, midi
typefindfunctions: audio/midi: mid, midi
typefindfunctions: audio/x-imelody: imy, ime, imelody
typefindfunctions: video/x-vcd: dat
typefindfunctions: video/x-cdxa: dat
typefindfunctions: audio/qcelp: qcp
typefindfunctions: video/x-msvideo: avi
typefindfunctions: audio/x-au: au, snd
typefindfunctions: audio/x-musepack: mpc, mpp, mp+
typefindfunctions: video/x-ms-asf: asf, wm, wma, wmv
subparse:  ssaparse: SSA Subtitle Parser
subparse:  subparse: Subtitle parser
subparse: subparse_typefind: srt, sub, mpsub, mdvd, smi, txt, dks, vtt
audiotestsrc:  audiotestsrc: Audio test source
audioconvert:  audioconvert: Audio converter
encoding:  encodebin: Encoder Bin
pbtypes:  GstVideoMultiviewFlagsSet (GstDynamicTypeFactory)
staticelements:  bin: Generic bin
staticelements:  pipeline: Pipeline object

Total count: 25 plugins, 253 features
ecmazureadmin@sendd-mj-srv02:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-pose-estimation$ gst-inspect-1.0 nveglglessink
No such element or plugin 'nveglglessink'

Since I get “No such element or plugin 'nveglglessink'” when I run gst-inspect-1.0 nveglglessink, I suspected that my DeepStream SDK wasn’t installed properly. However, when I try to install it from the .deb package, I get the following output:

ecmazureadmin@sendd-mj-srv02:~/uploads$ sudo apt install ./deepstream-5.0_5.0.1-1_amd64.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'deepstream-5.0' instead of './deepstream-5.0_5.0.1-1_amd64.deb'
deepstream-5.0 is already the newest version (5.0.1-1).
The following packages were automatically installed and are no longer required:
  grub-pc-bin linux-azure-5.3-cloud-tools-5.3.0-1031
  linux-azure-5.3-tools-5.3.0-1031
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 97 not upgraded.

Similarly, when I try to install TensorRT, I get:

ecmazureadmin@sendd-mj-srv02:~/uploads$ sudo apt install ./nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.2.1.6-ga-20201006_1-1_amd64.deb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.2.1.6-ga-20201006' instead of './nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.2.1.6-ga-20201006_1-1_amd64.deb'
nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.2.1.6-ga-20201006 is already the newest version (1-1).
The following packages were automatically installed and are no longer required:
  grub-pc-bin linux-azure-5.3-cloud-tools-5.3.0-1031
  linux-azure-5.3-tools-5.3.0-1031
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 97 not upgraded.

I have two questions on this:

  1. Could the issue stem from how the app was compiled, or is it just related to my environment setup?
  2. What is going wrong here, and how do I fix it?

I came across the same error! My topic is here; so far I still don’t know why…

Do you mean you are not using the DeepStream docker? Why not use the DeepStream release docker?
If you are using a docker image you customized yourself, you can refer to Quickstart Guide — DeepStream 6.3 Release documentation to install DeepStream and set up its environment.
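
For example, something along these lines should pull and start the DeepStream 5.0.1 release container (the NGC tag here is an assumption, please check the NGC page for the exact one; the mounted path is just a placeholder):

# pull the DeepStream 5.0.1 devel image from NGC (verify the tag on NGC)
docker pull nvcr.io/nvidia/deepstream:5.0.1-20.09-devel
# start it with GPU access and mount your pose-estimation sources into the container
docker run --gpus all -it --rm \
    -v /path/to/deepstream_pose_estimation:/workspace/deepstream_pose_estimation \
    nvcr.io/nvidia/deepstream:5.0.1-20.09-devel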


Hello @mchi,

I used the latest DeepStream base Docker container and was able to get the app running.

However, now that the app runs, I get additional errors. They differ depending on whether the input file is .mp4 or .avi.

Here’s my output log:

root@ea230df7fe23:/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation# ./deepstream-pose-estimation-app DPK_input_sendd_3_1.avi .
(gst-plugin-scanner:80): GStreamer-WARNING **: 10:23:31.008: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory
Now playing: DPK_input_sendd_3_1.avi
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:01.222237211    79 0x56289a0c0d80 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:01.222278711    79 0x56289a0c0d80 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:01.222293111    79 0x56289a0c0d80 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.7
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1291 FP16 not supported by platform. Using FP32 mode.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:35.918207473    79 0x56289a0c0d80 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x224x224       min: 1x3x224x224     opt: 1x3x224x224     Max: 1x3x224x224
1   OUTPUT kFLOAT part_affinity_fields 56x56x42        min: 0               opt: 0               Max: 0
2   OUTPUT kFLOAT heatmap         56x56x18        min: 0               opt: 0               Max: 0
3   OUTPUT kFLOAT maxpool_heatmap 56x56x18        min: 0               opt: 0               Max: 0

0:00:35.929015342    79 0x56289a0c0d80 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
libnvosd (603):(ERROR) : Out of bound radius
0:00:45.222393034    79 0x56289a103a30 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:45.222410334    79 0x56289a103a30 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element nv-onscreendisplay: Unable to draw circles
Error details: gstnvdsosd.c(558): gst_nvds_osd_transform_ip (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstNvDsOsd:nv-onscreendisplay
Returned, stopping playback
libnvosd (603):(ERROR) : Out of bound radius
Deleting pipeline
root@ea230df7fe23:/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation# ./deepstream-pose-estimation-app SENDD_P7_3_4_1.mp4 .
Now playing: SENDD_P7_3_4_1.mp4
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:00.712037936    98 0x55c1ad5f6b90 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:00.712079437    98 0x55c1ad5f6b90 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.712093137    98 0x55c1ad5f6b90 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.7
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1291 FP16 not supported by platform. Using FP32 mode.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:35.612457029    98 0x55c1ad5f6b90 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x224x224       min: 1x3x224x224     opt: 1x3x224x224     Max: 1x3x224x224
1   OUTPUT kFLOAT part_affinity_fields 56x56x42        min: 0               opt: 0               Max: 0
2   OUTPUT kFLOAT heatmap         56x56x18        min: 0               opt: 0               Max: 0
3   OUTPUT kFLOAT maxpool_heatmap 56x56x18        min: 0               opt: 0               Max: 0

0:00:35.623462899    98 0x55c1ad5f6b90 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
ERROR from element h264-parser: Failed to parse stream
Error details: gstbaseparse.c(2954): gst_base_parse_check_sync (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstH264Parse:h264-parser
Returned, stopping playback
Deleting pipeline

I tried running rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin, but that didn’t help.

How do I fix this issue?

For this, could you try another H264 or mp4 video file instead of the avi?

I have tried it for both .avi and .mp4. See the above output.

Please try H264.

Please refer to the code - deepstream_pose_estimation/deepstream_pose_estimation_app.cpp at master · NVIDIA-AI-IOT/deepstream_pose_estimation · GitHub - it only supports raw H264 media files.


@mchi I tried running the program with the raw H264 media file. I still get an error. See the output below:

root@ea230df7fe23:/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation# ./deepstream-pose-estimation-app stream.264 .
Now playing: stream.264
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:00.712641747   180 0x55f78ba74790 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:00.712681947   180 0x55f78ba74790 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.712698347   180 0x55f78ba74790 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.7
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1291 FP16 not supported by platform. Using FP32 mode.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 3 output network tensors.
0:00:35.641783401   180 0x55f78ba74790 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp32.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x224x224       min: 1x3x224x224     opt: 1x3x224x224     Max: 1x3x224x224
1   OUTPUT kFLOAT part_affinity_fields 56x56x42        min: 0               opt: 0               Max: 0
2   OUTPUT kFLOAT heatmap         56x56x18        min: 0               opt: 0               Max: 0
3   OUTPUT kFLOAT maxpool_heatmap 56x56x18        min: 0               opt: 0               Max: 0

0:00:35.653254679   180 0x55f78ba74790 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
libnvosd (603):(ERROR) : Out of bound radius
ERROR from element nv-onscreendisplay: Unable to draw circles
Error details: gstnvdsosd.c(558): gst_nvds_osd_transform_ip (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstNvDsOsd:nv-onscreendisplay
0:00:35.836519020   180 0x55f78bab8630 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
Returned, stopping playback
0:00:35.836566820   180 0x55f78bab8630 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
libnvosd (603):(ERROR) : Out of bound radius
libnvosd (603):(ERROR) : Out of bound radius
libnvosd (603):(ERROR) : Out of bound radius
libnvosd (603):(ERROR) : Out of bound radius
libnvosd (603):(ERROR) : Out of bound radius
Deleting pipeline

Could you tell me what is going wrong? I found this thread when looking for the error. Is this a bug in DeepStream itself? I’m running all of this via PuTTY (SSH) on a Microsoft Azure virtual machine, if that’s relevant.

Did you follow the README in the GitHub repo?


Hello @mchi,

I had forgotten to replace the OSD binaries in the DeepStream Docker container. I have done that now. Here is the output I get:

root@723375a233fe:/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation# ./deepstream-pose-estimation-app stream.264 .

(deepstream-pose-estimation-app:114): GStreamer-WARNING **: 07:28:04.437: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so': libcudart.so.11.0: cannot open shared object file: No such file or directory
One element could not be created. Exiting.

When I run gst-inspect-1.0 nveglglessink, I get the output below, which suggests there are no problems with the DeepStream SDK setup in my Docker container:

root@723375a233fe:/opt/nvidia/deepstream/deepstream-5.0/deepstream-pose-estimation# gst-inspect-1.0 nveglglessink
Factory Details:
  Rank                     secondary (128)
  Long-name                EGL/GLES vout Sink
  Klass                    Sink/Video
  Description              An EGL/GLES Video Output Sink Implementing the VideoOverlay interface
  Author                   Reynaldo H. Verdejo Pinochet <reynaldo@collabora.com>, Sebastian Dröge <sebastian.droege@collabora.co.uk>

Plugin Details:
  Name                     nvdsgst_eglglessink
  Description              EGL/GLES sink
  Filename                 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_eglglessink.so
  Version                  5.0.1
  License                  LGPL
  Source module            gst-plugins-bad
  Source release date      2014-02-08
  Binary package           GStreamer Bad Plug-ins source release
  Origin URL               Unknown package origin

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseSink
                         +----GstVideoSink
                               +----GstEglGlesSink

Implemented Interfaces:
  GstVideoOverlay

Pad Templates:
  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:EGLImage)
                 format: { (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)AYUV, (string)Y444, (string)I420, (string)YV12, (string)NV12, (string)NV21, (string)Y42B, (string)Y41B, (string)RGB, (string)BGR, (string)RGB16 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
      video/x-raw(meta:GstVideoGLTextureUploadMeta)
                 format: { (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)AYUV, (string)Y444, (string)I420, (string)YV12, (string)NV12, (string)NV21, (string)Y42B, (string)Y41B, (string)RGB, (string)BGR, (string)RGB16 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
      video/x-raw
                 format: { (string)RGBA, (string)BGRA, (string)ARGB, (string)ABGR, (string)RGBx, (string)BGRx, (string)xRGB, (string)xBGR, (string)AYUV, (string)Y444, (string)I420, (string)YV12, (string)NV12, (string)NV21, (string)Y42B, (string)Y41B, (string)RGB, (string)BGR, (string)RGB16 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
      video/x-raw(memory:NVMM)
                 format: { (string)BGRx, (string)RGBA, (string)I420, (string)NV12, (string)BGR, (string)RGB }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "eglglessink0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  sync                : Sync on the clock
                        flags: readable, writable
                        Boolean. Default: true
  max-lateness        : Maximum number of nanoseconds that a buffer can be late before it is dropped (-1 unlimited)
                        flags: readable, writable
                        Integer64. Range: -1 - 9223372036854775807 Default: 20000000
  qos                 : Generate Quality-of-Service events upstream
                        flags: readable, writable
                        Boolean. Default: true
  async               : Go asynchronously to PAUSED
                        flags: readable, writable
                        Boolean. Default: true
  ts-offset           : Timestamp offset in nanoseconds
                        flags: readable, writable
                        Integer64. Range: -9223372036854775808 - 9223372036854775807 Default: 0
  enable-last-sample  : Enable the last-sample property
                        flags: readable, writable
                        Boolean. Default: true
  last-sample         : The last sample received in the sink
                        flags: readable
                        Boxed pointer of type "GstSample"
  blocksize           : Size in bytes to pull per buffer (0 = default)
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 4096
  render-delay        : Additional render delay of the sink in nanoseconds
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
  throttle-time       : The time to keep between rendered buffers (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
  max-bitrate         : The maximum bits per second to render (0 = disabled)
                        flags: readable, writable
                        Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
  show-preroll-frame  : Whether to render video frames during preroll
                        flags: readable, writable
                        Boolean. Default: true
  create-window       : If set to true, the sink will attempt to create it's own window to render to if none is provided. This is currently only supported when the sink is used under X11
                        flags: readable, writable
                        Boolean. Default: true
  force-aspect-ratio  : If set to true, the sink will attempt to preserve the incoming frame's geometry while scaling, taking both the storage's and display's pixel aspect ratio into account
                        flags: readable, writable
                        Boolean. Default: true
  display             : If set, the sink will use the passed X Display for rendering
                        flags: readable, writable
                        Pointer.
  window-x            : X coordinate of window
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 2147483647 Default: 10
  window-y            : Y coordinate of window
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 2147483647 Default: 10
  window-width        : Width of window
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0
  window-height       : Height of window
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 2147483647 Default: 0
  rows                : Rows of Display
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 2147483647 Default: 1
  columns             : Columns of display
                        flags: readable, writable
                        Unsigned Integer. Range: 1 - 2147483647 Default: 1
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  profile             : gsteglglessink jitter information
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0

I should note that I copied the ONNX model and the compiled program I already had into the Docker container. I don’t think this affects the process; I’m just mentioning it in case it’s relevant.

What do you think the issue is now?

I noticed that in my Docker container, when I run nvcc --version, I get:

root@64380ac508a2:/opt/nvidia/deepstream/deepstream-5.0# nvcc --version
bash: nvcc: command not found

I think this may be why I’m getting the error. I am using the base tag of the DeepStream Docker containers. Should I use the devel tag instead for running the pose estimation app?

Yes, you’d better use the DeepStream NGC docker. However, I found that the OSD library is built against CUDA 11, which is not correct for the current DeepStream, since current DeepStream is based on CUDA 10.2.

So, I would recommend using the NGC DeepStream docker and not replacing the OSD lib for now.
To avoid the OSD error you met above, you could keep all the OSD drawing inside the image, that is, no OSD drawn outside of the image.
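
If you want to confirm which CUDA runtime the installed OSD plugin was linked against, something like this should show it (path taken from the warning you posted):

# list the libcudart dependency of the OSD GStreamer plugin
ldd /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so | grep cudart
# a "libcudart.so.11.0 => not found" line here matches the plugin-load warning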

So if I understood you correctly:

  • I should use the DeepStream Docker container with the devel tag from here
  • I don’t have to re-convert the model to ONNX or re-compile the program
  • I should keep all the on-screen drawings inside the image to avoid the error Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so': libcudart.so.11.0: cannot open shared object file: No such file or directory

How would I do the third point, that is, how should I keep all the on-screen drawings inside the image?

Hey @mchi,

I followed the tutorial from scratch again today. I used the PyTorch container for converting the model to ONNX, and for everything else I used the DeepStream Docker container with the devel tag. In other words, I built the app in the DeepStream Docker container. I made sure I followed all of the instructions.

This is the output that I get:

root@4f541fd78b91:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation# ./deepstream-pose-estimation-app stream.264 .

(gst-plugin-scanner:46): GStreamer-WARNING **: 11:43:50.268: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:46): GStreamer-WARNING **: 11:43:50.272: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_osd.so': libcudart.so.11.0: cannot open shared object file: No such file or directory
One element could not be created. Exiting.

Note that there are two warnings now, not just one. And this is from running the app in the same container I built it in, that is, the DeepStream Docker container with the devel tag. Do you know what’s happening here? How do I fix this?

We need to go back to Post #10 - Runtime errors when running the human pose estimation application - #10 by mjuric1, where you ran into the OSD error “libnvosd (603):(ERROR) : Out of bound radius”. This is a DeepStream OSD bug that the replacement OSD lib libnvds_osd.so is meant to fix, but that lib is not correct since it’s based on CUDA 11. So for now, to avoid the OSD error, you could use the temporary change below.
NVIDIA is on holiday this week; we will update the OSD lib after the holiday.

--- a/deepstream_pose_estimation_app.cpp
+++ b/deepstream_pose_estimation_app.cpp
@@ -95,6 +95,14 @@ create_display_meta(Vec2D<int> &objects, Vec3D<float> &normalized_peaks, NvDsFra
         auto &peak = normalized_peaks[j][k];
         int x = peak[1] * MUXER_OUTPUT_WIDTH;
         int y = peak[0] * MUXER_OUTPUT_HEIGHT;
+       if ((x > (1920 - 8)) || (y > (1080 - 8))) {
+                printf("x = %d, y = %d\n", x, y);
+                continue;
+        }
+       if ((x < 8) || (y < 8)) {
+               printf("x = %d, y = %d\n", x, y);
+                continue;
+       }
         if (dmeta->num_circles == MAX_ELEMENTS_IN_DISPLAY_META)
         {
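
For context, the change above simply skips drawing any keypoint whose circle center falls within 8 pixels of the 1920x1080 frame border (presumably MUXER_OUTPUT_WIDTH x MUXER_OUTPUT_HEIGHT in this sample), so nvdsosd is never asked to draw a circle that extends outside the frame and the “Out of bound radius” error is avoided.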

@mchi This works in the sense that the program is able to run and produce output. Thank you. The quality of the output (pose estimation in the video) is another story…

@wade.wang @mchi provided a solution here. Instead of replacing the DeepStream OSD library, add this code to deepstream_pose_estimation_app.cpp. Also, make sure to input a raw H264 stream; to see how to do that, look here: matlab - How to extract the bitstream from H.264 video? - Stack Overflow
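
For example, something like the following should turn an MP4 into a raw Annex-B H.264 elementary stream that the app accepts (file names are placeholders):

# copy the H.264 video track out of the MP4 into a raw .264 elementary stream, dropping audio
ffmpeg -i input.mp4 -c:v copy -bsf:v h264_mp4toannexb -an stream.264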


@mjuric1 @mchi Hi there, great work! Very happy to hear that a solution has been found. In my case this solution also works, thank you very much!

@wade.wang I wanted to ask you one thing - do the results that the program outputs predict the pose correctly for you? In my case, the program outputs some nonsensical points and doesn’t estimate the pose at all.

Hi @mjuric1, please look at the video of my test results: link.
It seems the result is not very good for small or occluded people.
