BodyPoseNet example on Jetson Nano: invalid input pafmap dimension

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) Nano
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) BodyPoseNet
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I cloned the repo and am trying to run this example.

I tried to run batch-size-1 inference on a single .jpg file or a single video file. The config file was changed from batch size 32 to batch size 1 (a sketch of that change is below), and the command used follows it.
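For reference, this is roughly what the batch-size change in bodypose2d_pgie_config.txt looks like. It is only a minimal sketch of the relevant [property] keys for the gst-nvinfer element, not the full config file; every other key was left as shipped with the sample.

```
# bodypose2d_pgie_config.txt (excerpt)
[property]
gpu-id=0
# was: batch-size=32
batch-size=1
```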

./deepstream-bodypose2d-app 1 bodypose2d_pgie_config.txt file:///home/nano/project/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app/pose1.jpg ./body2out
Request sink_0 pad from streammux
Now playing: file:///home/nano/project/deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app/pose1.jpg
0:00:05.197917137 13594 0x55abc336a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/nano/project/deepstream_tao_apps/models/bodypose2d/model.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 288x384x3
1 OUTPUT kFLOAT heatmap_out/BiasAdd:0 36x48x19
2 OUTPUT kFLOAT conv2d_transpose_1/BiasAdd:0 144x192x38

0:00:05.198205530 13594 0x55abc336a0 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/nano/project/deepstream_tao_apps/models/bodypose2d/model.etlt_b1_gpu0_fp16.engine
0:00:05.266071287 13594 0x55abc336a0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:…/…/…/configs/bodypose2d_tao/bodypose2d_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running…
Decodebin child added: nvjpegdec0
In cb_newpad
###Decodebin pick nvidia decoder plugin.
terminate called after throwing an instance of 'std::runtime_error'
what(): invalid input pafmap dimension.
Aborted (core dumped)

Please double-check the steps, especially:

## Prerequisites

* DeepStream SDK 6.0 GA and above
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/nvidia/deepstream/deepstream/lib/cvcore_libs
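
For a quick sanity check (assuming the default DeepStream install location), you can confirm that the cvcore libraries exist and that the export is actually in effect in the shell that launches the app:

```
# Verify the cvcore libraries are present (default DeepStream install path assumed)
ls /opt/nvidia/deepstream/deepstream/lib/cvcore_libs

# Verify the export took effect in the current shell
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cvcore_libs
```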

Hi,

I have done the steps. If I hadn't exported the cvcore_libs path, the error would not have been 'invalid pafmap dimension'.

Are there any restrictions on the resolution or type of video that can be passed in? I used a .jpg file, sample_720p.h264, and sample_1080_h264.mp4, and all result in the same error.

This sample works fine on my side.
Is it possible to share your .jpg file?

Hi there, a restart of the system seems to have solved this problem. Thanks for the quick response!

