I want to run "Pose Estimation with DeepStream", but I don't understand the setup procedure

Hi

I want to run Pose Estimation with DeepStream.

I don’t understand how to do Step 2 on the page below.

The TRTPose model was obtained from the following Git repository.

next…

I downloaded the weights file resnet18_baseline_att_224x224_A_epoch_249.pth and copied it to the same directory as export_for_isaac.py, but I get the error below.

@ubuntu:~/Desktop/work/pose_work/trt_pose/trt_pose/utils$ python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
Traceback (most recent call last):
  File "export_for_isaac.py", line 57, in <module>
    import trt_pose.models
ModuleNotFoundError: No module named 'trt_pose'

Finally, my environment.

  • Hardware Platform (Jetson / GPU)
    JETSON-AGX-ORIN-DEV-KIT
  • DeepStream Version
    6.1.1
  • JetPack Version (valid for Jetson only)
    5.0.2 (L4T 35.1.0)
  • TensorRT Version
    8.4.1.5
  1. From the error, it is related to the environment; please follow the steps in GitHub - NVIDIA-AI-IOT/trt_pose: Real-time pose estimation accelerated with NVIDIA TensorRT.
  2. There is another sample, GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline, which already includes a model.
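Following item 1, the trt_pose setup can be sketched as below (based on the NVIDIA-AI-IOT/trt_pose README; this assumes PyTorch and torchvision are already installed, and the exact dependency list should be checked against the README):

```shell
# Install torch2trt first (trt_pose depends on it):
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
cd ..

# Auxiliary dependencies listed in the trt_pose README:
sudo pip3 install tqdm cython pycocotools
sudo apt-get install python3-matplotlib

# Then install trt_pose itself:
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python3 setup.py install
```

After this, `import trt_pose.models` should resolve and export_for_isaac.py can find the module.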

Hi

Thank you for your advice.
I referenced this forum for installing PyTorch.

The installed version is below.

@ubuntu:~/Desktop/work/pose_work/torch2trt$ pip3 list
...
torch                   1.11.0
torchvision             0.12.0
...

I couldn’t complete Step 1 of trt_pose, even though I installed PyTorch as recommended by this forum.

@ubuntu:~/Desktop/work/pose_work/torch2trt$ sudo python3 setup.py install --plugins
Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
@ubuntu:~/Desktop/work/pose_work/torch2trt$ python3 setup.py install --plugins
running install
error: can't create or remove files in install directory

The following error occurred while trying to add or remove files in the
installation directory:

    [Errno 13] Permission denied: '/usr/local/lib/python3.8/dist-packages/test-easy-install-7291.write-test'

The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

    /usr/local/lib/python3.8/dist-packages/

Perhaps your account does not have write access to this directory?  If the
installation directory is a system-owned directory, you may need to sign in
as the administrator or "root" account.  If you do not have administrative
access to this machine, you may wish to choose a different installation
directory, preferably one that is listed in your PYTHONPATH environment
variable.

For information on other options, you may wish to consult the
documentation at:

  https://setuptools.readthedocs.io/en/latest/easy_install.html

Please make the appropriate changes for your system and try again.
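A hedged guess at why the two attempts above fail differently: under `sudo`, Python runs as root and searches root's package paths, so a torch installed only for the current user is invisible there; without `sudo`, the system-wide dist-packages directory is simply not writable. Comparing the two interpreters' search paths makes this visible:

```shell
# Print each interpreter's module search path (illustration only;
# the sudo form requires root privileges):
python3 -c "import sys; print(sys.executable); print(sys.path)"
sudo python3 -c "import sys; print(sys.executable); print(sys.path)"
```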

Hi

I’ve had success doing the following:

python3 setup.py install --user --plugins
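A short note on why `--user` works here (a hedged explanation): it installs into the per-user site-packages directory, which needs no root access and stays visible to your own `python3`, avoiding both the permission error and the sudo/module mismatch above.

```shell
# --user installs into the per-user site-packages, no root needed:
python3 setup.py install --user --plugins

# The target directory can be inspected with:
python3 -m site --user-site
```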

@fanzh

Hi

I have completed the steps below.

from the error, it is related to environment, please do these steps according this GitHub - NVIDIA-AI-IOT/trt_pose: Real-time pose estimation accelerated with NVIDIA TensorRT
@ubuntu:~/Desktop/work/pose_work/torch2trt$ pip3 list
...
torch                   1.11.0
torch2trt               0.4.0
torchvision             0.12.0
trt-pose                0.0.1
...

However, I could not convert the PyTorch model to ONNX with the following procedure.

@ubuntu:~/Desktop/work/pose_work/trt_pose/trt_pose/utils$ python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
Input model is not specified, using resnet18_baseline_att as a default.
Input width/height are not specified, using 224x224 as a default.
Output path is not specified, using resnet18_baseline_att_224x224_A_epoch_249.onnx as a default.
Input topology human_pose.json is not a valid (.json) file.

What is the cause? Please help me.

From the sample, it should be run in a Docker container. Did you download that human_pose file?

@fanzh

Hi

Thank you for your advice.
I’m not using Docker containers.
I’ve never heard of human_pose; it doesn’t appear anywhere in the procedure so far, so I didn’t download it.

Where is it available?

Hi

I copied the human_pose.json file from:

/home/user/Desktop/work/pose_work/trt_pose/tasks/human_pose/human_pose.json

and pasted it into the directory below.

/home/tes/Desktop/work/pose_work/trt_pose/trt_pose/utils

I have successfully generated an onnx file with this.
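In command form, the copy above amounts to the following (paths as given in this thread; export_for_isaac.py evidently looks for human_pose.json in the current working directory):

```shell
# Copy the topology file next to export_for_isaac.py so the default
# "human_pose.json" lookup succeeds:
cp ~/Desktop/work/pose_work/trt_pose/tasks/human_pose/human_pose.json \
   ~/Desktop/work/pose_work/trt_pose/trt_pose/utils/
```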

@ubuntu:~/Desktop/work/pose_work/trt_pose/trt_pose/utils$ python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
Input model is not specified, using resnet18_baseline_att as a default.
Input width/height are not specified, using 224x224 as a default.
Output path is not specified, using resnet18_baseline_att_224x224_A_epoch_249.onnx as a default.
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/tes/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 44.7M/44.7M [00:03<00:00, 15.0MB/s]
Successfully completed convertion of resnet18_baseline_att_224x224_A_epoch_249.pth to resnet18_baseline_att_224x224_A_epoch_249.onnx.

Hi

I’ve gotten this far.
Building deepstream_pose_estimation seems to be successful.

@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation$ sudo make
g++ -c -o deepstream_pose_estimation_app.o -DPLATFORM_TEGRA -I../../apps-common/includes -I../../../includes -I../deepstream-app/ -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=5 -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/orc-0.4 -I/usr/include/gstreamer-1.0 -I/usr/include/json-glib-1.0 -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_pose_estimation_app.cpp
deepstream_pose_estimation_app.cpp: In function ‘GstPadProbeReturn osd_sink_pad_buffer_probe(GstPad*, GstPadProbeInfo*, gpointer)’:
deepstream_pose_estimation_app.cpp:231:75: warning: zero-length gnu_printf format string [-Wformat-zero-length]
  231 |     offset = snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN, "");
      |                                                                           ^~
deepstream_pose_estimation_app.cpp:236:41: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
  236 |     txt_params->font_params.font_name = "Mono";
      |                                         ^~~~~~
g++ -o deepstream-pose-estimation-app deepstream_pose_estimation_app.o -L/opt/nvidia/deepstream/deepstream-6.1/lib/ -lnvdsgst_meta -lnvds_meta -lnvds_utils -lm -lpthread -ldl -Wl,-rpath,/opt/nvidia/deepstream/deepstream-6.1/lib/ -lgstvideo-1.0 -lgstbase-1.0 -lgstreamer-1.0 -lX11 -ljson-glib-1.0 -lgio-2.0 -lgobject-2.0 -lglib-2.0

But I don’t know how to run the application.

@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream_pose_estimation v4l2:///dev/video0 out.mp4
sudo: ./deepstream_pose_estimation: command not found

The tutorial showed the command like this, so I am going to do the same.

Please give me some advice.

Was deepstream_pose_estimation generated? Is there any compilation error?

@fanzh

Thank you.

I checked and the file was created.
The command I entered was incorrect.

sudo ./deepstream_pose_estimation v4l2:///dev/video0 out.mp4

I changed the command I entered to:

sudo ./deepstream-pose-estimation-app v4l2:///dev/video0 out.mp4

Last but not least, I want to use a USB camera. How should the command parameters be specified in this case?

ERROR from element file-source: Resource not found.
Error details: gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstFileSrc:file-source:
No such file "v4l2:///dev/video0"
Returned, stopping playback
Deleting pipeline
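A hedged note on the error above: judging by the log, the sample's pipeline begins with a GstFileSrc element, which treats its argument as a literal file path, so a `v4l2://` URI cannot work; supporting a USB camera would mean modifying the app to use a capture element such as `v4l2src` instead. The distinction is easy to see from the shell:

```shell
# filesrc takes a plain path; "v4l2:///dev/video0" is not a file on disk,
# which is exactly what the "No such file" error reports:
[ -e "v4l2:///dev/video0" ] || echo "not a file path"

# The device node itself (if a camera is attached) would be:
ls /dev/video0 2>/dev/null || echo "no camera attached here"
```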

Hi

I had an mp4 file of my child’s baseball class, so I tried using it as input, but I got an error.

What is the cause?
I’m so tired.

@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation$ ./deepstream-pose-estimation-app baseball.mp4 out.mp4
Now playing: baseball.mp4
Opening in BLOCKING MODE 
Opening in BLOCKING MODE 
0:00:05.777924909 13891 0xaaaad3f03800 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input.1         3x224x224       
1   OUTPUT kFLOAT 262             18x56x56        
2   OUTPUT kFLOAT 264             42x56x56        

0:00:05.949592575 13891 0xaaaad3f03800 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine
0:00:05.979145332 13891 0xaaaad3f03800 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
ERROR from element nvv4l2-decoder: No valid frames decoded before end of stream
Error details: gstvideodecoder.c(1140): gst_video_decoder_sink_event_default (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/nvv4l2decoder:nvv4l2-decoder:
no valid frames found
Returned, stopping playback
Deleting pipeline

Please use an H.264 elementary-stream file, such as /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264. Please refer to the code: deepstream_pose_estimation/deepstream_pose_estimation_app.cpp at master · NVIDIA-AI-IOT/deepstream_pose_estimation · GitHub
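If the source clip is an MP4 that already contains H.264 video, one way to get an elementary .h264 stream is with ffmpeg (a sketch; baseball.mp4 is the file from this thread, and this assumes its video track really is H.264):

```shell
# Extract the raw H.264 elementary stream from the MP4 container
# (copies the video bitstream without re-encoding, drops audio):
ffmpeg -i baseball.mp4 -c:v copy -bsf:v h264_mp4toannexb -an baseball.h264
```

The resulting baseball.h264 can then be passed to deepstream-pose-estimation-app in place of sample_720p.h264.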

@fanzh

Thank you.
I was able to run Pose Estimation with your help!!

I will have a good weekend.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.