Thank you for your advice.
I referenced this forum for installing PyTorch.
The installed versions are below.
@ubuntu:~/Desktop/work/pose_work/torch2trt$ pip3 list
...
torch 1.11.0
torchvision 0.12.0
...
I couldn’t complete Step 1 of trt_pose, even though I installed PyTorch as recommended by this forum.
@ubuntu:~/Desktop/work/pose_work/torch2trt$ sudo python3 setup.py install --plugins
Traceback (most recent call last):
File "setup.py", line 3, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
@ubuntu:~/Desktop/work/pose_work/torch2trt$ python3 setup.py install --plugins
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/local/lib/python3.8/dist-packages/test-easy-install-7291.write-test'
The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/usr/local/lib/python3.8/dist-packages/
Perhaps your account does not have write access to this directory? If the
installation directory is a system-owned directory, you may need to sign in
as the administrator or "root" account. If you do not have administrative
access to this machine, you may wish to choose a different installation
directory, preferably one that is listed in your PYTHONPATH environment
variable.
For information on other options, you may wish to consult the
documentation at:
https://setuptools.readthedocs.io/en/latest/easy_install.html
Please make the appropriate changes for your system and try again.
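The two failures above have different causes: `sudo python3` runs as root, whose Python environment does not see the user-installed torch (hence the `ModuleNotFoundError`), while plain `python3` sees torch but cannot write to `/usr/local/lib/python3.8/dist-packages/`. A sketch of one common workaround, a per-user install that needs no root access (paths and flags are assumptions for this setup):

```shell
# Install torch2trt into the user's own site-packages (~/.local),
# which avoids both the root environment and the permission error.
cd ~/Desktop/work/pose_work/torch2trt
python3 setup.py install --plugins --user

# Alternatively, keep sudo but preserve the invoking user's environment;
# whether this finds torch depends on how torch was installed:
# sudo -E python3 setup.py install --plugins
```
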
From the error, this looks environment-related. Please follow the steps in GitHub - NVIDIA-AI-IOT/trt_pose: Real-time pose estimation accelerated with NVIDIA TensorRT.
However, I cannot convert the PyTorch model to ONNX with the following procedure.
@ubuntu:~/Desktop/work/pose_work/trt_pose/trt_pose/utils$ python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
Input model is not specified, using resnet18_baseline_att as a default.
Input width/height are not specified, using 224x224 as a default.
Output path is not specified, using resnet18_baseline_att_224x224_A_epoch_249.onnx as a default.
Input topology human_pose.json is not a valid (.json) file.
Thank you for your advice.
I’m not using Docker containers.
I’ve never heard of human_pose.json; it doesn’t appear anywhere in the procedure so far, so I didn’t download it.
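For reference, human_pose.json is the keypoint-topology file that ships inside the trt_pose repository itself, and export_for_isaac.py looks for it in the working directory by default. A sketch, assuming the repo layout checked out above (paths are assumptions):

```shell
# Copy the topology file from the repo's tasks directory next to the
# export script, then rerun the conversion.
cd ~/Desktop/work/pose_work/trt_pose/trt_pose/utils
cp ../../tasks/human_pose/human_pose.json .
python3 export_for_isaac.py \
    --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
```
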
I have successfully generated an ONNX file with this.
@ubuntu:~/Desktop/work/pose_work/trt_pose/trt_pose/utils$ python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
Input model is not specified, using resnet18_baseline_att as a default.
Input width/height are not specified, using 224x224 as a default.
Output path is not specified, using resnet18_baseline_att_224x224_A_epoch_249.onnx as a default.
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/tes/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 44.7M/44.7M [00:03<00:00, 15.0MB/s]
Successfully completed convertion of resnet18_baseline_att_224x224_A_epoch_249.pth to resnet18_baseline_att_224x224_A_epoch_249.onnx.
@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream_pose_estimation v4l2:///dev/video0 out.mp4
sudo: ./deepstream_pose_estimation: command not found
The tutorial was like this, so I am going to do the same.
Last but not least, I want to use a USB camera. What should the command parameters be in this case?
ERROR from element file-source: Resource not found.
Error details: gstfilesrc.c(532): gst_file_src_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstFileSrc:file-source:
No such file "v4l2:///dev/video0"
Returned, stopping playback
Deleting pipeline
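The "No such file" error is consistent with the sample app building its pipeline around `filesrc`, which treats `v4l2:///dev/video0` as a literal filename; taking input from a USB camera would require changing the app's source to use `v4l2src` instead. Before that, it is worth verifying the camera itself with a plain GStreamer pipeline (the device path and caps here are assumptions):

```shell
# Preview the USB camera directly; if this shows video, the camera and
# v4l2 stack are fine and only the app's input element needs changing.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    'video/x-raw,width=640,height=480' ! videoconvert ! autovideosink
```
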
I had an mp4 file of my child’s baseball class, so I tried it as input, but I got an error.
What is the cause?
I’m so tired.
@ubuntu:/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation$ ./deepstream-pose-estimation-app baseball.mp4 out.mp4
Now playing: baseball.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:05.777924909 13891 0xaaaad3f03800 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input.1 3x224x224
1 OUTPUT kFLOAT 262 18x56x56
2 OUTPUT kFLOAT 264 42x56x56
0:00:05.949592575 13891 0xaaaad3f03800 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine
0:00:05.979145332 13891 0xaaaad3f03800 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR from element nvv4l2-decoder: No valid frames decoded before end of stream
Error details: gstvideodecoder.c(1140): gst_video_decoder_sink_event_default (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/nvv4l2decoder:nvv4l2-decoder:
no valid frames found
Returned, stopping playback
Deleting pipeline
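"No valid frames decoded before end of stream" from `nvv4l2decoder` usually means the stream handed to the hardware decoder is not in a codec it was configured for (the sample pipelines typically expect H.264). A sketch of how to check and, if needed, re-encode the file (availability of `ffmpeg` on this system is an assumption):

```shell
# Inspect the container and codec of the input file.
gst-discoverer-1.0 baseball.mp4

# If the video track is not H.264, re-encode it and retry the app
# with the new file.
ffmpeg -i baseball.mp4 -c:v libx264 -pix_fmt yuv420p baseball_h264.mp4
```
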