AK51
January 15, 2022, 10:37pm
Hi,
I have been trying to get pose estimation working for the last 2 weeks…
I used DeepStream 6.0 instead of 5.0. Will that cause a problem?
I have also installed these packages, but it does not help:
sudo apt-get install libgstreamer1.0-dev
sudo apt-get install libgstreamer1.0
sudo apt install libjson-glib-dev
sudo apt-get install libgstreamer-plugins-base1.0-dev
nvidia@nvidia-desktop:~/deepstream_pose_estimation$ sudo make
g++ -c -o deepstream_pose_estimation_app.o -DPLATFORM_TEGRA -I../../apps-common/includes -I../../../includes -I../deepstream-app/ -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=5 -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/orc-0.4 -I/usr/include/gstreamer-1.0 -I/usr/include/json-glib-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_pose_estimation_app.cpp
In file included from deepstream_pose_estimation_app.cpp:4:0:
post_process.cpp:12:10: fatal error: gstnvdsmeta.h: No such file or directory
#include "gstnvdsmeta.h"
^~~~~~~~~~~~~~~
compilation terminated.
Makefile:44: recipe for target 'deepstream_pose_estimation_app.o' failed
make: *** [deepstream_pose_estimation_app.o] Error 1
nvidia@nvidia-desktop:~/deepstream_pose_estimation$
I can see the .h file here, though:
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/includes$ ls
cvcore_headers nvdsgstutils.h nvds_mask_utils.h
gstnvdsbufferpool.h nvdsinfer_context.h nvdsmeta.h
gstnvdsinfer.h nvdsinfer_custom_impl.h nvdsmeta_schema.h
gstnvdsmeta.h nvdsinfer_dbscan.h nvds_msgapi.h
gst-nvdssr.h nvdsinfer.h nvds_obj_encode.h
gst-nvevent.h nvdsinfer_logger.h nvds_opticalflow_meta.h
gst-nvmessage.h nvdsinferserver nvds_roi_meta.h
gst-nvquery.h nvdsinferserver_common.proto nvdstracker.h
nvbufaudio.h nvdsinferserver_config.proto nvds_tracker_meta.h
nvbufsurface.h nvdsinferserver_plugin.proto nvds_version.h
nvbufsurftransform.h nvdsinfer_tlt.h nvll_osd_api.h
nvds_analytics_meta.h nvdsinfer_utils.h nvll_osd_struct.h
nvds_audio_meta.h nvds_latency_meta.h nvmsgbroker.h
nvds_dewarper_meta.h nvds_logger.h
Hi,
The Makefile needs some updates for the JetPack 4.6 + DeepStream 6.0 environment.
Please apply the change below and try again:
diff --git a/Makefile b/Makefile
index ae2c316..ca64725 100644
--- a/Makefile
+++ b/Makefile
@@ -9,7 +9,7 @@ APP:= deepstream-pose-estimation-app
TARGET_DEVICE = $(shell gcc -dumpmachine | cut -f1 -d -)
-NVDS_VERSION:=5.0
+NVDS_VERSION:=6.0
LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/
APP_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/bin/
@@ -22,11 +22,16 @@ SRCS:= deepstream_pose_estimation_app.cpp
INCS:= $(wildcard *.h)
-PKGS:= gstreamer-1.0 gstreamer-video-1.0 x11 json-glib-1.0
+PKGS:= gstreamer-1.0 gstreamer-video-1.0 x11
OBJS:= $(patsubst %.c,%.o, $(patsubst %.cpp,%.o, $(SRCS)))
-
-CFLAGS+= -I../../apps-common/includes -I../../../includes -I../deepstream-app/ -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=5
+CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/apps-common/includes
+CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/includes
+CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
+CFLAGS+= -I/usr/include/gstreamer-1.0
+CFLAGS+= -I/usr/include/glib-2.0
+CFLAGS+= -I/usr/lib/aarch64-linux-gnu/glib-2.0/include
+CFLAGS+= -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=6
LIBS+= -L$(LIB_INSTALL_DIR) -lnvdsgst_meta -lnvds_meta -lnvds_utils -lm \
-lpthread -ldl -Wl,-rpath,$(LIB_INSTALL_DIR)
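If you prefer not to hand-edit, the version bump in the diff can be applied with sed. The snippet below is only a sketch: it demonstrates the edit on a stub Makefile in /tmp so it runs anywhere; on the Jetson you would run the same sed line against the real deepstream_pose_estimation Makefile.

```shell
# Demonstrate the NVDS_VERSION bump on a stub Makefile (illustrative path).
cat > /tmp/Makefile.demo <<'EOF'
NVDS_VERSION:=5.0
LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/
EOF
# Rewrite the version line in place, then show the result.
sed -i 's/^NVDS_VERSION:=5\.0$/NVDS_VERSION:=6.0/' /tmp/Makefile.demo
grep '^NVDS_VERSION' /tmp/Makefile.demo   # prints NVDS_VERSION:=6.0
```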
Thanks.
AK51
January 22, 2022, 7:50am
Hi,
I got to the last step, but I don’t see any video…
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /home/nvidia/Downloads/video.mp4 .
[sudo] password for nvidia:
Now playing: /home/nvidia/Downloads/video.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:04.774147343 24455 0x55adf87e00 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:04.774280262 24455 0x55adf87e00 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:04.774313128 24455 0x55adf87e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Detected invalid timing cache, setup a local cache instead
ERROR: [TRT]: Tactic Device request: 2128MB Available: 1536MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to oom error on requested size of 2128 detected for tactic 4.
ERROR: [TRT]: Tactic Device request: 2125MB Available: 1536MB. Device memory is insufficient to use tactic.
WARNING: [TRT]: Skipping tactic 3 due to oom error on requested size of 2125 detected for tactic 4.
0:03:50.298019054 24455 0x55adf87e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:03:50.707401710 24455 0x55adf87e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ ls
bin
CLA_LICENSE.md
cover_table.hpp
deepstream-pose-estimation-app
deepstream_pose_estimation_app.cpp
deepstream_pose_estimation_app.o
deepstream_pose_estimation_config.txt
images
LICENSE.md
Makefile
Makefile.bak
munkres_algorithm.cpp
pair_graph.hpp
pose_estimation.onnx
post_process.cpp
README.md
resnet18_baseline_att_224x224_A_epoch_249.onnx
resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
less deepstream_pose_estimation_config.txt
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=resnet18_baseline_att_224x224_A_epoch_249.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
network-type=100
workspace-size=3000
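As an aside, the preprocessing values in this config are consistent with standard ImageNet normalization rescaled to 0-255 pixel input, i.e. net-scale-factor = 1/(255*0.225) and offsets = mean*255. A quick arithmetic check (plain awk, runnable anywhere; this is an observation about the numbers, not something stated in the sample docs):

```shell
# net-scale-factor: 1/(255*0.225) rounds to the configured 0.0174292
awk 'BEGIN { printf "%.7f\n", 1/(255*0.225) }'
# offsets: the ImageNet channel means (0.485, 0.456, 0.406) scaled by 255
awk 'BEGIN { printf "%.3f;%.2f;%.2f\n", 0.485*255, 0.456*255, 0.406*255 }'
```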
AK51
January 22, 2022, 8:07am
Hi,
I have updated the engine file name in deepstream_pose_estimation_config.txt. Is the problem my input video? Is there any sample mp4 I can try? Thx
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=resnet18_baseline_att_224x224_A_epoch_249.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
network-type=100
workspace-size=3000
Still stuck here, no video…
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /home/nvidia/Downloads/video.mp4 .
Now playing: /home/nvidia/Downloads/video.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:06.385767250 26593 0x5595e8fe00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:06.386037256 26593 0x5595e8fe00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:06.435467990 26593 0x5595e8fe00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
AK51
January 22, 2022, 8:21am
Hi, I have tried the samples from NVIDIA, but I still don’t see any video.
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/samples/streams$ ls
sample_1080p_h264.mp4 sample_720p.mp4 sample_qHD.mp4 yoga.jpg
sample_1080p_h265.mp4 sample_cam6.mp4 sample_ride_bike.mov yoga.mp4
sample_720p.h264 sample_industrial.jpg sample_run.mov
sample_720p.jpg sample_push.mov sample_walk.mov
sample_720p.mjpeg sample_qHD.h264 sonyc_mixed_audio.wav
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/samples/streams$
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4 .
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_1080p_h264.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:06.559212540 27987 0x5582547e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:06.559382907 27987 0x5582547e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:06.603474195 27987 0x5582547e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
ERROR from element h264-parser: Internal data stream error.
Error details: gstbaseparse.c(3611): gst_base_parse_loop (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstH264Parse:h264-parser:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Deleting pipeline
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4 .
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:05.475546996 29008 0x55804bbe00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:05.475732051 29008 0x55804bbe00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:05.517778942 29008 0x55804bbe00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
^C
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.mp4 .
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:05.508273881 31125 0x55b10c3e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:05.508450291 31125 0x55b10c3e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:05.546837663 31125 0x55b10c3e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
ERROR from element h264-parser: Failed to parse stream
Error details: gstbaseparse.c(2954): gst_base_parse_check_sync (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstH264Parse:h264-parser
Returned, stopping playback
Deleting pipeline
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 .
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:05.010839195 32203 0x557b7d3e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:05.011017011 32203 0x557b7d3e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:05.047908272 32203 0x557b7d3e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
^C
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg .
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:05.521212673 1198 0x559f2b7e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18
0:00:05.521395281 1198 0x559f2b7e00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
0:00:05.558014676 1198 0x559f2b7e00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
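A likely explanation for the .mp4 failures above, given that only the raw sample_720p.h264 run reached the encoder stage: the sample pipeline feeds the file straight into an h264-parser element, so it expects a raw H.264 elementary stream, not an MP4 container. One way to tell the two apart is by their first bytes (an MP4 starts with an 'ftyp' box, Annex-B H.264 with a 00 00 00 01 start code). The sketch below demonstrates this on synthetic headers; the paths and byte values are illustrative, not real media files.

```shell
# Fake an MP4 header ('ftyp' box) and an H.264 Annex-B start code.
printf '\x00\x00\x00\x18ftypmp42' > /tmp/clip.mp4
printf '\x00\x00\x00\x01\x67'     > /tmp/clip.h264
# Classify each file by inspecting its first 8 bytes.
for f in /tmp/clip.mp4 /tmp/clip.h264; do
  if head -c8 "$f" | grep -q ftyp; then
    echo "$f: MP4 container (demux to an elementary stream first)"
  else
    echo "$f: looks like an elementary stream"
  fi
done
```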
AK51
January 22, 2022, 8:36am
Hi,
Here is the procedure of what I did on my Nano 4GB. I have been working on this for 2 weeks already. >_< Help…
Use SDK Manager to flash JetPack 4.6 with DeepStream 6.0.
Then follow GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline.
Change the Makefile based on this forum thread.
Update the config file, deepstream_pose_estimation_config.txt.
Please let me know which part I should look into.
Thx
AK51
January 22, 2022, 8:57am
Hi,
Here is the make process; it takes less than 10 seconds…
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo touch Makefile
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo make
g++ -c -o deepstream_pose_estimation_app.o -DPLATFORM_TEGRA -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/apps-common/includes -I/opt/nvidia/deepstream/deepstream-6.0/sources/includes -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=6 -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/orc-0.4 -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_pose_estimation_app.cpp
deepstream_pose_estimation_app.cpp: In function ‘GstPadProbeReturn osd_sink_pad_buffer_probe(GstPad*, GstPadProbeInfo*, gpointer)’:
deepstream_pose_estimation_app.cpp:231:77: warning: zero-length gnu_printf format string [-Wformat-zero-length]
offset = snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN, "");
^
deepstream_pose_estimation_app.cpp:236:41: warning: ISO C++ forbids converting a string constant to ‘char*’ [-Wwrite-strings]
txt_params->font_params.font_name = "Mono";
^~~~~~
g++ -o deepstream-pose-estimation-app deepstream_pose_estimation_app.o -L/opt/nvidia/deepstream/deepstream-6.0/lib/ -lnvdsgst_meta -lnvds_meta -lnvds_utils -lm -lpthread -ldl -Wl,-rpath,/opt/nvidia/deepstream/deepstream-6.0/lib/ -lgstvideo-1.0 -lgstbase-1.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 -lX11
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$
Here is the relevant part of the Makefile:
CXX=g++
APP:= deepstream-pose-estimation-app
TARGET_DEVICE = $(shell gcc -dumpmachine | cut -f1 -d -)
NVDS_VERSION:=6.0
LIB_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/lib/
APP_INSTALL_DIR?=/opt/nvidia/deepstream/deepstream-$(NVDS_VERSION)/bin/
ifeq ($(TARGET_DEVICE),aarch64)
CFLAGS:= -DPLATFORM_TEGRA
endif
SRCS:= deepstream_pose_estimation_app.cpp
INCS:= $(wildcard *.h)
PKGS:= gstreamer-1.0 gstreamer-video-1.0 x11
OBJS:= $(patsubst %.c,%.o, $(patsubst %.cpp,%.o, $(SRCS)))
CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/apps-common/includes
CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/includes
CFLAGS+= -I/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-app
CFLAGS+= -I/usr/include/gstreamer-1.0
CFLAGS+= -I/usr/include/glib-2.0
CFLAGS+= -I/usr/lib/aarch64-linux-gnu/glib-2.0/include
CFLAGS+= -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=6
LIBS+= -L$(LIB_INSTALL_DIR) -lnvdsgst_meta -lnvds_meta -lnvds_utils -lm \
-lpthread -ldl -Wl,-rpath,$(LIB_INSTALL_DIR)
CFLAGS+= $(shell pkg-config --cflags $(PKGS))
LIBS+= $(shell pkg-config --libs $(PKGS))
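The last two lines above resolve compiler and linker flags via pkg-config, so missing-header errors like the earlier gstnvdsmeta.h one often come down to whether the -dev packages are visible to it. A hedged check, runnable on any system with a shell (gstreamer-1.0 is the first package the Makefile queries):

```shell
# Ask pkg-config whether the GStreamer dev package the Makefile needs is
# installed; prints one of two messages depending on the system.
if pkg-config --exists gstreamer-1.0 2>/dev/null; then
  echo "gstreamer-1.0 found"
else
  echo "gstreamer-1.0 missing (install libgstreamer1.0-dev)"
fi
```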
AK51
January 22, 2022, 9:02am
Hi,
I followed this link to create the onnx file. Is it correct?
Hi,
I converted the torch model to onnx myself using torch.onnx.export, then generated the engine via trtexec. But I find the result is worse than when I run with torch. How can I increase the accuracy?
python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth --input_topology ../../tasks/human_pose/human_pose.json
AK51
January 22, 2022, 9:49am
Hi,
I have also tried the densenet model; same problem, no video output…
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg .
[sudo] password for nvidia:
Now playing: /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.mjpeg
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:11.064688194 10297 0x557d76be00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x256x256
1 OUTPUT kFLOAT part_affinity_fields 64x64x42
2 OUTPUT kFLOAT heatmap 64x64x18
3 OUTPUT kFLOAT maxpool_heatmap 64x64x18
0:00:11.064864238 10297 0x557d76be00 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation/densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
0:00:11.107573879 10297 0x557d76be00 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream_pose_estimation$ ls
bin
CLA_LICENSE.md
cover_table.hpp
deepstream-pose-estimation-app
deepstream_pose_estimation_app.cpp
deepstream_pose_estimation_app.o
deepstream_pose_estimation_config.txt
densenet121_baseline_att_256x256_B_epoch_160.onnx
densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
images
LICENSE.md
Makefile
Makefile.bak
munkres_algorithm.cpp
pair_graph.hpp
pose_estimation.onnx
post_process.cpp
README.md
resnet18_baseline_att_224x224_A_epoch_249.onnx
resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine
Here is the config file
[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=densenet121_baseline_att_256x256_B_epoch_160.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
network-type=100
workspace-size=3000
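One thing worth verifying with configs like the above: the model-engine-file entry must name an engine that actually exists next to the app, otherwise nvinfer falls back to rebuilding it from the onnx, which takes several minutes on a Nano. A sketch of that check, using a stub config in /tmp so it runs anywhere; on the Jetson you would point CFG at the real deepstream_pose_estimation_config.txt instead.

```shell
# Stub config (illustrative); the real one lives next to the app binary.
CFG=/tmp/ds_pose_demo.cfg
cat > "$CFG" <<'EOF'
[property]
onnx-file=densenet121_baseline_att_256x256_B_epoch_160.onnx
model-engine-file=densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
EOF
# Pull out the engine name the config points at and check it exists here.
ENGINE=$(sed -n 's/^model-engine-file=//p' "$CFG")
echo "config names engine: $ENGINE"
[ -f "$ENGINE" ] || echo "engine not found in $(pwd); nvinfer will rebuild it"
```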
Hi,
The example uses a filesink, so you will get a file output rather than an on-screen display.
For example, we get a Pose_Estimation.mp4 file when running the following command.
Please make sure deepstream-pose-estimation-app has permission to write files in the output folder:
# ./deepstream-pose-estimation-app <file-uri> <output-path>
$ ./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_720p.h264 ./
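A quick way to rule out the write-permission issue before launching is to test the output directory first; a minimal sketch (the directory path is illustrative):

```shell
# Confirm the intended output directory exists and is writable before
# running the app, since Pose_Estimation.mp4 is written there.
OUT=/tmp/pose_out
mkdir -p "$OUT"
[ -w "$OUT" ] && echo "writable: $OUT" || echo "not writable: $OUT"
```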
Thanks.
AK51
January 25, 2022, 5:34am
Yes, I see the output video now. Thanks. :>
It is missing the skeleton overlay, though. I remember someone asked the same question in this forum before; I will search for it.
After that, I need to use RTSP instead of mp4 for my application.
Thx
Hi,
Since the sample is working now, would you mind opening a new topic for the skeleton and RTSP issues?
Thanks.
system
Closed
February 22, 2022, 1:34am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.