• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 8.4.1-1+cuda11.4
• Issue Type( questions, new requirements, bugs): questions
I use deepstream_pose_estimation to deploy a human pose estimation application on DeepStream 6.1.1.
I clone the repository, mount it in a deepstream-l4t container, and run it:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation.git
cd deepstream_pose_estimation
sudo docker run -it --rm --runtime=nvidia --net=host -v ${PWD}:/tmp nvcr.io/nvidia/deepstream-l4t:6.1.1-iot
I replace the OSD binary for Jetson in /opt/nvidia/deepstream/deepstream/lib with the one provided in this repository under bin/:
cp /tmp/bin/Jetson/libnvds_osd.so /opt/nvidia/deepstream/deepstream/lib/libnvds_osd.so
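Before the copy above I also keep a backup of the stock library so I can switch back later (the .bak suffix is just my own convention, not something the repository requires):
# Save the original Jetson OSD library before overwriting it with the repository's binary
cp /opt/nvidia/deepstream/deepstream/lib/libnvds_osd.so /opt/nvidia/deepstream/deepstream/lib/libnvds_osd.so.bak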
When I run the following command, it does not draw the inference results:
gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
qtdemux ! queue ! h264parse ! nvv4l2decoder ! \
mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 \
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine ! \
queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvstreamdemux name=demux demux.src_0 ! \
queue ! nvvideoconvert ! nvdsosd process-mode=CPU_MODE ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=(string)I420" ! \
nvv4l2h264enc ! h264parse ! qtmux ! filesink sync=false location=out.mp4
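To see which format nvdsosd's pads actually negotiate in the failing case, I assume the generic GStreamer caps tracing can be used (GST_DEBUG and the grep filter are standard GStreamer debugging on my part, not something from the repository):
# Re-run the failing pipeline with caps tracing enabled and filter the log for nvdsosd;
# this should show what format nvdsosd's sink pad ends up with (e.g. RGBA vs. NV12)
GST_DEBUG=GST_CAPS:5 gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
qtdemux ! queue ! h264parse ! nvv4l2decoder ! \
mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 \
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine ! \
queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvstreamdemux name=demux demux.src_0 ! \
queue ! nvvideoconvert ! nvdsosd process-mode=CPU_MODE ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=(string)I420" ! \
nvv4l2h264enc ! h264parse ! qtmux ! filesink sync=false location=out.mp4 2>&1 | grep -i nvdsosd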
However, the following command does draw inference results.
The only change from the command above is an added nvvideoconvert that converts the format to NV12 right after nvstreamdemux:
gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
qtdemux ! queue ! h264parse ! nvv4l2decoder ! \
mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 \
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine ! \
queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=(string)RGBA" ! nvstreamdemux name=demux demux.src_0 ! \
queue ! nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" ! \
queue ! nvvideoconvert ! nvdsosd process-mode=CPU_MODE ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=(string)I420" ! \
nvv4l2h264enc ! h264parse ! qtmux ! filesink sync=false location=out.mp4
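Since the only difference is the RGBA-to-NV12 conversion in front of nvdsosd, I assume checking which formats the element itself advertises would be relevant (gst-inspect-1.0 is the standard GStreamer tool; the grep pattern is just my own filtering):
# List nvdsosd's pad templates; the "Capabilities" section shows which formats
# (e.g. RGBA, NV12) the replaced element claims to accept on its sink pad
gst-inspect-1.0 nvdsosd | grep -A 12 "Pad Templates"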
Why does this phenomenon occur?
As a side note, if I do not replace the OSD binary, both of the commands above draw inference results.
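For that test I restore the stock binary from the backup taken earlier (the .bak file is the one from my copy step above) and rerun both commands inside the container:
# Put the original libnvds_osd.so back to reproduce the "not replaced" case
cp /opt/nvidia/deepstream/deepstream/lib/libnvds_osd.so.bak /opt/nvidia/deepstream/deepstream/lib/libnvds_osd.so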