deepstream-test3-app doesn't work with RTSP camera

Hi!
I’m trying to run deepstream-test3-app with my IP camera (Master mr-idnm213mp) over RTSP. The app freezes at this point:

$ ./deepstream-test3-app rtsp://user:password@192.168.1.113/av0_0
Now playing: rtsp://user:password@192.168.1.113/av0_0,

Using winsys: x11 
Creating LL OSD context new
0:00:00.751139578 30008   0x55746ba070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:18.402268812 30008   0x55746ba070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at 
/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
 - Decodebin child added: source
Running...
 - Decodebin child added: decodebin0
 - Decodebin child added: rtph264depay0
 - Decodebin child added: h264parse0
 - Decodebin child added: capsfilter0
 - Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261

I think the cb_newpad callback never receives the src pad from uri-decode-bin.
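For context on what the callback is waiting for: in the deepstream-test3 sample, cb_newpad links the new decodebin pad only when its caps are raw video in NVMM (hardware surface) memory. A minimal plain-C sketch of that decision, with simplified string parameters rather than the actual GstCaps/GstCapsFeatures API:

```c
#include <string.h>

/* Hedged sketch of the check cb_newpad performs in deepstream_test3_app.c:
 * the pad is linked only when the caps name starts with "video" and the
 * caps features include "memory:NVMM" (NVIDIA hardware surfaces). The
 * real code inspects GstCaps / GstCapsFeatures objects; the string
 * parameters here are illustrative only. */
static int cb_newpad_would_link(const char *caps_name, const char *features)
{
    /* Audio pads and software-decoded (non-NVMM) video pads are ignored,
     * so the app appears to hang if no matching pad ever arrives. */
    return strncmp(caps_name, "video", 5) == 0
        && strstr(features, "memory:NVMM") != NULL;
}
```

If the decoder never produces an NVMM video pad (for example because decoding stalls during negotiation), this condition is never satisfied and the pipeline sits idle, which matches the freeze shown in the log.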
The test app works fine with a sample video, and capturing from the camera in VLC works too.
Can anyone help me?


Hi,
May I know which platform you are using? The app streams its output to a display, so a local video file should show the same issue. On Jetson, export DISPLAY=:0 (or :1) and run xrandr to check that the display is set. On an x86 platform it is different: to run the DeepStream samples with the eglglessink sink type you need an NVIDIA display card, with the display driver installed with NVIDIA OpenGL. If your card has a display port you can use it directly; if it is a Tesla-series card, you can follow Deepstream/FAQ - eLinux.org to set up a virtual display.
Another option is to change the sink to fakesink.
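The display checks described above, spelled out as commands (assuming a Jetson with an X server running; adjust the display number to your setup):

```shell
# Run these in the session that launches the app so the sink can open a window.
export DISPLAY=:0   # use :1 if your X server runs on display 1
xrandr              # should list the connected display and its modes
```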

Thank you for the reply.
I’m using a Jetson Nano. Running the test app with a video file has no issues.
The cb_newpad callback received the src pad and the OSD context was created:

$ ./deepstream-test3-app file:///home/mm/Documents/deepstream-test3/sample_720p.h264 
Now playing: file:///home/mm/Documents/deepstream-test3/sample_720p.h264,

Using winsys: x11 
Creating LL OSD context new
0:00:00.754251645  7450   0x55c66f5c70 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:18.817019000  7450   0x55c66f5c70 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
 - Decodebin child added: source
 - Decodebin child added: decodebin0
Running...
 - Decodebin child added: h264parse0
 - Decodebin child added: capsfilter0
 - Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad: received new pad 'src_0' from 'uri-decode-bin'
Creating LL OSD context new

The whole pipeline works well; people and cars are detected. The app works with every video file format I tried (avi, mkv, h264, mp4), but in my case it does not work with an RTSP source.

The RTSP source works for the test3 sample; can you share the failure log?

Here is one clip of a successful run:
root@0030980c3ed9:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app rtsp://10.19.225.233:8554/test
Now playing: rtsp://10.19.225.233:8554/test,
0:00:03.121532095 293 0x563d45b5f6d0 INFO nvinfer gstnvinfer.cpp:598:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1574> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640 min: 1x3x368x640 opt: 1x3x368x640 Max: 1x3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40 min: 0 opt: 0 Max: 0

0:00:03.121641797 293 0x563d45b5f6d0 INFO nvinfer gstnvinfer.cpp:598:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1678> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.122686669 293 0x563d45b5f6d0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Running…
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
Frame Number = 0 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2

It does not work. I tried it on my RTX 2080:

root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app file:///home/mike/work/dsdk5/mask1.mp4
Now playing: file:///home/mike/work/dsdk5/mask1.mp4,
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
Running…
^C
root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app rtsp://mike:mike@192.168.0.1:554/live/main
Now playing: rtsp://mike:mike@192.168.0.1:554/live/main,
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
Running…
^C
root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3#

It seems your local file cannot run either. test3 streams its output to a display, so first you need the NVIDIA driver installed with NVIDIA OpenGL. Before you run inside Docker, make sure to do the following:
export DISPLAY=:0 (or :1)
xrandr to check whether DISPLAY is set
xhost +
Another option: you can change
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
to
sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");

The app freezes at this point:

$ ./deepstream-test3-app rtsp://user:password@192.168.1.113/av0_0
Now playing: rtsp://user:password@192.168.1.113/av0_0,

Using winsys: x11 
Creating LL OSD context new
0:00:00.751139578 30008   0x55746ba070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:18.402268812 30008   0x55746ba070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
 - Decodebin child added: source
Running...
 - Decodebin child added: decodebin0
 - Decodebin child added: rtph264depay0
 - Decodebin child added: h264parse0
 - Decodebin child added: capsfilter0
 - Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261

Please make sure you can play a local file first; refer to post 6.

Hi! Sorry for the late response.
I have updated DeepStream to version 5.0 and got the same issue.
With a local file, deepstream-test3-app works well:

$ ./deepstream-test3-app file:///home/mm/Documents/jcntr_test/tat-01.mkv
Now playing: file:///home/mm/Documents/jcntr_test/tat-01.mkv,

Using winsys: x11 
0:00:00.247967299  8107   0x5592e48ca0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:30.656726408  8107   0x5592e48ca0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:30.700924046  8107   0x5592e48ca0 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Running...
Decodebin child added: matroskademux0
Decodebin child added: multiqueue0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
In cb_newpad
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0

With an RTSP camera the app freezes at this point:

$ ./deepstream-test3-app rtsp://admin:admin@192.168.1.168
Now playing: rtsp://admin:admin@192.168.1.168,

Using winsys: x11 
0:00:00.297263288  8168   0x55785ee4a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:26.755230862  8168   0x55785ee4a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:26.797113572  8168   0x55785ee4a0 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Running...
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261

I tried two cameras: Master mr-idnm213mp and Master mr-idnm104p. Maybe something is wrong with the camera settings? Can you help me figure it out?

Can you view the RTSP camera stream through VLC?

I noticed that the app fails only with the H.264 format; with H.265 everything works well.
Through VLC the camera streams fine in both H.264 and H.265.
Do you have any suggestions about that?

./deepstream-test3-app rtsp://admin:admin@192.168.1.168
Now playing: rtsp://admin:qwerty123@192.168.1.168,

Using winsys: x11 
0:00:00.456309856 12688   0x5582da14a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:30.916894265 12688   0x5582da14a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:31.187546136 12688   0x5582da14a0 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Running...
Decodebin child added: decodebin0
Decodebin child added: rtph265depay0
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 279 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 279 
In cb_newpad
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
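One way to narrow this down is to run the same decode path outside the test app with gst-launch-1.0, mirroring the elements decodebin inserted in the freezing log above (depay, parse, decoder); fakesink avoids any display dependency. This is a diagnostic sketch, with the camera URI taken from the logs above:

```shell
# If this pipeline also stalls on the camera's H.264 stream, the problem is
# in the rtspsrc/nvv4l2decoder negotiation with this camera rather than in
# deepstream-test3-app itself.
gst-launch-1.0 -v rtspsrc location="rtsp://admin:admin@192.168.1.168" ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! fakesink
```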

I am able to stream h264 with test3. I don’t know how to verify it’s really h264, other than what the URI indicates. I see both a file stream and a camera stream side by side.

rtsp://system:password@192.168.1.16:554//h264Preview_01_main
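One possible way to confirm the codec (assuming the gst-plugins-base tools are installed; the URI is the one above):

```shell
# Prints the stream topology, including the video codec (H.264 vs H.265).
gst-discoverer-1.0 "rtsp://system:password@192.168.1.16:554//h264Preview_01_main"
```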

Can you provide your camera's H.264 codec settings?
Or capture the log with GST_DEBUG=v4l2videodec:5 ./deepstream-test3-app "your rtsp uri" > log 2>&1 and paste it here.