Hi!
I’m trying to run deepstream-test3-app with my IP camera (Master mr-idnm213mp) over RTSP. The app freezes at this point:
$ ./deepstream-test3-app rtsp://user:password@192.168.1.113/av0_0
Now playing: rtsp://user:password@192.168.1.113/av0_0,
Using winsys: x11
Creating LL OSD context new
0:00:00.751139578 30008 0x55746ba070 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:18.402268812 30008 0x55746ba070 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at
/opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
- Decodebin child added: source
Running...
- Decodebin child added: decodebin0
- Decodebin child added: rtph264depay0
- Decodebin child added: h264parse0
- Decodebin child added: capsfilter0
- Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
I think the cb_newpad callback never receives the src pad from uri-decode-bin.
The test app works fine with a sample video, and capturing from the camera with the VLC player works well too.
Can anyone help me?
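One way to see where the pipeline stalls (a sketch only; the debug categories and log filename are my own choices, and the URI is just the one from the post) is to raise GStreamer's debug verbosity for the RTSP source and the autoplugging element, then capture everything to a file:

```shell
# Sketch: collect verbose GStreamer logs around rtspsrc and uridecodebin
# so the point of the freeze shows up in the log file.
# Debug categories and the log filename are arbitrary choices.
export GST_DEBUG="rtspsrc:4,uridecodebin:5"

# The URI is the one from the post; substitute your own credentials/address.
URI="rtsp://user:password@192.168.1.113/av0_0"

# Print the command to run on the Jetson (stderr is redirected too,
# since GStreamer debug output goes to stderr):
echo "./deepstream-test3-app $URI > rtsp_debug.log 2>&1"
```

The resulting log should show whether rtspsrc ever completes SETUP/PLAY and whether uridecodebin gets as far as exposing a source pad.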
Hi,
May I know which platform you are using? The app streams its output to a display, so a video file would also have this issue if the display is not set up. On Jetson you can export DISPLAY=:0 (or :1) and run xrandr to check whether the display is set. On an x86 platform it is different: to run a DeepStream sample with the eglglessink sink type, you first need an NVIDIA display card with the display driver installed with NVIDIA OpenGL, assuming your card has a display port. If your card is from the Tesla series, you can follow Deepstream/FAQ - eLinux.org, 5A, to set up a virtual display.
Another option is to change the sink to fakesink.
Thank you for the reply.
I’m using a Jetson Nano. Running the test app with a video file has no issues: the cb_newpad callback receives the src pad and the OSD context is created:
$ ./deepstream-test3-app file:///home/mm/Documents/deepstream-test3/sample_720p.h264
Now playing: file:///home/mm/Documents/deepstream-test3/sample_720p.h264,
Using winsys: x11
Creating LL OSD context new
0:00:00.754251645 7450 0x55c66f5c70 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:18.817019000 7450 0x55c66f5c70 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_fp32.engine
- Decodebin child added: source
- Decodebin child added: decodebin0
Running...
- Decodebin child added: h264parse0
- Decodebin child added: capsfilter0
- Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad: received new pad 'src_0' from 'uri-decode-bin'
Creating LL OSD context new
The whole pipeline works well; people and cars are detected. The app works with any video file format (avi, mkv, h264, mp4), but in my case it does not work with an RTSP source.
root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app file:///home/mike/work/dsdk5/mask1.mp4
Now playing: file:///home/mike/work/dsdk5/mask1.mp4,
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
Running…
^C
root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3# ./deepstream-test3-app rtsp://mike:mike@192.168.0.1:554/live/main
Now playing: rtsp://mike:mike@192.168.0.1:554/live/main,
No protocol specified
No protocol specified
No protocol specified
No protocol specified
No protocol specified
Running…
^C
root@mike:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test3#
It seems your local file also cannot run. test3 streams its output to a display, so you first need the NVIDIA driver installed with NVIDIA OpenGL. Before you run with docker, make sure to do these:
export DISPLAY=:0 or 1
xrandr to check whether DISPLAY is set
xhost +
Another option: you can change
sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
to
sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");
I noticed that the app fails only with the H.264 format; with H.265 everything works well.
Streaming from the camera through VLC works well with both H.264 and H.265 formats.
Do you have any suggestions about that?
./deepstream-test3-app rtsp://admin:admin@192.168.1.168
Now playing: rtsp://admin:qwerty123@192.168.1.168,
Using winsys: x11
0:00:00.456309856 12688 0x5582da14a0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:30.916894265 12688 0x5582da14a0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:31.187546136 12688 0x5582da14a0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source
Running...
Decodebin child added: decodebin0
Decodebin child added: rtph265depay0
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 279
In cb_newpad
Frame Number = 0 Number of objects = 0 Vehicle Count = 0 Person Count = 0
Frame Number = 1 Number of objects = 0 Vehicle Count = 0 Person Count = 0
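To isolate whether the H.264 branch itself is at fault, it may help to build by hand the same chain the log shows uridecodebin autoplugging (rtph264depay, h264parse, nvv4l2decoder) and run it with gst-launch-1.0. This is a sketch: the URI is a placeholder, and fakesink is used so no display is needed. If this pipeline also hangs, the problem is in depay/parse/decode rather than in cb_newpad:

```shell
# Sketch: hand-built H.264 branch mirroring the elements uridecodebin
# reported adding in the log above.
# URI is a placeholder; fakesink avoids any display dependency.
URI="rtsp://user:password@192.168.1.113/av0_0"
PIPELINE="rtspsrc location=$URI ! rtph264depay ! h264parse ! nvv4l2decoder ! fakesink sync=false"

# Print the command to run on the Jetson:
echo "gst-launch-1.0 -v $PIPELINE"
```

With -v, gst-launch-1.0 prints the negotiated caps on each pad, which would also confirm whether the camera is really sending H.264.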
I am able to stream h264 with test3. I don’t know how to verify it’s really h264, other than what the URI indicates. I see both a file stream and a camera stream side by side.
Can you provide your camera’s setup/codec information for h264?
Or can you capture the log with GST_DEBUG=v4l2videodec:5 deepstream-test3-app "your rtsp camera" > log 2>&1 and paste it here?