How to configure RTP source in deepstream-app 7.1?

I used to run deepstream-app -c source_config_yolov8n.txt to receive an RTP video source on DeepStream 6.3 / JetPack 5.1.4.

But I can’t get it working with the same command and config on deepstream-app 7.1.

==> It’s a black screen now.

Why? Is there anything I should change?

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#nvbuf-memory-type
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=rtp://0.0.0.0:5600
#uri=rtsp://127.0.0.1:8554/my_stream
#type=2
#uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
# Output Type:1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
type=3  # Output type: 3 means save to a file
container=1  # Container type: 1 means MP4 format
codec=1  # Codec type: 1 means H.264 encoding
bitrate=4000000  # Bitrate: 4 Mbps (adjust as needed)
output-file=output.mp4
sync=0  # Disable synchronization of playback
# Encoder type:0=Hardware 1=Software
enc-type=1

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=yolov8n_infer_primary.txt

[tests]
file-loop=0

Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 22.04 Jammy Jellyfish
 - Release: 5.15.148-tegra
jtop:
 - Version: 4.3.1
 - Service: Active
Libraries:
 - CUDA: 12.6.68
 - cuDNN: 9.3.0.75
 - TensorRT: 10.3.0.30
 - VPI: 3.2.4
 - OpenCV: 4.11.0 - with CUDA: YES
DeepStream C/C++ SDK version: 7.1

Python Environment:
Python 3.10.12
    GStreamer:                   YES (1.20.3)
  NVIDIA CUDA:                   YES (ver 12.6, CUFFT CUBLAS FAST_MATH)
         OpenCV version: 4.11.0  CUDA True
           YOLO version: 8.3.68
         PYCUDA version: 2024.1.2
          Torch version: 2.5.1+l4t36.4
    Torchvision version: 0.20.0
 DeepStream SDK version: 1.2.0
onnxruntime     version: 1.20.1
onnxruntime-gpu version: 1.19.2

deepstream-app is open source. create_source_bin in /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_source_bin.c does not support a UDP source, but you can modify the code to customize it. Please refer to this topic for how to receive UDP input in GStreamer.
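For reference, a udpsrc-based receive pipeline of the kind that topic describes looks roughly like this (a sketch only; the caps assume an H.264 stream with payload 96 on port 5600, so adjust encoding-name, payload, and port to match your sender):

gst-launch-1.0 udpsrc port=5600 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=96' ! rtph264depay ! h264parse ! nvv4l2decoder ! nv3dsink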

DS6.3 works fine.

Please simplify the configuration first, for example by temporarily disabling pgie, sgie, and osd, then compare the two pipeline graphs by referring to this FAQ. If it still doesn’t work, please share the two graphs.
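For example, the groups can be switched off in place; in your config the minimal change would be:

[primary-gie]
enable=0

[osd]
enable=0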

Currently, I’m using a configuration file that works fine with the DS6.3 deepstream-app. I just don’t know why it fails with DS7.1.

The configuration is the same, so I think the graph should be the same.

Please refer to my last comment. Could you simplify the pipeline to narrow down this issue? For example, disable pgie, sgie, and osd temporarily.

Right now (same configuration file, only the source type and uri differ):

  • [DS6.3] rtp stream OK
  • [DS6.3] file stream OK
  • [DS7.1] rtp stream failed, black screen
  • [DS7.1] file source OK

So the issue should be somewhere in the source handling. We are reviewing the code right now, but we are not there yet.

I tested on DS6.3. After setting rtp://0.0.0.0:9024, deepstream-app can’t work; here is the log: ds6.3-failed.txt (9.5 KB). Since the app works on your DS6.3, could you share the running log and the negotiated ds-app-playing graph from DS6.3? Thanks!

Here is the log output, which suggests the URI might be wrong.

Check my configuration here.

Code: main → parse_config_file → parse_source → CONFIG_GROUP_SOURCE_URI

** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
ERROR from src_elem: No URI handler implemented for "rtp".
Debug info: gsturidecodebin.c(1408): gen_source_element (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem
App run failed

From your test, the app can’t work with an rtp URI; this is the same result as mine. What do you mean by “[DS6.3] rtp stream OK”?

We have tested on JetPack 5.1.4 / L4T 36.4 / Jetson Orin Nano 8GB. It works fine with an rtp URI.

We are debugging the code; deepstream-app has been renamed to jetson-yolo in our build. You can see that the ERROR from src_elem: No URI handler implemented for "rtp" error does not appear in DS7.1.

$ ./jetson-yolo -c source_config_yolov8n.txt
txt -> source_config_yolov8n.txt
CONFIG_GROUP_SOURCE_URI -> rtp://0.0.0.0:5600
parse_source -> 0
multi_source_config -> 1
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.288660833 19208 0xaaaace926a60 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/home/daniel/Work/jetson-fpv/utils/dsyolo/model_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.288785666 19208 0xaaaace926a60 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /home/daniel/Work/jetson-fpv/utils/dsyolo/model_b1_gpu0_fp16.engine
0:00:00.304711265 19208 0xaaaace926a60 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/daniel/Work/jetson-fpv/utils/dsyolo/yolov8n_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running

q
Quitting
App run successful

The open-source deepstream-app uses uridecodebin to decode the rtp data and failed. Is your jetson-yolo using udpsrc to receive the rtp data?

jetson-yolo is deepstream-app, nothing else.

You can execute “export GST_DEBUG=3” to get more logs. From my side, when using uri=rtp://0.0.0.0:9024, there is a new error “_rtp_udpsrc0> Failed to resolve ▒m”. Please refer to the whole log: log-0207.txt (20.3 KB). If GStreamer on JetPack 5.1.4 works, you can migrate to the same GStreamer version by referring to this link.
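For example (redirecting the output to a file is optional, and the file name here is only an illustration):

export GST_DEBUG=3
deepstream-app -c source_config_yolov8n.txt 2> gst-debug.log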

You might have hit a streaming-source issue. Try this to simulate the stream:

$ video-viewer file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 rtp://@:5600 --input-loop=-1 --headless
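If video-viewer is not installed, a plain GStreamer sender should work as a stand-in (a sketch assuming H.264 RTP with payload 96 on port 5600; x264enc comes from gst-plugins-ugly):

gst-launch-1.0 videotestsrc is-live=true ! video/x-raw,width=1920,height=1080,framerate=30/1 ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=4000 ! rtph264pay pt=96 config-interval=1 ! udpsink host=127.0.0.1 port=5600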

Here is my log for DS7.1 – black screen (of course):

rtp_source.txt (571.3 KB)

I don’t know what has changed in your configuration file, but I found a sample and tested it on DS7.1.

I can’t open the video file generated by this configuration.

But it’s an rtp stream, right? Anyway, there are no error/warning messages in the log.

UPDATE: It’s jetson-yolo from DS7.1 (my build); the result is the same as with deepstream-app. But now there is an FPS reading, whereas with my previous configuration the FPS was always 0.

daniel@daniel-nvidia:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app$ ./jetson-yolo -c source2_1080p_dec_infer-resnet_demux_int8_modify.txt
txt -> source2_1080p_dec_infer-resnet_demux_int8_modify.txt
CONFIG_GROUP_SOURCE_URI -> file:///opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/../../streams/sample_1080p_h264.mp4
parse_source -> 0
multi_source_config -> 1
parse_source -> 0
** INFO: <create_encode_file_bin:364>: Could not create HW encoder. Falling back to SW encoder
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.227998606 73335 0xaaaae25b4000 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b2_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 2x3x544x960     Max: 2x3x544x960
1   OUTPUT kFLOAT output_cov/Sigmoid:0 4x34x60         min: 0               opt: 0               Max: 0
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 16x34x60        min: 0               opt: 0               Max: 0

0:00:00.228173779 73335 0xaaaae25b4000 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b2_gpu0_int8.engine
0:00:00.237830383 73335 0xaaaae25b4000 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

** INFO: <bus_callback:291>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:277>: Pipeline running


**PERF:  FPS 0 (Avg)
**PERF:  288.68 (0.69)
**PERF:  10.80 (11.11)
**PERF:  57.07 (27.08)
^C** ERROR: <_intr_handler:131>: User Interrupted..

Quitting
nvstreammux: Successfully handled EOS for source_id=0
App run successful

From the source2_1080p_dec_infer-resnet_demux_int8_modify.txt you shared, [source0] is using the local file and [source1] is disabled, so this test did not test the rtp source.

Oh yes, that’s right. You can disable source0 and enable source1. That’s what I described above.

daniel@daniel-nvidia:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app$ deepstream-app -c source2_1080p_dec_infer-resnet_demux_int8_modify.txt
** INFO: <create_encode_file_bin:364>: Could not create HW encoder. Falling back to SW encoder
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:00.399094028  3511 0xaaab01823a00 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b2_gpu0_int8.engine
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 2x3x544x960     Max: 2x3x544x960
1   OUTPUT kFLOAT output_cov/Sigmoid:0 4x34x60         min: 0               opt: 0               Max: 0
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 16x34x60        min: 0               opt: 0               Max: 0

0:00:00.399289324  3511 0xaaab01823a00 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/../../models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b2_gpu0_int8.engine
0:00:00.424121370  3511 0xaaab01823a00 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.1/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

** INFO: <bus_callback:291>: Pipeline ready

** INFO: <bus_callback:277>: Pipeline running


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
^C** ERROR: <_intr_handler:131>: User Interrupted..

Quitting
App run successful

Testing on DS7.1 with your source2_1080p_dec_infer-resnet_demux_int8_modify.txt, I got the same result as you, that is, FPS is 0: log-0208.txt (2.1 KB)

  1. As I mentioned, deepstream-app uses uridecodebin when the source type=3. uridecodebin picks a source element based on the URI scheme, and no GStreamer element registers a handler for the "rtp" scheme (hence the "No URI handler implemented" error), so the first command below does not work while the second one does. It is an issue of uridecodebin receiving rtp, not a DeepStream bug.
gst-launch-1.0 uridecodebin uri=rtp://0.0.0.0:9024 ! fakesink
gst-launch-1.0 udpsrc port=9024 buffer-size=524288 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=96' ! rtph264depay ! nvv4l2decoder ! queue ! fakesink
  2. Here are some solutions.
    2.1 As you mentioned, uridecodebin receiving rtp works on your DS6.3, but from my test (ds6.3-failed.txt) it still can’t work on DS6.3. So please provide a negotiated ds-app-playing pipeline graph for further analysis; the 0.00.00.320092359-ds-app-playing.png you shared is from before negotiation, not after.
    2.2 Since the second command above works, you can modify deepstream-app to customize.

The question is how. We have tested on DS6.3, which is OK for H.264 but not OK with H.265.
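For H.265 I assume the receive side needs the matching depayloader and caps; a sketch of the equivalent test (untested on our side, assuming payload 96) would be:

gst-launch-1.0 udpsrc port=9024 buffer-size=524288 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H265, payload=96' ! rtph265depay ! h265parse ! nvv4l2decoder ! fakesink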

And it doesn’t work for H.264 on JetPack 6.2 now, so I think it’s time to look inside the deepstream-app code at what’s happening.

As I don’t know how to fix the issue, I raised the question here.