How to send inference result as RTP streams to another Jetson for on screen display & filesink?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): questions

My current pipeline is as follows:

I want to send the result after nvtracker to a Jetson Nano (receiver), display the results on its screen, and save them to a file. However, all the relevant processing steps (nvmultistreamtiler, nvvideoconvert, nvdsosd, nvegltransform, nveglglessink, nvv4l2h264enc, h264parse, matroskamux, filesink) should be done by the receiver, not the sender. I need the sender to focus solely on inference; all steps related to on-screen display and saving to file are delegated to the receiver. Does DeepStream support this use case? If so, what plugins do I need on the sender to send the stream, and what plugins do I need on the receiver to receive it?

GStreamer supports both RTSP server and client:
https://gstreamer.freedesktop.org/documentation/additional/rtp.html?gi-language=c
https://gstreamer.freedesktop.org/documentation/rtsp/index.html?gi-language=c
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-rtsp-server/html/

DeepStreamSDK also provides RTSP server sample codes in deepstream-app sample codes. Please check /opt/nvidia/deepstream/deepstream-5.0/sources/apps/apps-common/src/deepstream_sink_bin.c and the related codes.
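For reference, deepstream-app can be turned into an RTSP sender purely through its configuration file. A minimal sink group might look like the fragment below (the bitrate and port numbers are illustrative choices, not required values):

```ini
[sink0]
enable=1
# type=4 selects RTSPStreaming in deepstream-app
type=4
# codec=1 is H.264, codec=2 is H.265
codec=1
sync=0
bitrate=4000000
# The internal udpsink sends RTP packets on udp-port;
# the RTSP server listens on rtsp-port
rtsp-port=8554
udp-port=5400
```

With this sink enabled, the receiver can open the stream with any RTSP client (in the deepstream-app samples the mount point is typically rtsp://&lt;sender-ip&gt;:8554/ds-test).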

Hi @Fiona.Chen,

I have seen the example provided by deepstream-app and it doesn’t work for the use case I asked about. I want to ask if the server/sender can just send the raw GstBuffer to the client/receiver and the receiver will perform all the relevant steps required for on-screen display and saving to file. For example, sender focuses solely on doing inference, receiver takes care of displaying on screen:
Sender pipeline (Jetson 1): camera → nvinfer → tracker → send result via RTSP
Receiver pipeline (Jetson 2): get result from RTSP → nvmultistreamtiler → nvvideoconvert → nvdsosd → nvegltransform → nveglglessink

Does deepstream support such a use case?

Yes, it can. I do not mean you should use deepstream-app directly, but rather refer to the implementation of its RTSP sending part and rtspsrc receiving part; the code will help you. To be frank, deepstream-app can also be used to implement your scenario, since it is a configurable pipeline tool.

Since you pasted your pipeline, I suppose you already know GStreamer pipelines.
You will need different GStreamer pipelines in your sender and receiver.

RTSP can only transfer the video itself; this is determined by the RTSP spec (RFC 2326: Real Time Streaming Protocol), not by DeepStream. So it is impossible to send the inference results (bounding boxes, labels, …) over RTSP. You need to handle all inference result information in your sender.

If you want to show the inference result on the receiver:
Sender:
v4l2src->videoconvert->nvvideoconvert->capsfilter->nvstreammux->nvinfer->nvtracker->nvmultistreamtiler->nvdsosd->nvvideoconvert->udpsink-bin+rtsp-server

Receiver:
rtspsrc->rtph264depay->h264parse->nvv4l2decoder->nvegltransform->nveglglessink
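A rough gst-launch-1.0 sketch of that split is below, using plain RTP over UDP instead of a full RTSP server to keep it short. The device path, the nvinfer config file, and the receiver IP are placeholders for your setup, and nvtracker is omitted for brevity (it needs its own library/config properties):

```shell
# Sender (Jetson 1): capture, infer, draw OSD, encode, send RTP/H.264
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=<your_pgie_config.txt> ! \
  nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! \
  nvvideoconvert ! nvv4l2h264enc bitrate=4000000 ! h264parse ! \
  rtph264pay ! udpsink host=<receiver-ip> port=5400

# Receiver (Jetson 2): depayload, decode, display
gst-launch-1.0 udpsrc port=5400 \
  caps='application/x-rtp,media=video,encoding-name=H264,payload=96' ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink
```

Note that only encoded video crosses the network here; the boxes and labels survive only because nvdsosd burned them into the pixels on the sender.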

If you don’t want to display the inference result on the receiver:
Sender:
v4l2src->videoconvert->nvvideoconvert->capsfilter->nvstreammux->nvinfer->nvtracker->nvmultistreamtiler->nvvideoconvert->udpsink-bin+rtsp-server

Receiver:
rtspsrc->rtph264depay->h264parse->nvv4l2decoder->nvegltransform->nveglglessink


Hi @Fiona.Chen,

Thank you for the clarifications. Just to double-check: after running nvinfer->nvtracker on one Jetson, is there any way (I understand now that RTSP won’t carry the metadata) to run the subsequent steps (nvmultistreamtiler->nvdsosd) on a different Jetson?

I read in the DeepStream reference model and tracker performance benchmarks that “output rendering, OSD, and tiler use some % of compute resources and it can reduce the inference performance,” so those benchmarks were achieved with all three steps disabled. I can delegate output rendering to a different Jetson, but I can’t find a way to delegate OSD and the tiler as well. Is what I’m trying to do even possible, and would it yield a significant performance improvement?

No. But you can send the inference results to the cloud (such as Kafka, Azure, …) with nvmsgbroker (Gst-nvmsgbroker — DeepStream 6.3 Release documentation); we have some samples for it: C/C++ Sample Apps Source Details — DeepStream 6.3 Release documentation.
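As a rough sketch of that approach (modeled loosely on the deepstream-test4 sample; the config paths, Kafka host, and topic name below are placeholders, not values from this thread): a tee splits the stream after nvtracker, so one branch serializes metadata to the broker while the other renders locally.

```shell
# Metadata-to-cloud branch alongside a local display branch (sketch).
# nvmsgconv only emits payloads for frames where NvDsEventMsgMeta has been
# attached, typically via a pad probe in application code (see deepstream-test4).
... ! nvinfer ! nvtracker ! tee name=t \
  t. ! queue ! nvmsgconv config=<msgconv_config.txt> payload-type=0 ! \
    nvmsgbroker proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so \
      conn-str="<kafka-host>;9092" topic=<ds-topic> \
  t. ! queue ! nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! \
    nvegltransform ! nveglglessink
```

The receiver-side consumer then subscribes to the Kafka topic and gets the bounding boxes and labels as JSON, independent of the video stream.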

Although OSD and nvmultistreamtiler do consume some GPU and CPU, according to our current test results the increase in GPU and CPU consumption is relatively small compared to inference, so it will not impact overall performance much. The main GPU consumers are nvinfer and nvtracker.

Hi @Fiona.Chen,

Thank you so much for your help!