DeepStream to WebRTC, State of play

• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 6.4 (using Python bindings)
• JetPack Version (valid for Jetson only) 6.0-b52
• TensorRT Version 8.6.2
• Issue Type( questions, new requirements, bugs) question

Hi,

I’m trying to get a WebRTC server running as the final sink on a DeepStream pipeline. There appear to be several ways to do this:

  1. GStreamer webrtcsink - There are no Jetson-compatible binaries (to my knowledge) and I haven’t been able to compile the source.
  2. GStreamer webrtcbin - deprecated by GStreamer. I haven’t tried this.
  3. The NVIDIA hardware-accelerated Jetson WebRTC server, mentioned in DeepStream to WebRTC and downloadable at Jetson Download Center | NVIDIA Developer

On option 3, my plan would be to

  • create a video loopback “virtual device” (e.g. with v4l2loopback) and write into it with something like v4l2sink, as sketched below
  • launch peerconnection_server and peerconnection_client to ingest this virtual device.
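For the loopback step, this is the kind of thing I have in mind (untested sketch; the v4l2loopback module and the /dev/video10 node are my own assumptions, and videotestsrc stands in for the real DeepStream output):

# create the virtual device
sudo modprobe v4l2loopback video_nr=10 exclusive_caps=1
# write a raw video stream into it
gst-launch-1.0 videotestsrc ! video/x-raw,width=1280,height=720 ! videoconvert ! v4l2sink device=/dev/video10

video_loopback / peerconnection_client would then be pointed at that device index.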

However, the NVIDIA sample apps themselves don’t appear to work with the latest JetPack. Running either of these two commands:

./video_loopback --codec H264 --width 1280 --height 720 --capture_device_index 0
./peerconnection_client --server 127.0.0.1 --autoconnect --autocall

will generate an error:

Cannot open nvbuf_utils library: /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so: cannot open shared object file: No such file or directory 
Could not open device ENCODER DEV
Error while DQing buffer
 ERROR while DQing buffer at output plane 
Segmentation fault (core dumped)
  1. Is there a way to run this software on current JetPack versions?
  2. Independently of this, what is the best way to get DeepStream to generate a WebRTC stream given the way things stand?

Thanks

Simon

“Hardware Acceleration in the WebRTC Framework” is part of JetPack: Hardware Acceleration in the WebRTC Framework — NVIDIA Jetson Linux Developer Guide documentation

Please refer to the JetPack documentation.

All the examples are for cameras as source streams. How can a DeepStream pipeline output to this WebRTC framework?

Hi!
I’m also interested in this. I don’t really understand how this repo plays into this:

Did you find any satisfactory solution?
Thanks!

Hi,
By default we support RTSP in the DeepStream SDK. For WebRTC, you would need to take an existing implementation and integrate it with the DeepStream SDK. Would suggest checking these and doing the integration:

jetson-inference/docs/webrtc-server.md at master · dusty-nv/jetson-inference · GitHub
jetson-inference/docs/webrtc-html.md at master · dusty-nv/jetson-inference · GitHub

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

It’s a bit of a mess. There seem to be two different SDKs, with some overlap:

  1. The DeepStream SDK
  2. The “Jetson Inference” package by Dusty

The latter includes a WebRTC implementation built on top of GStreamer. So from within the provided Docker container, you can run a command like

video-viewer /dev/video0 webrtc://@:8554/output

and it will stream the camera to a webrtc endpoint that you can access with a browser. It works! The relevant code is in the Jetson Inference repo at utils/codec/gstWebRTC.cpp

So nyinglui is suggesting (I think) that we take Dusty’s code and produce our own plugin for DeepStream. That might be possible, but seems a bit of an adventure.

The other suggestion (I think) is that we end the DeepStream pipeline with a sink to RTSP, which seems more or less straightforward (although with this platform and documentation, who knows). You could then run one of Dusty’s apps with the INPUT being RTSP and the OUTPUT being WebRTC. That would in theory link the two pipelines together (sketched below), at the cost of introducing RTSP’s latency and a lot of needless encoding and decoding.
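As a concrete (untested) sketch of that bridge, assuming deepstream-app is serving its RTSP sink on the default port, and picking an arbitrary port for the WebRTC side:

# deepstream-app’s RTSP sink usually ends up at rtsp://<host>:8554/ds-test (the mount name depends on config)
video-viewer rtsp://127.0.0.1:8554/ds-test webrtc://@:8080/output
# the built-in test page should then be reachable at http://<jetson-ip>:8080 in a browser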

In general I find the Jetson dev experience frustrating. There are multiple frameworks, clearly implemented by different teams, and getting them to work together is a nightmare.

Your mileage may vary.


Hi Simon,
that makes sense. And I agree, it would be helpful to have a native WebRTC implementation. After all, that’s a common use-case…

I’m not sure about the latency of the RTSP-to-WebRTC…

Definitely share the frustration with you…


This is a topic I also spent a lot of frustrating time on. The main benefit of using webrtcsink or the hardware-accelerated WebRTC available for Jetson is that the sink element can control the encoder, so the bitrate can be adapted when the available bandwidth changes, etc.

I compared some approaches, and right now I’m using Dusty’s RTP sink from jetson-utils, relayed through the Janus WebRTC gateway (roughly as sketched below). This has significantly lower latency than deepstream-app with RTSP output (without any processing in the pipeline and with all parameters tuned).
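For reference, the jetson-utils side of that setup looks roughly like this (host, port and source URI are placeholders, and a matching RTP mountpoint has to be configured in Janus’ streaming plugin):

# push the camera (or any other videoSource URI) out as RTP towards Janus
video-viewer /dev/video0 rtp://127.0.0.1:5004
# janus.plugin.streaming.jcfg then exposes that RTP feed over WebRTC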

Regarding webrtcsink:
It should be possible to build webrtcsink from gst-plugins-rs for JetPack 6, since GStreamer was updated. However, I haven’t tried this yet.
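In case it helps anyone, the build would presumably look something like this (untested on Jetson; the target triple and output path are assumptions based on the gst-plugins-rs README):

cargo install cargo-c
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs
cargo cbuild -p gst-plugin-webrtc --release
# make GStreamer pick up the freshly built plugin (the output directory may differ)
export GST_PLUGIN_PATH=$PWD/target/aarch64-unknown-linux-gnu/release:$GST_PLUGIN_PATH
# webrtcsink also needs the signalling server from the same repo running, then e.g.:
gst-launch-1.0 videotestsrc ! videoconvert ! webrtcsink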

I gave the hardware-accelerated WebRTC example a try on JetPack 6 and it looks promising, but it seems quite hard to adapt to real-world use cases. To set it up, you have to use the webrtc_argus_camera_app_src.tbz2 located in public_sources.tbz2, which you can download by clicking on Driver Package (BSP) Sources here, and not the outdated sources for JetPack 4.6.
Also, I don’t really understand why NVIDIA did not implement this as a GStreamer plugin/bin, since the input pipeline is already based on GStreamer elements. That would simplify usage a lot.
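For anyone else trying it, the extraction step for those sources is roughly (archive layout is from memory and may differ between releases):

tar xjf public_sources.tbz2
cd Linux_for_Tegra/source
tar xjf webrtc_argus_camera_app_src.tbz2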
