Cannot open nvbuf_utils library: /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so: cannot open shared object file: No such file or directory
Could not open device ENCODER DEV
Error while DQing buffer
ERROR while DQing buffer at output plane
Segmentation fault (core dumped)
Is there a way to run this software on current JetPack versions?
Independently of this, what is the best way to get DeepStream to generate a WebRTC stream given the way things stand?
Hi,
By default, the DeepStream SDK supports RTSP output. To use WebRTC, you would need to take an existing implementation and integrate it with the DeepStream SDK. We would suggest checking this one and doing the integration:
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
It’s a bit of a mess. There seem to be two different SDKs, with some overlap:
The DeepStream SDK
The “Jetson Inference” package by Dusty
The latter includes a WebRTC implementation built on top of GStreamer. So from within the provided Docker container, you can run a command like
video-viewer /dev/video0 webrtc://@:8554/output
and it will stream the camera to a WebRTC endpoint that you can access with a browser. It works! The relevant code is in the Jetson Inference repo at utils/codec/gstWebRTC.cpp
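If I remember correctly, jetson-utils also serves a small built-in viewer page for that stream; assuming the default port above, it should be reachable at something like

http://<jetson-ip>:8554

(the exact path may differ depending on the jetson-utils version).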
So nyinglui is suggesting (I think) that we take Dusty’s code and produce our own plugin for DeepStream. That might be possible, but seems a bit of an adventure.
The other suggestion (I think) is that we end the DeepStream pipeline with an RTSP sink, which seems more or less straightforward (although with this platform and documentation, who knows). You could then run one of Dusty's apps with the INPUT being RTSP and the OUTPUT being WebRTC. That would in theory link the two pipelines together, at the cost of introducing RTSP's latency and a lot of needless encoding and decoding.
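For what it's worth, here is a rough sketch of that relay approach (untested; the ports, the ds-test mount point and the exact keys are assumptions based on the deepstream-app sample configs). In the deepstream-app config you would enable an RTSP sink:

[sink1]
enable=1
# type 4 = RTSP streaming output
type=4
# codec 1 = H.264
codec=1
bitrate=4000000
rtsp-port=8554
udp-port=5400

and then relay it with jetson-utils, e.g.

video-viewer rtsp://<jetson-ip>:8554/ds-test webrtc://@:8555/output

(the WebRTC server is put on 8555 here so it does not clash with the RTSP port when both run on the same box).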
In general I find the Jetson dev experience frustrating. There are multiple frameworks, clearly implemented by different teams and getting them to work together is a nightmare.
This is a topic I also spent a lot of frustrating time on. The main benefit of using webrtcsink or the hardware-accelerated WebRTC available for Jetson is that the sink element can control the encoder, so the bitrate can be adjusted when the available bandwidth changes, etc.
I compared some approaches, and right now I'm using Dusty's RTP sink from jetson-utils, relayed through the Janus WebRTC gateway. This has significantly lower latency than deepstream-app with RTSP output (without any processing in the pipeline and with all parameters tuned).
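In case it is useful, a minimal sketch of that relay (host, port and payload type are assumptions, and the Janus snippet uses the older single-stream syntax of janus.plugin.streaming.jcfg; recent Janus releases describe mountpoints with a media list instead). On the Jetson:

video-viewer /dev/video0 rtp://<janus-host>:5004

and on the Janus side, a mountpoint in janus.plugin.streaming.jcfg along the lines of:

jetson-stream: {
        type = "rtp"
        id = 1
        description = "H.264 RTP feed from jetson-utils"
        audio = false
        video = true
        videoport = 5004
        videopt = 96
        videortpmap = "H264/90000"
}

The browser then plays the mountpoint through the streaming plugin's regular demo page or your own Janus client.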
Regarding webrtcsink:
It should be possible to build webrtcsink from gst-plugins-rs for JetPack 6, since GStreamer got updated. However, I haven't tried this yet.
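For anyone who wants to try, the build would roughly look like this (untested on JetPack 6; the dev package list and output path are assumptions, and you need a recent Rust toolchain via rustup):

sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libssl-dev
cargo install cargo-c
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs
cargo cbuild -p gst-plugin-webrtc --release
# point GST_PLUGIN_PATH at the build output (target/<triple>/release) and verify:
gst-inspect-1.0 webrtcsink

Note that webrtcsink's default signaller also expects the gst-webrtc-signalling-server from the same repository, unless you plug in your own signalling.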
I gave the hardware-accelerated WebRTC example on JetPack 6 a try and it looks promising, but it seems quite hard to adapt it to real-world use cases. To set it up, you have to use the webrtc_argus_camera_app_src.tbz2 located in the public_sources.tbz2 that you can download by clicking on Driver Package (BSP) Sources here, and not the outdated sources for JetPack 4.6.
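The extraction is roughly (the directory layout inside the archive is an assumption and differs between L4T releases):

tar xjf public_sources.tbz2
cd Linux_for_Tegra/source
# on some releases the sub-archives live in Linux_for_Tegra/source/public instead
tar xjf webrtc_argus_camera_app_src.tbz2
# build/run instructions are in the README shipped inside the extracted app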
Also, I don’t really understand why NVIDIA did not implement this as a GStreamer plugin/bin, since the input pipeline is already based on GStreamer elements. This would simplify the usage a lot.