I’m trying to figure out how to livestream camera video from my 8GB Orin NX, encoded to AV1, on the newly released JetPack 6. This is sort of a follow-up to this thread I posted on the topic, now with the newer version of GStreamer on JetPack 6.
I’m using RTSP to/from a MediaMTX server. I needed to build the rtp plugin from gst-plugins-rs to be able to payload AV1 in RTP, as it didn’t seem to be packaged for L4T.
I’m able to stream a test file using gst-launch-1.0 filesrc location=test.mkv ! parsebin ! rtspclientsink location=rtsp://mediamtx-host:8554/test and receive it on another host using gst-launch-1.0.exe rtspsrc location=rtsp://mediamtx-host:8554/test ! parsebin ! decodebin ! autovideoconvert ! autovideosink, and this plays fine. The receiver is running GStreamer 1.24.2.
However, when I use the Orin’s onboard encoder, it seems to generate a stream that isn’t playable. I can see packets moving that look well-formed, so I’m wondering if there’s something I’m missing.
At a basic level, a test sending pipeline (using videotestsrc to rule out the camera) is: gst-launch-1.0 videotestsrc is-live=1 ! 'video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! nvvidconv ! nvv4l2av1enc ! av1parse ! rtspclientsink location=rtsp://mediamtx-host:8554/test
I’ve tried various encoder settings, but haven’t found anything that generates a stream I can play back. Sending the output to a file instead of RTSP produces a file I can play back, so the encoder does generate a valid bitstream. I can also see that the encoder is active by watching the NVENC clocks.
The exact same pipeline also works with the H.265 encoder in place of the AV1 one. But I’d like to use AV1, as my application is bandwidth constrained.
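For reference, the working H.265 variant is just the encoder and parser swapped out, something like:
gst-launch-1.0 videotestsrc is-live=1 ! 'video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! nvvidconv ! nvv4l2h265enc ! h265parse ! rtspclientsink location=rtsp://mediamtx-host:8554/test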
Jetson AGX Orin FAQ
Q: Is there an example for running UDP streaming?
Q: Is there any example of running RTSP streaming?
We don’t have much experience with the rtspclientsink plugin. Please try UDP, or RTSP through test-launch, to see if the issue is specific to rtspclientsink.
First, thanks for helping me out with this. I appreciate it.
There doesn’t seem to be any documentation related to building the test-launch.c file, so I was unable to try that.
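If it’s of use, my untested guess is that test-launch.c comes from the gst-rtsp-server examples and would compile with something like:
sudo apt install libgstrtspserver-1.0-dev
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)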
With UDP, I see similar behaviour to RTSP, except that I can start the listener before the sender, and in that case I do get video from a videotestsrc.
Here’s the test sender, adapted from your link: gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720 ! timeoverlay valignment=4 halignment=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2av1enc idrinterval=15 ! av1parse ! rtpav1pay ! udpsink host=udp-dest port=5000 sync=0
and the receiver: gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=AV1,payload=96' ! rtpav1depay ! decodebin ! autovideoconvert ! autovideosink
If I send the test video, the startup order doesn’t matter. Command for the test video: gst-launch-1.0 filesrc location=test.mkv ! parsebin ! queue ! rtpav1pay ! udpsink host=udp-dest port=5000
This suggests to me that the AV1 encoder isn’t including some metadata that a receiver (RTSP or UDP) needs to pick up the stream after it has started. Given the vagaries of reconnections, etc., this is something I need it to be able to do.
I ran another test using the av1enc software encoder, and it does produce playable video, though it’s too CPU-intensive to run at a reasonable framerate and resolution. I’d really like to use the hardware encoder on my Orin NX.
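Roughly, that software test reused the UDP pipeline from above with the encoder swapped (cpu-used here is just a speed/quality trade-off, not an exact record of my command):
gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720 ! av1enc cpu-used=8 ! av1parse ! rtpav1pay ! udpsink host=udp-dest port=5000 sync=0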
This seems to suggest there’s something wrong with nvv4l2av1enc (or my usage of it). Maybe it’s not generating some sort of metadata the receiver needs to properly play the stream.
Hi,
Please enable the property in nvv4l2av1enc and give it a try:
enable-headers : Enable AV1 file and frame headers, if enabled, dump elementary stream
flags: readable, writable, changeable only in NULL or READY state
Boolean. Default: false
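A minimal way to try it, reusing your UDP test pipeline (a sketch; adjust to your setup):
gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2av1enc enable-headers=1 idrinterval=15 ! av1parse ! rtpav1pay ! udpsink host=udp-dest port=5000 sync=0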
I need the av1parse element regardless, because it seems like RTP payloading requires a parsed stream (similar to H.264/H.265, where a parser is needed before the RTP payloader).
I think this setting is more useful for writing files, but I could be wrong about that.
I’ve made progress on this, and it seems like it’s an issue with the encoder. I actually got a hint from the dGPU NVENC documentation, where I saw a repeatSeqHdr parameter for the AV1 encoder.
This led me to look at the raw AV1 bitstream (using this project). There’s only one sequence header generated, at the start of the stream (whether it’s going to a file or being livestreamed). This would explain why the stream can’t be decoded properly mid-stream: the av1parse element on the receiver will wait forever for a sequence header to get the metadata it needs.
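To get a raw bitstream to inspect, I dumped the encoder output straight to a file, with something along these lines:
gst-launch-1.0 videotestsrc num-buffers=300 ! nvvidconv ! nvv4l2av1enc ! filesink location=test.av1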
It appears that the Sequence Header OBU is equivalent to HEVC’s SPS (and maybe PPS). nvv4l2h265enc has the insert-sps-pps parameter, which is described as “Insert H.265 SPS, PPS at every IDR frame”.
Note that these are not the same headers as the ones produced by enable-headers, which generates IVF headers useful for saving files.
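Both properties can be compared with gst-inspect-1.0, for anyone curious:
gst-inspect-1.0 nvv4l2h265enc | grep -A 2 insert-sps-pps
gst-inspect-1.0 nvv4l2av1enc | grep -A 2 enable-headers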
@DaneLLL To enable streaming, or creating files that are easily seekable, it seems required to have some way to add sequence headers at IDR frames. This appears to be allowed by the AV1 spec, called a “random access point”. Maybe the option could be called insert-seq-hdr. Is there some way to get this added/fixed?
We tested UDP streaming with H.264 and H.265, and they both work fine.
For AV1, it seems Ubuntu does not offer any package containing rtpav1pay/rtpav1depay, and we have to build these plugins ourselves.
Could you share your steps for compiling them, or your pre-built binaries, so we can check it quickly?
For the AV1 RTP support, that’s via the gst-plugins-rs project. I followed the directions there, slightly modified to install system-wide:
sudo apt install cargo
sudo cargo install --locked --version 0.9.31 cargo-c
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs
sudo cargo cbuild -p gst-plugin-rtp --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu/
sudo cargo cinstall -p gst-plugin-rtp --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu/
I’ve also attached (zipped) the compiled .so, built on my NX with the latest L4T/JetPack install. It should end up in /usr/lib/aarch64-linux-gnu/gstreamer-1.0/:
libgstrsrtp.zip (5.3 MB)
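Once the .so is in place, you can check that GStreamer picks up the new elements (clearing the registry cache first if they don’t show up):
rm -rf ~/.cache/gstreamer-1.0
gst-inspect-1.0 rtpav1pay
gst-inspect-1.0 rtpav1depay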
We can reproduce this issue locally, and we are currently checking it.
However, it does work if you use Jetson’s hardware decoder with nv3dsink, instead of avdec_xxx + xvimagesink (or other built-in videosinks in GStreamer).
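For example, a receiver along these lines (a sketch; adjust the port and payload number to your sender):
gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=AV1,payload=96' ! rtpav1depay ! av1parse ! nvv4l2decoder ! nv3dsink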
Thanks @DaveYYY, I appreciate that this is being looked into. I tried your example, and I wasn’t able to get it working when I started the sender before the receiver, only the other way around, which is the same behaviour I was seeing otherwise, unless I’m missing something here. Did it work when the receiver was started before the sender in your tests?
I think the output is technically spec-compliant, as Sequence Headers only need to be inserted when the “tu” (time unit) changes, and that’s not actually happening here. It’s not entirely clear to me from the AV1 spec whether that reading is right, though.
For the livestreaming use case, something like this seems to be needed for most decoders to work if there’s packet loss or a disconnection of the stream.
Sorry, I meant to ask if it worked if the sender was started before the receiver. I wasn’t able to get it working in that direction.
Either way, I think having the ability to send Sequence Headers at every IDR would let this work with a wider range of decoders, so it’d be a great feature to add, if possible.
I gave this another try, and I am still unable to get it working. I’m using an Orin NX as the sender and an Orin Nano as the receiver, with nv3dsink on a connected HDMI monitor. Both systems are on up-to-date JP6, with the same rtpav1pay/rtpav1depay elements I compiled and shared earlier in the thread.
The pipelines both work when I use a filesrc with the known working AV1 encode. This is the same behaviour I’ve observed with RTSP and RIST.
Even if it did work on the Jetson Orin Nano, that’s not where I’m trying to ingest the video. Ultimately, the solution seems to be for the encoder to emit sequence headers at every IDR frame, like the dGPU NVENC can do for AV1.
I quickly tested this against a couple of decoders, including on both the Jetson Orin Nano and a Windows machine with the NVDEC AV1 decoder. It looks great in a quick test, and I’ll hopefully have some time soon to test it more thoroughly.
Is this planned to be added as a feature in an upcoming release?
And again, thanks for this. I really do appreciate it.
Hi. I also have the same problem on a Jetson AGX Orin.
Would it be possible to get a build of libtegrav4l2.so for the Jetson AGX Orin? Or where can I download it? This is the error I’m seeing:
libv4l2: error getting capabilities: Inappropriate ioctl for device
ERROR: from element /GstPipeline:pipeline0/nvv4l2av1enc:nvv4l2av1enc0: Error getting capabilities for device '/dev/v4l2-nvenc': It isn't a v4l2 driver. Check if it is a v4l1 driver.
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-v4l2/gst-v4l2/v4l2_calls.c(107): gst_v4l2_get_capabilities (): /GstPipeline:pipeline0/nvv4l2av1enc:nvv4l2av1enc0:
system error: Inappropriate ioctl for device
ERROR: pipeline doesn’t want to preroll.
ERROR: from element /GstPipeline:pipeline0/nvv4l2av1enc:nvv4l2av1enc0: Could not initialize supporting library.
Additional debug info:
…/gst-libs/gst/video/gstvideoencoder.c(1797): gst_video_encoder_change_state (): /GstPipeline:pipeline0/nvv4l2av1enc:nvv4l2av1enc0