AV1/RTSP Streaming Not Working

I’m trying to figure out how to livestream camera video from my 8GB Orin NX, encoded to AV1, running the newly released JetPack 6. This is a follow-up to this thread I posted, now using the newer version of GStreamer that ships with JetPack 6.

I’m using RTSP to/from a MediaMTX server. I needed to build the rtp plugin from gst-plugins-rs to be able to payload AV1 in RTP, as it doesn’t appear to be packaged on L4T.

I’m able to stream a test file using

gst-launch-1.0 filesrc location=test.mkv ! parsebin ! rtspclientsink location=rtsp://mediamtx-host:8554/test

and receive it on another host using

gst-launch-1.0.exe rtspsrc location=rtsp://mediamtx-host:8554/test ! parsebin ! decodebin ! autovideoconvert ! autovideosink

and this plays fine. The receiver is running GStreamer 1.24.2.

However, when I use the Orin’s onboard encoder, it seems to generate a stream that isn’t playable. I can see packets moving that look well formed, so I’m wondering if there’s something I’m missing.
At a basic level, a test sending pipeline (no camera to rule that out) is:
gst-launch-1.0 videotestsrc is-live=1 ! 'video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)30/1' ! nvvidconv ! nvv4l2av1enc ! av1parse ! rtspclientsink location=rtsp://mediamtx-host:8554/test

I’ve tried various encoder settings, but haven’t found anything that generates a stream I can play back. Sending the output to a file instead of RTSP produces a file I can play back, so the encoder does generate a valid bitstream. I can also see that the encoder is active by looking at the NVENC clocks.

The exact same pipeline also works with the H.265 encoder in place of the AV1 one. But I’d like to use AV1, as my application is bandwidth constrained.

Please try UDP or RTSP through test-launch:

Jetson AGX Orin FAQ
Q: Is there an example for running UDP streaming?
Q: Is there any example of running RTSP streaming?

We don’t have much experience with the rtspclientsink plugin. Trying UDP, or RTSP through test-launch, will show whether the issue is specific to rtspclientsink.

First, thanks for helping me out with this. I appreciate it.

There doesn’t seem to be any documentation related to building the test-launch.c file, so I was unable to try that.
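(For anyone else who lands here: test-launch.c appears to live in the examples directory of the gst-rtsp-server sources, and I believe it builds roughly like the following, though I haven’t verified this on JetPack 6 myself — package names are my assumption for an Ubuntu-based rootfs.)

```shell
# Build test-launch.c from the gst-rtsp-server examples directory
# (copy the file from the GStreamer source tree first).
sudo apt install libgstrtspserver-1.0-dev libgstreamer1.0-dev
gcc test-launch.c -o test-launch \
    $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)

# Serve a pipeline; test-launch expects the payloader to be named pay0:
./test-launch "videotestsrc ! nvvidconv ! nvv4l2av1enc ! av1parse ! rtpav1pay name=pay0"
```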

With UDP, I see similar behaviour to RTSP, except that if I start the listener before the sender, I do get video from the videotestsrc.

Here’s the test sender, adapted from your link:

gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720 ! timeoverlay valignment=4 halignment=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2av1enc idrinterval=15 ! av1parse ! rtpav1pay ! udpsink host=udp-dest port=5000 sync=0
and the receiver: gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=AV1,payload=96' ! rtpav1depay ! decodebin ! autovideoconvert ! autovideosink

If I send the test file instead, start order doesn’t matter. Command for the test video:

gst-launch-1.0 filesrc location=test.mkv ! parsebin ! queue ! rtpav1pay ! udpsink host=udp-dest port=5000

This suggests to me that the AV1 encoder isn’t including some metadata needed for a receiver (RTSP or UDP) to pick up the stream after it has started. Given the vagaries of reconnections, etc., this is something I need it to be able to do.

I ran another test using the av1enc software encoder, and it produces playable video, though it’s too CPU-intensive to run at a reasonable framerate and resolution. I’d really like to use the hardware encoder on my Orin NX.

This seems to suggest there’s something wrong with nvv4l2av1enc (or my usage of it). Maybe it’s not generating some sort of metadata the receiver needs to properly play the stream.

Please enable the property in nvv4l2av1enc and give it a try:

  enable-headers      : Enable AV1 file and frame headers, if enabled, dump elementary stream
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false

If I set that, I get the following error from the downstream av1parse element:

GStreamer-CodecParsers:ERROR:../gst-libs/gst/codecparsers/gstav1parser.c:4365:gst_av1_parser_parse_tile_list_obu: assertion failed: (gst_bit_reader_get_pos (br) % 8 == 0)
Bail out! GStreamer-CodecParsers:ERROR:../gst-libs/gst/codecparsers/gstav1parser.c:4365:gst_av1_parser_parse_tile_list_obu: assertion failed: (gst_bit_reader_get_pos (br) % 8 == 0)
Aborted (core dumped)

And I need the av1parse element, because RTP payloading seems to require a parsed stream (similar to H.264/H.265, where a parser is needed before the RTP payloader).

I think this setting is more useful for writing files, but I could be wrong about that.

I’ve made progress on this, and it seems like it’s an issue with the encoder. I actually got a hint looking at the GPU NVENC documentation, and saw a repeatSeqHdr parameter for the AV1 encoder.

This led me to look at the raw AV1 bitstream (using this project). There’s only one sequence header being generated, at the very start of the stream (whether that’s going to a file or being livestreamed). This would explain why it’s not possible to decode the stream mid-stream: the av1parse element on the receiver will wait forever for a sequence header to get the metadata it needs.
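For anyone who wants to reproduce the check without a dedicated tool, here’s a rough sketch that counts OBU types in a dumped elementary stream. It assumes the low-overhead bitstream format with obu_has_size_field set on every OBU (which matched what I saw from the encoder); the filenames are just examples.

```shell
# Count OBU types in a raw AV1 elementary stream.
# Usage: obu_counts stream.av1
obu_counts() {
  python3 - "$1" <<'EOF'
import sys

data = open(sys.argv[1], 'rb').read()
names = {1: 'sequence_header', 2: 'temporal_delimiter', 3: 'frame_header',
         4: 'tile_group', 5: 'metadata', 6: 'frame',
         7: 'redundant_frame_header'}
counts = {}
i = 0
while i < len(data):
    b = data[i]
    obu_type = (b >> 3) & 0xF       # bits 6..3 of the OBU header byte
    has_ext = (b >> 2) & 1
    has_size = (b >> 1) & 1
    i += 1 + has_ext
    size = 0
    if has_size:
        shift = 0
        while True:                  # obu_size is LEB128-coded
            v = data[i]; i += 1
            size |= (v & 0x7F) << shift
            shift += 7
            if not (v & 0x80):
                break
    counts[obu_type] = counts.get(obu_type, 0) + 1
    i += size
for t in sorted(counts):
    print(names.get(t, 'obu_type_%d' % t), counts[t])
EOF
}

# Example: dump a short run from the hardware encoder, then count headers:
# gst-launch-1.0 videotestsrc num-buffers=60 ! nvvidconv ! nvv4l2av1enc ! \
#     filesink location=out.av1
# obu_counts out.av1
```

On the streams above, this reports exactly one sequence_header regardless of length, which is the problem.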

It appears that the Sequence Header OBU is equivalent to HEVC’s SPS (and maybe PPS). nvv4l2h265enc has the insert-sps-pps parameter, which is described as “Insert H.265 SPS, PPS at every IDR frame”.
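For reference, the H.265 version of the same UDP test tolerates a receiver joining mid-stream when insert-sps-pps is enabled, which is exactly the behaviour I’m after for AV1 (a sketch of my working setup, same placeholder host as above):

```shell
# H.265 equivalent: insert-sps-pps=1 repeats SPS/PPS at every IDR frame,
# so a receiver can join (or rejoin) the stream at any IDR boundary.
gst-launch-1.0 videotestsrc is-live=1 ! 'video/x-raw,width=1280,height=720' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! \
    nvv4l2h265enc insert-sps-pps=1 idrinterval=15 ! h265parse ! \
    rtph265pay ! udpsink host=udp-dest port=5000 sync=0
```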

Note that these are not the same headers as those produced by enable-headers, which generates IVF headers useful for saving files.

@DaneLLL To enable streaming, or creating files that are easily seekable, it seems necessary to have some way to insert sequence headers at IDR frames. This appears to be allowed by the AV1 spec, as a “random access point”. Maybe the option could be called insert-seq-hdr. Is there some way to get this added/fixed?


We tested UDP streaming with H.264 and H.265, and they both work fine.
For AV1, it seems Ubuntu does not offer any packages containing rtpav1pay/rtpav1depay, and we would have to build these plugins ourselves.
Can you share your steps for compiling them, or your pre-built binaries, so we can check quickly?

For the AV1 RTP support, that’s via the gst-plugins-rs project. I followed the directions there, slightly modified to install system-wide:
sudo apt install cargo
sudo cargo install --locked --version 0.9.31 cargo-c
git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs
sudo cargo cbuild -p gst-plugin-rtp --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu/
sudo cargo cinstall -p gst-plugin-rtp --prefix=/usr --libdir=/usr/lib/aarch64-linux-gnu/

I’ve also attached (zipped) the compiled .so, built on my NX with the latest L4T/Jetpack install. It should end up in: /usr/lib/aarch64-linux-gnu/gstreamer-1.0/
libgstrsrtp.zip (5.3 MB)
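After dropping the .so in place, the plugin registry may need a refresh before GStreamer sees it — this is a standard cache-clearing step, nothing Jetson-specific, but it tripped me up at first:

```shell
# Force a plugin registry rescan, then confirm the RTP AV1 elements load.
rm -rf ~/.cache/gstreamer-1.0
gst-inspect-1.0 rtpav1pay
gst-inspect-1.0 rtpav1depay
```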


We can reproduce this issue locally, and we are currently checking it.
However, it does work if you use Jetson’s hardware decoder with nv3dsink, instead of avdec_xxx + xvimagesink (or other built-in videosinks in GStreamer).

Like this:

gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720 ! timeoverlay valignment=4 halignment=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2av1enc idrinterval=15 ! av1parse ! rtpav1pay ! udpsink host=<CLIENT IP> port=5000 sync=0
gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=AV1,payload=96' ! rtpav1depay ! nvv4l2decoder ! nv3dsink

So there might be an issue where the AV1 header information produced by nvv4l2av1enc is not standard and can only be consumed by nvv4l2decoder.

Thanks @DaveYYY, I appreciate that this is being looked into. I tried your example, and I wasn’t able to get it working if I started the sender before I started the receiver, just afterwards, which is the same behaviour I was seeing otherwise. Unless I’m missing something else here. Did it work when the receiver was started before the sender, in your tests?

I think the output is technically standard, as Sequence Headers only need to be inserted when the “tu” (temporal unit) changes, and that isn’t actually happening here. It’s not entirely clear to me from the AV1 spec whether that reading is right, though.

For the livestreaming use case, something like this seems to be needed for most decoders to work if there’s packet loss or a disconnection of the stream.

It looks like this thread here is based on the same issue: Equivalent to HEVC V4L2_CID_MPEG_VIDEOENC_INSERT_SPS_PPS_AT_IDR for AV1? I think adding this option, and generating the Sequence Headers, would probably solve my issue as well.

That’s the case.

Sorry, I meant to ask if it worked if the sender was started before the receiver. I wasn’t able to get it working in that direction.

I think, either way, having the ability to send sequence headers at every IDR frame would allow this to work with a wider range of decoders, so it’d be a great feature to add, if possible.