Streaming the NAL unit of a slice as soon as it is generated by the H.264 encoder

I am trying to use a GStreamer pipeline to encode the incoming video from a webcam with the H.264 encoder and then stream it to the network. To achieve low latency, I plan to divide each frame into several slices and to stream the NAL unit for each slice immediately as it is generated.

After looking at the timestamps of the received RTP packets (analyzed in Wireshark), I can see that the timestamp values are the same for all RTP packets belonging to one frame. I assume that rtph264pay only sends data once all the NAL units of a frame have been received. I am not sure how to validate this assumption.
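One way to check this assumption on a real capture is to parse the RTP fixed header yourself and look at two fields: the timestamp (identical for all packets of one access unit) and the marker bit (set on the last packet of an access unit, per RFC 3550/6184). A minimal sketch with the standard library, using synthetic packet bytes for illustration:

```python
import struct

def parse_rtp_header(packet: bytes):
    """Parse the 12-byte fixed RTP header (RFC 3550) and return
    (sequence_number, timestamp, marker_bit)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    if b0 >> 6 != 2:
        raise ValueError("not an RTP version 2 packet")
    marker = (b1 >> 7) & 1
    return seq, ts, marker

# Synthetic example: two packets of the same frame share a timestamp;
# the marker bit (top bit of the second byte) is set on the last one.
pkt1 = struct.pack("!BBHII", 0x80, 96, 1000, 90000, 0xDEADBEEF)
pkt2 = struct.pack("!BBHII", 0x80, 96 | 0x80, 1001, 90000, 0xDEADBEEF)
print(parse_rtp_header(pkt1))  # (1000, 90000, 0)
print(parse_rtp_header(pkt2))  # (1001, 90000, 1)
```

In practice you would read the UDP payloads (e.g. from a socket bound to port 5000, or from a pcap) and check whether the arrival times of packets sharing one timestamp are spread across the frame interval or clustered at its end; clustering would support the assumption that the payloader waits for the whole frame.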

Also, is there an alternative method through which I can stream as soon as the NAL unit is generated?
To keep things simple, I am using the following pipeline to stream a test image:

gst-launch-1.0 videotestsrc num-buffers=300 ! "video/x-raw, width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420" ! nvv4l2h264enc slice-header-spacing=8 bit-packetization=0 ! "video/x-h264, stream-format=(string)byte-stream, alignment=(string)nal" ! rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5000 sync=false -e

Hi,
slice-header-spacing=8 sets 8 macroblocks per slice. This looks like too small a value; you may try setting it larger. Please refer to the calculation of macroblocks:
How does slice mode video encoding work for Jetson? - #14 by DaneLLL
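To see why 8 is small, the arithmetic (my own, assuming the standard 16x16-pixel H.264 macroblock) for the 1280x720 pipeline above is:

```python
def macroblocks(width: int, height: int) -> int:
    """Number of 16x16 macroblocks in a frame (dimensions rounded up)."""
    return ((width + 15) // 16) * ((height + 15) // 16)

mbs = macroblocks(1280, 720)
print(mbs)        # 3600 macroblocks per 720p frame
print(mbs // 8)   # 450 slices per frame with slice-header-spacing=8
print(mbs // 450) # 8 slices per frame with slice-header-spacing=450
```

So slice-header-spacing=8 produces 450 slices per frame, each with its own slice header overhead; a value in the hundreds gives a handful of slices per frame, which is usually a more reasonable trade-off.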

The rtph264pay plugin is a native GStreamer plugin and we are not sure how it works internally. Other users would need to check and share their experience.

I have visited this link before. There you wrote:

We don’t support the mode encoder_capture_plane_dp_callback is called per slice

If that is the case, can you please point out how I can read out the NAL units corresponding to a slice immediately, without waiting for the remaining NAL units of the same frame to be generated?

Moreover, I also want to understand the behavior of nvv4l2h264enc. Does this encoder write each NAL unit into a buffer as soon as it is generated, or is the buffer only updated with the NAL units after the whole frame has been encoded?

I saw one of your previous replies in Slice encode/decode support - #4 by DaneLLL

Is this feature supported now? I want to split each frame into several slices and start encoding them sequentially without waiting for the full frame at the input.

Hi,
Outputting the compressed stream in slices is supported in later releases. Please run the following two pipelines for comparison and you should see the effect:

gst-launch-1.0 videotestsrc pattern=1 is-live=1 num-buffers=30 ! "video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420" ! nvv4l2h264enc slice-header-spacing=300 bit-packetization=0 ! identity silent=0 ! fakesink -v
gst-launch-1.0 videotestsrc pattern=1 is-live=1 num-buffers=30 ! "video/x-raw, width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420" ! nvv4l2h264enc ! identity silent=0 ! fakesink -v
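As a rough sanity check of what to expect (my arithmetic, again assuming 16x16 macroblocks): at 640x480 there are 1200 macroblocks per frame, so slice-header-spacing=300 should yield four slices per frame. The identity silent=0 element in the first pipeline should then log several small buffers per frame, versus one larger buffer per frame in the second pipeline.

```python
def macroblocks(width: int, height: int) -> int:
    """Number of 16x16 macroblocks in a frame (dimensions rounded up)."""
    return ((width + 15) // 16) * ((height + 15) // 16)

mbs = macroblocks(640, 480)
slices = -(-mbs // 300)  # ceiling division: slices per frame at spacing 300
print(mbs)     # 1200
print(slices)  # 4
```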

The input frames have to be full-frame data. Encoding slices sequentially is not supported.


@DaneLLL should I create a new thread, as I have a follow-up question on this topic?
If the hardware does not accept slices as input, I am thinking about writing the frame to a memory buffer from which the encoder reads the frame directly. That way, I could pass each part of the frame to the encoder for processing as soon as I have processed it.
Can you tell me whether there is such a buffer that I can write to while the encoder is reading from it?

Hi,
For further software development, it may be better to use jetson_multimedia_api. You can check the sample:

/usr/src/jetson_multimedia_api/samples/01_video_encode

The sample demonstrates creating an NvBuffer as the input buffer to the encoder. You can refer to it and customize it to fill in the frame data slice by slice. Once the frame data is complete, queue the buffer into the encoder.
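The fill-then-queue pattern described above can be sketched conceptually as follows. This is plain illustrative Python, not the jetson_multimedia_api API; the buffer layout (a single luma plane) and the slice count are assumptions for the sake of the example:

```python
from queue import Queue

WIDTH, HEIGHT, SLICES = 640, 480, 4
frame = bytearray(WIDTH * HEIGHT)   # stand-in for the NvBuffer's luma plane
encoder_queue = Queue()             # stand-in for the encoder's input queue

rows_per_slice = HEIGHT // SLICES
for s in range(SLICES):
    # In a real application these rows would be filled in as soon as the
    # corresponding part of the frame arrives from the camera.
    start = s * rows_per_slice * WIDTH
    frame[start:start + rows_per_slice * WIDTH] = bytes([s]) * (rows_per_slice * WIDTH)

# Only after the whole frame is filled is the buffer queued to the encoder,
# matching the constraint that the encoder input must be full-frame data.
encoder_queue.put(bytes(frame))
print(encoder_queue.qsize())  # 1
```

The key point is that the slice-by-slice filling happens on the application side; the hand-off to the hardware encoder still occurs once per complete frame.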
