I am trying to use a GStreamer pipeline to capture video from a webcam, encode it with an H.264 encoder, and stream it to the network. To achieve low latency, I plan to divide each frame into several slices and stream the NAL unit for each slice immediately as it is generated.
Looking at the timestamps of the received RTP packets (analyzed with Wireshark), I can see that the timestamp values are the same for all RTP packets belonging to one frame. I am assuming that rtph264pay only starts working once all the NAL units of a frame have been received, but I am not sure how to validate this assumption.
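One caveat worth noting (my reading of RFC 3550 and RFC 6184, not something confirmed in this thread): the RTP timestamp encodes the sampling instant of the video frame on a 90 kHz clock, so every packet carrying slices of the same frame is supposed to carry an identical timestamp even when each slice is payloaded and sent the moment it is produced. Identical timestamps therefore do not by themselves prove that rtph264pay buffered the whole frame; packet arrival times, or the marker bit (set only on the last packet of a frame), are better indicators. A toy illustration of the timestamp math:

```python
def rtp_timestamp(frame_index: int, fps: int = 30, clock_rate: int = 90_000) -> int:
    """RTP timestamp for a video frame: the sampling instant on the media clock.

    All packets of one frame share this value, however many slices/packets
    the frame is split into (RFC 3550 sec. 5.1, RFC 6184 sec. 5.1).
    """
    return (frame_index * clock_rate) // fps

# Every slice of frame 0 gets timestamp 0; frame 1 advances by 90000/30 = 3000.
assert rtp_timestamp(0) == 0
assert rtp_timestamp(1) == 3000
```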
Also, is there an alternative method through which I can stream each NAL unit as soon as it is generated?
To keep things simple, I am using the following script to stream a single image.
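A pipeline along these lines can be sketched as follows (this is only a sketch, not the exact script; slice-header-spacing and bit-packetization are nvv4l2h264enc properties I believe control slicing on Jetson, but verify them with gst-inspect-1.0 on your L4T release before relying on them):

```shell
# Sketch only -- confirm property names with `gst-inspect-1.0 nvv4l2h264enc`.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 ! \
  'video/x-raw,width=1280,height=720,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc maxperf-enable=true insert-sps-pps=true \
      slice-header-spacing=8 bit-packetization=false ! \
  h264parse ! \
  rtph264pay mtu=1400 pt=96 ! \
  udpsink host=127.0.0.1 port=5000 sync=false
```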
I have visited this link before. There you wrote:
We don’t support the mode encoder_capture_plane_dp_callback is called per slice
If that is the case, can you please point out how I can read out the NAL units corresponding to a slice immediately, without waiting for the remaining NAL units of the same frame to be generated?
Moreover, I also want to understand the behavior of nvv4l2h264enc: does the encoder write each NAL unit into the capture buffer as soon as it is generated, or is the buffer only updated once the whole frame has been encoded?
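Independent of where the encoder hands off data, splitting an Annex B bitstream chunk into individual NAL units comes down to scanning for the 00 00 01 / 00 00 00 01 start codes. A minimal sketch of that scan (plain Python, no GStreamer dependency; real streams use emulation prevention, so a start code cannot occur inside a NAL payload):

```python
def split_nal_units(buf: bytes) -> list:
    """Split an Annex B H.264 byte stream into NAL units, start codes stripped.

    Handles both 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
    """
    starts = []
    i = 0
    while True:
        j = buf.find(b"\x00\x00\x01", i)
        if j < 0:
            break
        # A preceding zero byte means this is a 4-byte start code.
        begin = j - 1 if j > 0 and buf[j - 1] == 0 else j
        starts.append((begin, j + 3))  # (start-code begin, payload begin)
        i = j + 3
    units = []
    for k, (_, payload) in enumerate(starts):
        end = starts[k + 1][0] if k + 1 < len(starts) else len(buf)
        units.append(buf[payload:end])
    return units

# Two slice NALs of one frame: 4-byte start code, then a 3-byte one.
stream = b"\x00\x00\x00\x01\x65\xaa\xbb" + b"\x00\x00\x01\x41\xcc"
assert split_nal_units(stream) == [b"\x65\xaa\xbb", b"\x41\xcc"]
```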
Is this feature supported now? I want to split each frame into several slices and start encoding them sequentially, without waiting for the full frame at the input.
@DaneLLL, should I create a new thread, as I have a follow-up question on this topic?
In case the hardware does not accept slices as input, I am thinking about writing the frame to a memory buffer from which the encoder reads the frame directly. That way, I could pass parts of the frame on to the encoder for processing as soon as I have produced them.
Can you tell me if there is such a buffer to which I can write while the encoder is reading from it for encoding?
The sample demonstrates creating an NvBuffer as the input buffer to the encoder. You can refer to it and customize it to fill in the frame data slice by slice. Once the frame data is complete, queue the buffer into the encoder.
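The pattern described above (fill the input buffer incrementally, queue it only once the frame is complete) can be sketched in plain Python; SliceAssembler and the "queue" step are illustrative names standing in for the actual NvBuffer/V4L2 calls, not part of the Multimedia API:

```python
class SliceAssembler:
    """Accumulate slice data into a frame-sized buffer; report when complete.

    Illustrative only: models the "fill NvBuffer slice by slice, queue when
    the frame is complete" pattern, not the real Multimedia API.
    """

    def __init__(self, frame_size: int):
        self.buf = bytearray(frame_size)
        self.filled = 0

    def write_slice(self, data: bytes) -> bool:
        """Copy one slice into the buffer; return True once the frame is full."""
        self.buf[self.filled:self.filled + len(data)] = data
        self.filled += len(data)
        return self.filled >= len(self.buf)

# Usage: queue (here: collect) the buffer only after the last slice lands.
asm = SliceAssembler(frame_size=8)
queued = []
for part in (b"\x01" * 3, b"\x02" * 3, b"\x03" * 2):
    if asm.write_slice(part):
        queued.append(bytes(asm.buf))  # stand-in for queueing into the encoder

assert queued == [b"\x01\x01\x01\x02\x02\x02\x03\x03"]
```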