We have recently been using the Jetson AGX Xavier (16 GB) for hardware decoding. The requirement is to obtain YUV data by decoding a video stream (e.g. RTSP or RTMP). Since there is no such example in the Multimedia API, we followed the sample [02_video_dec_cuda] and used ffmpeg to demux the RTSP stream and pass the frame data to NvBuffer. However, we found that the sample video [sample_outdoor_car_1080p_10fps.h264] is yuv420p while our stream is yuvj420p, which causes decoding to fail. Is there any way to solve this problem or meet our needs?
Please check if you can run this sample:
It leverages GStreamer to depay the RTP stream and decode into an appsink. The buffers can then be accessed in the appsink through the NvBuffer APIs.
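Before wiring up the appsink in application code, a pipeline along these lines can verify that the depay/decode path works on the Jetson. This is a sketch, not the sample itself: the RTSP URL is a placeholder, and it assumes an H.264 stream and the standard Jetson nvv4l2decoder/nvvidconv plugins.

```shell
# Pull an RTSP stream, depay and parse H.264, decode on the hardware
# decoder, convert out of NVMM memory, and discard the frames.
# Replace fakesink with appsink when accessing buffers from code.
gst-launch-1.0 rtspsrc location=rtsp://127.0.0.1:8554/test ! \
    rtph264depay ! h264parse ! nvv4l2decoder ! \
    nvvidconv ! 'video/x-raw, format=NV12' ! fakesink
```

If this pipeline runs, the same elements can be reused in code with an appsink at the end to receive the decoded NV12 buffers.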
Hi DaneLLL, I can pull the RTSP stream for analysis with the sample [apps/sample_apps/deepstream-test3]. Can I use this example to solve the previous problem, i.e. to get the YUV data?
For h264/h265 encoding, you can check 01_video_encode and integrate it with the sample.
Or simply use the nvv4l2h264enc/nvv4l2h265enc plugin to construct a pure GStreamer pipeline.
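For reference, a pure GStreamer encode pipeline using the hardware encoder might look like the following. This is a sketch: videotestsrc stands in for your actual source, and the resolution and output file name are arbitrary.

```shell
# Generate 300 test frames, copy them into NVMM (device) memory,
# encode with the hardware H.264 encoder, and write an elementary
# stream to disk.
gst-launch-1.0 videotestsrc num-buffers=300 ! \
    'video/x-raw, width=1920, height=1080, format=NV12' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM)' ! \
    nvv4l2h264enc ! h264parse ! filesink location=out.h264
```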
Not sure if this will help or not … but it seems you are almost there with your initial setup.
yuvj420p is almost the same as yuv420p. They are both YUV data with the same data structure and pixel layout. The only difference is the dynamic range represented in the data itself - a hangover from the old TV days:
yuvj420p is full range [0-255], while yuv420p is limited range [16-235] for luma (16-240 for chroma)
It should be possible to "hack" the decoder example or your ffmpeg code by simply changing the pixel format descriptor. This is what I have done when using the ffmpeg API, and it works fine. The only difference is a potential visual one: the blacks may look lifted and render as dark grey. If you notice this discrepancy and it matters to you, you can remap from one dynamic range to the other.
Or as DaneLLL says - you could get into gstreamer.
Thank you very much for your guidance. How can we hardware-decode an RTSP stream to get the video frames while keeping CPU usage low?