I am new to Tegra programming (MMAPI).
I want to understand the data flow and buffer usage while recording from a camera or encoding a YUV file.
Questions:
I came across different memory layouts: block linear, pitch, and RAW.
a) What is the difference between them?
b) What does it mean when the output layout is block linear or pitch?
c) What does it mean when the capture layout is block linear or pitch?
I went through the sample code /tegra_multimedia_api/samples/07_video_convert
and also the nvl4t_docs HTML documentation, but I am still confused.
While encoding from a YUV file, what should the output layout and capture layout be?
While encoding from the camera (capturing RAW), what should the output layout and capture layout be?
Hi meRaza,
The output from Tegra HW engines is block linear. If you would like to do post-processing via the CPU/GPU, you need NvVideoConverter to convert it to pitch. You may compare 00_video_decode and 02_video_dec_cuda.
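Here is a minimal sketch of that conversion setup, based on 07_video_convert. The 1080p size and YUV420M format are only examples, and the exact signatures should be checked against NvVideoConverter.h in your release:

```cpp
// Sketch: convert block-linear buffers (from a HW engine) to pitch-linear
// buffers that CPU/CUDA code can read. Resolution/format are placeholders.
#include "NvVideoConverter.h"

int main()
{
    NvVideoConverter *conv = NvVideoConverter::createVideoConverter("conv0");
    if (!conv)
        return -1;

    // Output plane: receives block-linear YUV420 from the HW engine.
    conv->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, 1920, 1080,
                               V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR);

    // Capture plane: delivers pitch-linear YUV420 for CPU/GPU post-processing.
    conv->setCapturePlaneFormat(V4L2_PIX_FMT_YUV420M, 1920, 1080,
                                V4L2_NV_BUFFER_LAYOUT_PITCH);

    // ... set up plane buffers, stream on, and queue/dequeue frames
    //     as shown in 07_video_convert ...

    delete conv;
    return 0;
}
```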
I have to capture 4K raw YUV video from the camera, downscale it to 1080p or 720p, etc., encode it using the Tegra HW engines, and then packetize the result and send the packets over RTMP.
So my questions are:
What memory layout is supported as input to the Tegra HW engines?
What memory layout should be used for the output plane and the capture plane? (A rough sketch of the setup I have in mind is below.)
Sure, I will check 00_video_decode and 02_video_dec_cuda as per your suggestion.
Could you kindly reply to my questions above and in my previous post?
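For reference, the encoder plane setup I was planning to try looks roughly like this, following 01_video_encode. The 720p size, bitrate, and bitstream buffer size are just placeholders on my side; please correct me if the layout assumptions are wrong:

```cpp
// Rough sketch of my planned encoder configuration, adapted from 01_video_encode.
// Sizes and bitrate are placeholders.
#include "NvVideoEncoder.h"

int main()
{
    NvVideoEncoder *enc = NvVideoEncoder::createVideoEncoder("enc0");
    if (!enc)
        return -1;

    // Capture plane: compressed H.264 bitstream produced by the encoder
    // (block-linear/pitch layout does not apply to compressed data).
    enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, 1280, 720, 2 * 1024 * 1024);

    // Output plane: the raw YUV frames I feed in after downscaling from 4K.
    enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, 1280, 720);

    enc->setBitrate(4 * 1024 * 1024);

    // ... set up plane buffers, stream on, and queue/dequeue frames
    //     as shown in 01_video_encode ...

    delete enc;
    return 0;
}
```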
I would prefer to use the Tegra Multimedia APIs for encoding and then stream the encoded data over RTMP (via an RTMP publish API). Which library is better for this, GStreamer or some other open-source option?