Buffer types / layouts on Tegra TX2/TX1: block linear / pitch / raw

Hi Nvidia,

I am new to Tegra programming (MMAPI).
I want to understand the data flow and buffer usage while recording from a camera or encoding a YUV file.

Questions:

  1. I came across different memory layouts: block linear, pitch and raw.
    a) What is the difference between them?
    b) What does it mean when the output layout is block linear or pitch?
    c) What does it mean when the capture layout is block linear or pitch?
    I went through the sample code /tegra_multimedia_api/samples/07_video_convert
    and also the nvl4t_docs HTML documentation, but I am still confused.

  2. While encoding from a YUV file, what should the output layout and capture layout be?

  3. While encoding from the camera (capturing raw), what should the output layout and capture layout be?

Hi meRaza,
The output from the Tegra HW engines is block linear. If you would like to do post-processing on the CPU/GPU, you need NvVideoConverter to convert it to pitch. You may compare 00_video_decode and 02_video_dec_cuda.
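
As a rough illustration of what 07_video_convert and the decode samples do, here is a minimal sketch of configuring the two planes of NvVideoConverter, assuming the tegra_multimedia_api headers; the pixel formats, resolution and error handling below are simplified example assumptions, not the exact sample code:

    // Sketch only: convert block-linear frames from a HW engine into
    // pitch-linear frames that the CPU/GPU can read directly.
    #include <cstdint>
    #include "NvVideoConverter.h"

    static NvVideoConverter *create_bl_to_pitch_converter(uint32_t w, uint32_t h)
    {
        NvVideoConverter *conv = NvVideoConverter::createVideoConverter("conv0");
        if (!conv)
            return nullptr;

        // Output plane = what is fed IN to the converter:
        // block-linear NV12, e.g. buffers produced by the decoder/ISP.
        // Capture plane = what comes OUT of the converter:
        // pitch-linear YUV420, readable by the CPU/GPU.
        if (conv->setOutputPlaneFormat(V4L2_PIX_FMT_NV12M, w, h,
                                       V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR) < 0 ||
            conv->setCapturePlaneFormat(V4L2_PIX_FMT_YUV420M, w, h,
                                        V4L2_NV_BUFFER_LAYOUT_PITCH) < 0)
        {
            delete conv;
            return nullptr;
        }
        return conv;
    }

The buffer queueing/dequeueing and the dmabuf plumbing are omitted here; 07_video_convert shows the complete loop.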

Hi Danel,

I have to capture 4K raw YUV video from the camera, downscale it to 1080p or 720p, etc., encode it using the Tegra HW engines, and then packetize and send the packets using RTMP.

So my questions are:

  1. What memory layout is supported for input to the Tegra HW engines?
    What memory layout should be used for the output plane and the capture plane?

Sure, I will check 00_video_decode and 02_video_dec_cuda as per your suggestion.
Could you kindly reply to the questions above and in my previous post?

Hi meRaza,
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-rtmpsink.html

RTMP is implemented in the GStreamer framework. Would you consider using it?

Hi Danel,

I would prefer to use the Tegra Multimedia APIs for encoding and then stream the encoded data over RTMP (via an RTMP publish API). Which library is better, GStreamer or some other open-source option?

Hi meRaza,
If your source is a Bayer sensor (like the onboard OV5693), you can refer to
tegra_multimedia_api/samples/10_camera_recording

If your source is a YUV sensor/USB camera via V4L2, you can refer to
tegra_multimedia_api/samples/12_camera_v4l2_cuda
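
In either sample the captured frames eventually go to NvVideoEncoder. As a rough sketch of how its two planes are typically configured, with the resolution, bitrate and bitstream buffer size below being example assumptions rather than values taken from the samples:

    // Sketch only: an H.264 encoder whose output plane takes YUV frames
    // and whose capture plane returns the encoded bitstream.
    #include <cstdint>
    #include "NvVideoEncoder.h"

    static NvVideoEncoder *create_h264_encoder(uint32_t w, uint32_t h)
    {
        NvVideoEncoder *enc = NvVideoEncoder::createVideoEncoder("enc0");
        if (!enc)
            return nullptr;

        // Capture plane (encoded H.264 OUT) must be set before the output
        // plane format; 2 MiB is an assumed bitstream buffer size.
        if (enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, w, h, 2 * 1024 * 1024) < 0 ||
            // Output plane = the YUV frames fed IN to the encoder.
            enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, w, h) < 0 ||
            enc->setBitrate(4 * 1024 * 1024) < 0)  // ~4 Mbit/s, example value
        {
            delete enc;
            return nullptr;
        }
        return enc;
    }

The encoded buffers dequeued from the encoder's capture plane are what you would then hand to your RTMP packetizer.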