DRM_FORMAT_C8 palette?

I am looking for a memory-efficient DRM format. The name DRM_FORMAT_C8 reminds me of the good old palette modes of older graphics cards, i.e. one byte per pixel being an index into a palette that defines the colour in, for example, ARGB8888. Is there such a format with a palette indexed by a one-byte pixel value? Or is there some other single-byte-per-pixel DRM format available? I am looking at implementing a DRM overlay with a low memory-bandwidth demand. I require few colours and four alpha values, so 4 bits per pixel would be optimal (two for alpha and two for palette colour, or just 4 bits to index a 16-entry ARGB8888 palette). Is there anything like this available?
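To make the 4 bpp idea concrete, here is a small CPU-side sketch of the layout I have in mind. Note this is not a DRM API and the palette, helper names, and colours are all hypothetical; it only illustrates packing two 4-bit palette indices per byte and expanding them to ARGB8888, as a CPU or blit fallback would.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical 16-entry palette of ARGB8888 colours (4-bit pixel indices).
static uint32_t palette[16] = { 0x00000000, 0xFF0000FF, 0xFF00FF00, 0xFFFF0000 };

// Pack two 4-bit indices into one byte: even pixel in the low nibble.
uint8_t pack2(uint8_t even, uint8_t odd) {
    return (even & 0x0F) | (uint8_t)((odd & 0x0F) << 4);
}

// Expand a 4 bpp indexed row into ARGB8888.
std::vector<uint32_t> expand_row(const uint8_t* src, uint32_t width) {
    std::vector<uint32_t> out(width);
    for (uint32_t x = 0; x < width; ++x) {
        uint8_t byte = src[x / 2];
        uint8_t idx = (x & 1) ? (byte >> 4) : (byte & 0x0F);
        out[x] = palette[idx];
    }
    return out;
}
```

A 1920-pixel row would then take 960 bytes instead of the 7680 bytes that ARGB8888 needs.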

The implementation is in


For the video plane, supported formats are listed in NvBufGetDrmParams(). For the UI plane, DRM_FORMAT_ARGB8888 is supported. DRM_FORMAT_C8 may not work. Are you able to use the existing implementation?

What I want to achieve is to draw some simple graphics on an overlay over the entire screen, with some videos playing in the background. My idea was to use the NvDrmRenderer, but DRM_FORMAT_ARGB8888 uses a lot of memory bandwidth.

Is there some other API that is more appropriate for an overlay? Or is there any info on how to set up a plane directly via hardware registers?

I can perform the actual drawing without support, as long as I have access to the memory area. I also want to be able to double-buffer the overlay to avoid any flickering.

I did try the NvDrmRenderer with DRM_FORMAT_ARGB4444, hoping to save some memory bandwidth. It seems, however, that it still operates in DRM_FORMAT_ARGB8888. It did accept creating frame buffers with DRM_FORMAT_ARGB4444 and reported a matching pitch, but the rendering was wrong.

Do I have to configure the NvDrmRenderer to use DRM_FORMAT_ARGB4444 in some way other than creating frame buffers with that format? Or is some configuration needed outside of the renderer?

We have verified DRM_FORMAT_ARGB8888; other formats may not work properly. So you have modified the format in ui_render_loop_fcn() and got a negative result? And what is the resolution of the video plane? If it hits a bandwidth limitation, you probably have a 4K video plane?

I have the Ubuntu desktop in the background at 1920x1080 @ 60 Hz. I use the DRM renderer to output up to three additional planes on top of that.

Apart from that, I also have a second HDMI display at 1920x1080 @ 60 Hz. I am hoping to use the remaining two planes on this.

Perhaps the bandwidth is not the key problem, but when enabling several planes there are flickering red lines which seem to correlate with the mouse pointer (the flickering is more intense close to the pointer). It is merely a guess that this is a bandwidth-related problem. How much bandwidth is available for screen updates?
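As a back-of-the-envelope check (the actual isochronous display bandwidth available on the SoC depends on the memory controller configuration, which I do not know), the raw scanout bandwidth per plane can be estimated as width × height × bytes-per-pixel × refresh rate:

```cpp
#include <cassert>
#include <cstdint>

// Rough scanout bandwidth of one plane, in bytes per second.
// Ignores blanking intervals and any compression.
uint64_t plane_bandwidth(uint32_t w, uint32_t h, uint32_t bpp, uint32_t hz) {
    return (uint64_t)w * h * bpp * hz;
}
```

A 1080p60 ARGB8888 plane works out to about 498 MB/s, and each ARGB4444 plane would halve that, which is why the 16 bpp format is attractive here.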

I use the following test-code:

uint32_t ui_w = 1920;
uint32_t ui_h = 1080;
struct drm_tegra_hdr_metadata_smpte_2086 metadata = {}; // zero-init; HDR unused
NvDrmRenderer* r;
r = NvDrmRenderer::createDrmRenderer("renderer0", ui_w, ui_h, 0, 0, 0, 0, metadata, false);
NvDrmFB ui_fb[4];
r->createDumbFB(ui_w, ui_h, DRM_FORMAT_ARGB8888, &ui_fb[0]);
// Draw some graphics in the FB here
// Source rectangle is in 16.16 fixed point, hence the << 16
r->setPlane(ui_plane_index, ui_fb[0].fb_id, 0, 0, ui_w, ui_h, 0, 0, ui_w << 16, ui_h << 16);

It works as expected with DRM_FORMAT_ARGB8888, but with DRM_FORMAT_ARGB4444 it still renders using the ARGB8888 format (or at least 32 bpp instead of 16 bpp).

ui_fb[0].bo[0].pitch == 7680 for DRM_FORMAT_ARGB8888
ui_fb[0].bo[0].pitch == 3840 for DRM_FORMAT_ARGB4444

This is as expected, indicating that the FB is created OK. But when I draw some pixels, it seems the rendering uses 32 bpp in both cases.
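The reported pitches match a tightly packed, unpadded buffer, i.e. pitch = width × bytes-per-pixel:

```cpp
#include <cassert>
#include <cstdint>

// Expected pitch of a tightly packed buffer (no row padding assumed).
uint32_t expected_pitch(uint32_t width, uint32_t bytes_per_pixel) {
    return width * bytes_per_pixel;
}
```

So 1920 × 4 = 7680 for ARGB8888 and 1920 × 2 = 3840 for ARGB4444, exactly the values createDumbFB reports.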

Furthermore, the left half of the screen is covered by a plane that is half the width (i.e. 1920/2), while the right half of the screen is covered by a copy of the same overlay, offset by one row.

My guess would be that this happens because the renderer reads 4 bytes per pixel while the framebuffer is laid out with 2. The right half is rendered by overscanning the framebuffer, yet the scan of the next line starts at the correct position in the buffer. So it seems the renderer's memory pointer is initialized for each line, and that this initialization is correct according to the framebuffer pitch, but for each pixel the address is simply incremented by 4 bytes, so one rendered screen line consumes two framebuffer lines. The renderer seems to be configured with the correct stride but the wrong pixel format. Does the pixel format used for rendering have to be configured in some other way? Perhaps the rendering hardware is not automatically configured with the pixel format of the FB?
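The address arithmetic supports this guess. If we model a scanout engine that restarts each line at y × pitch but steps 4 bytes per pixel over a buffer whose pitch was computed for 2 bytes per pixel (a simplified model of my hypothesis, not anything from the actual driver):

```cpp
#include <cassert>
#include <cstdint>

// Byte offset the engine would read for screen pixel (x, y):
// each line restarts at y * pitch, then advances engine_bpp bytes per pixel.
uint32_t scan_offset(uint32_t x, uint32_t y, uint32_t pitch, uint32_t engine_bpp) {
    return y * pitch + x * engine_bpp;
}
```

With pitch = 3840 (ARGB4444) and engine_bpp = 4, screen pixel (960, 0) lands at byte 3840, the start of framebuffer line 1, so the right half of each screen line shows the next framebuffer row: exactly the half-width plane plus one-row-offset copy I am seeing. One full screen line of 1920 pixels spans 7680 bytes, i.e. two framebuffer lines, while the next screen line still starts at the correct offset 3840.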

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.

So you have executed the steps

and still hit bandwidth limitation?

Hey, sorry to open up this old thread, but I have a question about this.

What do you mean by one format being supported for UI and the other for video? What is the difference between showing something as UI and showing something as video in the implementation?

For my project, I’d like to display RGBA data as video via DRM. At first I wanted to convert to NV12 before using enqueBuffer on NvDrmRenderer, but then I first tried to modify the format statement you are referring to so that it accepts my buffer format, RGBA. It seems to work fine, so what is the disadvantage of using RGBA with DRM for video output? Is it performance, or what is it?
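One concrete difference I can see, regardless of what else the pipeline assumes, is memory footprint and bandwidth: NV12 stores a full-resolution luma plane plus a half-resolution interleaved chroma plane, so it needs 1.5 bytes per pixel versus 4 for RGBA. A quick sanity check (plain arithmetic, no driver assumptions):

```cpp
#include <cassert>
#include <cstdint>

// Bytes per frame: RGBA is 4 bytes/pixel; NV12 is a full-size Y plane
// plus a half-height UV plane, i.e. 3/2 bytes per pixel.
uint64_t frame_bytes_rgba(uint32_t w, uint32_t h) { return (uint64_t)w * h * 4; }
uint64_t frame_bytes_nv12(uint32_t w, uint32_t h) { return (uint64_t)w * h * 3 / 2; }
```

At 1920x1080 that is roughly 8.3 MB per RGBA frame versus 3.1 MB per NV12 frame, so an RGBA video path reads and writes more than 2.5x the data per frame even if the formats are otherwise handled identically.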

I am new to DRM Rendering and my knowledge comes from Nvidia examples mostly, so maybe I don’t understand some details in how buffer data is processed.

Thanks for your help!

Please check this sample:
Jetson Linux API Reference: 08_video_dec_drm

There is an option --disable-ui. You can check how it is implemented to get more information.