Render simple animated partial frames into 4K capture from RAW camera

Hi

I need to blend a frame buffer into a portion of a 4K 30 fps frame captured from a RAW-interfaced camera sensor. The frame buffer is a static background for blending, and on top of it there will be some basic 2D animations of gauge needles and text. I need to do this at 30 fps for the background render and at least 10 fps for the animated objects, whose values are received from an SPI-interfaced sensor. The result needs to be H.265 encoded and stored in an AVC container.

Can this be done with existing GStreamer plugins, or do I need to get down and dirty with CUDA, OpenGL, etc.?

What would be a typical pipeline for something like this?
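
For context, the shape I had imagined, if it can be done with plugins alone, is sketched below using GStreamer's C API. This is only a guess on my part: nvcamerasrc, nvivafilter, and omxh265enc are my assumptions about the Jetson element names (they vary between L4T releases), libmyoverlay.so stands in for a hypothetical custom CUDA hook library, and I've used Matroska as the container purely for illustration.

```cpp
// Sketch only: element names and the custom overlay library are assumptions.
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    // Capture 4K at 30 fps, run a custom CUDA overlay hook, encode H.265,
    // and mux to disk. nvivafilter ships with a sample hook
    // (libnvsample_cudaprocess.so); libmyoverlay.so is hypothetical.
    GstElement *pipe = gst_parse_launch(
        "nvcamerasrc ! video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1 ! "
        "nvivafilter cuda-process=true customer-lib-name=libmyoverlay.so ! "
        "video/x-raw(memory:NVMM),format=NV12 ! "
        "omxh265enc ! h265parse ! matroskamux ! filesink location=out.mkv",
        &err);
    if (!pipe) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipe, GST_STATE_PLAYING);

    // Block until an error or end of stream.
    GstBus *bus = gst_element_get_bus(pipe);
    gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    gst_element_set_state(pipe, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipe);
    return 0;
}
```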

Thanks.

peterxr95h,

I'm not sure I fully understand your use case. Do you need to render to a display and also encode the frames into an H.265 video?

Hi,
Please also share information about your sensor. We have camera modules from our partners which utilize I2C; a sensor using SPI would be a new case for us.

Thanks for answering, Wayne, and sorry my explanation was not so clear.

We will not be displaying anything from this camera, just recording it to disk as a context recording. There are three other cameras attached, but they are for basic object detection and are not to be recorded.

So the video we want to overlay is purely for saving to disk.

The sensors may not actually be on SPI; I was just mentioning that as an example. We used SPI in our previous 1080p version, where we attached an Atmel CAN processor to a TI DM368. The info I left out is that the CAN interface will be used for the connection to the vehicle, and of course we will be using the TX2's internal CAN. Capturing the sensor values from CAN or elsewhere seems to be the easy part. I already have the sensor-to-graphic code from the previous product, and it should only need tweaking for the differing colourspace; other than that it's a pretty straightforward port.

I guess what I really want to know is: how can I paint on the captured frame at 30 fps prior to encoding?
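
To make "paint" concrete, what I mean per pixel is just an alpha blend of an RGBA overlay into a rectangle of the captured frame, roughly like this minimal CUDA sketch (it assumes the frame has already been converted to packed RGBA; all names are placeholders of mine):

```cpp
#include <cuda_runtime.h>
#include <stdint.h>

// Alpha-blend an RGBA overlay into a rectangular region of an RGBA frame.
// frame and overlay are device pointers; pitches are in bytes.
__global__ void blendOverlay(uint8_t *frame, size_t framePitch,
                             const uint8_t *overlay, size_t overlayPitch,
                             int dstX, int dstY, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h)
        return;

    const uint8_t *src = overlay + y * overlayPitch + x * 4;
    uint8_t *dst = frame + (dstY + y) * framePitch + (dstX + x) * 4;

    int a = src[3];  // overlay alpha, 0..255
    for (int c = 0; c < 3; ++c)
        dst[c] = (uint8_t)((src[c] * a + dst[c] * (255 - a)) / 255);
}

// Host-side launch: one thread per overlay pixel.
void blendLaunch(uint8_t *frame, size_t framePitch,
                 const uint8_t *overlay, size_t overlayPitch,
                 int dstX, int dstY, int w, int h, cudaStream_t stream)
{
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    blendOverlay<<<grid, block, 0, stream>>>(frame, framePitch,
                                             overlay, overlayPitch,
                                             dstX, dstY, w, h);
}
```

Even a full 3840x2160 blend is only ~8 M pixels per frame, so 30 fps should be well within what the GPU can handle.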

Happy to share more with you offline as the customer is a little sensitive about their developments.

Does this camera buffer come from a DMA buffer? If so, could you try to leverage our MMAPI samples and use CUDA to do the overlay?

The MMAPI samples are based on CSI cameras and USB cams (V4L2), though.

The camera is connected via CSI.

I think you may have essentially answered my question: leverage the MMAPI sample and use CUDA.
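
For anyone following along, my reading of the CUDA post-process path in the MMAPI samples is roughly the following: the capture hands you a dmabuf fd, which gets wrapped as an EGLImage and mapped into CUDA. The header and function names below are what I see in the tegra_multimedia_api tree and may differ between releases; blendLaunch is the kernel launcher from my sketch above, and the buffer is assumed to already be pitch-linear RGBA (the samples do format conversion separately).

```cpp
#include <cuda.h>
#include <cuda_runtime.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <stdint.h>
#include "nvbuf_utils.h"   // NvEGLImageFromFd / NvDestroyEGLImage

// Launcher from the blend-kernel sketch earlier in the thread.
void blendLaunch(uint8_t *frame, size_t framePitch,
                 const uint8_t *overlay, size_t overlayPitch,
                 int dstX, int dstY, int w, int h, cudaStream_t stream);

// Map a captured dmabuf fd into CUDA, blend the overlay, unmap.
// Assumes a CUDA context already exists (e.g. from a prior runtime call)
// and that the buffer is pitch-linear RGBA.
void overlayOnFrame(EGLDisplay disp, int dmabuf_fd,
                    const uint8_t *d_overlay, size_t overlayPitch,
                    int dstX, int dstY, int w, int h)
{
    EGLImageKHR img = NvEGLImageFromFd(disp, dmabuf_fd);

    CUgraphicsResource res = NULL;
    cuGraphicsEGLRegisterImage(&res, img, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame eglFrame;
    cuGraphicsResourceGetMappedEglFrame(&eglFrame, res, 0, 0);

    blendLaunch((uint8_t *)eglFrame.frame.pPitch[0], eglFrame.pitch,
                d_overlay, overlayPitch, dstX, dstY, w, h, 0);
    cuCtxSynchronize();

    cuGraphicsUnregisterResource(res);
    NvDestroyEGLImage(disp, img);
}
```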

I was planning on having a thread render a frame buffer based on the CAN values and ping-pong the frame buffers to be blended into the camera's frames. Maybe I don't need to bother with the ping pong if using CUDA!?
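
What I mean by ping pong is something like the sketch below: the render thread draws the gauges into the back buffer at its own ~10 fps rate and publishes it with an atomic swap, while the 30 fps capture path blends from whichever buffer was published last. (A real implementation would also need to ensure the blend has finished before the renderer reuses a buffer, e.g. by syncing per frame or triple buffering; and if the overlay lives in a single device buffer that only CUDA touches, the ping pong may indeed be unnecessary.)

```cpp
#include <atomic>
#include <stdint.h>

// Two overlay buffers: the render thread fills one while the capture
// thread blends from the other. Names are placeholders of mine.
struct OverlayPingPong {
    uint8_t *buf[2];            // RGBA overlay buffers (device or pinned)
    std::atomic<int> front{0};  // index the capture thread reads from

    // Render thread: draw into the back buffer, then publish it.
    uint8_t *backBuffer() { return buf[front.load() ^ 1]; }
    void publish() { front.fetch_xor(1); }

    // Capture thread: the most recently published overlay.
    const uint8_t *frontBuffer() const { return buf[front.load()]; }
};
```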

I will look into this. Thank you for your support.

Cheers.