HW Video Processing

I have a USB3 camera that provides raw Bayer data. Is there any way to use the hardware video processing provided by the ISP to handle debayering, gains, gamma, etc.? In theory yes: the ISP should be able to read data from memory, but I don’t know if the code to do it is available anywhere.

Thank you,
David

When capturing frames from the camera, you have to specify the data type. Have you looked at the vx_df_image_e reference in the documentation?

Sorry, I missed that answer. The camera I have comes with its own driver library, and only RAW is available from the camera, which means I have to handle the processing on the Tegra. vx_df_image_e is for VisionWorks, right? Do you think VisionWorks will use the ISP (image signal processor) for hardware acceleration? I had the impression it would only use the GPU. It looks like the GStreamer plugins might use the ISP.

I too was wondering this. A GStreamer element that streams image data from memory and uses the ISP’s memory-to-memory interface to perform debayering (and maybe other image adjustments) would be nice. It stinks having an ASIC designed for debayering sitting idle while we thrash the CPU to do it.
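To make concrete what the CPU ends up doing while the ISP sits idle, here is a minimal sketch of a naive nearest-neighbor demosaic of an RGGB Bayer mosaic in NumPy. The function name, the RGGB pattern, and the even-dimension assumption are all illustrative, not tied to any particular camera or driver; a real pipeline would use a proper interpolating demosaic (and this per-pixel work is exactly what the ISP is built to offload).

```python
import numpy as np

def debayer_rggb_nearest(raw: np.ndarray) -> np.ndarray:
    """Expand an H x W RGGB mosaic (H, W even) into an H x W x 3 RGB
    image by replicating each 2x2 cell's samples (nearest-neighbor)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=raw.dtype)
    # RGGB layout per 2x2 cell:
    #   R  G
    #   G  B
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b = raw[1::2, 1::2]
    # Average the two green samples in a wider dtype to avoid overflow.
    g = ((g1.astype(np.uint16) + g2.astype(np.uint16)) // 2).astype(raw.dtype)
    # Replicate each quarter-resolution color plane back to full size.
    rgb[..., 0] = np.repeat(np.repeat(r, 2, axis=0), 2, axis=1)
    rgb[..., 1] = np.repeat(np.repeat(g, 2, axis=0), 2, axis=1)
    rgb[..., 2] = np.repeat(np.repeat(b, 2, axis=0), 2, axis=1)
    return rgb
```

Even this crude version touches every pixel of every frame on the CPU, which is the cost we are trying to move onto the ISP.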

Hi,

If I can get access to the API, I can create an ISP GStreamer element for you. Where can I find the API?

-David

@David, that’s the catch. I would create one too if API docs existed like they do for the OMAP ISP (nice to see RidgeRun working with the TX1, though).

As far as I’ve seen in the documentation, the ISP details are not public. There is some reference information on the ISP architecture that suggests memory-to-memory streaming is supported, but it says the implementation details are not public.

Yep, trying to help with our GStreamer knowledge :)

I see. The same happened with Qualcomm and the Snapdragon: they keep the ISP closed. I hope NVIDIA will make the documentation public at some point.

I suppose that at least if you use a “smart sensor” with an ISP built in, it would be possible to capture YUV directly; is that your understanding too? I have several questions about the capture part. Could you check them and see if you have any ideas?

https://devtalk.nvidia.com/default/topic/898129/enabling-camera-on-jetson-tx1-board/?offset=22#4838539

Thanks

It seems like, for the most part, support is there for handling the CSI port setup and feeding data to the ISP (NVIDIA calls it the VI3 / VI / video input unit). Chapter 29 of the TRM describes the CSI port, and Chapter 31 goes over the VI module, but it pretty explicitly says that the implementation details are not public.

Browsing through the kernel sources and looking at commits by the NVIDIA team is illustrative and may yield the necessary insight into what is required to implement another camera driver. That said, I’m not sure we have enough information on the VI to develop memory-to-memory streaming support.