David> This is possible. In general there are two ways to capture with the Tegra X1, both using GStreamer:
*nvcamerasrc element: This element will pass the data through the ISP and convert it from bayer to YUV suitable for the encoders.
*v4l2src element: This element basically does what you need: it bypasses the ISP and gives you the data in bayer. In order to get it working with your custom camera you need to create a V4L2 media controller driver that integrates with the VI driver provided by NVIDIA. There is good information about how to do this in the documentation NVIDIA provides with JetPack, as well as in the L4T Documentation package that you can download from [1]. We have also created several drivers for different cameras that may speed up your development process [2][3], and we can create your custom driver as well. See the example pipelines below.
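To make the difference concrete, here are two minimal example pipelines (a sketch only; the exact caps, sensor-id and device node depend on your sensor and driver, so take them as a starting point):

gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420' ! nvoverlaysink

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-bayer, format=rggb, width=1920, height=1080, framerate=30/1' ! fakesink

The first one goes through the ISP and gives you YUV in NVMM memory; the second one pulls bayer straight from the capture driver.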
How many sensors are you going to use?
David> Yes, V4L2 is what you need to use. We have tested up to six 2-lane cameras at 1080p 15fps with v4l2src, all running at the same time and capturing RAW (bayer). You can find some pipelines in the wikis above. We used the J20 board from Auvidea plus the Jetson board.
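As a rough sketch of what a multi-camera RAW capture can look like (my own example; the device nodes, bayer format and resolution are assumptions and depend on your drivers):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-bayer, format=rggb, width=1920, height=1080, framerate=15/1' ! filesink location=cam0.raw v4l2src device=/dev/video1 ! 'video/x-bayer, format=rggb, width=1920, height=1080, framerate=15/1' ! filesink location=cam1.raw -e

As far as I know v4l2src only negotiates 8-bit bayer caps; if your driver exposes 10/12-bit RAW you can sanity-check the capture path with v4l2-ctl instead, for example:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --stream-mmap --stream-count=100 --stream-to=cam0.raw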
David> This sounds good. You could wrap your logic into a GStreamer element, which will let you run multiple tests with and without the GPU or CPU logic included in the pipeline. You can, for instance, look into the nvivafilter element provided by NVIDIA; it might help you or at least give you an idea of where to put your GPU logic. RidgeRun can also work on this, or you can grab the frames with V4L2 and then send them to your algorithm in your application. I am not sure whether you need to encode or mux the data at some point, which is why I recommend the GStreamer-based approach, so you can take advantage of the elements already available.
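Just as an illustration of the nvivafilter idea (a sketch based on NVIDIA's sample CUDA library that ships with L4T; property and library names may differ slightly on your release):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=I420' ! nvivafilter cuda-process=true customer-lib-name=libnvsample_cudaprocess.so ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvoverlaysink

You would replace the sample library with one implementing your own CUDA kernels, and you can add or remove that element from the pipeline to compare runs with and without the GPU stage.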
If you are storing the RAW data I definitely recommend running a proof of concept on the maximum bandwidth available when writing to your storage device. If the frame resolution is high, even low framerates will produce a huge amount of data to write to the storage device. An SSD is recommended. You will likely need to tune the kernel settings for this; you can follow the advice on this wiki:
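As a rough sketch of that proof of concept (my own numbers, assuming a 1920x1080 bayer stream stored as 16 bits per pixel): 1920 x 1080 x 2 bytes is about 4.1 MB per frame, so even 15 fps is roughly 62 MB/s per camera, and six cameras would need around 370 MB/s sustained. You can get a quick idea of the sustained write speed of your storage device with something like:

dd if=/dev/zero of=/path/to/ssd/testfile bs=1M count=2048 oflag=direct

(the output path is a placeholder for your SSD mount point; oflag=direct bypasses the page cache so the number is closer to the real disk bandwidth).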
David> NVIDIA includes a couple of nice tables describing the framerates achievable for the different resolutions and lane counts. You can check them in the Technical Reference Manual that you can download from [1]. If you need to create your own driver for the camera, I recommend basing it on V4L2 media controller, which is supported in the latest JetPack 2.4/L4T R24.2, since the old V4L2 driver based on SoC Camera can show framerate problems, as you can read in [4]. In [4] you can also find the page numbers for the tables I am talking about.
If you are using 2-lane sensors I think you can achieve 1080p 30fps; for higher resolutions I recommend 4-lane sensors. In any case, you can do the math with the information provided in [4] and the datasheet of your sensor. Which camera are you going to use? Maybe we already have the driver for it.
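As a rough example of that math (my own assumptions: a sensor driving each CSI lane at about 900 Mbps and outputting 10-bit bayer): 2 lanes x 900 Mbps is 1.8 Gbps on the wire, or roughly 180 Mpixel/s; a 1080p frame is about 2.07 Mpixel plus blanking, so on paper the link has headroom well beyond 30 fps. In practice the sensor's readout modes and the driver are the real limit, which is why the tables in [1] and the discussion in [4] are the numbers to check against your sensor's datasheet.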
Hope this helps,
-David
[1] https://developer.nvidia.com/embedded/downloads
[2] Sony IMX219 Linux driver for Jetson TX1 - RidgeRun Developer Wiki
[3] Galileo2 module driver for Tegra X1 - RidgeRun Developer Wiki
[4] Has someone tried to capture more than 80fps 1080p? - Jetson TX1 - NVIDIA Developer Forums