CSI camera support for TYPE31 format

Dear All,

I have a Jetson TX1 board and a MIPI CSI camera which also supports the TYPE31 format (user-defined data format).
What Linux driver changes do I have to make for this camera?

Also, my application won’t provide the width and height, only a maximum frame size.
The sensor can therefore send less than the requested maximum frame data; the driver should detect the actual amount of data received from the sensor and pass it to the application.

Any help greatly appreciated.

Thanks for your support.

Regards,
Titus S.

TX1 only supports RGB and YUV formats, and the width and height must be known before issuing the capture.

Thanks for your answer.
Okay, then I will calculate the width and height from the frame size as I did for the iMX6 processor.
Does the TX1 have an IPU (image processing unit) or any equivalent block?

Actually, we started developing this user-defined-format MIPI CSI camera on the iMX processor, but the iMX6’s IPU has some restrictions which cause image corruption in the received frames.

So we are moving to the NVIDIA TX1 platform; can you please provide any feedback on this?

Regards,
Titus S.

TX1 has an ISP; however, to receive data from the MIPI bus, the CSI/VI only supports Bayer/RGB/YUV data formats. What is TYPE31, and what kind of sensor is it?

TYPE31 is a user-defined data type format. We can define the image layout however we want.
To the TX1 it will look like raw data (junk); the application layer identifies and parses the required information from it.

On the iMX6, I configured the IPU for a RAW format to receive TYPE31 images and then passed them on to the application layer. That worked well.

Since the height and width of the TYPE31 frames, and also the actual camera frame size, are unknown, the application gives a maximum frame size to the driver.

For generic cameras, the frame size is calculated as height x width x depth, as below, and our driver does the same.
frame_size = width * height * 3; // for RGB24

The iMX MIPI driver and IPU take height and width as input to get the frame.

For our MIPI TYPE31 camera, we have tried to derive the width and height (which the IPU needs) from the maximum frame size that comes from the application.

As of now, we take the square root of the given maximum frame size and adjust it to the conditions below:

the numbers must be multiples of 8
and the numbers must be divisible by 4
Please refer to this post:

For example (success):
If we set the maximum frame size to 1440000, the driver sets the width and height as follows.
size = 1440000;
height = width = int_sqrt(size);
if (size % height)
    if (stride limitation)
        break;
    else
        <code to adjust width and height to be divisible by 8>
else
    <code to adjust width and height to be divisible by 8>

Finally, 1200 x 1200 = 1440000

If the camera also sends the 1440000 bytes of data that we requested, then we get the expected frame in the application.

But the iMX6 IPU will wait until the frame buffer is filled with the requested frame size in bytes.
If the camera sends a frame smaller than requested, an error occurs.

For example (failure):
If we set the maximum frame size to 1450000, the driver sets the width and height as follows.
size = 1450000;
height = width = int_sqrt(size); // i.e. integer square root
if (size % height)
    if (stride limitation)
        break;
    else
        <code to adjust width and height to be divisible by 8>
else
    <code to adjust width and height to be divisible by 8>

Finally, 1208 x 1208 = 1459264 (requested 1450000)
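
For clarity, here is a minimal standalone C version of the calculation above. It follows the pseudocode (integer square root, then rounding up to a multiple of 8, which also satisfies the divisible-by-4 condition); int_sqrt_u32() and round_up_to_8() are just helper names I use here, and the stride-limitation check is left out.

#include <stdio.h>

/* Integer square root (floor), simple Newton iteration. */
static unsigned int int_sqrt_u32(unsigned int n)
{
    unsigned int x = n, y = (x + 1) / 2;
    if (n < 2)
        return n;
    while (y < x) {
        x = y;
        y = (x + n / x) / 2;
    }
    return x;
}

/* Round up to the next multiple of 8 (therefore also divisible by 4). */
static unsigned int round_up_to_8(unsigned int v)
{
    return (v + 7) & ~7u;
}

int main(void)
{
    unsigned int sizes[] = { 1440000, 1450000 };
    for (int i = 0; i < 2; i++) {
        unsigned int size = sizes[i];
        unsigned int side = int_sqrt_u32(size);
        if (size % side)              /* not an exact square of the size */
            side = round_up_to_8(side);
        /* 1440000 -> 1200 x 1200 = 1440000 (exact)
         * 1450000 -> 1208 x 1208 = 1459264 (larger than requested) */
        printf("size %u -> %u x %u = %u\n", size, side, side, side * side);
    }
    return 0;
}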

Now the driver requests 1208 x 1208 frames, but the actual frame size is only 1450000 bytes of data.

My expectation is that if the camera sends a frame smaller than expected, the driver should pad it with zeros and send it to the application.
The application then truncates those zeros and uses the rest of the required data.
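
On the application side I would expect to read the driver-reported payload length from v4l2_buffer.bytesused after dequeuing, roughly like the fragment below (assuming streaming with mmap'ed buffers is already set up; whether the TX1 VI driver actually reports a short payload this way is exactly what I am asking).

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Dequeue one frame and return how many bytes the driver says it filled.
 * 'fd' is an open, streaming V4L2 capture device using mmap'ed buffers. */
static int dequeue_frame(int fd, unsigned int *valid_bytes)
{
    struct v4l2_buffer buf;

    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
        return -1;

    /* bytesused is the driver-reported payload length; any zero padding
     * beyond it (up to the requested maximum) can simply be ignored. */
    *valid_bytes = buf.bytesused;
    fprintf(stderr, "frame %u: %u valid bytes\n", buf.index, buf.bytesused);

    return ioctl(fd, VIDIOC_QBUF, &buf);   /* hand the buffer back */
}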

Question:
If our V4L2 application requests an 800x640 frame but the camera sends 640x480, how would the TX1 behave in this case?

If the sensor output width and height are different from the request, it will cause a timeout when receiving the data from the MIPI bus.

Hi titusece,

Has your issue been clarified and resolved?
Is any further suggestion required?

Thanks

Thanks.
I still have questions.
Can we use any value for the width and height?
Say 123x123 or 1204x1204?
If not, how can I confirm that it won’t work? Can you please point out the relevant section and chapter in the reference manual?
Thanks for your support.

Regards,
Titus S.

Yes, you can use any resolution. Another thing you can do is report a color format whose data size is the same as your user-defined data.
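
For example (just a sketch of the idea from the application side, not verified on TX1): request a 1-byte-per-pixel format such as GREY with a resolution whose product equals the user-defined payload size, so the capture path treats the data as opaque bytes. The 1200x1200 numbers reuse the earlier example.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Ask for a 1-byte-per-pixel format sized so that
 * width * height == user-defined payload size (1200 * 1200 = 1440000). */
static int set_opaque_format(int fd)
{
    struct v4l2_format fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 1200;
    fmt.fmt.pix.height      = 1200;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;   /* 8 bits per "pixel" */
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;

    return ioctl(fd, VIDIOC_S_FMT, &fmt);
}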

Thanks for your answer.
Our user-defined data is just a blob of data, and the Jetson ISP should not (and need not) process it at all.
It should pass the data through as-is.
Do we have any settings for that in the Jetson TX1 MIPI registers?

Can you please support us on this?

Regards,
Titus S.

Hi titusece,
You can refer to the old VI driver …/drivers/media/platform/soc_camera/tegra_camera/vi2.c; there is some code that sets the image_size based on the color format. See vi2_channel_capture_setup().
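
Roughly, the idea in that path is to derive the buffer/image size from the negotiated color format, something along these lines. This is only an illustration, not the actual vi2.c code; the function name and format list here are hypothetical, so please check the real vi2_channel_capture_setup() for the actual register programming.

#include <linux/videodev2.h>

/* Illustrative only: compute the image size the capture setup needs
 * from the negotiated pixel format and resolution. */
static unsigned int image_size_for(unsigned int pixelformat,
                                   unsigned int width, unsigned int height)
{
    unsigned int bpp;   /* bytes per pixel in memory */

    switch (pixelformat) {
    case V4L2_PIX_FMT_GREY:                 /* 8-bit raw / opaque bytes  */
        bpp = 1;
        break;
    case V4L2_PIX_FMT_UYVY:
    case V4L2_PIX_FMT_YUYV:                 /* packed YUV 4:2:2          */
    case V4L2_PIX_FMT_SRGGB10:              /* 10-bit Bayer, 16-bit slot */
        bpp = 2;
        break;
    case V4L2_PIX_FMT_RGB32:
        bpp = 4;
        break;
    default:
        return 0;                           /* not handled here          */
    }

    return width * height * bpp;
}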

Thanks.
But I need to know how to set the registers so that a user-defined data type is supported (a GENERIC type which the IPU will not process).

I don’t think that will help. I believe you need to set the correct image size to let the pixel parser get the raw data, and then post-process the user-defined data yourself.

Okay, thanks for your support.
I will let you know if I need any help.
Thanks again.

Hi titusece,

Were you able to capture using the Type 0x31?

-David