Custom ext_dev driver

Hi,
we have implemented our own ext_dev driver to add support for a MAXIM deserialiser not supported by the nVIDIA Drive platform - and we are trying to transmit RGB data directly.

I managed to get to the point where I have successfully established a video link (at least the MAX9286 seems happy about it), and I know it is working because, by changing the timing of our video signal (between rows), I can get the Drive to complain 30 times per second that there are not enough or too many pixels in a row (and I found a setting in the middle where the nVIDIA side does not complain).

However, I am not receiving frames - NvMediaEglStreamProducerGetImage returns a timeout - and after bashing my head against this for two weeks I cannot see any obvious way to find out why it is not working, as the entire process between initialising the serialiser/deserialiser and getting a frame seems completely opaque.

Could you please suggest some troubleshooting steps?

Regards,
Piero

Dear p.filippin,

Please refer to the link below for your topic. Thanks.

https://devtalk.nvidia.com/default/topic/1036460/general/how-to-use-eglstream-to-support-multiple-producers-and-consumers-on-px2/

Hi SteveNV,
sorry - I fail to see how the link you posted is relevant to my problem?

Piero

Dear p.filippin,

May I know which type of sensor is used in your system, YUV or RCCB?
The first checkpoint is whether the sample application launches with the new sensor driver; if every step is correct, the application shows a preview.
If possible, I would like to know in detail which files you added/modified and how you built them.
Thanks.

Dear SteveNV,
the device I am using is a MAX9291 connected to an HDMI source. I can feed RGB or YUV, and I have great freedom in setting timings, resolution, pixel clock etc.
For the sake of this exercise I am trying to reproduce something that is as close to the Sekonix camera as possible (as the code gives me a lot of clues about what is expected).

I have implemented a “driver” for the couple MAX9286/MAX9291 based on the ref_max9286_96705_ar0231_rccb code, compiled into a custom libnv_extimgdev.so

My code “works” - the MAX9286 manages to connect and initialise the MAX9291, and it locks on a video stream (NvMediaISCCheckLink returns NVMEDIA_STATUS_OK).

At that point, depending on the timing of my signal, I get on the console either PIXEL_SHORT_LINE or PIXEL_LONG_LINE 30 times per second (my FPS) - and by trial and error I found exactly the timing that makes the nVIDIA side “happy”.

After that - the video stream is completely outside the control of my code and seems to go inside an nVIDIA “black box”.

On the side where the video stream should come out of the “black box”, I have modified the ipp_yuv sample (aptly renamed to ipp_rgb) to display RGB data, loading my own libnv_extimgdev.so.

I set a producer and a consumer, but in the producer thread the function NvMediaIPPComponentGetOutput returns NVMEDIA_STATUS_TIMED_OUT.
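
For reference, the producer thread is essentially the stock sample loop; a trimmed sketch of what it does (threadData and GET_OUTPUT_TIMEOUT_MS stand in for the sample's bookkeeping, and the exact NvMediaIPPComponentGetOutput argument order is from memory, so treat it as an approximation):

    /* Producer loop as in the ipp samples (sketch, not verbatim). */
    NvMediaIPPComponentOutput output;
    while (!*threadData->quit) {
        NvMediaStatus status = NvMediaIPPComponentGetOutput(
                threadData->outputComponent,  /* IPP output component */
                GET_OUTPUT_TIMEOUT_MS,        /* e.g. 100 ms */
                &output);                     /* receives the frame */
        if (status == NVMEDIA_STATUS_TIMED_OUT)
            continue;   /* <- this is all we ever hit */
        if (status != NVMEDIA_STATUS_OK)
            break;
        /* ...post output.image to the EGLStream producer... */
        NvMediaIPPComponentReturnOutput(threadData->outputComponent, &output);
    }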

The ipp_rgb application opens a window and prints a lot of debug output (from both my own libnv_extimgdev.so and ipp_rgb), but because the producer fails, the window content is never painted.

There is clearly something wrong in what I am doing (and in my video stream), but
a) I do not get any error message
b) I have no way to see what the nVIDIA code is expecting

so - how can I troubleshoot this? I have now spent two weeks on this problem, and without even an indication that I am moving in the right direction there is no way I can find where the problem is.

Piero

Dear p.filippin,

The MAX9291 is a serializer IC from HDMI to GMSL, and you have written your own driver file for it.

  • I think you use the MAX9286 to deserialize, the same as on the reference board, so the Tegra side uses the same component and sees the same conditions. I assume the data flow is: sensor output HDMI -> MAX9291 (convert to GMSL) -> MAX9286 (convert to CSI) -> Tegra CSI-Rx. Please let me know if this is wrong.
  • You mentioned you can feed RGB or YUV. This means the new sensor module is similar to a YUV sensor, so I think you picked the wrong reference code. Please refer to the ref_max9286_9271_ov10635.c/h files instead.
  • Could you let me know if you have implemented all the drivers? We need three drivers (sensor, serializer, de-serializer - the isc_*.c/h files under drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/drv/) plus the module driver (under drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/img_dev/maxim/); a hypothetical skeleton of the module-driver entry point is sketched after this list.
     I think the following driver code is the better reference, because that sensor is able to feed RGB or YUV instead of `RCCB`:
     Sensor driver : drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/drv/isc_ov10635.c/h
     Serializer driver : drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/drv/isc_9271.c/h
     De-serializer driver : drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/drv/isc_max9286.c/h
     Module driver : drive-t186ref-linux/samples/nvmedia/ext_dev_prgm/img_dev/maxim/ref_max9286_9271_ov10635.c/h
    
  • The new sensor module needs to be added in dev_list.c/h. This is a sample for a new module (the ++ line marks the addition):

    ImgDevDriver *
    ImgGetDevice(char *moduleName)
    {
        NvS32 i;
        ImgDevDriver *devices[MAX_IMG_DEV_SUPPORTED] = {
            GetDriver_ref_max9286_9271_ov10635(),
            GetDriver_ref_max9286_9271_ov10640(),
            GetDriver_c_max9286_9271_ov10640lsoff(),
            GetDriver_c_max9286_9271_ov10640(),
            GetDriver_d_max9286_9271_mt9v024(),
            GetDriver_ref_max9286_96705_ar0231rccb(),
            GetDriver_ref_max9286_96705_ar0231(),
    ++      GetDriver_sample_max9286_9271_ov10635(),
            GetDriver_tpg()
        };

    typedef enum {
    REF_MAX9286_9271_OV10635,
    REF_MAX9286_9271_OV10640,
    C_MAX9286_9271_OV10640LSOFF,
    C_MAX9286_9271_OV10640,
    D_MAX9286_9271_MT9V024,
    REF_MAX9286_96705_AR0231RCCB,
    REF_MAX9286_96705_AR0231,
    REF_MAX9288_96705_OV10635,
    M_MAX9288_96705_AR0140,
    ++SAMPLE_MAX9286_9271_OV10635,
    TPG,
    MAX_IMG_DEV_SUPPORTED,
    } ImgDevicesList;

  • The next checkpoint is the configuration file and the kernel device tree. I think this is done, but please confirm it. A new sensor needs a modified or added “.conf” file such as “ov10635_t186.conf”.
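
For illustration, a hypothetical skeleton of the module-driver entry point that dev_list.c picks up (the ImgDevDriver field names below are my assumption - please verify them against img_dev.h):

    /* Sketch of a new module driver in img_dev/maxim/ (field names assumed). */
    static ImgDevDriver device = {
        .name             = "sample_max9286_9271_ov10635",
        .Init             = Init,     /* programs deserializer, serializer, sensor */
        .Deinit           = Deinit,
        .Start            = Start,    /* starts streaming on the GMSL link */
        .RegisterCallback = RegisterCallback,
        .GetError         = GetError,
    };

    ImgDevDriver *
    GetDriver_sample_max9286_9271_ov10635(void)
    {
        return &device;
    }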

I suggest the following setup to verify the driver:
Sensor output: YUV
ipp_yuv should then be able to preview the camera frames.

Hi Steve,
you are correct - I am using the MAX9286 because that’s inside the PX2 and I have no choice.

  1. Yes - it is sensor output HDMI → MAX9291 (convert to GMSL) → MAX9286 (convert to CSI) → Tegra CSI-Rx.

I think my issue is in the link CSI → Tegra CSI-Rx, which is what I called the “black box”.

  2. I used the ref_max9286_96705_ar0231_rccb because the max96705 seems more similar to the MAX9291. The driver sets up a communication channel between the max9286 deserialiser and the serialiser (MAX9291 in my case), as it is the max9286 that sets up the parameters in the MAX9291 through the control channel in the GMSL cable. This is working.

The supplied drivers also set up a channel between the max9286 and the camera sensor (through the serialiser) so the max9286 can change settings in the sensor - that is not needed in my case (the HDMI channel is programmed separately).

However - I will double check the differences between ref_max9286_96705_ar0231_rccb and ref_max9286_9271_ov10635 to see if I missed something.

  3. yes, I did all of that - my code is called when launching the various camera applications.

  4. I have done that as well

  5. done - I modified the .conf file so my driver is instantiated - that is working correctly as well

I added a lot of debug code - I am checking the status of every call, printing the result and dumping all the control channel writes on the console, and I can also connect to both the serialiser and deserialiser with the MAXIM SDK app and confirm the parameters are actually set.
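
(The debug instrumentation is nothing fancy - essentially every NvMedia/ISC call in the driver goes through a wrapper like this, plus a printf for every control-channel register write:)

    #include <stdio.h>
    #include "nvmedia_core.h"   /* NvMediaStatus, NVMEDIA_STATUS_OK */

    /* Illustrative status-check wrapper; the real code also dumps the
     * register address/value of every I2C write to the console. */
    #define CHECK(call)                                           \
        do {                                                      \
            NvMediaStatus _s = (call);                            \
            printf("%s -> %d\n", #call, (int)_s);                 \
            if (_s != NVMEDIA_STATUS_OK)                          \
                return _s;                                        \
        } while (0)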

I am at the point where both a control channel and a video channel are established between the MAX9286 and MAX9291, and they are happily exchanging data (including video data).

If I set the wrong timing, I get on the console:

ChanselFault : 0x00000200
PIXEL_SHORT_LINE [ 9]: 1
A line ends with fewer pixels than expected.
Current line in frame [31:16]: 0

or the corresponding PIXEL_LONG_LINE (from memory)

so video data is definitely being transmitted, and the nVIDIA side can make sense of it.

I can set my timing too short and I get PIXEL_SHORT_LINE, too long and I get PIXEL_LONG_LINE, and I found a timing in the middle where I do not get any output - which I assume is good.

But I cannot see the code generating that error, or what it is actually checking.

A system-wide grep for “ChanselFault” returns

Binary file drive-t186ref-linux/lib-target/libnvm_vicsi_v3.so matches
Binary file drive-t186ref-linux/lib-target/libnvmedia.so matches

both supplied as binary blobs.

Dear p.filippin,

I think the sensor driver is in good shape; you have completed each driver step. However, the configuration is mismatched between the sensor and the Tegra.

The error codes indicate a CSI timing issue: short or long pixel lines.
This means a line carries fewer or more pixels than the Tegra-Rx expects.
PIXEL_SHORT_LINE: the Tegra-Rx expects e.g. 1280 pixels, but only 1279 were transmitted.
PIXEL_LONG_LINE: the Tegra-Rx expects e.g. 1280 pixels, but 1281 were transmitted.

The next checkpoint is comparing the sensor output resolution with the Tegra-Rx setting in the .conf file.
“Current line in frame [31:16]: 0” ← This means the Tegra-Rx cannot receive any line; the issue already occurs on the first line of the frame.

If possible, I would like to know the sensor output format (YUV422, YUV420SP, RGB24, ARGB32, etc…), the resolution, and the Tegra .conf file.
The following parameters can cause this issue when mismatched. This is the DPX2 sample configuration:

input_format   = "422p"         # Input frame format
                                # Valid values for ov10640: raw12
                                # Valid values for ov10635: 422p

surface_format = "yv16"         # CSI surface format. Options: yv16/rgb/raw8/raw10/raw12
                                # Valid values for 422p input format: yv16
                                # Valid values for rgb input format: rgb
                                # Valid values for raw12 input format: raw12

resolution     = "1280x800"     # Input frame resolution

csi_lanes      = 4              # CSI interface lanes
                                # options supported: <1/2/4>
interface      = "csi-ab"       # Capture Interface
                                # options supported: <csi-a/csi-b/csi-c/csi-d/csi-e/csi-f/csi-ab/csi-cd/csi-ef>

Could you let me know if you can see the preview with the sensor in YUV output and the ipp_yuv application?
Or does it have the same issue?

Hi SteveNV,
I am sending 1920 pixels in both cases - what seems to make the difference is the timing between rows (in HDMI terms, the “horizontal blanking”), and I found a timing that does work (180 PCLK - so I have to send 2100 pixels in total for it to see 1920).

“Current line in frame [31:16]: 0” - I thought that meant “can’t read the first line”. I know I am getting “frames” because that error repeats exactly 30 times per second (there is a timestamp in microseconds) - so the tegra:

  • reads the first line
  • does not like it
  • tries the first line of the next frame

which makes me think the tegra understands the division into frames in my stream.

In my configuration I am using:
input_format = “rgb”
surface_format = “rgb”
resolution = “1920x1080”
csi_lanes = 4
interface = “csi-ab”

(please note - I am using TegraB)

I am using a pixel clock of 86MHz, 30 FPS and RGB888 (I could use YCrCb, but that would add further complications; I have tried it without any appreciable difference).
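
As a sanity check, those numbers are self-consistent with the 2100-pixel line I mentioned above:

    line time   = 2100 px / 86 MHz      ≈ 24.4 µs
    frame time  = 1 / 30 fps            ≈ 33.33 ms
    lines/frame = 33.33 ms / 24.4 µs    ≈ 1365 lines
                = 1080 active + ~285 lines of vertical blanking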

Note that nowhere in the “driver” code is “1920x1080” used as a resolution. I mean, it is matched as a string against a list of supported "IMG_PROPERTY_ENTRY"s, but it seems the tegra somehow figures out the resolution from the pixel clock, the frame rate and the “shape” of the data it receives.

Only in the ipp_raw/yuv/rgb code is the “1920x1080” string split and parsed as two integers, width and height.
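
(The parsing amounts to no more than something like this - a self-contained illustration, variable names mine:)

    #include <stdio.h>

    const char *resolution = "1920x1080";   /* from the .conf file */
    unsigned int width, height;
    if (sscanf(resolution, "%ux%u", &width, &height) != 2) {
        /* malformed resolution string */
    }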

As a note, I tried ipp_raw/yuv/rgb with the Sekonix camera - ipp_raw works, ipp_yuv and ipp_rgb return a format error.

If I use them with my driver, in all 3 cases I get a “timeout” in the producer, in the function NvMediaIPPComponentGetOutput.

The last two observations make me think the problem is not in ipp_raw/yuv/rgb but on the driver side - the driver sends the video stream to the tegra, but for some reason the tegra never generates a “frame” to be sent down the line.

Dear p.filippin,

The DPX2 doesn’t support input_format=rgb. Please see https://devdocs.nvidia.com/drive/active/5.0.10.3L/nvvib_docs/index.html#page/NVIDIA%2520DRIVE%2520Linux%2520SDK%2520Development%2520Guide%2FNvMedia%2520Sample%2520Apps%2Fnvmedia_nvmimg_cap.html%23wwpID0E0JH0HA

input_format | Specifies the capture format of the sensor | “422p”, “raw12”, “raw16log”

I think the following conditions are better than rgb:
input_format=“422p”, surface_format=“yv16”, and the ipp_yuv application.
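
For example, the .conf entries would look like this (keeping your 1920x1080 / csi-ab settings; this is only a sketch):

    input_format   = "422p"         # YUV422 as output by the MAX9286
    surface_format = "yv16"         # matching CSI surface format
    resolution     = "1920x1080"
    csi_lanes      = 4
    interface      = "csi-ab"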

Hi SteveNV,
I thought that was a possibility - however, all the function calls are there.

E.g. NvMediaICPInputFormatType has NVMEDIA_IMAGE_CAPTURE_INPUT_FORMAT_TYPE_RGB888, in IMG_PROPERTY_ENTRY rgb/rgba are valid values for inputFormat and pixelOrder, and configurePixelInfoMAX9286 has a type ISC_DATA_TYPE_MAX9286_RGB888. Even in the IPP sources, all the functions dealing with data types come in sets of three (xxxxxxRAW, xxxxxYUV and xxxxxRGB), e.g.: NVM_SURF_FMT_SET_ATTR_RAW - NVM_SURF_FMT_SET_ATTR_YUV - NVM_SURF_FMT_SET_ATTR_RGBA

So, can you please confirm whether the RGB functionality is just stubs?

I will try switching to YUV (will have to be after Christmas) and will let you know!

Hi SteveNV,
I eventually managed to find a couple of PCs with native HDMI output, so I was able to enable the YUV output - but it didn’t make any difference; NvMediaIPPComponentGetOutput still keeps timing out.

What would you suggest I try next? I still have a video stream detected, but I don’t receive any frames.

Dear p.filippin,

The MAX9286 will convert the GMSL data to 422p format (maybe UYVY) and transmit it via CSI-2 (1 pixel has 16 bits of data).
The input format needs to match between the MAX9286 output and the Tegra CSI-Rx setting.
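
For example (illustrative arithmetic, not from the documentation): if the Tegra-Rx is programmed for a different pixel format than what actually arrives, the per-line data length no longer matches what it expects:

    1920 px x 2 bytes (YUV422, 16 bpp) = 3840 bytes/line  <- what Tegra-Rx expects
    1920 px x 3 bytes (RGB888, 24 bpp) = 5760 bytes/line  <- what arrives
    => the mismatch shows up as PIXEL_LONG_LINE / PIXEL_SHORT_LINE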

Hi SteveNV,
unfortunately, one of the difficulties I am having is that I don’t have any documentation for the MAX9286 (it is only available under NDA with MAXIM), so I am programming against a black box - however, looking at the .h files, the MAX9286 does not seem to support 16-bit YUV422, but only 8 and 10 bits.