Sekonix camera image rectification

Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”):
Software Version
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.5.0.7774
other

Host Machine Version
native Ubuntu 18.04
other


Hello,
I would like to get rectified images from my Sekonix cameras. I have calibrated the cameras and I have the RIG files.

In my driver I get the JPEG data with NvMediaIJPEGetBits, so ideally I would need to get undistorted JPEG frames. How can I do that?

I tried to look in the documentation and the samples, but I couldn't find it.

Is this possible?

Also, why is NvMediaIJPEGetBits not in the documentation of the NVIDIA DRIVE OS 5.2 Linux SDK Developer Guide?

Thanks!

Hi @maxandre.ogeret,

Please refer to DriveWorks SDK Reference: Video Rectification Sample. Thanks.
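
As a rough orientation, the sample essentially builds two camera models (the calibrated, distorted input and an ideal distortion-free output) and a rectifier that warps between them. The following is a minimal sketch under that assumption; pinholeConfIn, pinholeConfOut and context are placeholders, and the exact dwRectifier_initialize signature should be verified against the DriveWorks 3.5 Rectifier header:

// Outline of the rectification flow (assumed DriveWorks 3.5 API; verify signatures
// against the Rectifier header before use).
dwCameraModelHandle_t cameraIn  = DW_NULL_HANDLE;  // calibrated (distorted) model
dwCameraModelHandle_t cameraOut = DW_NULL_HANDLE;  // ideal, distortion-free pinhole model
dwRectifierHandle_t   rectifier = DW_NULL_HANDLE;

// Input model: the camera as calibrated (here a pinhole config).
dwCameraModel_initializePinhole(&cameraIn, &pinholeConfIn, context);
// Output model: the pinhole camera you want the rectified image to look like.
dwCameraModel_initializePinhole(&cameraOut, &pinholeConfOut, context);

// The rectifier pre-computes the warp map between the two models.
dwRectifier_initialize(&rectifier, cameraIn, cameraOut, context);

// Per captured frame, warp the image through the rectifier; NvMedia images
// go through dwRectifier_warpNvMedia(), as used later in this thread.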

@VickNV Thanks for your answer.

Is it possible to use the plumb_bob model instead of the ftheta model?

We use the ROS plumb_bob model with a 3x3 intrinsic camera matrix K:

#     [fx  0 cx]
# K = [ 0 fy cy]
#     [ 0  0  1]

and 5 distortion parameters:

(k1, k2, t1, t2, k3)
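
For reference, these intrinsics map fairly directly onto the DriveWorks pinhole configuration. A minimal sketch, assuming dwPinholeCameraConfig only carries the three radial coefficients (so the tangential terms t1/t2 would simply have no slot); fx, fy, cx, cy, k1, k2, k3 stand for the ROS calibration values:

// Hypothetical mapping from a ROS plumb_bob calibration to dwPinholeCameraConfig.
dwPinholeCameraConfig pinhole{};
pinhole.width  = 1920;
pinhole.height = 1208;
pinhole.u0     = cx;   // K[0][2]
pinhole.v0     = cy;   // K[1][2]
pinhole.focalX = fx;   // K[0][0]
pinhole.focalY = fy;   // K[1][1]
pinhole.distortion[0] = k1;
pinhole.distortion[1] = k2;
pinhole.distortion[2] = k3;   // no fields for the tangential terms t1, t2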

Thanks!

In your first post, wasn’t your camera calibrated with DriveWorks SDK Reference: Camera Calibration Tools? How did you get the plumb_bob model?

We have generated the calibration file with the DriveWorks SDK Reference: Camera Calibration Tools, which gave us RIG files containing the ftheta model.
But we have also calibrated with the ROS calibration tool, which uses a plumb_bob model. I wanted to know if it is also possible to use the plumb_bob model to rectify the images.

We never tried it. You may take a look at the parameters of the pinhole camera model in /usr/local/driveworks-3.5/data/samples/stereo/full.json on your host system (installed with DriveWorks 3.5) and see if it helps with your problem.

@VickNV Thanks for your answer, but I have another question.

It seems I have a discrepancy between my camera's real FOV and the pinhole model that I generated.

Here is the unrectified image from our Sekonix SF3324 camera:

The camera output resolution is 1920x1208.

According to the Sekonix documentation it should have a FOV of 120. But I don't really know if that's the FOV_x or the FOV_y.

Also, after calibrating and generating a pinhole model I get these values for the focal lengths:

fx = 990.80469
fy = 994.04219

And according to the formula found in the sample:

fov_x = 2 * atan(size_x / (2 * fx))
fov_y = 2 * atan(size_y / (2 * fy))
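
Plugging the calibrated values into this formula (a standalone arithmetic check, not DriveWorks code) gives roughly 88 degrees horizontally and 63 degrees vertically:

#include <cmath>
#include <cstdio>

int main()
{
    const float fx = 990.80469f, fy = 994.04219f;  // calibrated focal lengths
    const float w = 1920.0f, h = 1208.0f;          // image resolution
    const float rad2deg = 180.0f / 3.14159265f;
    // fov = 2 * atan(size / (2 * focal))
    std::printf("fov_x = %.1f deg\n", 2.0f * std::atan(w / (2.0f * fx)) * rad2deg);  // ~88.2
    std::printf("fov_y = %.1f deg\n", 2.0f * std::atan(h / (2.0f * fy)) * rad2deg);  // ~62.7
    return 0;
}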

But the problem is that with my current fx and fy values I get a FOV of approximately 90 degrees. How is that possible?


Also, what values should I use for fx and fy for my output camera?

When I set the output camera with fov_x = 90 and fov_y = 90, I get this:

This looks very good, but I would like to get rid of the green border.

But when I set the FOV of the output camera to 120, I get this very poor image:


Here's how I set the FOV of my output camera:

dwVector2f focalOut = focalFromFOV({ 90, 90 }, { 1920, 1208 });
pinholeConfOut_.focalX = focalOut.x;
pinholeConfOut_.focalY = focalOut.y;

focalFromFOV comes from the sample.

Any idea what I am doing wrong, and what the correct focalX and focalY values are?

Thanks!

Is 120 degrees its HFOV?
Are you using sample_video_rectifier?
Could you provide your sample_video_rectifier command and everything needed to reproduce your problem?
Thanks.

@VickNV Thanks for your answer. It's the Sekonix camera with a 120 degree lens; I don't know what its HFOV is.

Also, this is my own code, written using the rectifier sample as an example. It uses this function to warp the images: dwRectifier_warpNvMedia.

And here is how I set up the models:

// Init camera model IN. TODO: read these values from the calibration YAML file.
  pinholeConfIn_.distortion[0] = -0.288450f;  // k1
  pinholeConfIn_.distortion[1] = 0.060301f;   // k2
  pinholeConfIn_.distortion[2] = 0.000000f;   // k3

  pinholeConfIn_.u0 = 924.40372f; // cx
  pinholeConfIn_.v0 = 625.78742f; // cy
  pinholeConfIn_.focalX = 990.80469f; //fx
  pinholeConfIn_.focalY = 994.04219f; //fy

  pinholeConfIn_.width = 1920;
  pinholeConfIn_.height = 1208;

  CHECK_DW_ERROR_ROS(
      dwCameraModel_initializePinhole(&cameraModelIn_, &pinholeConfIn_, driveworksApiWrapper_->context_handle_))

  // Init camera model OUT
  pinholeConfOut_.distortion[0] = 0.f;
  pinholeConfOut_.distortion[1] = 0.f;
  pinholeConfOut_.distortion[2] = 0.f;

  pinholeConfOut_.u0 = static_cast<float32_t>(1920.f / 2);
  pinholeConfOut_.v0 = static_cast<float32_t>(1208.f / 2);
  pinholeConfOut_.width = 1920;
  pinholeConfOut_.height = 1208;

  dwVector2f focalOut = focalFromFOV({ 90, 90 }, { pinholeConfOut_.width, pinholeConfOut_.height });
  pinholeConfOut_.focalX = focalOut.x;
  pinholeConfOut_.focalY = focalOut.y;

  CHECK_DW_ERROR_ROS(
      dwCameraModel_initializePinhole(&cameraModelOut_, &pinholeConfOut_, driveworksApiWrapper_->context_handle_))

Could you give some insight into the values of pinholeConfOut_.focalX and pinholeConfOut_.focalY?

Thanks a lot!

Hi @maxandre.ogeret,
Could you share the rig files you generated using the DW calibration tool?
And can you tell us which checkerboard you used for calibration? Thanks.

Here's the pinhole rig file:

{
    "rig": {
        "sensors": [
            {
                "name": "18L146047",
                "nominalSensor2RigU": {
                    "quaternion": [
                        0.5,
                        -0.5,
                        0.5,
                        0.5
                    ],
                    "t": [
                        0.0,
                        0.0,
                        0.0
                    ]
                },
                "parameter": "",
                "properties": {
                    "Model": "pinhole",
                    "cx": "924.40372",
                    "cy": "625.78742",
                    "distortion": "-2.88450e-01 0.060301e-02",
                    "fx": "990.80469f",
                    "fy": "994.04219f",
                    "height": "1208",
                    "params": "",
                    "width": "1920"
                },
                "protocol": "camera.gmsl"
            }
        ],
        "vehicle": {
            "valid": false
        },
        "vehicleio": []
    },
    "version": 7
}

Here's the ftheta rig file:

{
    "rig": {
        "sensors": [
            {   
                "name": "18L146047",
                "nominalSensor2Rig_FLU": {
                    "quaternion": [
                        0.5,
                        -0.5,
                        0.5,
                        0.5
                    ],
                    "t": [
                        0.0,
                        0.0,
                        0.0
                    ]
                },
                "parameter": "",
                "properties": {
                    "Model": "ftheta",
                    "bw-poly": "0.000000000000000 1.02693296503276e-3 1.18603864507350e-8 3.25173880877383e-11 1.64898900334139e-14 ",
                    "cx": "968.176270",
                    "cy": "633.333191",
                    "height": "1208",
                    "width": "1920"
                },
                "protocol": "camera.gmsl"
            }
        ],
        "vehicle": {
            "valid": false
        },
        "vehicleio": []
    },
    "version": 7
}

We used an 11x9 chessboard with 60 mm squares.

I had several problems:

  • The function focalFromFOV needs radians, not degrees (see the sketch after this list).
  • The HFOV and VFOV can be found in the Sekonix documentation, accessible under the very misleading 'data' button.
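
A sketch of the fix for the first point, assuming focalFromFOV is the sample helper used above and computes focal = size / (2 * tan(fov / 2)) with the FOV given in radians:

// Convert the desired output FOV from degrees to radians before calling the helper.
constexpr float32_t DEG2RAD = 3.14159265f / 180.0f;

dwVector2f focalOut = focalFromFOV({ 90.0f * DEG2RAD, 90.0f * DEG2RAD },
                                   { pinholeConfOut_.width, pinholeConfOut_.height });
pinholeConfOut_.focalX = focalOut.x;
pinholeConfOut_.focalY = focalOut.y;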

Thanks for the help!