I am using LaneNet to get information about the lanes and the lines. I need to transform that information (the points that construct the line) from pixels to meters (ego coordinates, if possible). Can you please elaborate on workflows that extract the real distance from the output of LaneNet, or anything similar? (Any approximation is also really appreciated.) (Platform used: Pegasus, DRIVE OS 10)

Dear JohannesWestphal,

AFAIK, there is no such API in the LaneNet module to give you real distances. I will check internally if there is any way we can get that information.

Dear Siva,

Thank you for your fast reply.

I would expect that you use some kind of transformation (pixels to meters) that takes the camera parameters (camera rig) into account to validate your results in the real world.

Anything shared on this topic would be helpful.

Dear JohannesWestphal,

Please check https://devtalk.nvidia.com/default/topic/1043877/driveworks/how-to-transform-points-from-image-space-to-world-space- to see if it helps.

Dear Siva,

Thank you for the thread you sent me. It was insightful.

My problem is still unsolved, so let me expand on it: when I use pixel2Ray to go from u, v (pixel coordinates) to x, y, z (the corresponding coordinates in meters), the resulting x, y, z are unreliable, as the z coordinate is always != 0, but it should always be 0 (at least in the dataset I'm using). Can I set the z coordinate to 0 and compute the other two (x, y) from u, v? Or can you point me towards other solutions for getting coordinates that are usable and comparable in the real world?

Thank you so much!

pixel2Ray does what the name implies - it gives you a ray shooting out from your camera. This is only a direction (a point at a distance of 1 meter from the camera). In order to convert it to vehicle coordinates (real-world coordinates relative to the vehicle origin), you need the following information:

- camera position in vehicle coordinates => the point from which the ray is shooting out;
- camera rotation in vehicle coordinates (vehicle space would be the more correct term) => how much you need to rotate the ray so that it points in vehicle space.
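As a rough sketch of that rotation step (in Python/NumPy rather than the DriveWorks API; the pitch angle and axis conventions here are made up for illustration - your actual rotation comes from the camera rig calibration):

```python
import numpy as np

# Hypothetical rig: camera origin in vehicle space (meters).
cam_position = np.array([1.8, 0.0, 1.5])

# Hypothetical camera-to-vehicle rotation: a rotation about the lateral
# (y) axis by an assumed pitch angle. In practice, use the full 3x3
# rotation matrix from your rig file, which also handles the axis
# convention change between camera space and vehicle space.
pitch = np.deg2rad(-5.0)
R_cam_to_vehicle = np.array([
    [ np.cos(pitch), 0.0, np.sin(pitch)],
    [ 0.0,           1.0, 0.0          ],
    [-np.sin(pitch), 0.0, np.cos(pitch)],
])

# A ray direction in camera space, e.g. as returned by pixel2Ray.
ray_cam = np.array([0.0, 0.0, 1.0])

# Rotation applies to the direction only; the camera position is the
# ray's origin and is not rotated.
ray_vehicle = R_cam_to_vehicle @ ray_cam
```

Note that only the direction gets rotated; the camera position is simply the starting point of the ray in vehicle space.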

Now you know from where, and in what direction, the pixel of interest lies in real coordinates. To find its x, y, z coordinates you need to define a ground plane where you think the pixel lies. A fairly accurate assumption is to pick the vehicle ground plane where z = 0; so the problem becomes finding x, y when z = 0.

Taking all that into account, you are left with:

x = Cx + Rx * t

y = Cy + Ry * t

0 = Cz + Rz * t (where Cx, Cy, Cz are the camera position in vehicle space, Rx, Ry, Rz are the ray direction in vehicle space, and t is the distance parameter to the ground - the time it takes the ray to reach z = 0 when traveling at 1 m/s)

From there, t can be found as t = -Cz / Rz. (Be careful when Rz is close to zero, as the ray never meets the ground. Also, rays pointing up into the sky only intersect the ground plane behind the camera, giving negative t; these must be ignored.)

By substituting back:

x = Cx - Rx * Cz / Rz;

y = Cy - Ry * Cz / Rz;
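The whole derivation above can be sketched as a small function (Python/NumPy for illustration; the function name and epsilon threshold are my own, not part of any DriveWorks API):

```python
import numpy as np

def ray_ground_intersection(cam_pos, ray, eps=1e-6):
    """Intersect a ray with the ground plane z = 0.

    cam_pos: camera origin (Cx, Cy, Cz) in vehicle space, meters.
    ray:     ray direction (Rx, Ry, Rz) in vehicle space.
    Returns the (x, y, 0) intersection point, or None when the ray
    is (nearly) parallel to the ground or intersects it behind the
    camera (negative t).
    """
    Cx, Cy, Cz = cam_pos
    Rx, Ry, Rz = ray
    if abs(Rz) < eps:
        return None          # ray never meets the ground
    t = -Cz / Rz             # distance parameter to the ground
    if t <= 0:
        return None          # intersection behind the camera; ignore
    return np.array([Cx + Rx * t, Cy + Ry * t, 0.0])

# Camera 1.5 m above ground; ray pointing forward and down at 45 degrees
# hits the ground 1.5 m ahead of the camera.
point = ray_ground_intersection(np.array([0.0, 0.0, 1.5]),
                                np.array([1.0, 0.0, -1.0]))
```

Note that the ray does not need to be normalized here: scaling (Rx, Ry, Rz) just rescales t, and the substituted result x = Cx - Rx * Cz / Rz is unchanged.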

And there you have it!

If you run into any complications with rotating the ray (I only mentioned it briefly, but it's very important to get it right, otherwise you'll get completely wrong results), you can ask me privately.

Best!