Stereo VO output, what type of proto is it?

Hello Everyone,

I am working on using Stereo Visual Odometry but cannot seem to capture the current_pose messages. I have checked the documentation, but it does not say what type of proto the outgoing message containing the pose is.

Does anyone know what it is?

Thank you.

Reading the documentation again I find this:

The Isaac SDK includes Elbrus tracker code in the form of a dynamic library, wrapped by a codelet. The Isaac codelet wrapping Elbrus stereo tracker takes a pair of input images, and camera intrinsics. The camera pose is represented by a quaternion and a translation ivector, relative to the location of the camera.

So the output is represented by a quaternion and a translation (ivector). This would mean it needs to be a Pose3dProto or Pose3fProto, but the documentation states the translation comes in the form of an ivector.
I have tried the following with no success:
ISAAC_PROTO_RX(Pose3dProto, visualOdometryInput)
ISAAC_PROTO_RX(Pose3fProto, visualOdometryInput)
ISAAC_PROTO_RX(StateProto, visualOdometryInput)

Is there anyone that can help with this issue? Unfortunately it is holding up our development process and making us question the use of Isaac moving forward, given that this basic required information is missing.

Thank You!

I have a small update on this. I have been digging through the small amount of available source and the BUILD files, and I found this in elbrus.h:

/*
 * Transformation from camera space to world space.
 * Rotation matrix is column-major.
 */
struct ELBRUS_Pose
{
  float r[9];
  float t[3];
};

So there must be a proto somewhere that relates to this.
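
In case it helps anyone else reading along, here is a rough, untested sketch of how I would read that struct into Eigen (which Isaac uses for its math types). The helper name is mine, and the assumption that the struct maps directly onto a rigid transform is my own:

#include <Eigen/Geometry>

// Hypothetical helper: read an ELBRUS_Pose (column-major 3x3 rotation plus a
// 3-vector translation) into an Eigen rigid transform. Eigen matrices are
// column-major by default, so Map reads r[9] in the intended order.
Eigen::Isometry3f ToIsometry(const ELBRUS_Pose& pose) {
  // Per the header comment this transforms camera space into world space,
  // i.e. world_T_camera in Isaac's naming convention.
  Eigen::Isometry3f world_T_camera = Eigen::Isometry3f::Identity();
  world_T_camera.linear() = Eigen::Map<const Eigen::Matrix3f>(pose.r);
  world_T_camera.translation() = Eigen::Map<const Eigen::Vector3f>(pose.t);
  return world_T_camera;
}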

Slightly more progress today. Since there has been no response from NVIDIA, I decided to start brute-forcing this. The best approach seemed to be to find a place to penetrate the message ledger. Seeing as there is not much documentation on how the message ledger works internally, I started looking for a place where it would have to expose itself in an industry-standard format.

This brought me to WebSight.

WebSight transmits this data to be visualized, so I decided to hook into it and try to expose the proto. This is what I found:

2019-07-20 15:59:16.290 INFO packages/sight/WebsightServer.cpp@269: Sight Message:
2019-07-20 15:59:16.290 INFO packages/sight/WebsightServer.cpp@271: {"t":26.434623583,"type":"sop","v":{"d":[{"d":[{"p":[[3.0,2.0,1.0],[3.0,-2.0,1.0],[3.0,-2.0,-1.0],[3.0,2.0,-1.0],[3.0,2.0,1.0]],"t":"pnts"}],"s":{"c":"#00ffff","f":true},"t":"sop"},{"d":[{"p":[[0.0,0.0,0.0],[3.0,2.0,1.0]],"t":"line"}],"s":{"c":"#ffff00"},"t":"sop"},{"d":[{"p":[[0.0,0.0,0.0],[3.0,-2.0,1.0]],"t":"line"}],"s":{"c":"#ffff00"},"t":"sop"},{"d":[{"p":[[0.0,0.0,0.0],[3.0,-2.0,-1.0]],"t":"line"}],"s":{"c":"#ff00ff"},"t":"sop"},{"d":[{"p":[[0.0,0.0,0.0],[3.0,2.0,-1.0]],"t":"line"}],"s":{"c":"#ff00ff"},"t":"sop"}],"p":{"p":[0.6757039876890727,-0.048408744153813583,-0.7324081508751246,0.06825697797454224,3.3350894021042357,-0.9839516644573139,-3.8694007826796986],"t":"3d"},"t":"sop"}}

Now, the part that we are actually interested in is this:

"p":{"p":[0.6757039876890727,-0.048408744153813583,-0.7324081508751246,0.06825697797454224,3.3350894021042357,-0.9839516644573139,-3.8694007826796986],"t":"3d"}

These values change as I move the camera around. I am guessing they represent x, y, z and thetaX, thetaY, and thetaZ. I will continue to investigate and post updates until I solve this or someone from NVIDIA simply answers the question :)

Now I think I have found the actual proto data. Here is the message going between the edges:

2019-07-20 16:18:53.546 INFO packages/sight/WebsightServer.cpp@263: Json Message:
2019-07-20 16:18:53.546 INFO packages/sight/WebsightServer.cpp@266: {"edges":[[0,1,2913149156.0,[0.9941355252341224,-0.02011035794097105,0.018287639317440017,-0.10466944740064728,-0.01279101919405115,-0.41078817761076214,-0.006269168369695072]],[1,0,2913149156.0,[0.9941355252341224,0.02011035794097105,-0.018287639317440017,0.10466944740064728,-0.07349115052293637,0.4038328462372477,0.021631518753424925]]],"nodes":["odom","elbrus"],"time":2.950751428}
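
Interestingly, the first four components of each seven-element array have unit norm, which matches the documentation's description of a quaternion plus a translation vector. Here is a rough, unverified sketch of how I would unpack one of these arrays (the qw-first ordering is purely my guess):

#include <Eigen/Geometry>
#include <array>
#include <utility>

// Hypothetical interpretation of one 7-element pose array from the JSON above:
// [qw, qx, qy, qz, tx, ty, tz]. The ordering is an assumption on my part.
std::pair<Eigen::Quaterniond, Eigen::Vector3d> UnpackPose(const std::array<double, 7>& p) {
  const Eigen::Quaterniond rotation(p[0], p[1], p[2], p[3]);  // w, x, y, z
  const Eigen::Vector3d translation(p[4], p[5], p[6]);
  // The "nodes" list suggests these edges relate the "odom" and "elbrus" frames.
  return {rotation, translation};
}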

The question is: how do we extract this?

The StereoVisualOdometry codelet writes the tracking result to the application-wide pose tree. The coordinate frame is "odom_T_elbrus". You can access the pose in any other codelet by adding "ISAAC_POSE3(odom, elbrus)" to the class (similar to ISAAC_PROTO_RX). You can then access the pose with "get_odom_T_elbrus()".

The pose information seems to be missing in the documentation. We will fix that. We are working on some improvements around handling poses for the next release.

Thanks for the reply; I will try this. Also, did new documentation get added on this for the new release that just came out, or will it be in the next one? Thanks again.

Hello,

I seem to be getting the following error, "error: no matching function for call", when I attempt to use "get_odom_T_elbrus()". I have also looked through much of the source/documentation, and there does not seem to be clear information on how to get rotation and translation data out of get_odom_T_elbrus(), or whether it has a .toProto() or not.

Would you be able to show a minimal code example of how to derive the rotation and translation from ISAAC_POSE3(odom, elbrus)?

Our end goal is to plot the data similar to the video below and also use it for additional calculations.

https://www.youtube.com/watch?v=_zZuejQeuz4

Thank you!

Yes, documentation was updated: https://docs.nvidia.com/isaac/isaac/doc/index.html

Easiest way to access the pose:

  1. Add this to your codelet class declaration: ISAAC_POSE3(odom, elbrus).
  2. In the tick function you can call get_odom_T_elbrus().

We are working on improving the pose system for 2019.3

Thanks David,

Would you be able to answer the error above:

"I seem to be getting the following “error: no matching function for call” when I attempt to use “get_odom_T_elbrus()”. I have also looked through much of the source/documentation and there does not seem to be clear information of how to get rotation and translation data out of get_odom_T_elbrus() or if it has .toProto() or not.

Would you be able to show a show a minimal code example of how to derive the rotation and translation from ISAAC_POSE3(odom, elbrus)? "

Does the Poses section of the documentation, located at https://docs.nvidia.com/isaac/isaac/doc/engine/components.html#poses, help?

You can see the usage, for example, in apps/samples/navigation_rosbridge/NavigationRosBridge.{hpp,cpp} of the Isaac SDK 2019.2 release.

When David suggested get_odom_T_elbrus(), he meant "use the function named get_odom_T_elbrus". As shown in the documentation and in the sample above, you need an argument in the function call. Specifically, please try "get_odom_T_elbrus(getTickTime())". Hopefully this will make it compile.

Once you get a Pose3d (not a Pose2d, since you are using the ISAAC_POSE3 macro), please check engine/core/math/pose3.hpp. You can get the rotation of type SO3 (check engine/core/math/so3.hpp) and the translation of type Vector3 (https://eigen.tuxfamily.org/dox/group__TutorialMatrixArithmetic.html).

To illustrate, you can do:
// hopefully, I am not making a typo here
const Pose3d odom_T_elbrus = get_odom_T_elbrus(getTickTime());
const double x = odom_T_elbrus.translation.x();
const Vector3 euler_angles_rpy = odom_T_elbrus.rotation.eulerAnglesRPY(); // see so3.hpp for other methods
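
Putting it all together, an untested sketch of a codelet that reads the pose every tick and plots the components in Sight could look like the following. The class name is made up, and the show() calls for Sight variables are assumed from other samples, so please verify the exact overloads:

// Hypothetical codelet that reads odom_T_elbrus each tick and plots it.
#include "engine/alice/alice_codelet.hpp"

namespace isaac {

class VisualOdometryPoseReader : public alice::Codelet {
 public:
  void start() override { tickPeriodically(); }

  void tick() override {
    // Read the current pose from the application-wide pose tree.
    const Pose3d odom_T_elbrus = get_odom_T_elbrus(getTickTime());
    const Vector3d& translation = odom_T_elbrus.translation;
    const Vector3d rpy = odom_T_elbrus.rotation.eulerAnglesRPY();
    // Plot each component as a time series in Sight.
    show("vo.x", translation.x());
    show("vo.y", translation.y());
    show("vo.z", translation.z());
    show("vo.roll", rpy[0]);
    show("vo.pitch", rpy[1]);
    show("vo.yaw", rpy[2]);
  }

  // Gives access to the odom_T_elbrus edge of the pose tree.
  ISAAC_POSE3(odom, elbrus);
};

}  // namespace isaac

ISAAC_ALICE_REGISTER_CODELET(isaac::VisualOdometryPoseReader);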


Where do you download the Stereo VO source code? I want to test this VO with my own dataset. Thank you very much.

You can certainly test it. Please see https://docs.nvidia.com/isaac/isaac/packages/perception/doc/visual_odometry.html