Need help understanding the principles and safety features of Carter navigation

The Carter reference design has an IMU, a lidar, a Zed camera (depth and RGB), and a RealSense camera (depth and RGB), and it receives wheel information back from the Segway base.

There are several things I’m confused about

Does Isaac give the Segway high-level commands, such as telling it to go to a certain position and letting the Segway calculate an appropriate acceleration/movement/deceleration profile, or does Isaac take care of those calculations itself and issue only lower-level commands to the Segway, telling the wheels to move a certain number of encoder ticks at specific speeds?

When everything is OK, the information coming back from the wheel position encoders should match the commands that were given to the wheel motors, the IMU should register an appropriate acceleration, and the lidar, RealSense, and Zed should all confirm the updated position as well.

But how are conflicts resolved if the various data sources disagree by more than a certain amount? Wheels could slip, the IMU's compass could be influenced by strong magnets, and some of the other sensors could be confused by glass doors or a very big object moving towards the Carter.

Is there some kind of internal voting among the data sources, or is there a priority list from the most trusted information to the least trusted?
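To make the question concrete, what I imagine is something like weighting each source by how much it can be trusted at the moment, rather than a hard priority list. Here is the kind of thing I mean, purely as a toy sketch of inverse-variance weighting and not anything from Isaac's localization code:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of scalar position estimates.

    estimates: list of (value, variance) pairs, e.g. from wheel odometry,
    lidar localization, and visual odometry. A source trusted less
    (slipping wheels, a magnetically disturbed compass) is given a larger
    variance, so it pulls the fused result less.
    """
    weights = [1.0 / variance for _, variance in estimates]
    total = sum(weights)
    fused = sum(value * w for (value, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total


# Example: wheel odometry says x = 2.0 m but the wheels may have slipped
# (large variance), while lidar localization says x = 1.8 m (small variance).
print(fuse_estimates([(2.0, 0.05), (1.8, 0.01)]))  # fused result leans toward the lidar
```

Is the real resolution mechanism something along these lines, or something else entirely?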

Are the two depth cameras (RealSense and Zed) provided to demonstrate their respective SDKs, or are both necessary because the Zed has a longer range than the RealSense? Or is there some other difference that's used in the Carter navigation stack?

What kind of safety features are there? Will Carter refuse to drive into a wall or bump into a person or a smaller object if one of the cameras or the lidar indicates an obstacle is in the way? Down to what size of object?

We have IKEA glass tabletops that extend beyond their supports, and we want to make sure Carter doesn't bump into them. I know that maps can be augmented with arbitrary polygons designating "restricted areas", but I don't know how much of that would be needed for a typical office environment.
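My mental model of a restricted area is a keep-out polygon rasterized into the occupancy map so the planner simply treats those cells as blocked. Roughly this kind of thing, just my own sketch and not the actual Isaac map API or format:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def mark_restricted(grid, resolution, polygon):
    """Mark every cell whose center falls inside the keep-out polygon as occupied.

    grid: 2D list of 0/1 occupancy values; resolution: meters per cell.
    Purely illustrative of the 'restricted area' idea, e.g. a polygon drawn
    around a glass tabletop that overhangs its supports.
    """
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            cx, cy = (col + 0.5) * resolution, (row + 0.5) * resolution
            if point_in_polygon(cx, cy, polygon):
                grid[row][col] = 1
    return grid
```

If that is roughly how it works, my question is whether a typical office needs only a handful of such polygons or one around nearly every piece of furniture the lidar can't see well.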

Thanks

Hi infox22oo,
I can't tell you anything about the real robot, just about the Unity simulation (which I assume receives similar information).

The Unity model of Carter receives (from Isaac) the commanded forward speed and the commanded angular speed. The Unity script then calculates the velocity for each wheel.
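That conversion is just standard differential-drive inverse kinematics. A minimal sketch of it (the wheel base and wheel radius below are placeholder values, not Carter's real dimensions, and this is not the actual Unity script):

```python
def twist_to_wheel_speeds(linear_speed, angular_speed,
                          wheel_base=0.5, wheel_radius=0.12):
    """Standard differential-drive inverse kinematics.

    linear_speed: commanded forward speed in m/s
    angular_speed: commanded angular speed in rad/s (positive = turn left)
    wheel_base and wheel_radius are placeholder values, not Carter's
    actual dimensions.
    """
    v_left = linear_speed - angular_speed * wheel_base / 2.0
    v_right = linear_speed + angular_speed * wheel_base / 2.0
    # Convert each wheel's rim speed (m/s) to an angular speed (rad/s)
    return v_left / wheel_radius, v_right / wheel_radius


# Example: drive forward at 1 m/s while turning left at 0.5 rad/s
print(twist_to_wheel_speeds(1.0, 0.5))
```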

In Unity there is (to my knowledge) no check between the different sensors. It gets the speed and acceleration from the robot model and sends them back to Isaac.

For the rest of your questions I sadly don't know the answer…
Best
Markus


Hi @kurzschlussidi, thanks for your comments. What you say certainly makes sense and would be sufficient for basic operation in simulation, or even in the real world. But we have both the RealSense and the Zed, and the Carter FAQ says that if the Carter isn't moving you should make sure the RealSense is connected, so it must be providing important information.

I know that Jetbot basically runs open loop: when we tried one, it veered slightly to one side because one motor is more powerful than the other, and there is no correction.

Kaya has the RealSense camera and also has feedback from the encoders on the Robotis motors, so it could detect if one motor is stronger than another, and it could get feedback from the RealSense camera on whether it is going straight or unintentionally veering to one side. But I don't know if it does.
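What I mean by correction is something like a simple per-wheel trim driven by the encoder feedback. A hypothetical sketch, not anything taken from the Kaya app:

```python
def trimmed_commands(target_speed, left_measured, right_measured, gain=0.5):
    """Proportional per-wheel trim using measured wheel speeds from encoders.

    If one motor is stronger, its wheel runs ahead of the target and its
    command is reduced, while the weaker wheel's command is boosted.
    Hypothetical sketch: Jetbot has no encoders so it cannot do this,
    while Kaya's Robotis servos do report wheel speed and could.
    """
    left_cmd = target_speed + gain * (target_speed - left_measured)
    right_cmd = target_speed + gain * (target_speed - right_measured)
    return left_cmd, right_cmd


# Example: the left wheel runs 5% slow and the right wheel 10% fast
print(trimmed_commands(1.0, 0.95, 1.10))  # boosts the left command, cuts the right
```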

Hi Markus / @kurzschlussidi, you're correct; it turns out the "Carter" app on the "Carter" robot doesn't use the RealSense or Zed cameras at all.
