The Carter reference design has an IMU, a lidar, a ZED camera (depth and RGB), and a RealSense camera (depth and RGB), and it receives wheel encoder information back from the Segway base.
There are several things I’m confused about:
Does Isaac give the Segway high-level commands, e.g. telling it to go to a certain position and letting the Segway calculate an appropriate acceleration/movement/deceleration profile? Or does Isaac take care of those calculations itself and issue only lower-level commands to the Segway, such as turning each wheel a certain number of encoder ticks at specific speeds?
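To make sure I’m asking this clearly, here is a rough sketch of the two abstraction levels I have in mind. These message types are made up purely for illustration; they are not actual Isaac SDK or Segway API types:

```python
from dataclasses import dataclass

# Hypothetical command types, invented only to illustrate the question.

@dataclass
class GoToPoseCommand:
    """High level: Isaac says 'go here' and the Segway plans the motion profile."""
    x_m: float          # target position in the map frame, meters
    y_m: float
    heading_rad: float  # target orientation

@dataclass
class WheelVelocityCommand:
    """Low level: Isaac plans the trajectory and streams per-wheel speeds."""
    left_rad_s: float   # commanded angular velocity, left wheel
    right_rad_s: float  # commanded angular velocity, right wheel
    duration_s: float   # how long to hold this velocity

# Interpretation A: one high-level command, the base does the profile math.
goal = GoToPoseCommand(x_m=3.0, y_m=1.5, heading_rad=0.0)

# Interpretation B: Isaac streams low-level commands at a fixed control rate.
tick = WheelVelocityCommand(left_rad_s=2.0, right_rad_s=2.0, duration_s=0.02)
```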
When everything is working, the information coming back from the wheel position encoders should match the commands given to the wheel motors, the IMU should register an appropriate acceleration, and the lidar, RealSense, and ZED should all confirm the updated position as well.
But how are conflicts resolved when the various data sources disagree by more than some margin? Wheels could slip, the IMU’s compass could be influenced by strong magnets, and some of the other sensors could be confused by glass doors or by a very large object moving towards Carter.
Is there some kind of internal voting among the data sources, or is there a priority list from most trusted to least trusted?
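For example, I could imagine something like inverse-variance weighting, where each source’s estimate counts in proportion to how much it is trusted. This is only my guess at the general idea, not how Isaac actually fuses its sensors:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of scalar position estimates.

    estimates: list of (value, variance) pairs, one per data source.
    A source with a large variance (low trust) contributes little, so a
    slipping wheel encoder could be handled by inflating its variance.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(val * w for (val, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total  # fused value and its variance

# Wheel odometry disagrees with lidar scan matching and IMU dead reckoning.
sources = [
    (1.20, 0.04),  # wheel odometry, meters (wheels may have slipped)
    (1.00, 0.01),  # lidar scan matching
    (1.05, 0.02),  # IMU dead reckoning
]
position, variance = fuse_estimates(sources)
print(f"fused position: {position:.3f} m (variance {variance:.4f})")
```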
Are the two depth cameras (RealSense and ZED) provided to demonstrate their respective SDKs, or are both necessary because the ZED has a longer range than the RealSense? Or is there some other difference between them that the Carter navigation stack makes use of?
What kind of safety features are there? Will Carter refuse to drive into a wall, a person, or a smaller object if one of the cameras or the lidar indicates an obstacle is in the way? How small an object can it detect?
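To be concrete about the behavior I’m hoping for, something along these lines (my own sketch, not Isaac code):

```python
def should_emergency_stop(lidar_ranges_m, stop_distance_m=0.4):
    """Stop if any lidar return is closer than the safety distance.

    lidar_ranges_m: iterable of range readings in meters; zero or
    negative values are treated as invalid returns. Note that a glass
    door may produce no return at all, which is exactly the failure
    mode I'm worried about.
    """
    return any(0.0 < r < stop_distance_m for r in lidar_ranges_m)
```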
We have IKEA glass tabletops that extend beyond their supports, and I want to make sure Carter doesn’t bump into them. I know that maps can be augmented with arbitrary polygons designating “restricted areas”, but I don’t know how much of that would be needed in a typical office environment.
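For instance, if every overhanging tabletop has to be annotated by hand, I imagine each one would need an entry roughly like this (an illustrative format only; I haven’t confirmed the actual map schema):

```python
# Hypothetical restricted-area annotation for one glass tabletop.
# Coordinates are in the map frame, in meters; the polygon covers the
# full tabletop footprint, since the glass overhangs the legs that the
# lidar actually sees.
glass_table_keepout = {
    "name": "ikea_glass_table_1",
    "type": "restricted_area",
    "polygon": [  # (x, y) corners of the tabletop outline
        (4.00, 2.00),
        (5.20, 2.00),
        (5.20, 2.80),
        (4.00, 2.80),
    ],
}
```

With a few dozen desks and tables, that’s a fair amount of manual measuring, so I’d like to know whether it’s really necessary.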