I am using the Xavier on my rover/robot, which interfaces with several sensors; roughly half of the processing runs on board and the rest on a server.
For SLAM (Simultaneous Localization and Mapping) I would like to use ROS, as it provides good mapping pipelines and simple methods to save/load maps. However, I am worried about the capability of the Xavier and wonder whether it can handle all of these tasks in parallel.
Does anyone have experience with a similar case?
You can refer to NVIDIA Isaac SDK | NVIDIA Developer for development; you can find some references there, so there is no need to worry about the AGX Xavier's capability.
The Xavier is capable. We run LiDAR, INS, GPS, and stereo cameras for 2D landmark SLAM, and end to end (sensor input to vehicle actuation request) takes at most 60 ms. You just have to make sure you are using the ROS transport hint tcpNoDelay() to reduce latency between nodes; udp() is usually the best option, however it doesn't work on the Xavier for some reason (it just crashes).
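For reference, here is a minimal roscpp sketch of how the transport hint is passed when subscribing; the topic name, message type, and callback are just placeholders for illustration, not our actual pipeline:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

// Placeholder callback: in a real stack this would feed the SLAM front end.
void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  ROS_INFO("Received scan with %zu ranges", scan->ranges.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_listener");
  ros::NodeHandle nh;

  // tcpNoDelay() disables Nagle's algorithm on the TCPROS connection,
  // which reduces per-message latency between nodes.
  ros::Subscriber sub = nh.subscribe(
      "scan", 10, scanCallback, ros::TransportHints().tcpNoDelay());

  // UDPROS would be requested with ros::TransportHints().udp() instead,
  // but as noted above that crashed for us on the Xavier.

  ros::spin();
  return 0;
}
```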
Not sure if you are referring to copyright issues, but it is usable both for prototyping (as in our case) and for production.
The images we were using are actually maps of our building generated by Hector SLAM (captured from rviz).