These are all the modules listed for Isaac Perceptor, as quoted in their documentation:
Isaac Perceptor leverages multiple Isaac ROS modules:
Isaac ROS Nova for time-synchronized multi-cam data.
Isaac ROS Visual SLAM for GPU-accelerated camera-based odometry.
Isaac ROS Depth Estimation for learning-based stereo-depth estimation.
Isaac ROS Nvblox for GPU-accelerated local 3D reconstruction.
Isaac ROS Image Pipeline for GPU-accelerated image processing.
The documentation for the Isaac Perceptor modules commonly references ZED, Intel RealSense, and Hawk cameras.
For example, on this page under prerequisites it states "Compatible stereo camera", and the only cameras mentioned are the ones I listed above, with some modules like Isaac ROS Nova requiring specific firmware and SDK versions (this I understand, because I think Nova actually launches and creates the camera nodes that publish the data, and that's why it is required). But if other cameras can be used, the docs should state something like:
"Other cameras may be used by altering the source code and configuring it for your camera."
But I do not see why any camera could not be used with the other modules, as long as the input image data has the structure the VSLAM visual odometry function expects. So is it that the Isaac ROS packages are just built for these specific cameras? Or are there config files I could change to give it information about the camera I'm using?
Or do I just have to use the non-ROS NVIDIA cuVSLAM library and create my own ROS 2 package? I am trying to optimize my navigation and localization stack on my Jetson device, but I don't want to have to dig through all the source code for every module just to get my answer. It's very confusing because some modules need strict compliance.
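For example, if topic remapping turns out to be all it takes, I imagine wiring a different stereo camera into the Visual SLAM node would look something like the launch-file sketch below. I'm guessing at the plugin name and the subscribed topic names here (they would need to be checked against the installed Isaac ROS release), and the right-hand topics are just placeholders for whatever the camera driver publishes:

```python
# Hypothetical launch-file sketch (not from the docs): wiring a non-Hawk stereo
# pair into the Visual SLAM node purely by topic remapping. The plugin name and
# the left-hand topic names are guesses to verify against the installed Isaac
# ROS release; the right-hand topics are placeholders for whatever the camera
# driver actually publishes.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    visual_slam_node = ComposableNode(
        package='isaac_ros_visual_slam',                          # assumed
        plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',  # assumed
        name='visual_slam_node',
        remappings=[
            # left: topics the VSLAM node subscribes to (version-dependent)
            # right: topics my stereo camera driver publishes (placeholders)
            ('visual_slam/image_0', '/camera/infra1/image_rect_raw'),
            ('visual_slam/camera_info_0', '/camera/infra1/camera_info'),
            ('visual_slam/image_1', '/camera/infra2/image_rect_raw'),
            ('visual_slam/camera_info_1', '/camera/infra2/camera_info'),
        ],
    )

    container = ComposableNodeContainer(
        name='visual_slam_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[visual_slam_node],
        output='screen',
    )
    return LaunchDescription([container])
```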
My setup is an Intel RealSense D435i and two Luxonis OAK-D Pros (I have a quadruped; the RealSense faces forward at the head of the robot, and the two Luxonis cameras are mounted midway along the body, facing left and right).
I was thinking of just using the image processing pipeline NVIDIA provides to process the images and then feeding them directly to the nvblox node (which states it just subscribes to specific topics of image data). The only problem is, I assume I would have to handle the synchronization myself? Can someone help explain what the best setup would be for my situation, and maybe help me understand a little how these modules work? Should I use the non-ROS equivalent package and just build my own nodes, instead of trying to look through the source code for the ROS packages and seeing if I can make it work for my case?
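If I do handle the synchronization myself, I assume it would be something like this message_filters sketch, pairing the depth and color streams of one camera by timestamp before handing them on (the topic names are placeholders for whatever the drivers publish):

```python
# Minimal sketch: approximate time synchronization of depth + color + camera_info
# from one camera using message_filters, before handing the set to a consumer.
# Topic names are placeholders for whatever the camera drivers publish.
import rclpy
from rclpy.node import Node
from message_filters import Subscriber, ApproximateTimeSynchronizer
from sensor_msgs.msg import Image, CameraInfo


class DepthColorSync(Node):
    def __init__(self):
        super().__init__('depth_color_sync')
        depth_sub = Subscriber(self, Image, '/camera/depth/image_rect_raw')
        color_sub = Subscriber(self, Image, '/camera/color/image_raw')
        info_sub = Subscriber(self, CameraInfo, '/camera/depth/camera_info')

        # Accept message sets whose stamps differ by at most 20 ms (slop).
        self.sync = ApproximateTimeSynchronizer(
            [depth_sub, color_sub, info_sub], queue_size=10, slop=0.02)
        self.sync.registerCallback(self.synced_callback)

    def synced_callback(self, depth_msg, color_msg, info_msg):
        # The three messages are time-aligned here; republish them, or restamp
        # them to a common stamp if a downstream node needs exact matches.
        self.get_logger().info(
            f'synced set at t={depth_msg.header.stamp.sec}.'
            f'{depth_msg.header.stamp.nanosec:09d}')


def main():
    rclpy.init()
    rclpy.spin(DepthColorSync())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```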
Isaac ROS supports all cameras designed to run on ROS 2 Humble (as of the time I’m writing this post).
The most important thing for each camera is to have the proper outputs required by the Isaac ROS node. In some specific cases, the camera needs to provide specific outputs at a minimum frequency. An example of this is the Isaac ROS Visual SLAM camera requirements.
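A quick way to check whether a camera meets a minimum-frequency requirement is to measure its actual publish rate (the same thing `ros2 topic hz` reports). For example, a small sketch with the topic name as a placeholder:

```python
# Quick sketch: measure the publish rate of an image topic to check it meets a
# node's minimum-frequency requirement (equivalent to `ros2 topic hz`).
# The topic name below is a placeholder for your camera's output.
import time
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class RateCheck(Node):
    def __init__(self, topic='/camera/infra1/image_rect_raw'):
        super().__init__('rate_check')
        self.stamps = []
        self.create_subscription(Image, topic, self.on_image, 10)
        self.create_timer(2.0, self.report)  # print a rate estimate every 2 s

    def on_image(self, msg):
        self.stamps.append(time.monotonic())

    def report(self):
        if len(self.stamps) > 1:
            window = self.stamps[-1] - self.stamps[0]
            hz = (len(self.stamps) - 1) / window if window > 0 else 0.0
            self.get_logger().info(f'~{hz:.1f} Hz over the last {window:.1f} s')
        self.stamps = self.stamps[-1:]


def main():
    rclpy.init()
    rclpy.spin(RateCheck())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```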
In the link I included, it tells you what topics nvblox_ros subscribes to, but it doesn't say whether the images should all be the same resolution.
Can you tell me if we should leave all cameras at full resolution, or if we should downsample them? Also, does it matter if all images are the same resolution? Other frameworks require this. It also states that the camera index "i" can only be 0-3, so the maximum number of cameras is 4? I thought the documentation said many more cameras could be used, so is this limit just on the specific ROS implementation of the nvblox_node?
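To make the question concrete, my mental model of feeding three cameras to nvblox is something like the launch sketch below. I'm assuming the indexed topic names and a num_cameras parameter from what the docs imply (they need to be verified against the installed nvblox_ros version), and the right-hand topics are placeholders for my RealSense and OAK-D drivers:

```python
# Hypothetical sketch: feeding three cameras to the nvblox node by remapping its
# indexed input topics (camera_0 ... camera_2). The left-hand topic names, the
# executable name, and the num_cameras parameter are assumptions to verify
# against the installed nvblox_ros version; the right-hand topics are
# placeholders for my RealSense and OAK-D drivers.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    camera_topics = [
        ('/front_realsense/depth/image_rect_raw', '/front_realsense/depth/camera_info'),
        ('/left_oakd/stereo/depth', '/left_oakd/stereo/camera_info'),
        ('/right_oakd/stereo/depth', '/right_oakd/stereo/camera_info'),
    ]

    remappings = []
    for i, (depth_topic, info_topic) in enumerate(camera_topics):
        remappings += [
            (f'camera_{i}/depth/image', depth_topic),
            (f'camera_{i}/depth/camera_info', info_topic),
        ]

    nvblox_node = Node(
        package='nvblox_ros',       # assumed package name
        executable='nvblox_node',   # assumed executable name
        name='nvblox_node',
        remappings=remappings,
        parameters=[{'num_cameras': len(camera_topics)}],  # assumed parameter
        output='screen',
    )
    return LaunchDescription([nvblox_node])
```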
Also, if I have 3 cameras, do I need to sync the data so it is published at the same time? Or will nvblox handle the data by each camera's timestamp, updating the map according to that timestamp, so that syncing the cameras to publish at the same time is not needed?
Is the only way to really use these modules to read the source code and find out what is going on myself? It seems like the ROS nvblox package is very particular to the specific examples and not that modular with a few tweaks. Even the config files aren't explained, and if the YAML config files are dependent on the implementation of the ROS package, I'd have to go into the source code and look at this myself. So is the best thing to edit these files, or to just use the Python/C++ nvblox package and make my own ROS package from it? The documentation doesn't really tell much about how it works, to easily plug and play my own setup. (Also, please look at my above response too; I thought it said I was replying to you when I posted it, but it appears that was not the case. Thank you.)
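For the config side, my current understanding is that those YAML files are just standard ROS 2 parameter files loaded at launch, so instead of editing the installed file I could layer my own overrides on top, something like the sketch below (the file path and parameter names are placeholders I'd still have to confirm in the package):

```python
# Generic ROS 2 pattern (not nvblox-specific): load the package's own parameter
# YAML first, then layer overrides on top, instead of editing the installed file.
# The config file path and the parameter names below are placeholders.
import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    default_params = os.path.join(
        get_package_share_directory('nvblox_ros'),  # assumed package name
        'config', 'nvblox.yaml')                    # placeholder file name

    nvblox_node = Node(
        package='nvblox_ros',
        executable='nvblox_node',
        name='nvblox_node',
        parameters=[
            default_params,        # package defaults first
            {'voxel_size': 0.05},  # then my overrides (placeholder name/value)
        ],
        output='screen',
    )
    return LaunchDescription([nvblox_node])
```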
No. Different cameras can have completely different parameters.
Yes, there is a camera limit set to 4.
Not every camera can capture images at different times. However, for optimal results, these images need to be accurately timestamped by the host machine.
Do you mean to say "[no,] every camera can capture images at different times"?
In other words, are you saying the nvblox implementation accounts for the timestamp so it can reference the correct frame of the robot at the time the image was taken?
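What I'm picturing is that a consumer with a TF buffer can look up the camera pose at each image's header stamp rather than at "now", so unsynchronized cameras still get integrated against the right pose; something like this sketch (frame and topic names are placeholders for my setup):

```python
# Sketch of timestamp-based pose lookup: query TF for the camera pose at the
# image's header stamp (not at "now"), so each camera's own timestamp is
# respected even when the cameras are not hardware-synchronized.
# Frame and topic names are placeholders for my setup.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from rclpy.duration import Duration
from sensor_msgs.msg import Image
from tf2_ros import Buffer, TransformListener, TransformException


class StampedPoseLookup(Node):
    def __init__(self):
        super().__init__('stamped_pose_lookup')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_subscription(
            Image, '/front_realsense/depth/image_rect_raw', self.on_depth, 10)

    def on_depth(self, msg):
        stamp = Time.from_msg(msg.header.stamp)
        try:
            # Pose of the camera frame in the odometry frame at capture time.
            tf = self.tf_buffer.lookup_transform(
                'odom', msg.header.frame_id, stamp, timeout=Duration(seconds=0.1))
        except TransformException as ex:
            self.get_logger().warn(f'TF lookup failed: {ex}')
            return
        t = tf.transform.translation
        self.get_logger().info(f'camera at ({t.x:.2f}, {t.y:.2f}, {t.z:.2f})')


def main():
    rclpy.init()
    rclpy.spin(StampedPoseLookup())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```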
Also, the documentation is not that useful for me to use without looking at the source code. It is highly abstracted, so reading the source code is 100% required for me to understand what is going on and to answer many of the questions I have. The GitHub repo was not made as an easy plug-and-play interface with instructions for what needs to be provided for custom setups, so I've been working on understanding the source code instead, which has helped me a lot more than the documentation link you provided. I hope this helps other users understand that the documentation is a high-level abstraction and does not talk about how you would use it for your use case outside the scope of the examples it provides.