How to use the covariance matrix with Visual SLAM nodes in ROS 2 for multiple cameras?

I am using Isaac ROS Visual SLAM on a Jetson AGX Xavier. I have set up two ZED 2 cameras, one on the front and one on the back of a robot that uses the Xavier. I can launch the two stereo cameras in ROS 2 Humble with separate nodes, and I need advice on how to set up the Visual SLAM node(s) to take input from both cameras and output a single pose for the robot based on both the front-camera and back-camera SLAM. How do I use the covariance matrix in this case, and do you have examples or literature I can use for multiple cameras?

You will need to run one instance of the Isaac ROS Visual SLAM node per stereo camera pair (i.e., one instance for the front camera and one for the back camera). Make sure to set the parameters publish_odom_to_base_tf and publish_map_to_base_tf to false on both instances; this prevents the two instances from publishing conflicting transforms to the pose tree. A launch sketch follows below.
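For reference, here is a minimal launch sketch (untested) of how the two instances could be brought up in separate namespaces. The component plugin name and the stereo input topic names differ between Isaac ROS releases, and the ZED topic names shown here are assumptions; check the isaac_ros_visual_slam README for the exact names in your version.

```python
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    def visual_slam_node(ns, cam):
        # One VisualSlamNode per stereo pair, each in its own namespace.
        # Both TF publishers are disabled so the two instances do not
        # write conflicting transforms to the pose tree.
        return ComposableNode(
            package='isaac_ros_visual_slam',
            # Plugin name may differ by release; verify against your install.
            plugin='nvidia::isaac_ros::visual_slam::VisualSlamNode',
            namespace=ns,
            parameters=[{
                'publish_odom_to_base_tf': False,
                'publish_map_to_base_tf': False,
            }],
            # Input topic names on both sides are assumptions; remap to
            # whatever your ZED wrapper and Isaac ROS release actually use.
            remappings=[
                ('stereo_camera/left/image', f'/{cam}/left/image_rect'),
                ('stereo_camera/left/camera_info', f'/{cam}/left/camera_info'),
                ('stereo_camera/right/image', f'/{cam}/right/image_rect'),
                ('stereo_camera/right/camera_info', f'/{cam}/right/camera_info'),
            ])

    container = ComposableNodeContainer(
        name='vslam_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[
            visual_slam_node('front', 'zed_front'),
            visual_slam_node('back', 'zed_back'),
        ])
    return LaunchDescription([container])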

Instead, you will need to use a third-party package like robot_localization to fuse the pose-with-covariance outputs that the two Visual SLAM instances publish on their visual_slam/tracking/vo_pose_covariance topics. This is where the covariance matrix comes in: the fusion filter (e.g., an EKF) uses the covariance embedded in each message to decide how much to trust each camera's estimate, weighting the two pose streams accordingly.
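As a sketch of the fusion side, the following (untested) launch file starts a robot_localization EKF subscribed to the two pose topics, assuming the front/back namespaces from the sketch above. The pose0_config/pose1_config vectors select which state variables each measurement updates; the EKF then weights each measurement by the covariance matrix carried in the message itself.

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    ekf = Node(
        package='robot_localization',
        executable='ekf_node',
        name='ekf_filter_node',
        parameters=[{
            'frequency': 30.0,
            'map_frame': 'map',
            'odom_frame': 'odom',
            'base_link_frame': 'base_link',
            'world_frame': 'odom',
            # Front camera pose (geometry_msgs/PoseWithCovarianceStamped).
            # Topic names assume the 'front'/'back' namespaces used above.
            'pose0': '/front/visual_slam/tracking/vo_pose_covariance',
            # Each *_config vector has 15 booleans:
            # x, y, z, roll, pitch, yaw, then velocities and accelerations.
            # Only the pose entries are enabled for a pose measurement.
            'pose0_config': [True, True, True,
                             True, True, True,
                             False, False, False,
                             False, False, False,
                             False, False, False],
            # Back camera pose.
            'pose1': '/back/visual_slam/tracking/vo_pose_covariance',
            'pose1_config': [True, True, True,
                             True, True, True,
                             False, False, False,
                             False, False, False,
                             False, False, False],
        }])
    return LaunchDescription([ekf])
```

Note that for the fused output to be meaningful, both Visual SLAM instances should report the pose of a common robot frame rather than of their own camera frames (via the base_frame parameter, if your release exposes it); otherwise the EKF would be fusing the poses of two different physical points on the robot.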