When training agents with deep reinforcement learning, it helps to be able to generate photorealistic visual inputs from a simulator.
The Gazebo simulator that ships with ROS distributions isn't photorealistic enough.
The Quixel ReBirth video is a good example of the level of realism that can be achieved using UE4. https://www.youtube.com/watch?v=9fC20NWhx4s
NVIDIA has made advances with techniques such as Structured Domain Randomization https://research.nvidia.com/publication/2018-10_Structured-Domain-Randomization, training neural networks purely on photorealistic simulator data and still performing well on standard datasets such as KITTI.
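The core idea of domain randomization is to randomize scene parameters (lighting, textures, distractor objects, camera pose) for every synthetic frame so the trained network cannot overfit to any one rendering. A minimal sketch of such a parameter sampler is below; the parameter names and ranges are illustrative assumptions, not values from the NVIDIA paper:

```python
import random

def sample_scene_params(rng):
    """Sample randomized scene parameters for one synthetic training frame.
    These keys and ranges are hypothetical examples of what a UE4 data
    generation script might randomize per frame."""
    return {
        "sun_elevation_deg": rng.uniform(5.0, 90.0),     # lighting angle
        "texture_id": rng.randrange(100),                # ground/wall texture
        "num_distractors": rng.randrange(0, 10),         # clutter objects
        "camera_height_m": rng.uniform(1.0, 2.0),        # sensor placement
    }

# Generate a small batch of randomized scene configurations.
rng = random.Random(42)
params = [sample_scene_params(rng) for _ in range(3)]
```

Each sampled dictionary would then drive one render in the simulator, producing a labeled training image under a unique combination of conditions.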
If you need photorealistic visual input, your choices are more or less limited to Unreal Engine 4 or Unity. Others have built photorealistic simulators on top of UE4, such as Microsoft with AirSim https://github.com/microsoft/AirSim and DeepDrive.io https://deepdrive.io/.
While the UE4 license is proprietary, you do get access to the full source code once you sign up, and there is a 5% royalty if you commercialize a product built on the engine. Epic Games puts a phenomenal amount of support behind UE4, with constant updates to the engine sources, tutorials, roadshows, and evangelism.
There is an Isaac SDK ROS bridge example here: https://docs.nvidia.com/isaac/isaac/apps/samples/navigation_rosbridge/doc/ros.html
Microsoft's AirSim similarly has some ROS integration examples here.
So you can continue to use ROS, but at some point drop Gazebo as the simulator and use UE4 instead for training and simulation.
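From the training code's point of view, swapping Gazebo for a UE4-based simulator mostly means swapping the object that produces observations; the RL loop itself is unchanged. A minimal sketch of that decoupling is below, with a stand-in simulator class (the class and method names are hypothetical, not an AirSim or Isaac API):

```python
import random

class FakePhotorealisticSim:
    """Stand-in for a UE4-based simulator backend (e.g. AirSim via its
    ROS bridge). Returns random 'images' so the training control flow
    can be exercised without any engine installed."""

    def reset(self):
        # A real backend would return a rendered camera frame here.
        return [[random.random() for _ in range(4)] for _ in range(4)]

    def step(self, action):
        obs = self.reset()
        reward = 1.0 if action == 0 else 0.0   # toy reward for the demo
        done = random.random() < 0.1
        return obs, reward, done

def run_episode(sim, policy, max_steps=50):
    """Generic RL rollout: only `sim` changes when the backend changes."""
    obs = sim.reset()
    total = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done = sim.step(action)
        total += reward
        if done:
            break
    return total

total = run_episode(FakePhotorealisticSim(), lambda obs: 0)
```

Keeping the simulator behind a small interface like this is what makes it practical to prototype against Gazebo and later point the same training loop at a UE4 backend.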