I am trying to set up a vision-based RL environment with two cameras. With the regular Camera sensor I can get both camera feeds, but that approach does not scale to simulating many environments in parallel. TiledCamera, in contrast, scales to many parallel environments, but as soon as I add a second tiled camera I get the following error: “[omni.sensors.tiled.plugin] CUDA error 1: cudaErrorInvalidValue - invalid arg”.
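
For context, the setup I have in mind is roughly the following. This is only a sketch: the prim paths, offsets, and camera parameters are placeholders, and I am assuming Isaac Lab's TiledCameraCfg API (the namespace may be `isaaclab.*` rather than `omni.isaac.lab.*` depending on the release):

```python
# Sketch of the two-tiled-camera setup (placeholder paths/parameters).
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.sensors import TiledCameraCfg

# First tiled camera -- on its own this works across all parallel envs.
front_camera = TiledCameraCfg(
    prim_path="/World/envs/env_.*/FrontCamera",
    offset=TiledCameraCfg.OffsetCfg(
        pos=(-2.0, 0.0, 1.0), rot=(1.0, 0.0, 0.0, 0.0), convention="world"
    ),
    data_types=["rgb"],
    spawn=sim_utils.PinholeCameraCfg(
        focal_length=24.0, focus_distance=400.0,
        horizontal_aperture=20.955, clipping_range=(0.1, 20.0),
    ),
    width=80,
    height=80,
)

# Second tiled camera -- adding this one is what triggers the CUDA error.
side_camera = TiledCameraCfg(
    prim_path="/World/envs/env_.*/SideCamera",
    offset=TiledCameraCfg.OffsetCfg(
        pos=(0.0, -2.0, 1.0), rot=(0.7071, 0.0, 0.0, 0.7071), convention="world"
    ),
    data_types=["rgb"],
    spawn=sim_utils.PinholeCameraCfg(
        focal_length=24.0, focus_distance=400.0,
        horizontal_aperture=20.955, clipping_range=(0.1, 20.0),
    ),
    width=80,
    height=80,
)
```

Either camera works fine by itself; the error only appears once both tiled cameras are registered in the same scene.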
Has anyone else managed to set up two tiled cameras in a scene?