Hello, I am currently developing a procedural outdoor environment for robot testing as a Python extension. My scene is quite heavy, with a large number of instances, but the simulation ran fine in real time even with an OS1 lidar active.
However, after adding an RGB camera to my Husky robot, the simulation became too slow. I'm currently using isaacsim.ros2.bridge with rep.writers.get(rv + "ROS2PublishImage") to publish the camera data. I'm aware that camera data is heavy and can significantly slow down a simulation.
Does extracting the camera data and publishing it directly via Python noticeably improve performance compared to letting the rep.writers module and OmniGraph do the job for me? And does this still affect performance when the simulation runs in headless mode?
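To make clear what I mean by "publishing directly via Python": the idea is to grab the RGBA frame each step (e.g. via camera.get_rgba()) and fill a sensor_msgs/Image from my own rclpy node instead of attaching a writer. The sketch below only shows the pure-Python packing step; pack_rgba_frame is a hypothetical helper of mine, and the rclpy wiring in the comments is illustrative, not Isaac Sim API:

```python
# Sketch of the "manual" publishing path I have in mind (hypothetical helper;
# the frame itself would come from camera.get_rgba() inside the sim loop, and
# the returned dict maps one-to-one onto sensor_msgs/Image fields).

def pack_rgba_frame(frame_bytes, width, height, frame_id="base_link"):
    """Pack a raw RGBA byte buffer into the fields sensor_msgs/Image expects."""
    step = width * 4  # bytes per row: 4 channels (rgba8), 1 byte each
    expected = step * height
    if len(frame_bytes) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(frame_bytes)}")
    return {
        "header_frame_id": frame_id,
        "height": height,
        "width": width,
        "encoding": "rgba8",
        "is_bigendian": 0,
        "step": step,
        "data": frame_bytes,
    }

# In the sim loop this would roughly become:
#   fields = pack_rgba_frame(camera.get_rgba().tobytes(), width, height)
#   ...copy fields into a sensor_msgs.msg.Image and publisher.publish(msg)
```
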
Here is my current camera creation/publishing code for reference:
def Create_cam(self):
    import numpy as np
    import omni.replicator.core as rep
    import isaacsim.core.utils.numpy.rotations as rot_utils
    from isaacsim.sensors.camera import Camera

    width, height = 480, 320
    camera_matrix = [[455.8, 0.0, 943.8], [0.0, 454.7, 602.3], [0.0, 0.0, 1.0]]
    distortion_coefficients = [0.05, 0.01, -0.003, -0.0005]
    pixel_size = 3  # microns
    f_stop = 1.8
    focus_distance = 0.6  # meters
    diagonal_fov = 235  # degrees

    camera = Camera(
        prim_path="/World/Robot/base_link/camera",
        translation=(0, 0, 1),
        frequency=10,
        resolution=(width, height),
        orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
    )
    camera.initialize()

    # Derive lens parameters from the OpenCV-style intrinsics.
    ((fx, _, cx), (_, fy, cy), _) = camera_matrix
    horizontal_aperture = pixel_size * 1e-3 * width   # mm
    vertical_aperture = pixel_size * 1e-3 * height    # mm
    focal_length_x = fx * pixel_size * 1e-3
    focal_length_y = fy * pixel_size * 1e-3
    focal_length = (focal_length_x + focal_length_y) / 2

    camera.set_focal_length(focal_length / 10.0)
    camera.set_focus_distance(focus_distance)
    camera.set_lens_aperture(f_stop * 100.0)
    camera.set_horizontal_aperture(horizontal_aperture / 10.0)
    camera.set_vertical_aperture(vertical_aperture / 10.0)
    camera.set_clipping_range(0.05, 1.0e5)
    camera.set_projection_type("fisheyePolynomial")
    camera.set_kannala_brandt_properties(width, height, cx, cy, diagonal_fov, distortion_coefficients)

    # Publish to ROS 2 via the replicator writer.
    import omni.syntheticdata
    import omni.syntheticdata._syntheticdata as sd

    render_product = camera.get_render_product_path()
    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
    writer = rep.writers.get(rv + "ROS2PublishImage")
    writer.initialize(
        frameId="base_link",  # match the prim path /World/Robot/base_link
        nodeNamespace="",
        queueSize=1,
        topicName="/camera/image_raw",
    )
    writer.attach([render_product])
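In case it matters for the answer: independent of the publish path, I'm also considering capturing at a lower rate than the physics step by simply skipping steps. A minimal sketch of the frame-skipping logic I'd use (pure Python; RateDivider is my own helper name, not an Isaac Sim API):

```python
class RateDivider:
    """Decide on which sim steps to capture/publish, given sim rate and target Hz."""

    def __init__(self, sim_hz, target_hz):
        # Capture every n-th physics step (at least every step).
        self.every = max(1, round(sim_hz / target_hz))
        self.count = 0

    def should_capture(self):
        fire = (self.count % self.every == 0)
        self.count += 1
        return fire

# e.g. 60 Hz physics with a 10 Hz camera -> capture every 6th step
```
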
Any other simulation or optimization tips would be greatly appreciated.
Thank you!
