Isaac Sim lane following with JetBot

This link is about training a JetBot using Isaac Sim; there used to be plenty of tutorials for this on the NVIDIA website, but they are no longer there.
I am trying to train a JetBot to do lane following using the camera module.

This link sadly has plenty of deprecated code. For instance, "OmniKitHelper" is no longer used; the replacement is "from omni.isaac.kit import SimulationApp".
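For reference, the current standalone bootstrap looks like this (the SimulationApp has to be created before any other omni.isaac imports):

from omni.isaac.kit import SimulationApp

# Must run before any other omni.isaac imports
simulation_app = SimulationApp({"headless": False})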

I use the camera sensor via "from omni.isaac.sensor import Camera":

camera = Camera(
    prim_path="/World/Jetbot/chassis/rgb_camera/jetbot_camera",
    resolution=(256, 256),
)

my_world.scene.add_default_ground_plane()
my_controller = DifferentialController(name="simple_control", wheel_radius=0.03, wheel_base=0.1125)
my_world.reset()
camera.initialize()

i = 0
camera.add_motion_vectors_to_frame()

while simulation_app.is_running():
    my_world.step(render=True)
    print(camera.get_current_frame())
    if my_world.is_playing():
        # Reset the world and controller when the timeline restarts
        if my_world.current_time_step_index == 0:
            my_world.reset()
            my_controller.reset()

        # After 100 steps, save a frame and print the motion vectors
        if i == 100:
            plt.imsave(r"C:\Users\AI_Admin\Downloads\jetbot\Jebot\image\im.png", camera.get_rgba()[:, :, :3])
            print(camera.get_current_frame()["motion_vectors"])

        i += 1

I followed some of the tutorials, which only show a standalone example of training the bot to reach an object, but sadly nothing on using the JetBot's camera to make real-time decisions with an ML model.
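
The kind of loop I am after looks roughly like this (just a sketch, not working code: "policy" stands in for a trained model and "my_jetbot" for the WheeledRobot instance, neither is real API here):

while simulation_app.is_running():
    my_world.step(render=True)
    if my_world.is_playing():
        # Grab the latest RGB frame from the sensor
        rgb = camera.get_rgba()[:, :, :3]
        # Placeholder model: maps the image to (forward_velocity, angular_velocity)
        forward_v, angular_v = policy(rgb)
        # Turn the command into wheel actions and apply them
        my_jetbot.apply_wheel_actions(my_controller.forward(command=[forward_v, angular_v]))
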
Any suggestions are welcome.

There is a bug in the current camera API that will be addressed in the next release, but I have a workaround! You will need to use the Replicator API directly.

First, we need to define the render product and attach it to an annotator.

import omni.replicator.core as rep

# Create a render product from the camera prim at the capture resolution
render_product = rep.create.render_product("/World/jetbot/chassis/rgb_camera/jetbot_camera", (100, 100))

# Attach an RGB annotator to that render product
rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annot.attach([render_product])

Then, when we want to capture, we step the orchestrator and retrieve the data from our annotator:

rep.orchestrator.step()
rgb_data = rgb_annot.get_data()
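
If you want to sanity-check what comes back, the rgb annotator should hand you an HxWx4 RGBA array, so you can dump a frame to disk much like before:

import numpy as np
import matplotlib.pyplot as plt

# The "rgb" annotator returns an HxWx4 RGBA array; drop alpha before saving
plt.imsave("capture.png", np.asarray(rgb_data)[:, :, :3])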

This is a much more elegant solution than the hack I came up with for my live stream :/ such is life!