Blurry Camera Images

Hello there,

I am using an implementation similar to Isaac Orbit to create a camera for visualizing my reinforcement learning environment. The camera creation code looks like:

class IsaacEnv:
    def render(self, mode: str="human"):
        if mode == "human":
            return None
        elif mode == "rgb_array":
            # check if viewport is enabled -- if not, then complain because we won't get any data
            if not self.enable_viewport:
                raise RuntimeError(
                    f"Cannot render '{mode}' when enable viewport is False. Please check the provided"
                    "arguments to the environment class at initialization."
                )
            # obtain the rgb data
            rgb_data = self._rgb_annotator.get_data()
            # convert to numpy array
            rgb_data = np.frombuffer(rgb_data, dtype=np.uint8).reshape(*rgb_data.shape)
            # return the rgb data
            return rgb_data[:, :, :3]
        else:
            raise NotImplementedError(
                f"Render mode '{mode}' is not supported. Please use: {self.metadata['render.modes']}."
            )

    def _create_viewport_render_product(self):
        """Create a render product of the viewport for rendering."""
        # set camera view for "/OmniverseKit_Persp" camera
        set_camera_view(eye=self.cfg.viewer.eye, target=self.cfg.viewer.lookat)

        # check if flatcache is enabled
        # this is needed to flush the flatcache data into Hydra manually when calling `env.render()`
        # ref: https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_physics.html
        if self.sim.get_physics_context().use_flatcache:
            from omni.physxflatcache import get_physx_flatcache_interface

            # acquire flatcache interface
            self._flatcache_iface = get_physx_flatcache_interface()

        # check if viewport is enabled before creating render product
        if self.enable_viewport:
            import omni.replicator.core as rep

            # create render product
            self._render_product = rep.create.render_product(
                "/OmniverseKit_Persp", tuple(self.cfg.viewer.resolution)
            )
            # create rgb annotator -- used to read data from the render product
            self._rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb", device="cpu")
            self._rgb_annotator.attach([self._render_product])
        else:
            carb.log_info("Viewport is disabled. Skipping creation of render product.")

But the captured images, and hence the resulting video, appear to have motion blur:

I ensured that the physics_dt and rendering_dt of the SimulationContext are identical.
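For reference, a minimal sketch of matching the two rates via omni.isaac.core's SimulationContext; the 60 Hz values are illustrative, not the actual configuration used here:

from omni.isaac.core import SimulationContext

# Illustrative values: both rates at 60 Hz, so every physics step
# corresponds to exactly one rendered frame.
sim = SimulationContext(physics_dt=1.0 / 60.0, rendering_dt=1.0 / 60.0)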

What is the cause of the problem, and how can I address it?

Thanks.

Hi @btx0424 - The motion blur in your captured images could be due to a few reasons:

  1. Camera Settings: Check if the camera settings in your environment have motion blur enabled. If so, you might want to disable it for clearer images.
  2. Frame Rate: If the frame rate of your simulation is not high enough, it could cause motion blur in the captured images. Try increasing the frame rate.
  3. Physics Simulation: If the physics simulation is not accurate or is running at a different rate than the rendering, it could cause motion blur. You mentioned that you have ensured physics_dt and rendering_dt of the SimulationContext are identical, which is good. However, you might want to check other physics settings as well.
  4. Rendering Settings: Check the rendering settings in your environment. Some rendering settings or techniques could cause motion blur.

To address the issue, you could try the following:

  1. Disable motion blur in the camera settings (see the sketch after this list).
  2. Increase the frame rate of your simulation.
  3. Check and adjust the physics settings in your environment.
  4. Check and adjust the rendering settings in your environment.
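As noted in item 1, motion blur can usually also be toggled programmatically; a minimal sketch via carb settings, where the "/rtx/post/motionblur/enabled" path is an assumption to verify against your Kit build:

import carb.settings

settings = carb.settings.get_settings()
# Assumed key for Render Settings -> Post Processing -> Motion Blur;
# confirm the exact path in your Kit build before relying on it.
settings.set("/rtx/post/motionblur/enabled", False)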

After further playing with the settings, here are my findings:

  • Render Settings -> Post Processing -> Motion Blur is off by default and has no obvious effect when turned on.
  • If I switch Renderer -> Rendering Mode from default to wireframe, the blurry effects disappear, but the resulting visuals are in general not what we want.
  • Strangely enough, I found that if I zoom in or move the camera so that the viewing direction is more perpendicular to the motion direction, the blurry effects disappear or become less visible.
  • I was using the standalone workflow for doing RL. However, I failed to reproduce the blurry effects in the GUI workflow, which suggests it might be related to how Isaac Sim is started up. But I have no clue which settings make the difference or how to check for them.

Can you offer some more specific guidelines?

Hi there,

Are you using RayTracedLighting or PathTracing for rendering? Did you check whether this could be an RTSubframes issue?
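A minimal sketch of checking the active render mode from Python; the "/rtx/rendermode" key and the value names are assumptions to verify against your Kit version:

import carb.settings

# "RaytracedLighting" = RTX Real-Time, "PathTracing" = RTX Path Tracing
# (value names assumed; confirm in your Kit version's render settings).
render_mode = carb.settings.get_settings().get("/rtx/rendermode")
print("Active render mode:", render_mode)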

Can you try pausing the simulation before getting the annotator data?
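If you want to script that check, here is a minimal sketch of pausing the timeline around the read, assuming rgb_annotator is the annotator created in the first post:

import omni.timeline

timeline = omni.timeline.get_timeline_interface()
timeline.pause()                      # stop the simulation first
rgb_data = rgb_annotator.get_data()   # read while nothing is moving
timeline.play()                       # then resume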

Let me know how it goes.

Best,
Andrei

Hi. Were you ever able to solve this issue? I'm having the same problem. (See the yellow ghost image where the yellow item was moved.)

Hi there, this looks like the ghosting issue that appears when an object is teleported and the renderer needs more rendered frames to clear out the temporal data. You can address this via rt_subframes, which renders extra subframes for every capture.

You can do this either when capturing via the step function:

  • rep.orchestrator.step(rt_subframes=16)

Or by using rep.orchestrator.set_next_rt_subframes(16):

def set_next_rt_subframes(rt_subframes: int) -> None:
    """Specify the number of subframes to render

    Specify the number of subframes to render. During subframe generation, the simulation is paused.
    This is often beneficial when large scene changes occur to reduce rendering artifacts or to allow materials
    to fully load. This setting is enabled for both RTX Real-Time and Path Tracing render modes. Values must be
    greater than ``0``.

    Args:
        rt_subframes: Number of subframes to render for the next frame. Resets on every frame.
    """

Or through global settings, though this will render the extra frames for every capture:

import carb
carb_settings = carb.settings.get_settings()
carb_settings.set("/omni/replicator/RTSubframes", 16)

Hi. I later solved the issue by changing the antialiasing settings when creating the SimulationApp.
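For reference, a minimal sketch of where such a change goes; the exact settings that fixed it are not shared in this thread, so the values below are illustrative only:

from omni.isaac.kit import SimulationApp

# Illustrative launch config: "anti_aliasing" selects the anti-aliasing
# mode. The actual value that resolved the blur is not shown here.
simulation_app = SimulationApp({"headless": True, "anti_aliasing": 0})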

Could you provide the changes you did for other users that might encounter the same issue? Thanks!


@ahaidu thanks for the suggestion to add subframes. My code is based on the Scripting extension template and thus cannot run rep.orchestrator.step(rt_subframes=16), but I am able to use the global settings with carb. I compared rt_subframes = 1, 16, and 32; with 32, my naked eye can no longer see the shadow. The caveat is the time cost: the run with rt_subframes=32 took 30x longer than the one with rt_subframes=1, which is a steep cost if I want to run at scale. With rt_subframes=16, the shadow is much reduced but not entirely removed.

You can also use rep.orchestrator.step_async() for the extension workflow.

How do I use rep.orchestrator.step_async() in an extension properly? I tried

asyncio.ensure_future(rep.orchestrator.step_async())

It got stuck, with the worker thread in an infinite dequeue loop.

You can call it via await rep.orchestrator.step_async() from an async function. What does your extension workflow look like?

You can take a look at the synthetic data recorder as an extension workflow example: omni.isaac.synthetic_recorder/omni/isaac/synthetic_recorder/synthetic_recorder_extension.py
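Based on the pattern above, a minimal sketch of an async capture helper for the extension workflow; the helper name and the rt_subframes value are illustrative:

import asyncio

import omni.replicator.core as rep

async def capture_async():
    # step_async must be awaited from an async function; it advances
    # rendering (with optional extra subframes) and returns once the
    # frame data is in sync with the stage.
    await rep.orchestrator.step_async(rt_subframes=16)
    # ... read annotator / camera data here ...

# Schedule the coroutine on Kit's asyncio loop, which is driven by the
# main thread between app updates.
asyncio.ensure_future(capture_async())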

@ahaidu I am new to Isaac Sim. I am currently using Isaac Sim 4.0.0, and my workflow was created from the Scripting Template, to which I then added my custom code.

I've just created a brand-new copy of the Scripting Template. When I added this small piece of code to the goto_position() function, the code got stuck and the arm no longer moved to the target cube.

I tried playing with the "Run/Stop" state button. On every "Run", goto_position() runs and then gets stuck at asyncio.ensure_future(rep.orchestrator.step_async()).

goto_position() is not an async function, so I cannot add await rep.orchestrator.step_async() to it directly. How should I use rep.orchestrator.step_async() in the Scripting Template?

Depending on what you are trying to achieve, there may be multiple approaches.

It might also make sense to go over the Script Editor versions of the basic snippets to get a better understanding of the workflows:

@ahaidu Unfortunately, I cannot run any of the example code in the link from my extension code, because it targets the Standalone or Script Editor workflows. Do you have an example specifically for extensions?

Or did I make the wrong choice in using an extension over a standalone app? It's still not too late to change if support for extensions is not as good as for standalone apps. Please advise. Thank you!

The choices depend on what you are trying to achieve:

  • script editor – quick testing of functionalities, starting demos, running workflows that can be written in one script
  • standalone mode – when you want full control of when to update the UI, renderer, etc.
  • extension + UI – when you want to run things interactively through the UI, mostly in async mode

Here is an example running explicitly through extension + UI:

  • omni.isaac.synthetic_recorder/omni/isaac/synthetic_recorder/synthetic_recorder_extension.py
  • you should use the latest Isaac Sim, which uses rep.orchestrator.step_async instead of rep.orchestrator.run_async
  • keep in mind to set rep.orchestrator.set_capture_on_play(False) when capturing at specific steps only (e.g. via step_async); otherwise, if you use a writer, your data will be written on every advanced timeline frame (see the sketch below)
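A minimal sketch of that capture-on-play setup, typically done once at initialization:

import omni.replicator.core as rep

# Disable automatic capture on timeline play so that data is only
# produced when explicitly stepping (e.g. via step_async).
rep.orchestrator.set_capture_on_play(False)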

Thank you @ahaidu for this information. I was able to resolve the "hanging" issue by adding yield() right after asyncio.ensure_future(), as shown below. Unfortunately, I can notice a clear delay between the captured image and the code being executed; I think the image is delayed by at least 1 or 2 frames.

I also run rep.orchestrator.set_capture_on_play(False) at the initialization step.
I didn't use any replicator writer; instead I used camera.get_current_frame(clone=False) to get the RGB data.

                async def get_camera_shot_async():
                    # Insert subframes to eliminate shadow. Default is 1. Try 16 or 32.
                    # https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/subframes_examples.html
                    subframes = 16
                    await rep.orchestrator.step_async(
                        rt_subframes=subframes, pause_timeline=False
                    )
                    screenshot_data = self.cell.get_camerashot(ts)
                    self.output_writer.record_data(screenshot_data)

                asyncio.ensure_future(get_camera_shot_async())

                # TODO: yield() is necessary here to allow async function to run without
                # putting the main thread into hanging. But why?
                yield ()

Hi there,

the stage-to-frame delay should not happen when using the step function to capture the data; the step function waits for the frames in flight and only accesses them once they are in sync with the stage/simulation.

To further narrow down the issue, can you try using an annotator directly to save the data instead of the output_writer? (A minimal sketch follows at the end of this post.)

Can you also provide information on how get_camera_shot_async() is being called?

Have you tried this on the latest Isaac Sim version?
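Sketch of saving through an annotator directly; the camera prim path, resolution, and output file name below are placeholders, not taken from the code above:

import omni.replicator.core as rep
from PIL import Image

# Placeholder camera prim path and resolution.
render_product = rep.create.render_product("/World/CeilingCamera", (1280, 720))
rgb_annotator = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annotator.attach([render_product])

async def save_frame_async(index: int):
    # Render extra subframes, then read the data in sync with the stage.
    await rep.orchestrator.step_async(rt_subframes=16, pause_timeline=False)
    rgba = rgb_annotator.get_data()  # HxWx4 uint8 array
    Image.fromarray(rgba[:, :, :3], "RGB").save(f"annotator_{index}.png")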

@ahaidu

I am using Isaac Sim 4.0.0, not the latest 4.2.0. We decided to stay on 4.0.0 because we had a much higher crash rate when we first tried 4.2.0. But if 4.2.0 is necessary to eliminate the image delay, I will definitely switch. Please let me know your thoughts.

Sorry that my previous code snippet was not clear about how I captured the image. I have moved the actual code into the snippet, and I apologize in advance that the code is now a bit long to review.

Basically, in _stage1, self.msg_processor = RosBagMsgProcessor(task.input_path) adds three colorful bins to the table, and no image is captured. In _stage2, msg_count, ts = self.msg_processor.process_next_message() tries to move item positions and starts capturing images. Note that the msg_count value starts at 1.

I did not set up an annotator myself (I will do that a bit later); instead, I leveraged the Camera API. Looking at the source code of Camera, I believe it has an internal annotator that feeds data into get_current_frame(). Maybe it does not work well with rep.orchestrator.step_async()?

    def update_scenario(self, step: float) -> bool:
        try:
            next(self._script_generator)
            return False
        except StopIteration:
            return True

    def _script(self):
        for task in self.batch_manager.tasks:
            omni.log.info(f"Start processing: {task}")
            yield from self._stage1(task)
            yield from self._stage2()
            yield self.cell.post_reset()

    def _stage1(self, task):
        # Run 10 empty steps for camera to be ready
        # otherwise you might trigger exception at
        # demo_data_fuzzing\utilities\camera.py:8
        for _ in range(10):
            yield ()

        for op in task.fuzzing_operations:
            op()
            # If not done on this frame, yield() to pause execution
            # of this function until the next frame.
            yield ()

        self.msg_processor = RosBagMsgProcessor(task.input_path)

        omni.log.info(
            f"input_path={task.input_path}, #action={self.msg_processor.get_length}"
        )

        self.output_writer = PickleWriter(task.output_path)

        # Return True/False when the entire function is completed.
        return True

    def _stage2(self):
        msg_count = -1
        while True:
            # Record camera shots, if the message count from the PREVIOUS step is valid.
            # If you want to use msg_count value within get_camera_shot_async(), please
            # pass it in as an argument. Do not use it directly as it will be updated in
            # main thread while async function is running.
            if msg_count > 0 and msg_count % self.record_every_message == 0:
                omni.log.verbose(f"Saving camera shots at msg#={msg_count}")

                async def get_camera_shot_async(mc):
                    # Insert subframes to eliminate shadow. Default is 1. Try 16 or 32.
                    # https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/subframes_examples.html
                    subframes = 16
                    await rep.orchestrator.step_async(
                        rt_subframes=subframes, pause_timeline=False
                    )
                    # screenshot_data = self.cell.get_camerashot(ts)
                    # self.output_writer.record_data(screenshot_data)
                    from PIL import Image

                    camera = self.cell.cell.ceiling_camera.camera_left_prim
                    rgb = camera.get_current_frame()["rgba"][:, :, 0:3]
                    rgb_img = Image.fromarray(rgb, "RGB")
                    rgb_img.save(f"c:/users/meiyang/documents/output/message_{mc}.jpeg")

                asyncio.ensure_future(get_camera_shot_async(msg_count))

                # TODO: yield() is necessary here to allow async function to run without
                # putting the main thread into hanging. But why?
                yield ()

            # TODO: ts can be invalid -1 because item position msg does not have timestamp
            msg_count, ts = self.msg_processor.process_next_message()

            # Create a little message at the top left corner to show that is the
            # current message count. It is less distracting than notification component.
            post_viewport_message(get_active_viewport(), f"Message # {msg_count}")

            if msg_count < 0:
                # Run 10 empty steps for writer to finish before closing it
                # because the last camera shot is recorded in an async function.
                for _ in range(10):
                    yield ()
                self.output_writer.close()
                self.output_writer = None
                return True

            # If not done on this frame, yield() to pause execution
            # of this function until the next frame.
            yield ()

These are the first 2 images captured, for msg_count = 1 and 2 respectively, by the code above. You can see that there are NO bins in message_1.jpeg (this is the state BEFORE _stage1). I was expecting message_1.jpeg to look like message_2.jpeg, with 3 colorful bins on the table. This is why I think there is at least a 1-frame delay.

message_1.jpeg

message_2.jpeg

Ah, I found that these two Camera APIs are different: get_rgb() vs. get_current_frame(). I was using get_current_frame() and seeing delays. Maybe the fix is simply to switch to get_rgb().

Looking at the two images captured at the same time, get_current_frame() seems delayed (it shows an empty table). Are these two APIs meant for different purposes?

                async def get_camera_shot_async(mc):
                    # Insert subframes to eliminate shadow. Default is 1. Try 16 or 32.
                    # https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/subframes_examples.html
                    subframes = 16
                    await rep.orchestrator.step_async(
                        rt_subframes=subframes, pause_timeline=False
                    )
                    # screenshot_data = self.cell.get_camerashot(ts)
                    # self.output_writer.record_data(screenshot_data)
                    from PIL import Image

                    camera = self.cell.cell.ceiling_camera.camera_left_prim
                    rgb = camera.get_rgb()
                    Image.fromarray(rgb, "RGB").save(
                        f"c:/users/meiyang/documents/output/getrgb_{mc}.jpeg"
                    )

                    rgb = camera.get_current_frame()["rgba"][:, :, 0:3]
                    Image.fromarray(rgb, "RGB").save(
                        f"c:/users/meiyang/documents/output/getcurrentframe_{mc}.jpeg"
                    )

                asyncio.ensure_future(get_camera_shot_async(msg_count))

get_rgb()

get_current_frame()