[Error] [omni.livestream-websocket.plugin] Failed to encode picture

Environment details:

  • Isaac Sim 2021.2.1
  • Running in a Docker container. The image can be found here
  • Live stream clients used: WebRTC, WebSocket (on Google Chrome & Firefox)
  • Ubuntu 20.04

I am trying to attach two RGB sensors to two different cameras on my robot. For this, I am following this tutorial, which uses the synthetic data interface to attach an RGB sensor to a viewport.

When I follow the logic in this example, I am able to save the corresponding RGB data (a PNG file). However, my web client fails with the following error on the terminal: [Error] [omni.livestream-websocket.plugin] Failed to encode picture, which leads to a black screen in my web browser. While experimenting with the code, I realized that adding one sensor (syntheticdata_interface.create_sensor(gt.SensorType.Rgb, viewport)) does not cause any error, but adding two sensors results in a black screen on WebRTC/WebSocket.
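For context, the failing two-sensor setup looks roughly like this. The import alias and the viewport variable names are assumptions from my own setup, not the tutorial's exact code:

```python
# Assumed import for the synthetic data interface in 2021.2.1; the alias
# gt matches the snippets above.
import omni.syntheticdata._syntheticdata as gt

syntheticdata_interface = gt.acquire_syntheticdata_interface()

# viewport_1 / viewport_2 are the viewport windows bound to the two robot
# cameras (their acquisition is omitted here).
rgb_sensor_1 = syntheticdata_interface.create_sensor(gt.SensorType.Rgb, viewport_1)
rgb_sensor_2 = syntheticdata_interface.create_sensor(gt.SensorType.Rgb, viewport_2)

# With only the first sensor, the stream works; creating the second one is
# what blacks out the WebRTC/WebSocket clients.
```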

I looked up this error, and someone had made a post about it before, but the root cause identified there (GPU model) does not seem to apply here, since my web client works and only breaks when I add two RGB sensors. I have checked the developer console in my browser, and it does not show any relevant details. I have also updated my web browsers, with no success.

Any help is appreciated! Thanks.

Does this error still occur on the latest Isaac Sim release 2022.1.0?

Just migrated to 2022.1.0, and now I am facing a different issue with the same setup. I am seeing the following problems with this tutorial:

  • The method to obtain the viewport interface needs updating. I got this error, and had to use the method mentioned in the post instead
  • While creating an RGB sensor, I used the following code:
    • syntheticdata_interface = gt.acquire_syntheticdata_interface()
    • rgb_sensor = syntheticdata_interface.create_sensor(gt.SensorType.Rgb, viewport)
      While this runs successfully, when I checked the type of the returned object (rgb_sensor), it was SensorType.Invalid, whereas it should be SensorType.Rgb
  • The tutorial goes on to describe how to use the synthetic data interface to obtain the RGB data, using the following code: rgb_data = syntheticdata_interface.get_sensor_host_uint32_texture_array(rgb_sensor). This worked in 2021.2.1 but does not in 2022.1.0, because the method now also requires a viewport object (not mentioned in the tutorial). Even with that fixed, the method returns [], and I am assuming that is because rgb_sensor is of type SensorType.Invalid, causing the rest of the code to break.
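For anyone hitting the same wall: when get_sensor_host_uint32_texture_array does return data, the uint32 array packs one RGBA pixel per element. A minimal numpy sketch of unpacking it (the sample values are made up, and the byte order assumes a little-endian machine):

```python
import numpy as np

# Hypothetical stand-in for the array returned by
# get_sensor_host_uint32_texture_array: each uint32 packs one RGBA pixel.
packed = np.array([[0xFF0000FF, 0xFF00FF00],
                   [0xFFFF0000, 0xFFFFFFFF]], dtype=np.uint32)

# Reinterpret each packed pixel as 4 uint8 channels (R, G, B, A on a
# little-endian machine) without copying the data.
rgba = packed.view(np.uint8).reshape(*packed.shape, 4)

print(rgba[0, 0])  # first pixel as [R, G, B, A] -> [255, 0, 0, 255]
```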

Essentially, this tutorial does not seem to be up to date with 2022.1.0. Please advise on what code/tutorial to use to add an RGB sensor to a depth camera, along with generating synthetic data for it. Also, please state clearly where the documentation for viewport- and synthetic-data-related code can be found, as I am having trouble finding it.

Also, it would be nice if there were an omni.kit.command for importing a (depth) camera with an RGB sensor, similar to IsaacSensorCreateImuSensor for creating an IMU and RangeSensorCreateLidar for a LIDAR
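In the meantime, a bare camera prim can at least be created with the generic CreatePrim command; the RGB sensor still has to be attached separately. A rough sketch (the prim path is an example, and I have not verified this on every version):

```python
import omni.kit.commands

# Create a plain Camera prim; an RGB/depth sensor must still be attached
# to a viewport showing this camera afterwards.
omni.kit.commands.execute(
    "CreatePrim",
    prim_path="/World/MyCamera",
    prim_type="Camera",
)
```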

For those interested, v2022.1.0 has an extension called omni.isaac.synthetic_utils, which has a class called SyntheticDataHelper that is useful for attaching RGB and depth sensors to a viewport (camera). It also has a method (get_groundtruth) to retrieve the corresponding data.

An example of how this would work:

import omni.isaac.synthetic_utils as sd

# create camera & viewport (using stage.DefinePrim() and the legacy
# viewport interface); 'viewport' below refers to the resulting viewport

# initialize sensors
sd_helper = sd.SyntheticDataHelper()
sd_helper.initialize(["rgb", "depth"], viewport)

# retrieve data
data = sd_helper.get_groundtruth(["rgb", "depth"], viewport)
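The returned data is a dict keyed by sensor name. As a quick way to turn the depth output into a viewable 8-bit image, here is a numpy-only sketch (the sample array is a made-up stand-in for data["depth"]):

```python
import numpy as np

# Hypothetical depth map standing in for data["depth"]; the real output
# is a float32 array of per-pixel distances.
depth = np.array([[0.5, 1.0],
                  [2.0, 4.0]], dtype=np.float32)

# Normalize to 0-255 for a quick grayscale visualization.
d_min, d_max = depth.min(), depth.max()
depth_vis = ((depth - d_min) / (d_max - d_min) * 255).astype(np.uint8)
```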

The snippets have been updated to reflect the current APIs. Thanks for catching this!

