Capture frames from camera via Python scripting

Hello, I was wondering how we can manage a camera via Python scripting in a simulation. I mean, do you have an example of a Python script for Omniverse which loads a camera into the scene and can process the captured frames?
Thanks in advance

Hi…

There is an example of capturing synthetic data from sensors like rgb, depth, instanceSegmentation, boundingBox3D, camera, etc. at the following path. Also, check the README to set up the environment.

/isaac-sim/_build/linux-x86_64/release/exts/omni.isaac.samples/omni/isaac/samples/scripts/syntheticdata

You can read more about it in Synthetic Data


Also, this code can help.
It captures an RGB image from the current camera and writes it to the /isaac-sim folder.

import cv2
import numpy as np
import omni.syntheticdata

# Acquire the low-level synthetic data interface
sd = omni.syntheticdata._syntheticdata
sdi = sd.acquire_syntheticdata_interface()

def _get_sensor_data(sensor, dtype):
    # Query the sensor buffer dimensions and copy the buffer to host memory
    width = sdi.get_sensor_width(sensor)
    height = sdi.get_sensor_height(sensor)
    row_size = sdi.get_sensor_row_size(sensor)

    get_sensor = {
        "uint32": sdi.get_sensor_host_uint32_texture_array,
        "float": sdi.get_sensor_host_float_texture_array,
    }
    return get_sensor[dtype](sensor, width, height, row_size)

# Grab the RGB sensor buffer and reinterpret it as an 8-bit image (one byte per channel)
data = _get_sensor_data(sd.SensorType.Rgb, "uint32")
image = np.frombuffer(data, dtype=np.uint8).reshape(*data.shape, -1)

cv2.imwrite("/isaac-sim/image.png", image)
print(image)

One way to run it is to have an open scenario (e.g. /Isaac/Samples/Leonardo/Stage/ur10_bin_filling.usd), write the code in the Script Editor tab (Window > Script Editor), and enable the RGB sensor in the Synthetic Data Sensors tab, as shown in the next image.

After that, you can run the script (Script Editor > Command > Execute) and see the stored image, as shown next… Note: the color difference is because OpenCV works in the BGR color space instead of RGB (use image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) to fix it).
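If it helps, here is a minimal sketch of that fix applied to the snippet above, placed just before the imwrite call; the channel-count check is an assumption, since the uint32 buffer is typically reshaped into a 4-channel (RGBA) image:

import cv2

# Convert to OpenCV's BGR channel order before writing
# (assumption: `image` from the snippet above is either RGBA or RGB)
if image.shape[-1] == 4:
    bgr = cv2.cvtColor(image, cv2.COLOR_RGBA2BGR)
else:
    bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
cv2.imwrite("/isaac-sim/image.png", bgr)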


According to the Synthetic Data examples, sensor buffers need at least two frames to be correctly populated.

In the example, the following code is used before requesting sensor data:

for _ in range(2):
    app.update(0.0)  # sensor buffers need two frames to be correctly populated.

where

import omni.kit.app
app = omni.kit.app.get_app_interface()

but acquiring the app interface with get_app_interface() and calling app.update(0.0) will freeze the Kit… Why? I don’t know… just try to execute the script several times until the sensor data is populated.


OK, I solved it by just selecting the camera from the Stage.

Just another quick question: can I also add a new camera directly via a Python script?

You can add a camera as a primitive of type “Camera”.
For example, the following code creates a camera and moves it to +100 on the Z-axis:

import omni
from pxr import UsdGeom

stage = omni.usd.get_context().get_stage()

path = "/World/Camera/MyCamera"
prim_type = "Camera"
translation = (0, 0, 100)

camera_prim = stage.DefinePrim(path, prim_type)

xform_api = UsdGeom.XformCommonAPI(camera_prim)
xform_api.SetTranslate(translation)

You can use pxr.UsdGeom.XformCommonAPI to rotate, scale and translate the primitive…
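For example, a minimal sketch reusing the camera_prim created above (the angle and scale values are arbitrary examples):

from pxr import Gf, UsdGeom

# Transform the camera prim with the XformCommonAPI (arbitrary example values)
xform_api = UsdGeom.XformCommonAPI(camera_prim)
xform_api.SetTranslate(Gf.Vec3d(0, 0, 100))
xform_api.SetRotate(Gf.Vec3f(0, -90, 0), UsdGeom.XformCommonAPI.RotationOrderXYZ)  # Euler angles in degrees
xform_api.SetScale(Gf.Vec3f(1, 1, 1))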


OK, got it. One last thing (for real this time): how can I get data from this camera using the script you provided before for syntheticdata?

Also, regarding the camera parameters, if I wanted the intrinsics, should I assume fx = fy = focalLength? And what about cx and cy?

To get the data from the new camera, just change the active camera to the new one using the omni.kit.viewport module, like this:

vpi = omni.kit.viewport.get_viewport_interface()
vpi.get_viewport_window().set_active_camera(str(camera_prim.GetPath()))

Also, you can find more info about the Omniverse Kit Python bindings in the manual here

For fx, fy, cx, and cy you should use the values obtained from a previous calibration. But if you cannot do a calibration, you can use width/2.0 and height/2.0 for cx and cy respectively (note that this is not true for all cameras).
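If calibration is not an option, here is a minimal sketch of a pinhole approximation built from the USD camera attributes (it assumes square pixels and a principal point at the image center; focal_length and horizontal_aperture must be in the same units, as reported by the camera prim):

import numpy as np

def approximate_intrinsics(focal_length, horizontal_aperture, width, height):
    # Focal length in pixels from the physical focal length and aperture
    fx = width * focal_length / horizontal_aperture
    fy = fx                              # square-pixel assumption
    cx, cy = width / 2.0, height / 2.0   # principal point at the image center
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])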


OK for the intrinsics. I can assume cx and cy as you said and fx and fy using the focalLength parameter, or do a proper calibration if I need more precise results.

Regarding the data from the new camera, OK for the viewport; I’ll dig more into the documentation to learn something more. But after setting the active camera I don’t know how to use the previous script, which uses the synthetic data interface: here I don’t have a panel from which to select, for example, the RGB sensor.

OK, I enabled the omni.synthetic extensions from the menu and now I can use omni.syntheticdata directly in my application. I just need to know how to enable the RGB sensor directly from Python.

I found all I needed in the syntheticdata script that you suggested. You have been very helpful, thank you very much ;)

Thanks a lot Toni. You have been super helpful.

@toni.sm
Could I sneak in a question here also?

So I see in 2020.2 ISAAC/ISAAC_SIM/_build/linux-x86_64/release/exts/omni.isaac.synthetic_utils/omni/isaac/synthetic_utils/scripts/syntheticdata.py
in the get_camera_params() function:

prim = stage.GetPrimAtPath(self.editor.get_active_camera())

But it seems vpi.get_viewport_window().set_active_camera(str(camera_prim.GetPath())) does not fill in that active camera value.

So I then tried to use editor.set_active_camera(...), but I could not find the correct argument type after trying with pxr.Usd.Prim (which I create with “Camera” as the prim type) or Sdf.Path.

I then tried with just prim.GetPath().pathString, which seems correct, but I still got a complaint from the Sim when invoking get_camera_params():

[omni.client.plugin]  Main: usd_plugin: Ill-formed SdfPath <No Camera>: syntax error
[omni.usd] Warning: in SdfPath at line 97 of /opt/buildagent-share/work/da639afa0455b478/USD/pxr/usd/lib/sdf/path.cpp -- Ill-formed SdfPath <No Camera>: syntax error

and the mismatched argument type:

[Error] [omni.usd.python] ArgumentError: Python argument types in
    Xformable.__init__(Xformable, Object)
did not match C++ signature:
    __init__(_object*, pxrInternal_v0_19__pxrReserved__::UsdSchemaBase schemaObj)
    __init__(_object*, pxrInternal_v0_19__pxrReserved__::UsdPrim prim)
    __init__(_object*)

It also looks like the function’s spec could not be fetched by inspect.

Thank you.

Hi @Tadinu

Sorry, I could not understand very well what you want to ask…
In any case, I executed the following code, which sets the active camera using two methods, from the viewport and from the editor. Both methods work without any complaint from Isaac Sim. You can see the output below.

Note: I created a camera in the stage at path /World/Camera. Also, to call the SyntheticDataHelper it is necessary to enable at least the RGB sensor from the Synthetic Data Sensors tab.

import omni
from omni.isaac.synthetic_utils import SyntheticDataHelper

stage = omni.usd.get_context().get_stage()
editor = omni.kit.editor.get_editor_interface()
vpi = omni.kit.viewport.get_viewport_interface()
sdh = SyntheticDataHelper()

prim = stage.GetPrimAtPath("/World/Camera")

# -------------------------------------------------------------------
vpi.get_viewport_window().set_active_camera(str(prim.GetPath()))
editor.set_active_camera(prim.GetPath().pathString)
# -------------------------------------------------------------------

params = sdh.get_camera_params()

print("camera:", editor.get_active_camera())
for k in params:
    print("{}: {}".format(k, params[k]))
camera: /World/Camera
pose: [[-8.75343316e-03  9.99961688e-01 -1.79835870e-08  0.00000000e+00]
 [-5.82216243e-01 -5.09657161e-03  8.13018002e-01  0.00000000e+00]
 [ 8.12986854e-01  7.11670921e-03  5.82238549e-01  0.00000000e+00]
 [ 9.91427876e+01  3.07373063e-01  3.97395907e+01  1.00000000e+00]]
fov: 1.0471975628049432
focal_length: 18.14756202697754
horizontal_aperture: 20.954999923706055
view_projection_matrix: [[ 1.51613908e-02 -1.79276107e+00  8.12986935e-01  8.12986854e-01]
 [-1.73198443e+00 -1.56933704e-02  7.11670992e-03  7.11670921e-03]
 [ 3.11484859e-08  2.50344617e+00  5.82238608e-01  5.82238549e-01]
 [-9.70778425e-01  7.82582274e+01 -1.02741902e+02 -1.03741892e+02]]
resolution: {'width': 1280, 'height': 720}
clipping_range: (1.0, 10000000.0)

Hi @toni.sm,

Thanks for your quick answer!

Sorry for the ambiguity, this is actually what I have done:

  • Start the omnikit with the default stage
  • Load in a new stage from an external USD by:
    usd_context = omni.usd.get_context()
    usd_context.open_stage(in_map_usd_path, on_map_loaded)

Then, inside on_map_loaded, I spawn a new camera with a custom name (say ACamera) and try to use editor.set_active_camera() & vpi.get_viewport_window().set_active_camera(),

and then access it using editor.get_active_camera(), which (I’m not clear why) still returns the camera name as No Camera.
Besides, is it the case that we cannot prefix a prim name with a number, like 0_Camera?

Thank you.

Hi @Tadinu

You can create the camera inside the on_map_loaded method, but to set it as the main camera, call the editor.set_active_camera() or vpi.get_viewport_window().set_active_camera() method inside a stage-event handler subscribed via omni.usd.get_context().get_stage_event_stream().create_subscription_to_pop(on_stage_event), under the flag omni.usd.StageEventType.OPENED. This event is fired with the flag OPENED after on_map_loaded.
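Roughly, the wiring could look like the sketch below (the camera path and the open_stage callback signature are assumptions based on your snippet; in_map_usd_path is the variable from your code):

import omni

usd_context = omni.usd.get_context()
editor = omni.kit.editor.get_editor_interface()

def on_map_loaded(result, error):  # assumed callback signature
    # Create the camera here, but do not set it active yet
    usd_context.get_stage().DefinePrim("/World/ACamera", "Camera")

def on_stage_event(event):
    # OPENED is fired after on_map_loaded; only now activate the camera
    if event.type == int(omni.usd.StageEventType.OPENED):
        editor.set_active_camera("/World/ACamera")

# Keep a reference to the subscription so it is not garbage collected
stage_event_sub = usd_context.get_stage_event_stream().create_subscription_to_pop(on_stage_event)
usd_context.open_stage(in_map_usd_path, on_map_loaded)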

Note the delay between setting the active camera and requesting its name:

def on_stage_event(self, event):
    # requires `import time`; self._editor is the editor interface acquired earlier
    if event.type == int(omni.usd.StageEventType.OPENED):
        self._editor.set_active_camera("/World/Camera")
        time.sleep(1)
        print("CAMERA", self._editor.get_active_camera())

Hi @toni.sm

That is quite a revelation. Thanks a lot for your answer!
It works now. Indeed we could only set the newly created camera active in on_stage_event.

Besides, since I’m running the Sim in headless mode, and in order to catch on_stage_event, would you mind giving a bit of advice on how to make the main function wait while the new stage is loading in the above situation?

I’ve managed to do it by:

omni_kit.play()
frames_num = 0
while (frames_num <= 60) or omni_kit.is_loading():
    omni_kit.update(1 / 60.0)
    frames_num = frames_num + 1

(with omni_kit as an instance of OmnikitHelper in ISAAC_SIM/_build/linux-x86_64/release/exts/omni.isaac.synthetic_utils/omni/isaac/synthetic_utils/scripts/omnikit.py)

But I’m not really sure it is the official way, and it seems that basically we need to keep calling omnikit.update() in order to catch on_stage_event, don’t we?

Thank you and sorry for the chain of questions during the holidays…

Hi @toni.sm

Upon further tests, it looks like the sleep might not be stable: running on my PC, I still somehow got No Camera inside on_stage_event. So could there be another signal, specifically for the new camera being activated, I guess…?

@ltorabi @toni.sm

I am really sorry for the pestering questions, so let me summarize the pending points to be confirmed:

(1) Could we prefix a prim name with a number, like 0_Camera? I tried, but the Sim just rejected it.

(2) How can I make the main() function wait while the new stage is loading in the above situation? And do we have to keep invoking omnikit.update() in order to catch on_stage_event()?

When loading a new stage in headless mode, in on_stage_event() I could catch only omni.usd.StageEventType.OPENED but NOT omni.usd.StageEventType.ASSETS_LOADED, which seems to be signaled only in Editor mode, e.g. when running as an extension.

(3) I’m actually loading the ISAAC_SIM_ASSETS/Environments/Simple_Warehouse/warehouse.usd stage in headless mode; though I have spawned and activated my dynamic camera at omni.usd.StageEventType.OPENED, the query still returns No Camera.

I suppose it is due to the stage being heavy, and thus the Sim takes more time before a camera activation command can work, but I’m not sure.

Thanks for your support!

Hi @Tadinu,

Sorry for the pending posts. Holidays, you know :)

Could you share your script(s) and the procedure you are using to run them in headless mode?