I am trying to get the raw bytes from the camera in Isaac Sim using the following:

```python
def on_capture_completed(buffer, buffer_size, width, height, format):
    print(f'PixelData resolution: {width} x {height}')
    print(f'PixelData format: {format}')

capture = viewport_api.schedule_capture(ByteCapture(on_capture_completed))
```

The `buffer` argument is a PyCapsule object. How do we convert this to a JPG image, or to a byte array that can be further used for object detection?
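One way to unwrap the capsule is through `ctypes` and the CPython `PyCapsule` API. This is a sketch, not an official Isaac Sim recipe: it assumes the capsule was created with a NULL name and that the pixel format is a 4-channel, 8-bit one (e.g. RGBA8), so check the `format` value your callback actually receives.

```python
import ctypes
import numpy as np

def capsule_to_array(buffer, buffer_size, width, height):
    """Copy the pixel data behind a PyCapsule into a numpy array."""
    # Assumption: the capsule has a NULL name; if Kit names it, pass that
    # name (as bytes) instead of None.
    ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
    ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
    ptr = ctypes.pythonapi.PyCapsule_GetPointer(buffer, None)
    data = ctypes.string_at(ptr, buffer_size)  # copies buffer_size bytes out
    # Assumption: 4 bytes per pixel (e.g. RGBA8); adjust for other formats.
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width, 4)

# To write a JPG, hand the array to Pillow:
#   from PIL import Image
#   Image.fromarray(arr, "RGBA").convert("RGB").save("frame.jpg")
```

The array (or the intermediate `bytes` object) can then be fed straight into a detection pipeline; the copy via `ctypes.string_at` also means the data stays valid after the capture callback returns.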
Hi @mati-nvidia, I have done something similar. I use `omni.kit.viewport.utility.capture_viewport_to_buffer` to capture images, run object detection on them, and then render them in my extension UI with `ui.ImageWithProvider()`.

In the process I found that the viewport frame rate is especially low: my code (the object detection, about 20 ms, plus rendering the images to the UI) runs inside `capture_viewport_to_buffer`'s `on_capture_fn`.

Is there a way to solve this, or do I need a faster computer?
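A common fix for this kind of stall, independent of any Omniverse-specific API, is to keep `on_capture_fn` minimal: copy the bytes out, enqueue them, and do the ~20 ms detection plus UI update on a background thread. The sketch below is a generic producer/consumer pattern under that assumption; the detection call is a stand-in.

```python
import queue
import threading

frame_queue = queue.Queue(maxsize=2)  # keep it small; stale frames are useless
results = []

def on_capture_fn(frame_bytes, width, height):
    # Called on the capture path: enqueue only, never run detection here.
    try:
        frame_queue.put_nowait((frame_bytes, width, height))
    except queue.Full:
        pass  # drop the frame instead of stalling the viewport

def detection_worker():
    # Background thread: the expensive detection work happens here.
    while True:
        item = frame_queue.get()
        if item is None:  # sentinel: shut down the worker
            break
        frame_bytes, w, h = item
        results.append((w, h, len(frame_bytes)))  # stand-in for real detection

worker = threading.Thread(target=detection_worker, daemon=True)
worker.start()
```

Dropping frames when the queue is full keeps the viewport responsive even when detection cannot keep up with the capture rate; note that any final UI update may still need to be marshalled back to Kit's main thread.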
Is there an example of displaying an image with `ui.ImageProvider`? I would like to use `Image.tobytes()` or a `numpy.ndarray` to display the image within a UI.
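For in-memory pixel data, `omni.ui.ByteImageProvider` paired with `ui.ImageWithProvider` is the usual route. The sketch below assumes `set_bytes_data` takes a flat list of RGBA bytes plus a `[width, height]` pair; the UI part only runs inside Kit, so it is kept in its own function.

```python
import numpy as np

def ndarray_to_provider_bytes(arr):
    # Assumption: ByteImageProvider.set_bytes_data expects a flat list of
    # RGBA bytes and a [width, height] size; arr must be (H, W, 4) uint8.
    assert arr.dtype == np.uint8 and arr.ndim == 3 and arr.shape[2] == 4
    return arr.flatten().tolist(), [arr.shape[1], arr.shape[0]]

def build_image_widget(arr):
    # Kit-only: omni.ui is available inside Omniverse, not in plain Python.
    import omni.ui as ui
    data, size = ndarray_to_provider_bytes(arr)
    provider = ui.ByteImageProvider()
    provider.set_bytes_data(data, size)
    return ui.ImageWithProvider(provider, width=ui.Pixel(size[0]), height=ui.Pixel(size[1]))
```

A PIL image can feed the same path via `np.asarray(img.convert("RGBA"))`; holding a reference to the provider (e.g. on your extension object) keeps the image alive after the build function returns.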