Image Streamer: getting a CUDA image from the display with rendered shapes

Software Version
NVIDIA DRIVE™ Software 10.0 (Linux)

Target Operating System
Linux

Hardware Platform
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)

SDK Manager Version
1.4.0.7363

Host Machine Version
native Ubuntu 18.04

We plan to use the image streamer/consumer as follows:

  1. We have an original camera (CUDA) image
  2. Using dwRenderEngine (plus additional OpenGL draw commands), we draw on top of the camera image (RGBA) on screen
  3. We get the camera image + rendered shapes together as one final CUDA image

We would like to know if that’s possible.

Hi @dmatic,

Please check the “Image Streamer” section in the documentation to see if it addresses your use case. Thanks.

/usr/local/driveworks-2.2/doc/nvsdk_html/image_mainsection.html

Hi,

Thanks for your reply!

I checked the documentation but I’m still confused.
I’d like to get everything that is rendered on screen as an image. I understand that it is possible to stream from GL to CUDA, but I need what I draw on screen saved as an image.

I looked into the image streamer sample, but I couldn’t find anything about getting the rendered output back as image pixel values. Is that possible? Is there a sample that demonstrates this?

Dear @dmatic,

Note that the consumer on the Image Streamer cannot modify the original received image. You need to create a copy of the image on the consumer end, draw your rendered shapes onto the copy, and then send it to CUDA for processing.

I see two Image Streamers in your use case:
NvMedia (camera image) → GL (copy the image, display the copy, modify it, and save it using GL)
GL (new image) → CUDA (process the image)
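For the first hop, a rough sketch could look like the code below. This assumes the ImageStreamerGL producer/consumer API as I recall it from DriveWorks 2.x/3.x; header paths, exact signatures and timeouts may differ between releases, cameraImage and ctx stand in for your own handles, and error checking is omitted, so please verify it against the headers on your installation:

```cpp
// Sketch of the first hop only: camera image (CUDA or NvMedia) -> GL image,
// so that dwRenderEngine / GL can draw on top of it. Error handling omitted.
#include <dw/image/Image.h>
#include <dw/interop/streamer/ImageStreamerGL.h>

void streamCameraFrameToGL(dwImageHandle_t cameraImage, dwContextHandle_t ctx)
{
    dwImageProperties camProps{};
    dwImage_getProperties(&camProps, cameraImage);

    // Streamer whose consumer side hands out DW_IMAGE_GL images.
    // (In a real application, initialize this once at startup, not per frame.)
    dwImageStreamerHandle_t toGL = DW_NULL_HANDLE;
    dwImageStreamerGL_initialize(&toGL, &camProps, DW_IMAGE_GL, ctx);

    // Producer side: post the camera frame.
    dwImageStreamerGL_producerSend(cameraImage, toGL);

    // Consumer side: receive the same frame as a GL image.
    dwImageHandle_t frameGLHandle = DW_NULL_HANDLE;
    dwImageStreamerGL_consumerReceive(&frameGLHandle, 33000, toGL);

    dwImageGL* frameGL = nullptr;
    dwImage_getGL(&frameGL, frameGLHandle);
    // ... copy/render frameGL->tex plus your shapes into a new image here,
    //     then feed that image into the second streamer (GL -> CUDA) ...

    // Return the image so the producer can reuse the buffer.
    dwImageStreamerGL_consumerReturn(&frameGLHandle, toGL);
    dwImageStreamerGL_producerReturn(nullptr, 33000, toGL);
}
```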

Please also check https://docs.nvidia.com/drive/driveworks-3.5/dwx_dnn_plugin_sample.html to see if it helps.

Thanks, your comment on the processing flow helped me understand it a bit better.

I am still curious, though. Regarding drawing on screen, you said “GL (copy the image, display the copy, modify it, and save it using GL)”. I looked into the image capture sample (frame capture), and it seems that it is not possible to modify and save an image after rendering with GL… it only exists on screen. In other words, how do I get the rendered shapes from the screen if the screen is off (--offscreen=1)?

Maybe to rephrase:

Is it possible to do ‘render-to-texture’ processing using DriveWorks?

That way I could perhaps use frame capture with --offscreen=1, since it would render into textures?
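To make it concrete, by ‘render-to-texture’ I mean something like the plain OpenGL sketch below (nothing DriveWorks-specific; the GL header, the resolution and the missing draw calls are just placeholders for whatever the sample framework already uses):

```cpp
// Plain OpenGL render-to-texture sketch: draw into an offscreen FBO-backed
// texture instead of the default (window) framebuffer.
#include <GLES3/gl3.h>   // or whatever GL/GLES header the application already pulls in

// Returns the texture that now holds "camera image + rendered shapes".
GLuint renderOffscreen(int width, int height)
{
    GLuint colorTex = 0, fbo = 0;

    // Color texture that will receive the rendering.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Framebuffer object with the texture as its color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
    {
        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the camera image and the dwRenderEngine shapes here;
        //     everything lands in colorTex, no visible window is required ...
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
    glDeleteFramebuffers(1, &fbo);
    return colorTex;
}
```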

Please check “Streaming rendered AGX output” to see if this is what you are looking for. Thanks.

I checked the image capture sample, which uses frame capture (as the link you posted suggested), but I still have the same question: is it possible to use it without actually opening a screen?

I ran sample_image_capture with --offscreen=1 and it took a screenshot of my host laptop’s terminal. I also read FrameCapture.h, which explains how it obtains a screenshot of a screen. So I understand how it works when there is a screen to capture, but what if I don’t have an open screen?

Is it possible to get rendered shapes on top of an image without opening a display screen/window?
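For context, the fallback I have in mind is to skip frame capture and map the offscreen texture straight into CUDA with the CUDA/GL interop runtime API, roughly as in the sketch below (the texture is assumed to come from an FBO like the one in my earlier sketch; error checks and a dedicated CUDA stream are omitted). My question is whether DriveWorks can give me the same result without a window:

```cpp
// CUDA/GL interop sketch: read an already-rendered GL texture back into
// plain CUDA device memory, without presenting anything on a display.
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>

// 'tex' is the color texture the shapes were rendered into (e.g. the FBO
// attachment from the previous sketch); 'width'/'height' are its size.
void* readTextureIntoCuda(GLuint tex, int width, int height)
{
    cudaGraphicsResource_t res = nullptr;
    cudaGraphicsGLRegisterImage(&res, tex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);

    cudaGraphicsMapResources(1, &res, 0);
    cudaArray_t array = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&array, res, 0, 0);

    // Copy the RGBA8 pixels into linear device memory for further processing.
    void* devPixels = nullptr;
    size_t pitch = 0;
    cudaMallocPitch(&devPixels, &pitch, static_cast<size_t>(width) * 4, height);
    cudaMemcpy2DFromArray(devPixels, pitch, array, 0, 0,
                          static_cast<size_t>(width) * 4, height,
                          cudaMemcpyDeviceToDevice);

    cudaGraphicsUnmapResources(1, &res, 0);
    cudaGraphicsUnregisterResource(res);
    return devPixels;   // caller owns this buffer (cudaFree when done)
}
```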

FYI, with the --offscreen=2 option (no window created), sample_image_capture still works on my side.

Maybe I misunderstood something… With both --offscreen=1 and 2, it captures whatever is open on my screen (e.g. my terminal).

I’d like to know if it can capture the image + rendered shapes that would have been shown in the window that I am not opening. In other words, what I need is to get this image + rendered shapes without actually creating a display window.

It turns out that when the computer monitor is turned off (i.e., when there is nothing to take a screenshot of), sample_image_capture with --offscreen=1 or 2 indeed captures what would have been shown in the window, including the rendered shapes.

So thanks a lot for your assistance!!

Good to hear your problem was solved.