Getting External Image Data into the SDK

Hello,

I'm currently developing for the DRIVE platform, and I've run into a problem for which I couldn't find help in the documentation or examples.
What I need to achieve is getting image data from outside the NVIDIA SDK into it, so that our tooling functions just like the camera server tool.
I have two use cases:
Live mode → I have a program that follows the multi-camera example: it grabs the images, compresses them, takes the compressed data, and publishes it onto a ROS topic, from where it is then recorded as a bag file.
Replay mode → I have a ROS bag player, so I have the compressed images in a raw format.

In addition, we want image processing enabled alongside the recording or replay.
We also want to record this data as a rosbag. Since we have to record some data as bags anyway (e.g. Ibeo lidar data, as the SDK cannot provide the object data that sensor delivers), we have concluded that recording everything in bag files is better than mixing some data in ROS bags and some data in the SDK's format, as replaying all of it in a synchronized manner would be a mess.

So what I need at this point is a way to take the raw image data and pass it to the SDK so that the image processing tool can receive it in the same way it would if we were running the camera server. I imagine that in live mode this can be achieved by passing some option to the SDK to tell it to provide that data to other components, but I'm not sure how to achieve it in replay mode.
To make the question independent of ROS: given a raw image of any sort (ROS, OpenCV, etc., i.e. metadata plus raw pixel data):
How can I pass that image data to the NVIDIA SDK so that the image processing can use it the same way as described in the camera examples, so that our tool can basically serve as a camera server? Is there even a way to do that?


Dear @tino.weidenmueller,
If you want to use the DW image processing module, the image should be in one of the image types mentioned in DriveWorks SDK Reference: Image. You need to convert your image data to one of those image types. For example, if you have an OpenCV image buffer, see if you can leverage Convert cv::Mat to dwImageCPU.
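
For illustration, here is a minimal sketch of such a conversion, assuming the DriveWorks 3.5 dwImage API with an RGBA target format; the helper name matToDwImage is hypothetical and error handling is omitted:

```cpp
#include <cstring>
#include <dw/core/Context.h>
#include <dw/image/Image.h>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: copy a cv::Mat into a CPU-backed dwImage.
// Assumes a valid dwContextHandle_t; dwStatus return codes are not checked.
dwImageHandle_t matToDwImage(const cv::Mat& bgr, dwContextHandle_t ctx)
{
    // DW needs a known pixel format; convert OpenCV's default BGR to RGBA.
    cv::Mat rgba;
    cv::cvtColor(bgr, rgba, cv::COLOR_BGR2RGBA);

    dwImageProperties props{};
    props.type   = DW_IMAGE_CPU;
    props.width  = static_cast<uint32_t>(rgba.cols);
    props.height = static_cast<uint32_t>(rgba.rows);
    props.format = DW_IMAGE_FORMAT_RGBA_UINT8;

    dwImageHandle_t image = DW_NULL_HANDLE;
    dwImage_create(&image, props, ctx);

    dwImageCPU* imageCPU = nullptr;
    dwImage_getCPU(&imageCPU, image);

    // Copy row by row so the destination pitch is respected.
    for (int y = 0; y < rgba.rows; ++y)
    {
        std::memcpy(imageCPU->data[0] + y * imageCPU->pitch[0],
                    rgba.ptr(y), static_cast<size_t>(rgba.cols) * 4);
    }
    return image;
}
```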

Hi @SivaRamaKrishnaNV

Converting the raw data to the SDK's image type is clear so far. I can use that structure, but as far as I understand it, that would require a direct exchange within one process. The camera server, however, is a different process than the processing tool, so there seems to be some way of inter-process transmission of these images, similar to what the server/client setup does.
If that's not possible, I can still create a plugin structure around it and load that plugin directly into the image converter node so that no inter-process communication is required, but I was wondering if the SDK provides an interface to pass the image over so it can be used by other processes.

Dear @tino.weidenmueller,
It is not clear what exactly you need. DW has an IPC module (DriveWorks SDK Reference: IPC) which can be used for inter-process communication. Does it fit your requirement?
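
As a rough sketch, the server side of that module looks something like this (modeled on the sample_ipc_socketclientserver sample; the header path and exact signatures may differ between releases, so treat them as assumptions):

```cpp
#include <dw/ipc/SocketClientServer.h> // header path may differ by DW release

// Hypothetical server-side sketch: accept one client and send it a buffer.
void serveOneBuffer(dwContextHandle_t ctx, const void* buffer, size_t size)
{
    dwSocketServerHandle_t server = DW_NULL_HANDLE;
    dwSocketServer_initialize(&server, 49252 /*port*/, 1 /*pool size*/, ctx);

    // Block until a client (e.g. the processing tool) connects.
    dwSocketConnectionHandle_t connection = DW_NULL_HANDLE;
    dwSocketServer_accept(&connection, 10000000 /*timeout, us*/, server);

    // Send the raw bytes; `size` is updated with the amount actually sent.
    dwSocketConnection_send(buffer, &size, connection);

    dwSocketConnection_release(connection);
    dwSocketServer_release(server);
}
```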

OK, let me change my question a bit.
For getting camera data, there is the server/client setup, where you launch a camera server that connects to multiple cameras and distributes the data to multiple other programs. This one:
https://docs.nvidia.com/drive/driveworks-3.5/sensor_distribution_tool.html
So I basically want a similar setup, but with my own tool taking the role of the camera server.
How do I set that up?

Additionally, I want to verify something. Someone from our project stated that the NVIDIA SDK is able to transfer that data inter-process as GPU memory, i.e. you leave the image on the GPU and just pass that memory over to somewhere else, so you don't need to download it from the GPU, send it via some interface, and re-upload it again. Is that actually true, or was I given wrong assumptions? If the camera server/client setup is a simple socket transmission, then all of my questions basically resolve themselves, but if there is a special, more efficient interface, I'd like to use it.

Dear @tino.weidenmueller,
The DW SDK has an ImageStreamer module which can stream image data across the CUDA/NvMedia/OpenGL APIs without data transfer (copies happen only if needed). Please see DriveWorks SDK Reference: Image for more details on the Image Streamer. The camera server tool also uses the Image Streamer to transfer data. Please check the Image Streamer samples for usage.
If you plan to use the DW IPC APIs, which are based on socket programming, you should be able to send data over Ethernet, but you may hit network bottleneck issues.
If you use the ImageStreamer, both processes must be on the SoC.
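
In outline, the producer/consumer flow from the Image Streamer samples looks roughly like this (a sketch assuming the DW 3.5 API; header paths and exact signatures should be verified against your release):

```cpp
#include <dw/image/Image.h>
#include <dw/interop/streamer/ImageStreamer.h> // path may differ by release

// Hypothetical single-process sketch: stream a CPU image to CUDA.
void streamCpuToCuda(dwImageHandle_t cpuImage,
                     const dwImageProperties& cpuProps,
                     dwContextHandle_t ctx)
{
    dwImageStreamerHandle_t streamer = DW_NULL_HANDLE;
    dwImageStreamer_initialize(&streamer, &cpuProps, DW_IMAGE_CUDA, ctx);

    // Producer side: post the CPU image into the stream.
    dwImageStreamer_producerSend(cpuImage, streamer);

    // Consumer side: receive a CUDA-backed view of the same data.
    dwImageHandle_t cudaImage = DW_NULL_HANDLE;
    dwImageStreamer_consumerReceive(&cudaImage, 33000 /*timeout, us*/, streamer);

    // ... run image processing on cudaImage here ...

    // Return the buffers so the streamer can recycle them.
    dwImageStreamer_consumerReturn(&cudaImage, streamer);
    dwImageStreamer_producerReturn(nullptr, 33000 /*timeout, us*/, streamer);

    dwImageStreamer_release(streamer);
}
```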

Hi @SivaRamaKrishnaNV
I've looked through the documentation of the Image Streamer and I have a few questions about it.
As I've seen in the multi-process example, there is a way to transmit images across processes, but those functions seem to be locked to Vibrante builds (at least judging by the header files). Does this mean the API can only be accessed when building for the DRIVE? I'm still not so sure what exactly the Vibrante build option is.
Does the Image Streamer always work cross-process? If so, how do I connect an image streamer to a specific processing node? For example, we have 12 cameras; how do I assign each stream to the right processing node? The cross-process version has the socket file, but that suggests to me that cross-process communication works via sockets only, and the "without data transfer" part won't kick in there, as you would need to copy via sockets.
Am I understanding this correctly, or did I get something wrong?

Dear @tino.weidenmueller,
"I'm still not so sure what exactly the Vibrante build option is"

This code section is expected to work only on the target.

The ImageStreamer works if both processes are on the same Tegra. Note that all the buffers created on the Tegra via CPU, CUDA, NvMedia, etc. reside in the same DRAM, but the CPU cannot access CUDA/NvMedia buffers, as it does not know how to read those buffer structs. So, to have interoperability across APIs, we have EGLStreams. An EGLStream does not do any data transfer; it just transfers the metadata from one API to the other and facilitates mapping buffers from one API to another. The ImageStreamer is a wrapper over EGLStream. The socket in the cross-process case is used to share some metadata across processes, but there is no actual data transfer from one process to another, as the buffers reside in the same SoC DRAM. You can create one socket file per EGLStream (corresponding to each camera stream).

I hope this clarifies things.
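
For the 12-camera case, the producer side of one such cross-process stream could then look roughly like this (target/Vibrante builds only; a sketch assuming the DW 3.5 ImageStreamerGL API, with the parameter struct fields taken from the headers and therefore to be verified):

```cpp
#include <dw/interop/streamer/ImageStreamerGL.h> // target-only API

// Hypothetical cross-process producer: one socket file per camera stream.
void startCameraProducer(const dwImageProperties& cudaProps,
                         dwContextHandle_t ctx)
{
    dwImageStreamerCrossProcessModeParams params{};
    params.mode       = DW_IMAGE_STREAMER_CROSS_PROCESS_PRODUCER;
    params.parameters = "/tmp/camera0.socket"; // e.g. camera0..camera11

    dwImageStreamerHandle_t streamer = DW_NULL_HANDLE;
    dwImageStreamerGL_initializeCrossProcess(&streamer, &cudaProps,
                                             DW_IMAGE_GL, params, ctx);

    // Per frame: dwImageStreamerGL_producerSend(image, streamer);
    // The consumer process opens the same socket file in CONSUMER mode.
}
```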

Hi @SivaRamaKrishnaNV,

OK, I think most things are clear to me now; there are just some final questions from my side.
So the cross-process EGL stream works only via the dwImageStreamerGL_initializeCrossProcess initialization function, right? And that function is only available on the target and cannot be tested locally on x86?
And one final question:
That function takes a socket name (a socket file), but in the examples these are never given as names, only as ports. How do I do that?

Dear @tino.weidenmueller,
Note that NvMedia is not available on x86. You can use CUDA/GL as producer/consumer to test on x86.

As far as I can see in the example, a socket file name is used as the parameter. So for each ImageStreamer connection, you need to create a separate file.

Hi @SivaRamaKrishnaNV
But dwImageStreamerGL_initializeCrossProcess is only available on Vibrante, hence my conclusion that the cross-process EGL stream is only available on the DRIVE. So the cross-process portion cannot be tested on x86?

I did get the part about the socket name, but the documentation for the dwImageStreamerGL_initializeCrossProcess socket name parameter states that it is only a socket name by default; it can also be a key-value list. That suggests to me that you could set it to a port instead of a socket file. The documentation, however, doesn't explain which parameters are available when using those key-value pairs.

Dear @tino.weidenmueller,
"So the cross-process portion cannot be tested on x86?"

Yes. The cross-process portion is available only on the target.

You can set mode, filename, and fifo-size in the key-value pair list.
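
For example, the parameter string might then look something like this (the keys are the ones named above, but the exact separator and syntax are an assumption; please verify against the Image Streamer documentation):

```cpp
// Hypothetical key-value parameter string for
// dwImageStreamerGL_initializeCrossProcess; the format is an assumption.
params.parameters = "mode=socket,filename=/tmp/camera0.socket,fifo-size=4";
```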

Hi @SivaRamaKrishnaNV

I think I have all the information I need. Thanks for your help.