Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other
Target Operating System
Linux
QNX
other
Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other
SDK Manager Version
1.9.1.10844
other
Host Machine Version
native Ubuntu 18.04
other
Hi,
Sorry, this may be a long post, as I have a mixture of specific issues and general questions. In a nutshell: using NvROS, we are trying to receive images from four GMSL cameras, do some processing, and display a single output. We have had little success with this despite trying a number of things, which I probably shouldn't elaborate on in too much detail right now, because I would first love to hear whether there is an intended way of going about this. Here are some of my thoughts, issues, and questions that, once answered, could hopefully help resolve our problem, or at least give me a better understanding of things:
- **nvros_cam_cap_multistream: missing ddpx-a.conf file**

This is among the base NvROS packages and looks like the ideal solution, but it fails to run because it needs the file /opt/nvidia/nvros/install_isolated/etc/ddpx-a.conf (line 277 of nvros_cam_cap_multistream.cpp). This seems to have been raised in another topic, and I have the same situation: ddpx-a.conf is not present after installing the SDK. May I know where I can find this file?
I also notice that the multistream code is quite different from nvros_cam_cap, which uses SIPL (and which we are able to run for a single camera). This leads me to my next point:
- **Type error when instantiating multi-camera streams using NvRosSIPL and invoking test_egl_cuda_io_multistream for some processing (MultiStreamDisplay in this case)**

I am not sure whether this is expected to work, but we effectively did the following: instantiated a test_nvros_sipl node with `camMask = "0x1111 0x0000 0x0000 0x0000"`; on another node, invoked test_egl_cuda_io_multistream with the socket paths modified to match those created by NvRosSIPL; and finally ran an nvm_eglstream_out node for the display. On execution, NvMedia2DBlitEx threw a YUV-to-RGBA conversion error. Where I think this breaks down is that NvRosSIPL posts images of type `NvMediaImage`, but test_egl_cuda_io_multistream expects images of type `CUarray`. Is there a workaround for what seems to be a type-conversion problem? This leads me to my final question:
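For reference, my mental model of the consumer side comes from the public CUDA EGL interop API, where the consumer acquires a frame and reads a `CUarray` out of the mapped `CUeglFrame`. The sketch below is my own illustration of what I believe a `CUarray`-based consumer like test_egl_cuda_io_multistream has to do (the `cu*` calls are from `cudaEGL.h`; the wrapper name and timeout value are mine, not taken from the NvROS sources):

```c
/* Hedged sketch of the CUDA consumer side of an EGLStream: acquire a
 * frame and extract a CUarray handle. The cu* functions are the public
 * CUDA EGL interop API; the wrapper name and timeout are illustrative. */
#include <cuda.h>
#include <cudaEGL.h>

CUresult acquire_frame_as_array(CUeglStreamConnection *conn,
                                CUarray *out_array)
{
    CUgraphicsResource resource;
    CUeglFrame egl_frame;
    CUresult status;

    /* Wait up to one second (timeout is in microseconds) for the
     * producer to post a frame on the stream. */
    status = cuEGLStreamConsumerAcquireFrame(conn, &resource, NULL, 1000000);
    if (status != CUDA_SUCCESS)
        return status;

    /* Describe the acquired resource as an EGL frame. */
    status = cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);
    if (status != CUDA_SUCCESS)
        return status;

    /* Only array-type frames carry CUarray handles; pitch-linear frames
     * expose raw device pointers instead, which would not match what a
     * CUarray-based consumer expects. */
    if (egl_frame.frameType != CU_EGL_FRAME_TYPE_ARRAY)
        return CUDA_ERROR_INVALID_VALUE;

    *out_array = egl_frame.frame.pArray[0]; /* plane 0 (e.g. Y or RGBA) */
    return CUDA_SUCCESS;
}
```

If the NvMediaImage producer posts a surface that the consumer maps as pitch-linear, or as YUV planes the downstream blit cannot convert, I imagine this is where the mismatch shows up, but I may well be wrong.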
- **Understanding the NvMediaImage datatype**

This may be a silly question, but the definition of `NvMediaImage` in `nvmedia_image.h` has me a little confused. It shows a structure with parameters for image size and so on, but none of them (as far as I can tell) seem to relate to any pixel information (like RGBA values) or point to such data. So I am wondering how this information is actually passed through, for instance to an EGL stream?
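My current guess, for context: `NvMediaImage` is an opaque handle and the pixels live in a driver-owned surface, so CPU access would go through a lock call that fills an `NvMediaImageSurfaceMap`, roughly as below. This is a hedged sketch from my reading of `nvmedia_image.h`; the exact fields of the surface map seem to vary between DRIVE OS releases, so I have not dereferenced them here.

```c
/* Hedged sketch: CPU access to NvMediaImage pixels via lock/unlock.
 * NvMediaImage itself carries only metadata (type, width, height, ...);
 * the pixel data stays in a driver-owned surface until locked. */
#include <nvmedia_image.h>

NvMediaStatus inspect_image(NvMediaImage *image)
{
    NvMediaImageSurfaceMap surface_map;
    NvMediaStatus status;

    /* Lock for CPU read access; on success, surface_map describes the
     * mapped planes (pointers and pitches, layout release-dependent). */
    status = NvMediaImageLock(image, NVMEDIA_IMAGE_ACCESS_READ, &surface_map);
    if (status != NVMEDIA_STATUS_OK)
        return status;

    /* ... inspect or copy pixel data via surface_map here ... */

    NvMediaImageUnlock(image);
    return NVMEDIA_STATUS_OK;
}
```

If that is right, then an EGL stream would pass the underlying surface by reference rather than copying pixel data through the struct, which would explain why no pixel pointer appears in the definition. Is that understanding correct?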
Apologies again for the lengthy post; I'm fairly new to this. I understand if I can't get a response in full, but any thoughts on this would be much appreciated :)
Thanks,
Ken