How to publish GMSL camera images to ROS?

I followed some instructions to install OpenCV 3.1 on the PX2, but how do I compile it against DriveWorks? Or how do I cross-compile OpenCV on the host machine?

Alternatively, does anyone know how to use a GMSL camera with OpenCV, or how to publish GMSL camera images to a ROS node?

An answer to any of these questions would help us a lot. Thanks!

I have been compiling ROS nodes (which link against the DriveWorks binaries) on the Tegra host. I was hoping to document this well but haven’t taken the time. Most of the configuration needed is in the CMakeLists. Here is a link to the changes I made; I hope it helps.
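For anyone attempting the same thing, the core of the CMakeLists changes looks roughly like the sketch below. This is a hedged example, not the poster's actual file: the `/usr/local/driveworks` path and the `driveworks`/`nvmedia` library names are assumptions based on a default SDK install and may differ across DriveWorks versions.

```cmake
# Locate the DriveWorks SDK (default install location on the PX2 -- adjust if needed).
set(DRIVEWORKS_DIR /usr/local/driveworks)

include_directories(
  ${catkin_INCLUDE_DIRS}
  ${DRIVEWORKS_DIR}/include
  ${CUDA_INCLUDE_DIRS}
)

link_directories(${DRIVEWORKS_DIR}/lib)

add_executable(gmsl_camera_node src/main.cpp)

# Link the node against catkin, CUDA, and the DriveWorks/NvMedia libraries.
target_link_libraries(gmsl_camera_node
  ${catkin_LIBRARIES}
  ${CUDA_LIBRARIES}
  driveworks
  nvmedia
)
```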

Thanks llorenzdes,

Based on your work, we made some changes and integrated ROS and OpenCV. I haven’t cleaned it up properly, and I am not sure whether it works on every machine, but I am happy to share.


Thanks for sharing the driver at
I am currently using it with ROS Kinetic on a PX2 AutoChauffeur (Ubuntu 16.04), with two GMSL cameras.

However, there seems to be a bit of lag in the display output.
The CPU usage is also very high (~200%).

Did anyone manage to get a better performance?
How may I optimize this?

Many thanks in advance.


About publishing ROS images:

  • Get a processed NVMEDIA image from the camera.
  • Stream the image into CUDA and perform format conversions if necessary (for example, YUV420 -> RGB using dwImageFormatConverter).
  • Stream the image into the CPU and publish it. I used cudaMemcpy() directly into sensor_msgs::Image::data to avoid an extra memory copy.

In the simple case that will take about 100 lines of code. Alternatively, you can get raw images from the camera and use the Raw Pipeline to process them. Don’t forget about CameraInfo, either.
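To illustrate the format-conversion step in the list above, here is a minimal CPU reference of planar YUV420 -> RGB using the standard BT.601 full-range math. In the actual node this work is done on the GPU by dwImageFormatConverter; this sketch only shows the arithmetic, not the DriveWorks API, and the function name is made up for the example.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp a float to the 0..255 byte range.
static uint8_t clampByte(float v) {
    return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v)));
}

// Convert one planar YUV420 frame to interleaved RGB (BT.601, full range).
// Each U/V sample covers a 2x2 block of luma pixels.
std::vector<uint8_t> yuv420ToRgb(const uint8_t* y, const uint8_t* u,
                                 const uint8_t* v, int width, int height) {
    std::vector<uint8_t> rgb(static_cast<size_t>(width) * height * 3);
    for (int row = 0; row < height; ++row) {
        for (int col = 0; col < width; ++col) {
            const float Y = y[row * width + col];
            const float U = u[(row / 2) * (width / 2) + col / 2] - 128.0f;
            const float V = v[(row / 2) * (width / 2) + col / 2] - 128.0f;
            uint8_t* out = &rgb[(row * width + col) * 3];
            out[0] = clampByte(Y + 1.402f * V);               // R
            out[1] = clampByte(Y - 0.344f * U - 0.714f * V);  // G
            out[2] = clampByte(Y + 1.772f * U);               // B
        }
    }
    return rgb;
}
```

The resulting interleaved RGB buffer has exactly the layout expected by a `sensor_msgs::Image` with encoding `rgb8`, which is why a single memcpy into the message data suffices.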

About OpenCV: have you built OpenCV for Tegra? Would VisionWorks fit your needs?

Dear nevermoreao4sv,

Thank you for your suggestion; I will look into it.

May I know how much CPU utilisation I should aim for, and how much your code uses?

About OpenCV: I have OpenCV 3 and VisionWorks installed.


I don’t think you can heavily load or overload the CPU when you are using a hardware-accelerated API :)

As an example, my camera node performs:

  • NVMEDIA -> CUDA streaming
  • NVMEDIA JPEG encoding
  • CUDA YUV420 -> RGB conversion
  • Publishing all messages including camera info

I get the desired 30 fps with 4 GMSL cameras (on 1 CSI port), and the load (as reported by top) is about 35%.
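For those wondering what the NVMEDIA -> CUDA streaming step in the list above looks like in code, here is a rough sketch using the DriveWorks 0.6-era image streamer API. The function names, signatures, and the `prop` member are from memory of that SDK generation and may differ in your DriveWorks version, so treat this as a starting point rather than working code.

```cpp
// Assumes: sdk is an initialized dwContextHandle_t and nvmediaImage is a
// dwImageNvMedia* captured from the camera sensor.
dwImageStreamerHandle_t nvm2cuda = DW_NULL_HANDLE;
dwImageProperties props = nvmediaImage->prop;
dwImageStreamer_initialize(&nvm2cuda, &props, DW_IMAGE_CUDA, sdk);

// Post the NvMedia image on the producer side...
dwImageStreamer_postNvMedia(nvmediaImage, nvm2cuda);

// ...and receive it as a CUDA image on the consumer side.
dwImageCUDA* cudaImage = nullptr;
dwImageStreamer_receiveCUDA(&cudaImage, 30000 /*us timeout*/, nvm2cuda);

// <use cudaImage here: format conversion, cudaMemcpy into the ROS message, ...>

// Return the CUDA image and wait for the NvMedia image to come back.
dwImageStreamer_returnReceivedCUDA(cudaImage, nvm2cuda);
dwImageNvMedia* returned = nullptr;
dwImageStreamer_waitPostedNvMedia(&returned, 30000, nvm2cuda);
```

Error handling (checking each call's dwStatus return value) is omitted for brevity but is essential in a real node.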

Dear nevermoreao4sv,

I am having trouble with the first part: getting the stream from NvMedia to CUDA.
I found the documentation on dwImageCUDAMemoryType, but the information is not enough for me to call the functions properly. May I have an example of how to do so?

Many thanks in advance

Hello, mpechon

I am working on this problem as well.
Have you found a way to solve it?

Hi groot,

Not completely. I put some code on GitHub, but the current performance is about 30% CPU usage per camera. We have found that memcpy is the expensive call, so we will probably try streaming from NvMedia instead of sending image messages over ROS, but you can find the ROS approach at the GitHub link below.

With any number of cameras above 3, I would get frame drops like crazy, so I think the code needs a little more work. Perhaps launching several nodes would help spread the load across the CPU cores, but it would still take up a significant amount of the CPU. Definitely change it to a nodelet if you are considering this approach.

The github repo, forked from cshort101:

Dear cshort101,
I saw your post on the NVIDIA forum and your GitHub repository about the PX2 and ROS.
I have a question about your OpenCV setup on the PX2.
I know that ROS includes OpenCV, but I don’t trust that it is ported perfectly to the PX2, and I saw that you installed OpenCV on the PX2 yourself.
Is the OpenCV you used the one included with ROS, or one you installed separately?
If you are using a separately installed OpenCV, how can I install it, and how do I remove the OpenCV that ships with ROS?
Thank you for reading my question.

Hi dlehdrb3909,
I used to use a self-compiled OpenCV, but now we just use the ROS OpenCV.
The instructions, if you want to build your own OpenCV, are: or
I think both solutions should work…

Hi cshort and mpechon,

I am using a Drive PX2 AutoChauffeur with Ubuntu 16.04 LTS and ROS Kinetic. I just brought up the system a day ago and am trying to compile the gmsl driver from here:

As per the CMakeLists.txt, I have copied FindDriveworks.cmake, ArchConfiguration.cmake, and LibFindMacros.cmake from /usr/local/driveworks/samples/cmake/ to catkin_ws/src/gmsl_driver/cmake/.

Also, as per the CMakeLists.txt, I have modified ArchConfiguration.cmake to contain the following lines instead of the error message at line 17:
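For anyone hitting the same spot: the poster's exact replacement lines are not shown above, but the change people commonly make there, to allow building natively on the Tegra instead of cross-compiling, looks something like the fragment below. Whether these exact variables apply depends on your DriveWorks version, so treat them as an assumption.

```cmake
# Replace the "unsupported architecture" error with a native aarch64 build.
set(VIBRANTE TRUE)
add_definitions(-DVIBRANTE)
```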


When I try to build ROS catkin workspace, I get the following error:

In file included from /home/nvidia/catkin_ws/src/gmsl_driver/src/main.cpp:62:0:
/home/nvidia/catkin_ws/src/gmsl_driver/src/WindowGLFW.hpp:41:24: fatal error: GLFW/glfw3.h: No such file or directory

I don’t have a “GLFW” folder anywhere. There is “/usr/include/GL/glfw.h”. Should I use this instead? Do I need to install anything before this works?

To get around the above error, I used “GL/glfw.h” from “/usr/include/”. Now I am getting this error:

In file included from /usr/local/driveworks/include/dw/image/ImageStreamer.h:53:0,
                 from /home/nvidia/catkin_ws/src/gmsl_driver/src/ResourceManager.hpp:37,
                 from /home/nvidia/catkin_ws/src/gmsl_driver/src/main.cpp:66:
/usr/local/driveworks/include/dw/image/Image.h:73:27: fatal error: nvmedia_image.h: No such file or directory

This time, “nvmedia_image.h” isn’t anywhere on the Drive PX2 system. This makes me think I might be missing some libraries. Any help is appreciated.

This error occurs because VisionWorks has been removed from the recent SDK. The way to work around it is to find that file/folder on your host machine and copy it to your PX2. I don’t know if there is a better way, but this will solve your problem for the moment.
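If it helps, the missing NvMedia headers usually live in the PDK installed on the host by the SDK manager; copying them over might look like the command below. The source path, destination path, and the `<px2-ip>` placeholder are all assumptions for a default install and will vary with SDK version, so check your own layout first.

```shell
# On the host: copy the PDK's NvMedia headers to the PX2's DriveWorks include dir.
scp ~/nvidia/drive-t186ref-linux/include/nvmedia*.h \
    nvidia@<px2-ip>:/usr/local/driveworks/include/
```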

Thanks cshort. I have the VisionWorks files on the Drive PX2 now, but I started finding API incompatibilities between the old and new VisionWorks, and because of them the ROS driver isn’t compiling. I started fixing some of the incompatibilities and eventually found there are too many of them; it might take more time than I anticipated. I am wondering whether an updated ROS driver that works with the latest DPX2 SDK already exists.

Yep, we re-developed the driver based on the latest camera samples and split the work into separate threads for the different tasks. However, I don’t think I can publish it. The old one is just meant to give you an idea to start from; you might need to make further changes for different DriveWorks versions. Sorry about that.

Dear Keeerthi,

I am using the driver on the second-latest DriveWorks release (before April 2018). I do not think there ought to be much difference between our versions. Perhaps if you elaborate on the problems you are facing, I can help where I can?

Also, in case you need an alternative solution: I solved the GLFW dependency by installing GLFW 3.2 from source.
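In case it is useful to others, building GLFW 3.2 from source is roughly the following (a sketch assuming standard build tools and the usual X11 development packages are already installed; the 3.2.1 tag is an assumption for "GLFW 3.2"):

```shell
# Fetch GLFW 3.2.1, build it as a shared library, and install system-wide.
git clone --branch 3.2.1 https://github.com/glfw/glfw.git
cd glfw
mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON ..
make -j4
sudo make install   # installs GLFW/glfw3.h and libglfw
```

After this, the `GLFW/glfw3.h` include that the driver expects resolves without falling back to the old `GL/glfw.h` header.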


Hi mpechon,

Thanks for your response. To begin with, my DriveWorks installation didn’t include VisionWorks, so the ROS driver couldn’t find “nvmedia_image.h” and some other header files. I then copied the VisionWorks header files from the host computer, as cshort mentioned above. Now, some of the nvmedia-related APIs used in the driver don’t match what the header files declare. I don’t have access to the DPX2 right now; I will post some examples tomorrow.

May I know the DriveWorks and VisionWorks version numbers that you are using?

Did you have to modify anything in the driver at ?


Hi mpechon,

Some of the incompatibilities I am finding between the “gmsl_camera” ROS driver and the nvmedia library are:

  1. The NvMediaImageCreate() function used in “Camera.cpp” doesn’t exist. There is a new function called NvMediaImageCreateNew() in “nvmedia_image.h”, and it is not straightforward to move to this API.

  2. The “GLFWwindow” class or struct doesn’t exist anymore and seems to have been changed to “GLFWimage”.

  3. “GLFW_KEY_ESCAPE” has been changed to “GLFW_KEY_ESC”, along with some similar changes in glfw.h, which were easy to fix.

  4. “NVMEDIA_IMAGE_CLASS_SINGLE_IMAGE” isn’t defined anymore. I am still trying to figure out what it has been changed to.

I might be using an old VisionWorks library. If you can provide the DriveWorks and VisionWorks version numbers that you are using, it would help a lot.