How to let DeepStream 6.0 use all GPU cards

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1.6
• NVIDIA GPU Driver Version (valid for GPU only). 510.47.03
• CUDA version 11.4
• Issue Type( questions, new requirements, bugs). Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I just upgraded DeepStream from 5.0 to 6.0.1 and found a weird issue. With deepstream-5.0, when I set gpu-id=1 in the [property] configuration, the pipeline was able to use both GPU cards: one process ID showed up on both GPU 0 and GPU 1, and I got the expected result.

But after I upgraded to deepstream-6.0.1 with the same gpu-id=1 setting in [property], the process still showed up on both GPU 0 and GPU 1, yet I no longer got the expected object detection result. If I change to gpu-id=0, the process only shows up on GPU 0 and I get the expected result.
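
For reference, the [property] section I'm referring to looks roughly like this (a minimal sketch; the engine and label file names are placeholders, not my real model files):

[property]
gpu-id=1
# placeholder model files, not my actual ones
model-engine-file=resnet10_b1_gpu1_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2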

I don’t know what caused the difference. Does anyone know how we can use all GPU cards in deepstream-6.0?

Our team will investigate and provide suggestions soon. Please stay tuned. Thanks.

Thanks for the investigation.

I ran into more severe problems.

  1. The output video was normal with the old deepstream-5.0. After I switched to deepstream-6.0 (or deepstream-5.1), the output video was abnormal for the first few seconds. The output video is generated from the frames of the RTSP stream after post-processing the object detection results.

  2. When I put several streams into one pipeline, the output video was messed up with frames from the other streams for the first few seconds.

I don’t know if this issue was reported before. It seems reproducible.

I had a similar problem here: DS 6.0 Shared memory multiple GPU issue

Regarding issue 1, I wrote the frames out to individual files and noticed many duplicates; it turned out the Python binding function returned wrong data. Below is what I’m using.

batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
l_frame = batch_meta.frame_meta_list
while l_frame is not None:
    frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    # returns the mapped frame as a NumPy array
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    ...
    l_frame = l_frame.next
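
In case it helps to reproduce, this is roughly how I dump each mapped frame to disk inside that loop (the OpenCV/NumPy part is my own code, not from the pipeline above; the output path is a placeholder):

import cv2
import numpy as np

# inside the while loop above, after get_nvds_buf_surface():
# copy the mapped RGBA surface before the buffer is unmapped
frame_copy = np.array(n_frame, copy=True, order='C')
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGR)
# placeholder output path; pad_index/frame_num identify the stream and frame
cv2.imwrite("frames/stream_%d_frame_%d.jpg" % (frame_meta.pad_index, frame_meta.frame_num), frame_copy)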

Did anyone run into a similar issue?
This happened with both deepstream 5.1 and deepstream 6.0.1.

Did anyone happen to run into a similar issue?

@gabe_ddi, thank you for confirming one of the issues I ran into.

The gpu-id setting makes the application run the corresponding component on the designated GPU. So if you can only see one GPU working, that means you have assigned the tasks to that GPU.

Thanks @Fiona.Chen. Based on my testing with 2 GPUs, when I set gpu-id=1 the task runs on both GPU 0 and GPU 1, not only on GPU 1. That had been working for a while on deepstream-5.0.

This has not changed in deepstream-5.1 and deepstream-6.0, right?

You can choose to run different components on different GPUs in the deepstream-app config file, but you need to make sure the nvbuf memory type is nvbuf-mem-cuda-unified.

An example is attached
test.txt (4.8 KB)
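
The relevant settings in the deepstream-app config look roughly like this (a minimal sketch; the gpu-id assignment and the enum value are illustrative, please refer to the attached file and the deepstream-app documentation for your version):

[streammux]
gpu-id=0
# 3 is assumed to map to nvbuf-mem-cuda-unified on dGPU
nvbuf-memory-type=3

[primary-gie]
gpu-id=1
nvbuf-memory-type=3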
