Could you also provide a pipeline graph (.dot) to confirm? Thanks! From the graph, only the decoder should use the GPU. Please also use nvidia-smi to check whether the application is using two GPUs.
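As a side note, one way to check this programmatically is to parse the output of `nvidia-smi --query-compute-apps=pid,gpu_uuid --format=csv,noheader`. Below is a minimal sketch (not from the thread): the PID used in the comment is hypothetical, and the parser assumes the two-column CSV layout that query produces.

```python
# Sketch: determine which GPUs a given PID has compute contexts on,
# by parsing `nvidia-smi --query-compute-apps=pid,gpu_uuid --format=csv,noheader`.
import subprocess

def gpus_for_pid(csv_text: str, pid: int) -> set:
    """Return the set of gpu_uuid strings on which `pid` appears."""
    gpus = set()
    for line in csv_text.strip().splitlines():
        p, uuid = [field.strip() for field in line.split(",", 1)]
        if p.isdigit() and int(p) == pid:
            gpus.add(uuid)
    return gpus

def query_nvidia_smi() -> str:
    # Requires an NVIDIA driver on the machine.
    return subprocess.check_output(
        ["nvidia-smi", "--query-compute-apps=pid,gpu_uuid",
         "--format=csv,noheader"], text=True)

# e.g. gpus_for_pid(query_nvidia_smi(), 4242)  # 4242 is a hypothetical PID
```

If the application is pinned to one GPU as expected, the returned set should contain a single UUID; two UUIDs would confirm the double-spawn behavior discussed here.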
I have attached the pipeline graph and an nvidia-smi snapshot, which clearly shows the same GPU-1 process being spawned on GPU-0 as well. Please check the zip file below for the pipeline graph and the nvidia-smi snapshot.
pipeline_graph_and_nvidia_smi.zip (206.2 KB)
The same GPU-1 process gets spawned on GPU-0 only when I send the start-sr signal.
Thanks for sharing! I can reproduce the issue.
- It seems the memory usage on the other GPU does not continue to increase. Will this issue block you? May I know your company name?
- If you don’t need the sr-done message, here is a workaround.
2.1 Open gstdsnvurisrcbin.cpp in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvurisrcbin. In populate_rtsp_bin, comment out “params.callback = smart_record_callback;”, then build libnvdsgst_nvurisrcbin.so according to the README.
2.3 Back up /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_nvurisrcbin.so, then run cp libnvdsgst_nvurisrcbin.so /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_nvurisrcbin.so.
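The backup-and-replace step above can be sketched as a small helper (a sketch, not an official tool; the DeepStream paths are taken from the thread and the function names are my own). It backs up the stock plugin before overwriting it, and no-ops if the files are not present:

```python
# Sketch: install the rebuilt libnvdsgst_nvurisrcbin.so, keeping a backup
# of the stock one. Paths assume a default DeepStream install.
import shutil
from pathlib import Path

DS = Path("/opt/nvidia/deepstream/deepstream")
BUILT = DS / "sources/gst-plugins/gst-nvurisrcbin/libnvdsgst_nvurisrcbin.so"
TARGET = DS / "lib/gst-plugins/libnvdsgst_nvurisrcbin.so"

def install_plugin(built: Path, target: Path) -> bool:
    """Back up `target` as <target>.bak, then replace it with `built`."""
    if not (built.exists() and target.exists()):
        return False  # nothing to do on this machine
    backup = target.with_name(target.name + ".bak")
    shutil.copy2(target, backup)   # keep the stock .so recoverable
    shutil.copy2(built, target)    # drop in the rebuilt plugin
    return True
```

Restoring the original plugin is then just copying the .bak file back over the target.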
Hello @fanzh !
Can you explain what this means?
So is it the sr-done message causing the issue?
Were you able to wait a couple of minutes for the process to spawn on GPU-0? And were you able to rectify this so the process only spawns on GPU-1, and not GPU-0?
- Sorry, I meant that when setting the same gpu-id for all plugins and using smart record, the application does use another GPU. From my test, the memory usage on the other GPU does not continue to increase.
- Did you verify the workaround? It is not the sr-done message itself causing the issue. The reason is that after generating the recording, smart record performs some tasks (this step causes the issue) and then emits the sr-done message. The workaround removes both those tasks and the sr-done emission. If you can’t accept this workaround, since the issue occurs in closed-source code, we will fix it in a later version. Or may I know your company name? Maybe we can provide the fix to your company in advance.
- About “make the process only spawn on GPU-1, and not GPU-0”: we will fix this in a later version. Thanks!
It does not increase, but it matches GPU-1’s memory usage. What if I already have something running on GPU-0? It would be affected.
I will try it and update you on this.
gst-nvurisrcbin is not present in deepstream-7.0
Well, I tried it on deepstream-7.1: the workaround works, and the process spawns on GPU-1 and stays on GPU-1.
I want to try this on deepstream-7.0. What can I do for deepstream-7.0?
Thanks for sharing! nvurisrcbin is closed source on DS 7.0. You can copy the source directory to DS 7.0 and rebuild; there may be some compilation errors.
Am I missing something?
Can I just use the .so file /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_nvurisrcbin.so in deepstream-7.0? That should be fine, I guess?