Hello,
We’re using the EGL Stream API to share video frames from a single input with multiple applications. One process runs as a video server, responsible for capturing and sharing frames; it runs constantly in the background as a systemd service. In addition, up to 3 processes can be started at any time as video clients.
Issue - On the server (producer) side there is a DMAbuf file descriptor leak after clients disconnect and reconnect.
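For context, the producer side boils down to roughly the following (a simplified sketch based on the EGLStream_CUDA_CrossGPU sample, not our exact code; it assumes an initialized CUDA context, an EGLDisplay, and an EGLStreamKHR already shared with the consumer, and omits error checking). As far as I understand, the frame storage presented here is what ends up shared across processes and shows up as DMAbuf FDs:

```cpp
#include <cuda.h>
#include <cudaEGL.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

// Connect a CUDA producer to an existing EGL stream and present one frame.
CUeglStreamConnection connectAndPresent(EGLStreamKHR stream, int width, int height)
{
    CUeglStreamConnection conn;
    cuEGLStreamProducerConnect(&conn, stream, width, height);

    // Allocate the frame storage; presenting it to the stream is what
    // makes it visible to the consumer process.
    CUdeviceptr devPtr;
    cuMemAlloc(&devPtr, (size_t)width * height * 4);

    CUeglFrame frame = {};
    frame.frameType      = CU_EGL_FRAME_TYPE_PITCH;
    frame.width          = width;
    frame.height         = height;
    frame.depth          = 1;
    frame.pitch          = width * 4;
    frame.planeCount     = 1;
    frame.numChannels    = 4;
    frame.cuFormat       = CU_AD_FORMAT_UNSIGNED_INT8;
    frame.eglColorFormat = CU_EGL_COLOR_FORMAT_ARGB;
    frame.frame.pPitch[0] = (void *)devPtr;

    cuEGLStreamProducerPresentFrame(&conn, frame, NULL);
    return conn;
}
```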
Setup - Jetson TX2-4GB running L4T 32.4.4 with CUDA 10.2.
Test setup
I used the EGLStream_CUDA_CrossGPU sample from the cuda-samples repo (latest master branch) as a reference.
I slightly modified the Makefile to fix the build for CUDA 10.2, and made quick modifications to main.cpp so that the processes don’t exit but keep running until termination, to emulate many reconnections to the producer (a rough skeleton of this change is sketched below). The changes are attached as prod-cons.patch (3.5 KB).
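Purely as an illustration of the shape of that change (the real diff is in the attached patch; the helper names here are hypothetical placeholders for the sample’s own setup/run/teardown code):

```cpp
#include <iostream>

static void setupStream()     { /* create/obtain EGLStreamKHR, connect producer or consumer */ }
static void runOneIteration() { /* present / acquire frames */ }
static void teardownStream()  { /* disconnect CUDA side, eglDestroyStreamKHR() */ }

int main()
{
    // Instead of exiting after one pass, repeat the full cycle until killed,
    // so the producer sees many consumer connect/disconnect iterations.
    for (;;) {
        setupStream();
        runOneIteration();
        teardownStream();
        std::cout << "Press Enter to run the next iteration" << std::endl;
        std::cin.get();   // consumer side waits here between iterations
    }
    return 0;
}
```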
Build command - cd Samples/2_Concepts_and_Techniques/EGLStream_CUDA_CrossGPU && make CUDA_PATH=/usr/local/cuda-10.2/ dbg=1. The compiled binary EGLStream_CUDA_CrossGPU (804.2 KB) is attached.
Steps to reproduce:
Start consumer first - ./EGLStream_CUDA_CrossGPU
Consumer waits for producer.
In a second terminal, start the producer - ./EGLStream_CUDA_CrossGPU -proctype prod
Run the script that collects FD usage statistics for both processes - ./cuda_test_fd.sh. The script cuda_test_fd.sh (398 Bytes) is attached; a rough C++ equivalent is sketched after these steps.
Press Enter in the consumer terminal to repeat the iteration. Collect the statistics again and check that the DMAbuf FDs are leaking.
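For reference, a rough C++ equivalent of what cuda_test_fd.sh measures (this is only a sketch, not the attached script; it assumes DMAbuf FDs show up in /proc/\<pid\>/fd as links containing "dmabuf", e.g. anon_inode:dmabuf):

```cpp
#include <cstdio>
#include <cstring>
#include <string>
#include <dirent.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }

    std::string dir = std::string("/proc/") + argv[1] + "/fd";
    DIR *d = opendir(dir.c_str());
    if (!d) { perror("opendir"); return 1; }

    int total = 0, dmabuf = 0;
    struct dirent *e;
    while ((e = readdir(d)) != nullptr) {
        if (e->d_name[0] == '.') continue;
        ++total;
        char target[256] = {0};
        std::string link = dir + "/" + e->d_name;
        // DMA-BUF fds typically resolve to "anon_inode:dmabuf".
        ssize_t n = readlink(link.c_str(), target, sizeof(target) - 1);
        if (n > 0 && strstr(target, "dmabuf") != nullptr)
            ++dmabuf;
    }
    closedir(d);

    printf("total fds: %d, dmabuf fds: %d\n", total, dmabuf);
    return 0;
}
```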
According to the code, the stream and frames should be destroyed after each iteration (and consequently the DMAbuf FDs should be closed), but in fact the number of FDs grows very fast.
Where could the issue be? Is something wrong with the stream-termination sequence in the sample code?
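For reference, the producer-side teardown I expect to run each iteration looks roughly like this (simplified, with illustrative variable names; assumes EGL_EGLEXT_PROTOTYPES so eglDestroyStreamKHR has a prototype, otherwise it has to be fetched via eglGetProcAddress):

```cpp
#define EGL_EGLEXT_PROTOTYPES 1
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>
#include <cstdint>

void teardownProducer(CUeglStreamConnection conn, CUeglFrame frame,
                      EGLDisplay dpy, EGLStreamKHR stream)
{
    // Frames still held by the stream should first be collected with
    // cuEGLStreamProducerReturnFrame(); then free the frame storage.
    if (frame.frameType == CU_EGL_FRAME_TYPE_PITCH)
        cuMemFree((CUdeviceptr)(uintptr_t)frame.frame.pPitch[0]);
    else
        cuArrayDestroy(frame.frame.pArray[0]);

    // Disconnect the CUDA producer before destroying the stream.
    cuEGLStreamProducerDisconnect(&conn);

    // Destroy the EGL stream itself; this is the step I would expect to
    // release the DMAbuf backing on the producer side.
    eglDestroyStreamKHR(dpy, stream);
}
```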
Any help is appreciated.
Thanks!