I have an app that receives images, processes them with CUDA, and then compresses them with NVENC into an H.264 video stream that's saved to a file.
This works, but after a variable number of frames (generally between 5 and 1000) the app deadlocks.
The structure of the app is:
- Images are asynchronously uploaded to the GPU.
- CUDA kernels are queued.
- A callback is registered for when the kernels complete.
- Once the callback fires, the finished CUDA frame is put in a thread-safe queue, from which a dedicated thread passes it to NVENC for synchronous compression.
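To make the hand-off concrete, here is a minimal sketch of the kind of thread-safe queue I'm using between the CUDA callback (producer) and the compression thread (consumer). The `Frame` struct is a stand-in for whatever handle the real app carries (e.g. a `CUdeviceptr` plus metadata):

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

// Hypothetical frame handle; the real app would carry a device pointer, etc.
struct Frame { int id; };

// Minimal thread-safe queue: producer = CUDA completion callback,
// consumer = dedicated NVENC compression thread.
class FrameQueue {
public:
    void push(Frame f) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(f);
        }
        cv_.notify_one();
    }

    // Blocks until a frame is available, or returns nullopt after shutdown.
    std::optional<Frame> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || done_; });
        if (q_.empty()) return std::nullopt;
        Frame f = q_.front();
        q_.pop();
        return f;
    }

    void shutdown() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Frame> q_;
    bool done_ = false;
};
```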
If my app receives a new frame before compression of the previous frame has finished, the new frame is uploaded on a different CUDA stream.
When the deadlock occurs, the first thread freezes in cuMemcpyHtoDAsync and the compression thread freezes in nvEncMapInputResource. To me, this suggests that some additional synchronization is needed between CUDA and NVENC when CUDA is using more than one stream. The only relevant mention of this that I can find is https://devtalk.nvidia.com/default/topic/791948/gpu-accelerated-libraries/nvenc-and-synchronization/ but despite being a four-year-old thread there doesn't seem to be any consensus.
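For context, the kind of extra synchronization I've been considering is an event-based hand-off, where the producer records an event on the upload/kernel stream and the compression thread waits on it before mapping the resource. This is only a sketch of the idea (driver API, error checking and names like `stream` assumed from my pipeline above):

```cpp
// Producer side: record an event after the kernels are queued on `stream`.
CUevent frameDone;
cuEventCreate(&frameDone, CU_EVENT_DISABLE_TIMING);
cuEventRecord(frameDone, stream);

// Compression thread, before calling nvEncMapInputResource:
cuEventSynchronize(frameDone);  // block until the frame's work has completed
```

I don't know whether this is sufficient, or whether NVENC needs something stronger when multiple streams are in flight.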
Does anyone have any suggestions for what additional synchronization is needed between CUDA and NVENC?
Does the compression thread need to have the CUDA context pushed before interacting with NVENC?
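In other words, something along these lines on the compression thread, where `ctx` is the context that owns the device memory and `nvenc` is the NV_ENCODE_API_FUNCTION_LIST (a sketch of what I'm asking about, not something I've confirmed works):

```cpp
// Make the uploading context current on the compression thread
// before touching NVENC, then restore the previous context.
cuCtxPushCurrent(ctx);
NVENCSTATUS st = nvenc.nvEncMapInputResource(encoder, &mapParams);
// ... nvEncEncodePicture, nvEncUnmapInputResource ...
cuCtxPopCurrent(nullptr);
```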
Any and all advice/tips for this would be helpful, as I’ve been stuck on this problem for a while now.