Hi there!
We’re building a transcoding server and trying to take advantage of two Titan X cards installed in a single machine running Ubuntu 15.10, using FFMPEG built with NVIDIA acceleration (per the instructions here: http://developer.download.nvidia.com/compute/redist/ffmpeg/1511-patch/FFMPEG-with-NVIDIA-Acceleration-on-Ubuntu_UG_v01.pdf).
Both cards are visible when running nvidia-smi.
When we run an FFMPEG job with up to 2 nvenc outputs, either with no GPU explicitly specified or with GPU 0 explicitly specified, it runs fine but barely touches the first card: GPU 0’s encoder utilisation peaks at around 20%, while GPU 1 doesn’t budge from 0%.
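For anyone trying to reproduce the utilisation numbers above: nvidia-smi can also report per-GPU encoder session counts via `--query-gpu=index,encoder.stats.sessionCount`, and the CSV output is trivial to parse. A minimal sketch (the `sample` string below is illustrative, not our actual output):

```python
import csv
import io

def parse_encoder_stats(csv_text):
    """Parse the output of
    `nvidia-smi --query-gpu=index,encoder.stats.sessionCount --format=csv,noheader`
    into a {gpu_index: session_count} dict."""
    stats = {}
    for row in csv.reader(io.StringIO(csv_text)):
        idx, sessions = (field.strip() for field in row)
        stats[int(idx)] = int(sessions)
    return stats

# Illustrative output for a 2-GPU box (not our real numbers):
sample = "0, 2\n1, 0\n"
print(parse_encoder_stats(sample))  # {0: 2, 1: 0}

# On the actual machine one would feed it live output, e.g.:
# import subprocess
# out = subprocess.check_output(
#     ["nvidia-smi", "--query-gpu=index,encoder.stats.sessionCount",
#      "--format=csv,noheader"], text=True)
# print(parse_encoder_stats(out))
```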
Simplified example:
ffmpeg -y -i inputfile.mov -filter_complex "nvresize=2:s=hd1080\|hd720:readback=0[out1][out2]" \
-map "[out1]" -gpu 0 -b:v 10M -c:v nvenc -c:a copy output1.mp4 \
-map "[out2]" -gpu 0 -b:v 8M -c:v nvenc -c:a copy output2.mp4
When explicitly telling FFMPEG to use GPU 1 as follows:
ffmpeg -y -i inputfile.mov -filter_complex "nvresize=2:s=hd1080\|hd720:readback=0[out1][out2]" \
-map "[out1]" -gpu 1 -b:v 10M -c:v nvenc -c:a copy output1.mp4 \
-map "[out2]" -gpu 1 -b:v 8M -c:v nvenc -c:a copy output2.mp4
it gives the following error:
[libavfilter/vf_nvresize.c:412]dl_func->cu_launch_kernel(...) has returned CUDA error 400
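For reference, and assuming FFMPEG is surfacing a raw CUDA driver-API status code here: error 400 in cuda.h is CUDA_ERROR_INVALID_HANDLE, i.e. a resource handle (such as a context or module) that isn’t valid for the device in use — which would be consistent with a kernel launch on GPU 1 against state created on GPU 0. A tiny lookup sketch with a few driver-API codes (not exhaustive; the full table lives in cuda.h):

```python
# A handful of CUDA driver API (CUresult) error codes from cuda.h.
CUDA_ERRORS = {
    0: "CUDA_SUCCESS",
    1: "CUDA_ERROR_INVALID_VALUE",
    2: "CUDA_ERROR_OUT_OF_MEMORY",
    200: "CUDA_ERROR_INVALID_IMAGE",
    400: "CUDA_ERROR_INVALID_HANDLE",
}

def cuda_error_name(code):
    """Map a numeric CUresult code to its symbolic name."""
    return CUDA_ERRORS.get(code, "unknown CUDA error %d" % code)

print(cuda_error_name(400))  # CUDA_ERROR_INVALID_HANDLE
```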
We also get a similar error when attempting to generate more than 2 outputs using nvenc.
Why are we not able to use the second GPU, and why can’t we generate more than 2 outputs? Is it because of the 2-session limit on nvenc/nvfilter? (Also, there’s no SLI bridge between the two cards at the moment.)