We have run into issues with long-running multi-camera streaming using Argus on our custom hardware. NVIDIA recommended that we try disabling the multiprocess functionality (i.e. removing the Argus daemon from the equation).
Instead of running the sample applications (we get SCF capture errors when we run with --module="Multi Session"), we built our own application with the DISABLE_MULTIPROCESS option. We can stream video, but we now get constant CUDA errors from our DeepStream NN implementation.

We noticed that with DISABLE_MULTIPROCESS=ON we link against libargus.so, while with DISABLE_MULTIPROCESS=OFF the program links against libargus_socketclient.so, per the NVIDIA-provided FindArgus.cmake. libargus_socketclient.so does not depend on CUDA at all, but libargus.so does:
    ldd /usr/lib/aarch64-linux-gnu/tegra/libargus.so
        libcuda.so.1 => /usr/lib/aarch64-linux-gnu/tegra/libcuda.so.1 (0x0000007f9df26000)
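To double-check which variant actually ends up in the process at runtime (rather than trusting the build system alone), we put together a small diagnostic. This is just a sketch: the library paths assume the usual JetPack layout under /usr/lib/aarch64-linux-gnu/tegra, and it has to be linked with -ldl.

    #include <dlfcn.h>
    #include <cstdio>

    // With RTLD_NOLOAD, dlopen() only returns a handle if the library is
    // already mapped into the process, so this reports which Argus variant
    // the dynamic linker actually loaded.
    static void reportArgusVariant() {
        const char* candidates[] = {
            "/usr/lib/aarch64-linux-gnu/tegra/libargus.so",
            "/usr/lib/aarch64-linux-gnu/tegra/libargus_socketclient.so",
        };
        for (const char* path : candidates) {
            void* handle = dlopen(path, RTLD_LAZY | RTLD_NOLOAD);
            if (handle) {
                std::printf("loaded:     %s\n", path);
                dlclose(handle); // only drops the extra reference we took
            } else {
                std::printf("not loaded: %s\n", path);
            }
        }
    }

Calling reportArgusVariant() from the streaming application after the first Argus call prints which library is mapped; in our case it confirmed libargus.so is in the process when DISABLE_MULTIPROCESS=ON.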
Does anyone know what the libargus.so version is doing with CUDA? Could it be stepping on the toes of the CUDA context our neural network code is using?
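One way we are thinking of narrowing this down is to snapshot the CUDA context that is current on the inference thread before and after Argus initialization. A rough sketch using the CUDA driver API follows; createArgusProducer() is a hypothetical stand-in for whatever Argus setup the application performs, and the file links against -lcuda.

    #include <cuda.h>
    #include <cstdio>

    // Print and return the CUDA context current on the calling thread.
    static CUcontext snapshotContext(const char* label) {
        CUcontext ctx = nullptr;
        CUresult rc = cuCtxGetCurrent(&ctx);
        std::printf("%s: cuCtxGetCurrent rc=%d ctx=%p\n", label, (int)rc, (void*)ctx);
        return ctx;
    }

    // Probe whether Argus setup creates or switches the thread's context.
    void probeArgusContextInteraction() {
        cuInit(0); // safe to call more than once
        CUcontext before = snapshotContext("before Argus init");
        // createArgusProducer();   // <-- real Argus setup goes here
        CUcontext after = snapshotContext("after Argus init");
        if (before != after)
            std::printf("libargus changed the thread's current CUDA context\n");
    }

If the two pointers differ, that would point at libargus.so manipulating the context underneath the TensorRT/DeepStream code.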
Here are the CUDA errors we receive every time we call doInference():
    error | kERROR: CUDA cask failure at execution for trt_maxwell_scudnn_128x32_relu_medium_nn_v1.
    000771 | 20:51:42.897 | 22317 | error | kERROR: caskConvolutionLayer.cpp (235) - Cuda Error in execute: 33
    000772 | 20:51:42.897 | 22317 | error | kERROR: caskConvolutionLayer.cpp (235) - Cuda Error in execute: 33
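For what it's worth, on CUDA toolkits older than 10.1 error code 33 is cudaErrorInvalidResourceHandle, which would fit a stream or handle being invalidated out from under TensorRT. Since the numbering depends on the installed CUDA version, a tiny helper like the sketch below can confirm the mapping on the actual target (build with nvcc or link against -lcudart):

    #include <cuda_runtime.h>
    #include <cstdio>

    // Translate the raw code from the TensorRT log above into a readable
    // name and description for the CUDA version installed on this device.
    int main() {
        cudaError_t err = static_cast<cudaError_t>(33); // code from the log above
        std::printf("CUDA error 33: %s (%s)\n",
                    cudaGetErrorName(err), cudaGetErrorString(err));
        return 0;
    }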