Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.3.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.4
• NVIDIA GPU Driver Version (valid for GPU only): 535.54.03
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and any other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
My server was able to run 100-channel monitoring in previous tests, but today the multi-channel monitoring program suddenly reported an error. I didn't change the code, and I probably didn't change any system settings (the machine is shared, so I can't be certain of that). Why does the following error appear?
For certain reasons, my driver version is a little higher than recommended. However, the program ran as expected when I first set up the environment, and I have another machine with the same configuration that is still running the workload normally; this machine suddenly started throwing the exception for no apparent reason.
In my experience, such errors happen when the process stack is corrupted (see Elements of a Process, swarthmore.edu). We can't determine the cause without more information or clues.
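As a starting point, you could collect the component versions and driver state on the failing machine and compare them with the working one, then post the output here (a minimal sketch using the standard tools):
# Print the DeepStream, CUDA, TensorRT and cuDNN versions that DeepStream sees
deepstream-app --version-all
# Print the installed driver version and current GPU state
nvidia-smi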
Hello, I reinstalled the environment according to the tutorial, but the TensorRT and cuDNN versions are still 8.4. Why is that, and how can I uninstall them completely?
deepstream-app version 6.3.0
DeepStreamSDK 6.3.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.1
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3
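One way to see what is actually installed, and to remove it completely, is through the Debian package manager. A minimal sketch, assuming the usual NVIDIA .deb packaging; adjust the package names to whatever the listing shows on your machine:
# List the cuDNN and TensorRT packages that are currently installed
dpkg -l | grep -i cudnn
dpkg -l | grep -Ei 'tensorrt|nvinfer'
# Remove them completely, including configuration files
sudo apt-get purge 'libcudnn8*'
sudo apt-get purge 'libnvinfer*' 'tensorrt*'
sudo apt-get autoremove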
I uninstalled CUDA following the steps in the link you gave, and then reinstalled CUDA according to the official tutorial. After the installation, the CUDA version is 12.1, but the cuDNN version is still 8.4. Why is the cuDNN version wrong?
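Note that cuDNN is packaged separately from the CUDA toolkit, so reinstalling CUDA does not replace the cuDNN already on disk. To confirm which cuDNN library is actually being resolved and which package installed it, something like the following should help (the library path below is only an example; use the path that the first command prints on your system):
# Show which libcudnn the dynamic linker resolves
ldconfig -p | grep libcudnn
# Show which installed package owns that library
dpkg -S /usr/lib/x86_64-linux-gnu/libcudnn.so.8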
The program reports the exception ‘libv4l2: error attempting to open more than 16 video devices’. Is it possible that the cuDNN and TensorRT versions are too low? On my other machine with the same configuration, which runs normally, the cuDNN version is 8.7 and the TensorRT version is 8.5, and the rest of the software environment is the same. Apart from this, I can't think of any other reason for the exception.
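For what it's worth, the stock libv4l2 has a compile-time cap of 16 simultaneously open video devices, so this message is more likely about which libv4l2 build the program loads (and how many devices it opens) than about cuDNN or TensorRT. A hedged way to compare the failing machine with the working one, and to check whether anything changed on this shared machine:
# Which libv4l2 does the dynamic linker resolve? Compare the output on both machines.
ldconfig -p | grep libv4l2
dpkg -l | grep -i libv4l
# Since the machine is shared, check whether packages were installed or upgraded recently
grep -A3 'Start-Date' /var/log/apt/history.log | tail -n 40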
I completely uninstalled TensorRT and then ran the TensorRT install command from the DeepStream 6.3 tutorial. In theory it should install version 8.5, but after the actual installation it was still 8.4. I am very confused about this.
I have tried many solutions, including installing the R525.125.06 graphics driver strictly following the official DeepStream 6.3 tutorial. However, after executing the CUDA and TensorRT instructions in the tutorial, the reported cuDNN version is still 8.4 and TensorRT is also still 8.4. Why is that?
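If apt keeps resolving to 8.4, it may simply be installing whatever candidate the configured repositories (or the packages already on disk) provide. A minimal sketch for checking the candidate and forcing an explicit version; the version string 8.5.3-1+cuda11.8 is taken as an example of what the DeepStream 6.3 quickstart lists, so please use the exact string from the guide:
# See which versions the configured repositories offer and which one apt will pick
apt-cache policy libnvinfer8 libnvinfer-plugin8
# Install an explicit version instead of the default candidate
sudo apt-get install --allow-downgrades libnvinfer8=8.5.3-1+cuda11.8 libnvinfer-plugin8=8.5.3-1+cuda11.8
# Optionally keep apt from replacing it later
sudo apt-mark hold libnvinfer8 libnvinfer-plugin8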