Deepstream on Nvidia 3080

Hi,
We have got two new RTX 3080 machines but are having issues installing and running DeepStream on them, so I want to clarify whether DeepStream supports 3080 machines or not.


We maintain and test on Tesla-series cards and Jetson devices, but users in the forum have run with GTX-series cards.
Can you paste the error log here?


User-1 Error log:
root@euclid-Z390-D:/home/euclid# deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100

(gst-plugin-scanner:31959): GStreamer-WARNING **: 23:47:56.956: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtrtserver.so: cannot open shared object file: No such file or directory
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
nvbufsurftransform:cuInit failed : 100
** ERROR: <create_multi_source_bin:1057>: Failed to create element 'src_bin_muxer'
** ERROR: <create_multi_source_bin:1132>: create_multi_source_bin failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:636: Failed to create pipeline
Quitting
App run failed

User-2 Error log while running deepstream on 3080:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: Assertion failed: Unsupported SM.
…/rtSafe/cuda/caskUtils.cpp:80
Aborting…
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: …/rtSafe/cuda/caskUtils.cpp (80) - Assertion Error in trtSmToCask: 0 (Unsupported SM.)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1186 Build engine failed from config file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:884 failed to build trt engine.
0:17:18.799430598 33 0x55badddd1460 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:17:18.799555061 33 0x55badddd1460 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:17:18.799593514 33 0x55badddd1460 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:17:18.799615724 33 0x55badddd1460 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:17:18.799619877 33 0x55badddd1460 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

From your first log,

nvbufsurftransform:cuInit failed : 100

cudaErrorNoDevice = 100

This indicates that no CUDA-capable devices were detected by the installed CUDA driver.
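For reference, cuInit is a driver-API call, so the number it returns is a CUresult; 100 is CUDA_ERROR_NO_DEVICE, the driver-API counterpart of cudaErrorNoDevice. A few of the codes can be decoded like this (a minimal sketch covering only a handful of values; the full list is in cuda.h):

```shell
#!/bin/sh
# Decode a few CUresult codes from cuda.h (driver API, as returned by cuInit).
decode_cu_result() {
    case "$1" in
        0)   echo "CUDA_SUCCESS" ;;
        3)   echo "CUDA_ERROR_NOT_INITIALIZED" ;;
        100) echo "CUDA_ERROR_NO_DEVICE" ;;      # no CUDA-capable device detected
        101) echo "CUDA_ERROR_INVALID_DEVICE" ;;
        *)   echo "unknown CUresult ($1)" ;;
    esac
}

decode_cu_result 100   # -> CUDA_ERROR_NO_DEVICE
```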

Please check whether the driver is installed properly. You can run the deviceQuery sample to check the status, or run nvidia-smi and ls -l /dev/nvidia* to check whether the device nodes were created.
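These checks can be scripted along the following lines (a read-only sketch, safe to run anywhere; the deviceQuery path is an assumption based on the default CUDA samples location and may differ on your machine):

```shell
#!/bin/sh
# Sanity checks for the NVIDIA driver install (read-only, safe to run anywhere).

# 1. Were the /dev/nvidia* device nodes created by the driver?
nodes=$(ls -l /dev/nvidia* 2>/dev/null || echo "no /dev/nvidia* device nodes - driver not loaded")
echo "$nodes"

# 2. Can the driver enumerate the GPU?
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi
else
    echo "nvidia-smi not found - driver not installed?"
fi

# 3. Optional: the CUDA deviceQuery sample, if it has been built.
#    Path is an assumption - adjust to where your CUDA samples live.
devq=/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery
[ -x "$devq" ] && "$devq" || echo "deviceQuery not built at $devq"
```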

Let's try to fix this first.

As for user case 2, Assertion Error in trtSmToCask: 0 (Unsupported SM.): the RTX 3080 is an Ampere-architecture card, which we support from TensorRT 7.1 with CUDA 11. I am not sure yet whether DeepStream supports it so far; I will check and get back to you.
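For context, TensorRT builds engines for a specific SM (compute capability) version, so a TensorRT build that predates Ampere has no kernels for the 3080's SM and aborts with "Unsupported SM". A rough architecture-to-SM map (a sketch listing only common parts; the short architecture keys are my own labels):

```shell
#!/bin/sh
# Rough NVIDIA architecture -> SM (compute capability) map for common parts.
# TensorRT compiles engines per-SM, so a build that predates Ampere has no
# kernels for sm_86 and fails with "Unsupported SM" on an RTX 3080.
sm_for() {
    case "$1" in
        pascal)  echo "sm_61" ;;   # GTX 10xx
        volta)   echo "sm_70" ;;   # V100
        turing)  echo "sm_75" ;;   # RTX 20xx, T4
        ga100)   echo "sm_80" ;;   # A100
        ga10x)   echo "sm_86" ;;   # RTX 30xx, including the RTX 3080
        *)       echo "unknown architecture: $1" ;;
    esac
}

sm_for ga10x   # -> sm_86
```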