Issue Running deepstream-imagedata-multistream.py with DeepStream 8.0 - CUDA Device Property Error and Missing Attribute in pyds

System Information:

  • Operating System: Ubuntu 24.04.1 LTS
  • Distributor ID: Ubuntu
  • Release: 24.04
  • DeepStream SDK Version: 8.0.0
  • CUDA Driver Version: 12.2
  • CUDA Runtime Version: 12.9
  • TensorRT Version: 10.9
  • cuDNN Version: 9.8
  • libNVWarp360 Version: 2.0.1d3

Hi,

I’m working with DeepStream 8.0 and running into two major issues when executing the deepstream-imagedata-multistream.py example:

Command:

python3 deepstream_imagedata-multistream.py rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 frames4

Errors:

root@dell:/opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream# python3 deepstream_imagedata-multistream.py rtsp://127.0.0.1/video1 rtsp://127.0.0.1/video2 frames4
Frames will be saved in  frames4
Creating Pipeline 

Creating streamux 

Creating source_bin  0  

Creating source bin
source-bin-00
Creating source_bin  1  

Creating source bin
source-bin-01
Creating Pgie 

Creating nvvidconv1 

Creating filter1 

Creating tiler 

Creating nvvidconv 

Creating nvosd 

ERROR: Getting cuda device property failed: 35
Creating EGLSink 

Atleast one of the sources is live
WARNING: Overriding infer-config batch-size 1  with number of sources  2  

ERROR: Getting cuda device property failed: 35
Traceback (most recent call last):
  File "/opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py", line 459, in <module>
    sys.exit(main(sys.argv))
             ^^^^^^^^^^^^^^
  File "/opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py", line 393, in main
    mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'pyds' has no attribute 'NVBUF_MEM_CUDA_UNIFIED'

Additional Information:

  • My NVIDIA driver 535.274.02 worked fine with DeepStream 7.1, but it seems incompatible with DeepStream 8.0.
  • I also tried to install NVIDIA driver 570.133.20 to match DeepStream 8.0 requirements, but I was unable to install it successfully.

How can I solve the CUDA error 35 and fix the missing NVBUF_MEM_CUDA_UNIFIED attribute in pyds on DeepStream 8.0?

DS-8.0 requires CUDA 12.9; please install a GPU driver that supports it first.
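For context, error 35 from the CUDA runtime is cudaErrorInsufficientDriver: the installed driver supports an older CUDA version than the runtime the application was built against (here, driver-side CUDA 12.2 vs. runtime 12.9). A minimal sketch of that version check, with a hypothetical helper name:

```python
# Hedged sketch: compare the CUDA version a driver supports against the CUDA
# runtime version an application needs. CUDA error 35
# (cudaErrorInsufficientDriver) corresponds to this check failing.
def driver_supports_runtime(driver_cuda: str, runtime_cuda: str) -> bool:
    """Both versions given as 'major.minor' strings, e.g. '12.2'."""
    driver = tuple(int(p) for p in driver_cuda.split("."))
    runtime = tuple(int(p) for p in runtime_cuda.split("."))
    return driver >= runtime

# The versions reported above: driver-side CUDA 12.2, runtime CUDA 12.9.
print(driver_supports_runtime("12.2", "12.9"))  # False -> error 35 territory
print(driver_supports_runtime("12.9", "12.9"))  # True
```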

Then install pyds version 1.2.2. I have tested it and it works correctly.
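Until the matching pyds build is installed, the AttributeError can also be guarded defensively. A sketch only, assuming a fallback of 0 (the default memory type) is acceptable; installing the correct pyds version remains the proper fix:

```python
from types import SimpleNamespace  # stand-in used only for the demo below

# Hedged sketch: look up NVBUF_MEM_CUDA_UNIFIED defensively so a pyds build
# that lacks the attribute falls back to the default memory type (0) instead
# of raising AttributeError at pipeline-construction time.
def pick_mem_type(pyds_module) -> int:
    return int(getattr(pyds_module, "NVBUF_MEM_CUDA_UNIFIED", 0))

# Demo with stand-in modules; with a real install this would be `import pyds`.
with_attr = SimpleNamespace(NVBUF_MEM_CUDA_UNIFIED=3)
without_attr = SimpleNamespace()
print(pick_mem_type(with_attr))     # 3
print(pick_mem_type(without_attr))  # 0
```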

You could also consider using Docker; running the following script inside the container will build all of the dependencies.

/opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh -b -r v1.2.2

Hi junshengy,

Thank you for your suggestions! I followed your instructions.

However, when I run the multi-stream Python sample (deepstream_imagedata-multistream.py), I encounter errors:


Frames will be saved in  frames4
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating source_bin  1  
 
Creating source bin
source-bin-01
Creating Pgie 
 
Creating nvvidconv1 
 
Creating filter1 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
ERROR: Getting cuda device property failed: 35
Creating EGLSink 

Atleast one of the sources is live
WARNING: Overriding infer-config batch-size 1  with number of sources  2  

ERROR: Getting cuda device property failed: 35
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  rtsp://127.0.0.1/video1
2 :  rtsp://127.0.0.1/video2
Starting pipeline 

WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1261 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine open error
0:00:00.301037719   649      0xb7fe850 WARN                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2097> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine failed
0:00:00.301052316   649      0xb7fe850 WARN                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2202> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-8.0/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream/../../../../samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.301059179   649      0xb7fe850 INFO                 nvinfer gstnvinfer.cpp:685:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2123> [UID = 1]: Trying to create engine from model files
0:00:27.132126717   649      0xb7fe850 INFO                 nvinfer gstnvinfer.cpp:685:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2155> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-8.0/samples/models/Primary_Detector/resnet18_trafficcamnet_pruned.onnx_b2_gpu0_fp16.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:363 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1:0       3x544x960       min: 1x3x544x960     opt: 2x3x544x960     Max: 2x3x544x960     
1   OUTPUT kFLOAT output_cov/Sigmoid:0 4x34x60         min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT output_bbox/BiasAdd:0 16x34x60        min: 0               opt: 0               Max: 0               

0:00:27.240286391   649      0xb7fe850 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest_imagedata_config.txt sucessfully
Decodebin child added: source 

ERROR: Getting cuda device property failed: 35
Decodebin child added: source 

ERROR: Getting cuda device property failed: 35

**PERF:  {'stream0': 0.0, 'stream1': 0.0} 

Error: gst-resource-error-quark: Could not open resource for reading and writing. (7): ../gst/rtsp/gstrtspsrc.c(8442): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Failed to connect. (Generic error)
Exiting app

ERROR: Getting cuda device property failed: 35

Error: gst-resource-error-quark: Could not open resource for reading and writing. (7): ../gst/rtsp/gstrtspsrc.c(8442): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
Failed to connect. (Generic error)
Exiting app

Refer to this topic. You can try updating the CUDA toolkit. Using Docker is a better choice, since the corresponding CUDA toolkit is already installed in the Docker image.

Make sure your RTSP source is working; you can test it using a local file first.
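Before pointing the pipeline at an RTSP URL, a quick TCP-level reachability check can separate "server not running" from decoder problems. A minimal sketch (the function name is hypothetical; a plain TCP connect does not speak RTSP or fetch the SDP, it only confirms the port answers):

```python
import socket
from urllib.parse import urlparse

def rtsp_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if the RTSP host accepts a TCP connection.

    Uses the URL's port if given, otherwise 554 (the IANA default for RTSP).
    This only checks connectivity, not that the stream itself is valid.
    """
    parsed = urlparse(url)
    host = parsed.hostname
    port = parsed.port or 554
    if host is None:
        return False
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rtsp_reachable("rtsp://127.0.0.1:1/video1"))  # port 1 is refused -> False
```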

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.