DeepStream 6.1.1 Pyds.so location

Hello, rather new to this. Needed some help with upgrading from DeepStream 5 to 6

For background: our app needs the Python bindings (pyds.so) installed before we can build it. In our Docker container this was done automatically, using the following lines in a shell script:

cp /opt/nvidia/deepstream/deepstream/lib/pyds.so /app/
cp /opt/nvidia/deepstream/deepstream/lib/setup.py /app/
python3 setup.py install

The directory paths shown above work for DeepStream 5, but fail in DeepStream 6: pyds.so and setup.py can't be found. Our main issue is simply to get the correct directory paths for these files so that we can finish building the app.

There is no pre-installed pyds.so in DeepStream 6. Please refer to the guide in deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com) to build and install the Python bindings.
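In case it helps future readers: for DeepStream 6.x the bindings are compiled from source rather than shipped as a prebuilt pyds.so. A rough sketch of the documented flow follows; the exact wheel filename and CMake options vary by release, so treat this as an assumption and follow the linked guide for your version:

```shell
# Sketch only: clone the bindings repo with its submodules and build the wheel.
# The generated wheel name depends on your DeepStream and Python versions.
git clone --recursive https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
cd deepstream_python_apps/bindings
mkdir build && cd build
cmake ..
make -j"$(nproc)"
pip3 install ./pyds-*.whl
```

After the wheel is installed, `import pyds` should work without copying any .so files around manually.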


Thanks so much. Gonna give it a shot and update the thread accordingly

Thanks so much for the suggestion. That problem is solved, but I've hit a different issue. The app uses some stock PyTorch models, which I move onto the GPU device; when I run it, I get the following error:

RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination

Could this be caused by a mismatch between the CUDA version my PyTorch installation was built for and the CUDA pre-installed in the DeepStream 6.1.1 Docker image?

FYI, I'm using: torch==1.12.0+cu116 torchvision==0.13.0+cu116
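For what it's worth, Error 803 usually means the host's NVIDIA driver is older than what the container's CUDA runtime expects. A hypothetical helper to sanity-check the driver version reported by nvidia-smi against the published Linux minimums (the version floors below are assumptions taken from NVIDIA's CUDA compatibility tables; verify them for your toolkit):

```python
# Hypothetical sanity check: does the host driver meet the minimum required
# by a given CUDA runtime? CUDA 11.6 natively needs driver >= 510.39.05 on
# Linux; 450.80.02 is the floor under CUDA 11 minor-version compatibility.
MIN_DRIVER = {
    "11.6": "510.39.05",
    "11.x-compat": "450.80.02",
}

def version_tuple(v: str) -> tuple:
    """Turn '510.39.05' into (510, 39, 5) for numeric comparison."""
    return tuple(int(p) for p in v.split("."))

def driver_ok(driver: str, cuda: str) -> bool:
    """True if `driver` (as reported by nvidia-smi) can run the `cuda` runtime."""
    return version_tuple(driver) >= version_tuple(MIN_DRIVER[cuda])

print(driver_ok("470.57.02", "11.6"))  # → False: driver too old for cu116
print(driver_ok("515.65.01", "11.6"))  # → True
```

If the host driver is below the floor for cu116, either upgrade the driver or install a PyTorch build matching an older CUDA runtime.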

Can you share the full log from running your app? Does your DeepStream program import modules from torch and torchvision? It's not clear which part causes this error; you could run the deepstream-test1 Python example in your environment to check whether the Python environment for DeepStream is OK.

Please also provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

Hello, of course, the following is the error I receive.

reader = FUNCTION(ARG) 
  File "ROOT DIR BLOCKED FOR PRIVACY", line 55, in __init__
    self.model = self.model.to(self.device)
  File "/usr/local/lib/python3.8/dist-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 54, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 927, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 579, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 602, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/usr/local/lib/python3.8/dist-packages/torch/cuda/__init__.py", line 217, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination

My app imports from both torch and torchvision. In particular, it fails when I move my model onto the GPU device, i.e. the following line:

self.model = self.model.to(self.device)

where the device argument is “cuda”. The hardware is not Jetson-based; it's basically a laptop with an NVIDIA GPU:

  • CPU: Intel® Xeon(R) CPU
  • GPU: 1 x NVIDIA RTX A5000 GPU
  • Memory: 62.5 GiB

DeepStream 6 is installed in a Docker container, and the installed PyTorch + torchvision build is for CUDA 11.6:

torch==1.12.0+cu116 torchvision==0.13.0+cu116
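One way to keep the app from crashing at model load while the driver mismatch is being debugged is to fall back to the CPU when the CUDA transfer fails. A generic sketch follows; `pick_device` and the lambda usage are hypothetical helpers for illustration, not part of the original app:

```python
# Hypothetical fallback: attempt the CUDA transfer, and fall back to CPU if
# the driver/runtime combination is broken (e.g. Error 803), so the app can
# at least start instead of dying inside model.to("cuda").
def pick_device(move_to_cuda) -> str:
    """`move_to_cuda` should attempt the CUDA transfer and raise on failure,
    e.g. lambda: model.to("cuda") with PyTorch."""
    try:
        move_to_cuda()
        return "cuda"
    except RuntimeError as err:
        print(f"CUDA unavailable ({err}); falling back to CPU")
        return "cpu"
```

With PyTorch this might be called as `device = pick_device(lambda: self.model.to("cuda"))`, keeping the chosen device for later tensor transfers.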

This seems unrelated to pyds or DeepStream; it looks like a PyTorch issue.

Could you search for it yourself? For example:

RuntimeError: Unexpected error from cudaGetDeviceCount() - torch.package / torch::deploy - PyTorch Forums

Yes, it's a different issue; I've tried a few things and will keep digging.
I'll close this topic since the pyds.so problem has been solved :)

Thanks so much for your help so far.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.