Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU: Tesla T4 (AWS g4dn.xlarge)
• DeepStream Version 6.4 (nvcr.io/nvidia/deepstream:6.4-samples-multiarch)
• TensorRT Version 8.6.1 (from nvcr.io/nvidia/deepstream:6.4-samples-multiarch)
• NVIDIA GPU Driver Version (valid for GPU only) 535.171.04
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) build the attached Dockerfile, then run the attached test inside the Docker container
Hi,
We’ve been looking into upgrading our DeepStream 6.2-based solution to DeepStream 6.4 (note: 7.0 was released in the middle of this upgrade effort). However, TensorRT doesn’t work when used together with DeepStream in a Python app once the pyds bindings are imported.
I was able to put together a minimal case that reproduces the bug; see the attached Dockerfile and trt-test.py. Running trt-test.py results in:
Aborted (core dumped)
when calling context.get_binding_shape(i).
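For reference, here is roughly what the test does (a minimal sketch, not the exact attached script; the engine path is a placeholder):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst  # removing this import (or pyds) makes the crash go away
import pyds  # noqa: F401 -- merely importing it is enough to trigger the bug
import tensorrt as trt

Gst.init(None)

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize any pre-built engine (placeholder path).
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
for i in range(engine.num_bindings):
    # On DS 6.4 / TRT 8.6.1 this line aborts with "Aborted (core dumped)":
    print(engine.get_binding_name(i), context.get_binding_shape(i))
```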
EDIT: removed an irrelevant gdb backtrace.
Note - the bug disappears when we remove import pyds or from gi.repository import Gst, but this obviously is not a solution: our code, which manages DeepStream and dispatches GPU-resident buffers from DeepStream into TensorRT (and other places, incl. CuPy ops), is a Python app, so we need both bindings in the same process (see the sketch below).
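For context, this is the kind of pattern that requires both bindings in one process (an illustrative sketch, not our production code; the probe function and the dispatch step are hypothetical):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def infer_src_pad_probe(pad, info, user_data):
    # Walk DeepStream batch metadata via pyds; the GPU-resident surfaces
    # would then be handed off to TensorRT contexts / CuPy kernels.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # ...dispatch this frame's GPU buffer into TensorRT / CuPy here...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```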
Note - the attached Dockerfile builds pyds from source, but the issue also persists with the precompiled pyds wheel (from https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.1.10/pyds-1.1.10-py3-none-linux_x86_64.whl).
EDIT - just checked DeepStream 7.0; it exhibits the same behaviour.
Everything works just fine with DeepStream 6.2 and TensorRT 8.5.2.2.
trt-test.py.log (995 Bytes)
ds64-trt-test.Dockerfile.log (1.9 KB)