Deepstream Triton Inference Server Error, Segmentation fault (core dumped)

• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4
• NVIDIA GPU Driver Version: 470.161.03 (Quadro RTX 8000)
• Issue Type: questions


I’m working inside a Docker image and loading my custom YOLOv5 model. The model loaded into Triton and ran on tritonserver. Once tritonserver was up, I started DeepStream, but DeepStream failed with the error ‘Segmentation fault (core dumped)’.

The Triton model repository tree I uploaded:

```
|-- models
|   `-- yolo
|       `-- 1
|           `-- model.savedmodel
|               |-- saved_model.pb
|               `-- variables
|                   |--
|                   `-- variables.index
```
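For reference, a TensorFlow SavedModel in that layout is typically paired with a config.pbtxt placed under models/yolo/, next to the version directory. The sketch below is a minimal illustrative example, not taken from the attached files — the model name and batch size are assumptions, and the input/output tensor definitions can be auto-derived by starting tritonserver with --strict-model-config=false:

```
name: "yolo"
platform: "tensorflow_savedmodel"
max_batch_size: 1
```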

yolov5 config file:

yolov5.txt (1.2 KB)

Deepstream config file:

yolo_detector.txt (852 Bytes)

triton log file:

triton_log.txt (12.1 KB)

Deepstream log file:

log.txt (929 Bytes)

How can I solve this problem?


Can you share the commands used to start the Docker container and the DeepStream application?
Please also share the output of “nvidia-smi” and “deepstream-app --version-all” from inside the Docker environment.

Start Docker Command:

 docker run -it -d --runtime nvidia -p 8555:8554 -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.1 -v /tmp/.X11-unix:/tmp/.X11-unix -v /path/deepstream_python_apps:/data/

Deepstream run command:

deepstream-app -c yolo_detector.txt

deepstream-app --version-all:

deepstream-app version 6.1.1
DeepStreamSDK 6.1.1
CUDA Driver Version: 11.7
CUDA Runtime Version: 11.7
TensorRT Version: 8.4
cuDNN Version: 8.4
libNVWarp360 Version: 2.0.1d3


```
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.7     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|   0  Quadro RTX 8000     Off  | 00000000:5E:00.0 Off |                    0 |
| N/A   51C    P0    98W / 250W |   3768MiB / 45556MiB |     22%      Default |
|                               |                      |                  N/A |
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
```

  1. There is not enough information in log.txt; could you share more logs? Please run “export GST_DEBUG=6”, then run the app again; you can redirect the logs to a file.
  2. Can you use gdb to debug? Please share the crash stack.
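The first suggestion above can be sketched as shell commands (assuming the same yolo_detector.txt config used earlier; GST_DEBUG level 6 is the LOG level and produces very verbose output):

```shell
# Enable verbose GStreamer logging for every category.
export GST_DEBUG=6
# 2>&1 folds stderr into stdout so the log file captures both streams.
deepstream-app -c yolo_detector.txt > gst_debug_log.txt 2>&1
```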

GST_DEBUG=6 - log:

log.txt (135.8 KB)

How can I capture the crash stack?
How do I use gdb?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

  1. Here are some commands: 1. run “gdb ./deepstream-app”, 2. “set args xxx”, 3. execute “bt” after the crash; please search online for more details.
  2. Please refer to the DeepStream YOLOv5 sample: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream, and
    DeepStream SDK FAQ - #24 by mchi

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.