**• Hardware Platform:** GPU
**• DeepStream Version:** 6.1.1
**• TensorRT Version:** 8.4.1.5
**• NVIDIA GPU Driver Version:** Quadro RTX 8000
**• Issue Type:** questions
Hi,
I’m working with the nvcr.io/nvidia/deepstream:6.1.1-triton Docker image and loading a custom YOLOv5 model. The model loads into Triton and runs on tritonserver. After tritonserver started, I launched DeepStream, but it failed with the error ‘Segmentation fault (core dumped)’.
Can you share the command you use to start the Docker container and the DeepStream application?
Please also share the output of “nvidia-smi” and “deepstream-app --version-all” from inside the Docker environment.
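For reference, a typical way to launch the 6.1.1 Triton container and collect that information looks roughly like the following sketch; the display forwarding and the model volume mount are only examples, so adjust them to your setup:

```bash
# Launch the DeepStream 6.1.1 Triton container with GPU access
# (the model volume path below is a placeholder; change it to your model repository)
xhost +
docker run --gpus all -it --rm --net=host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /path/to/your/yolov5_model_repo:/models \
    nvcr.io/nvidia/deepstream:6.1.1-triton

# Inside the container, collect the requested version information
nvidia-smi
deepstream-app --version-all
```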
There is not enough information in log.txt; could you share more logs? Please do “export GST_DEBUG=6”, then run again. You can redirect the logs to a file, for example as shown below.
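A minimal way to capture the verbose GStreamer log, assuming deepstream-app is started with a config file (the config path is a placeholder):

```bash
# Level 6 (LOG) produces very verbose GStreamer output
export GST_DEBUG=6

# Redirect both stdout and stderr to a file so the full log can be attached here
deepstream-app -c /path/to/your/deepstream_config.txt > gst_debug.log 2>&1
```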
Can you use gdb to debug? Please share the crash stack.
There is no update from you for a period of time, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one. Thanks.
Here are some commands: 1. gdb ./deepstream-app, 2. set args xxx, 3. execute bt after the crash. Please search online for more details.
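A sketch of what such a gdb session could look like; the config file path passed to set args is only a placeholder for your own arguments:

```bash
# Start the application under gdb
gdb ./deepstream-app

# Inside gdb: set the program arguments (replace with your actual config path)
(gdb) set args -c /path/to/your/deepstream_config.txt
(gdb) run
# ... wait until the segmentation fault occurs ...
(gdb) bt    # prints the crash backtrace to share in this topic
```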