Segmentation Fault (Core Dumped) Issue when Running deepstream_test1_rtsp_in_rtsp_out.py Example in DeepStream 6.4, but Not in DeepStream 6.3

• Hardware Platform (Jetson / GPU): dGPU A40.
• DeepStream Version: 6.4-triton-multiarch.
• TensorRT Version: 8.6.2.3.
• NVIDIA GPU Driver Version (valid for GPU only): 525.147.05.
• Issue Type (questions, new requirements, bugs): bugs.
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I’m using DeepStream with Python and the Docker image nvcr.io/nvidia/deepstream:6.4-triton-multiarch to run the deepstream_test1_rtsp_in_rtsp_out.py sample with RTSP input and output. However, it crashes with Segmentation fault (core dumped) immediately on startup. When I switch back to the nvcr.io/nvidia/deepstream:6.3-triton-multiarch image, it runs fine. Is there any way to get it working on version 6.4? Thanks for your support.
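
For reference, the sample is typically launched along these lines (a sketch, not my exact command; the RTSP URI is a placeholder and the path assumes deepstream_python_apps is cloned under the DeepStream sources directory):

cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out
python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtsp://<camera-ip>:<port>/<stream>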

I think you can try upgrading the driver and nvidia-container-toolkit

DS-6.4 should work with driver version 535

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#id6

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
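
On Ubuntu the upgrade is roughly as follows (the nvidia-container-toolkit package comes from NVIDIA's apt repository, which the second link shows how to add; reboot after installing the driver):

sudo apt-get update
sudo apt-get install -y nvidia-driver-535
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker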

I will try it and reply to you later. Thank you very much.

Sorry, I followed your instructions and upgraded Ubuntu to 22.04, the driver to 535, and CUDA to 12.2, but it still doesn’t work. Have you tried reproducing it on DeepStream 6.4? I don’t understand why it works when the input is an MP4 file, but core dumps when the input is an RTSP stream.

I’ve tried it, and it works fine.

How do you run the app? Can you share your command line?

Add GST_DEBUG=3 in front of the startup command, then dump the log and share it.
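
For example (adjust to your actual command line):

GST_DEBUG=3 python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtsp://<your-stream> 2>&1 | tee log.txt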

This is my log when running with GST_DEBUG=3; please help me check it.
log.txt (6.5 KB)

And here is my log when running with GST_DEBUG=3 on DeepStream 6.3, which works.
log.txt (21.2 KB)

No errors can be seen in the logs. Can you share the command line you used to start Docker?

Or you can try the following approach:

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream nvcr.io/nvidia/deepstream:6.4-triton-multiarch 

I tried running Docker with your command, but it still shows Segmentation fault (core dumped). I’m using deepstream_python_apps in Python, not C++. The strange thing is that I can run the same code on DeepStream 6.3, but on 6.4 it fails. Please help me.

If I don’t set model-engine-file and let nvinfer build the engine at runtime, it gives this error.
log.txt (50.9 KB)
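
To be clear, by model-engine-file I mean the key in the nvinfer configuration file; when it is left unset, nvinfer builds the TensorRT engine at startup instead of loading a prebuilt one (the path below is just a placeholder, not my actual value):

[property]
model-engine-file=<path-to-prebuilt-engine>.engine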

I tried on three different servers (1080 Ti, T4, and A40); all had the same error.

1. Will this problem occur if you install DS-6.4 on the host?

2. What is your Docker version? I use Docker version 20.10.8.

docker --version
1. How do I install DS 6.4 on the host?
2. I am using Docker version 25.0.1, build 29cf629.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu
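
In short, after setting up the driver, CUDA, and TensorRT prerequisites from that page, the DeepStream package itself is installed from the .deb downloaded from the DeepStream download page (the filename below is a placeholder for the 6.4 x86 package):

sudo apt-get install ./deepstream-6.4_<version>_amd64.deb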

I’m facing the same problem, any updates? @taolanhat @junshengy
If I run deepstream-test3 in C++ it works with RTSP, but it does not work with deepstream_python_apps in Python.

Please open a new topic for your issue. Please provide your GPU model and software version, and sample code that can reproduce the problem. @Newbie98

apt update
apt install python3-gi python3-dev python3-gst-1.0 -y
apt-get install libgstrtspserver-1.0-0 gstreamer1.0-rtsp
apt-get install libgirepository1.0-dev
apt-get install gobject-introspection gir1.2-gst-rtsp-server-1.0
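
After installing these, a quick sanity check that the RTSP server bindings are visible from Python (just a check, not part of the sample):

python3 -c "import gi; gi.require_version('Gst', '1.0'); gi.require_version('GstRtspServer', '1.0'); from gi.repository import Gst, GstRtspServer; print('GstRtspServer OK')"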

Have you tried it yet?

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

His problem was solved after restarting Docker. Have you tried installing DeepStream on the host?

This may be a driver installation problem

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.