DeepStream 6.4 Aborted (core dumped)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU RTX 2080
• DeepStream Version
6.4
• TensorRT Version
Default TensorRT shipped in the nvcr.io/nvidia/deepstream:6.4-triton-multiarch image
• NVIDIA GPU Driver Version (valid for GPU only)
NVIDIA-SMI 535.183.01, Driver Version: 535.183.01, CUDA Version: 12.2
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

With this Dockerfile:

FROM nvcr.io/nvidia/deepstream:6.4-triton-multiarch AS base

ENV DEBIAN_FRONTEND=noninteractive

RUN /opt/nvidia/deepstream/deepstream/user_additional_install.sh

RUN apt-get update && apt-get install -y python3.10-venv
RUN python3 -m pip install --upgrade pip
RUN pip3 install redis paho-mqtt opencv-python packaging

RUN curl -O -L https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases/download/v1.1.10/pyds-1.1.10-py3-none-linux_x86_64.whl
RUN pip3 install ./pyds-1.1.10-py3-none-linux_x86_64.whl

then running, from the working directory /opt/nvidia/deepstream/deepstream-6.4/sources/deepstream_python_apps/apps/deepstream-test3:

deepstream_test_3.py -i rtsp://admin:password@192.168.4.170:8554/ppe_basic.mp4 --no-display

I get:

0:00:04.011101800 17825 0x55559384e110 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b1_gpu0_int8.engine
[New Thread 0x7fff7d7fe640 (LWP 17843)]
[New Thread 0x7fff7cffd640 (LWP 17844)]
[New Thread 0x7fff67fff640 (LWP 17845)]
0:00:04.014986093 17825 0x55559384e110 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
[New Thread 0x7fff677fe640 (LWP 17846)]
Decodebin child added: source 

[New Thread 0x7fff66ffd640 (LWP 17847)]
[New Thread 0x7fff667fc640 (LWP 17848)]
[New Thread 0x7fff65ffb640 (LWP 17849)]
[New Thread 0x7fff657fa640 (LWP 17850)]

Thread 18 "pool-python3" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff657fa640 (LWP 17850)]
0x00007ffff7ce59fc in pthread_kill () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) Quit
(gdb) pwd

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Please install the pyds dependencies correctly: add the following to your Dockerfile

RUN /opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh --build-bindings -r v1.1.10

and remove the two pyds wheel steps (the curl download and the pip3 install of the prebuilt pyds-1.1.10 wheel).

Thank you @junshengy,

after updating the NVIDIA driver to

NVIDIA-SMI 560.35.03, CUDA Version: 12.6

I tried DeepStream 7.1 directly with this Dockerfile, and it worked:

FROM nvcr.io/nvidia/deepstream:7.1-gc-triton-devel AS base

ENV DS_VERSION=7.1.0

RUN /opt/nvidia/deepstream/deepstream/user_additional_install.sh
RUN /opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh --version 1.2.0
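
As a quick sanity check, something like the following can be run inside the resulting container to confirm that the bindings import cleanly (a minimal check, not part of the original Dockerfile):

# Verify that GStreamer and the pyds bindings are importable inside the container
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

Gst.init(None)
print('pyds loaded from:', pyds.__file__)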

I had to change some things in the pipeline/Python probes from my old codebase, by the way; I think something changed between DS 6.2 (which I'm migrating from) and DS 7.1. In particular, this pipeline, which works in DS 6.2,

demux.src_0 ! nvvideoconvert nvbuf-memory-type=3 \
! video/x-raw,format=RGBA \
! nvvideoconvert nvbuf-memory-type=3 name=videoconv 

became

demux.src_0 ! nvvideoconvert name=videoconv 

and the probe attached to videoconv changed from this (working in DS 6.2, but giving a segfault in DS 7.1):

    def videoconv_probe(self, pad, info, u_data):
        try:
            buffer = info.get_buffer()
            if not buffer:
                print("Unable to get GstBuffer")
                return

            caps = pad.get_current_caps()
            success, map_info = buffer.map(Gst.MapFlags.READ)

            try:
                if not success:
                    return Gst.PadProbeReturn.DROP
                batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buffer))
                l_frame = batch_meta.frame_meta_list

                while l_frame is not None:
                    try:
                        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
                    except StopIteration:
                        break

                    frame_number = frame_meta.frame_num
                    l_obj = frame_meta.obj_meta_list

                    try:
                        l_frame = l_frame.next
                    except StopIteration:
                        break
            finally:
                buffer.unmap(map_info)

            rgb_frame = np.ndarray(
                shape=(
                    caps.get_structure(0).get_value("height"),
                    caps.get_structure(0).get_value("width"),
                    4,
                ),
                dtype=np.uint8,
                buffer=map_info.data,
            )
        except Exception as e:
            print('caught exception in videoconvs_probe')
            print (traceback.format_exc())

        return Gst.PadProbeReturn.OK

to this (following the example in deepstream_python_apps):

    def videoconv_probe(self, pad, info, u_data):
        frame_number = 0

        print('video conv probe', frame_number)
        num_rects = 0
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            print("Unable to get GstBuffer ")
            return

        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            try:
                # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
                # The casting is done by pyds.NvDsFrameMeta.cast()
                # The casting also keeps ownership of the underlying memory
                # in the C code, so the Python garbage collector will leave
                # it alone.
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break

            l_obj = frame_meta.obj_meta_list
            frame_number = frame_meta.frame_num
            is_first_obj = True

            while l_obj is not None:
                try:
                    # Casting l_obj.data to pyds.NvDsObjectMeta
                    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                except StopIteration:
                    break

                if is_first_obj:
                    is_first_obj = False

                    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
                    # convert python array into numpy array format in the copy mode.
                    frame_copy = np.array(n_frame, copy=True, order='C')
                    # convert the array into cv2 default color format
                    rgb_frame = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)       

                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break

            try:
                l_frame = l_frame.next
            except StopIteration:
                break

        return Gst.PadProbeReturn.OK

Am I migrating correctly?

Another question: now that I have installed, for DS 7.1,

560.35.03 + CUDA 12.6

am I able to run DS 6.4, which requires

525.125.06 + CUDA 12.1

on the same machine, just by changing the Docker container?

Thank you, William

The above pipeline can be simplified. The following pipeline should work for DS-6.4/DS-7.1 on dGPU. Add the probe at the capsfilter src pad (see the sketch after the pipeline below).

demux.src_0  !  nvvideoconvert  nvbuf-memory-type=3  ! video/x-raw(memory:NVMM),format=RGBA name=capsfilter ! 
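
For example, a minimal Python sketch of that arrangement might look like the following (the element names, RTSP URI, config.txt and the fakesink are illustrative placeholders, not taken from the posts above):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

Gst.init(None)

# Single-source sketch: RGBA is forced by an explicitly named capsfilter after nvvideoconvert.
pipeline = Gst.parse_launch(
    'nvurisrcbin uri=rtsp://user:pass@host/stream name=source_0 ! mux.sink_0 '
    'nvstreammux name=mux batch-size=1 width=1282 height=722 nvbuf-memory-type=3 ! '
    'nvinfer name=primary_inference config-file-path=config.txt batch-size=1 ! '
    'nvstreamdemux name=demux '
    'demux.src_0 ! nvvideoconvert nvbuf-memory-type=3 ! '
    'capsfilter name=caps_rgba caps="video/x-raw(memory:NVMM),format=RGBA" ! '
    'fakesink name=sink0'
)

def caps_rgba_src_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        # The capsfilter upstream guarantees RGBA, so get_nvds_buf_surface is valid here.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # ... process n_frame (e.g. np.array(n_frame, copy=True)) ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach the probe at the capsfilter src pad, as suggested above.
caps_rgba = pipeline.get_by_name('caps_rgba')
caps_rgba.get_static_pad('src').add_probe(Gst.PadProbeType.BUFFER, caps_rgba_src_probe, 0)

# Then set the pipeline to PLAYING and run a GLib.MainLoop as usual.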

Each image contains the corresponding version of CUDA, so no additional installation is required. Generally speaking, if the driver has been installed correctly, you can use the Docker images of DS-6.4 and DS-7.1 side by side on the same machine.

This actually works; what am I missing?

OK, I narrowed down the problem.

This works in DS 6.2, but not in DS 7.1:

gst-launch-1.0 nvstreammux sync-inputs=0 nvbuf-memory-type=3 name=mux enable-padding=true batch-size=1 width=1282 height=722 \
! nvinfer  name="primary_inference" config-file-path="config.txt" batch-size=1 \
model-engine-file="model_b1_gpu0_fp16.engine" \
! nvstreamdemux name=demux \
\
uridecodebin uri="rtsp://admin:password@192.168.4.170:8554/ppe_basic.mp4" name=source_0 \
! nvvideoconvert name=source_videoconv_0 nvbuf-memory-type=3 ! queue name=queue_0 leaky=2 max-size-buffers=1 ! mux.sink_0 \
\
demux.src_0 ! nvvideoconvert name=videoconv

Please follow the method I mentioned above to build the pipeline. pyds.get_nvds_buf_surface only supports RGBA/RGB color formats currently.

In addition, there is no need to convert multiple times. Reducing unnecessary conversions improves performance.

OK, I followed your advice and it seems to be working with one source:

nvstreammux sync-inputs=0 live-source=1 attach-sys-ts=TRUE nvbuf-memory-type=3 
name=mux enable-padding=true batch-size=1 width=1282 height=722
 
! nvinfer name="primary_inference" interval=5 config-file-path=config.txt batch-size=1 
model-engine-file="model_b1_gpu0_fp16.engine"  
  
! nvstreamdemux name=demux
 
nvurisrcbin rtsp-reconnect-attempts=-1 rtsp-reconnect-interval=10 
uri="rtsp://admin:********@192.168.10.185/h264Preview_01_sub" name=source_0 ! mux.sink_0  
 
demux.src_0 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw(memory:NVMM),format=RGB
! fakesink name=sink0

But it's not working with multiple sources:

nvstreammux sync-inputs=0 live-source=1 attach-sys-ts=TRUE nvbuf-memory-type=3 
name=mux enable-padding=true batch-size=2 width=1282 height=722
 
! nvinfer name="primary_inference" interval=5 config-file-path=config.txt batch-size=2 
model-engine-file="model_b2_gpu0_fp16.engine"  
 
! nvstreamdemux name=demux
 
nvurisrcbin rtsp-reconnect-attempts=-1 rtsp-reconnect-interval=10 uri="rtsp://admin:***************@192.168.10.220:8554/ppe_basic.mp4" name=source_0 ! mux.sink_0
 
nvurisrcbin rtsp-reconnect-attempts=-1 rtsp-reconnect-interval=10 uri="rtsp://admin:****************@192.168.10.185/h264Preview_01_sub" name=source_1 ! mux.sink_1  
 
demux.src_0 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw(memory:NVMM),format=RGB 
! fakesink name=sink0
 
demux.src_1 ! nvvideoconvert nvbuf-memory-type=3 ! video/x-raw(memory:NVMM),format=RGB 
! fakesink name=sink1

The code of the probe is:

    def raw_probe(self, pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            print("Unable to get GstBuffer ")
            return

        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))

        l_frame = batch_meta.frame_meta_list

        print('raw_probe')

        while l_frame is not None:
            try:
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break

            l_obj = frame_meta.obj_meta_list
            frame_number = frame_meta.frame_num
            stream_index = frame_meta.pad_index
            is_first_obj = True

            while l_obj is not None:
                try:
                    # Casting l_obj.data to pyds.NvDsObjectMeta
                    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                except StopIteration:
                    break

                if is_first_obj:
                    is_first_obj = False
                    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break

            try:
                l_frame = l_frame.next
            except StopIteration:
                break

        return Gst.PadProbeReturn.OK

What happens with two sources is that the print('raw_probe') is executed only once, and then the pipeline hangs.

Thank you in advance for your advice.

In GStreamer, the queue element is a thread-boundary element through which you can force the use of separate threads.

Queues are recommended after the nvstreamdemux source pads (a Python sketch of the same structure follows the pipeline below).

Try the following pipeline:

gst-launch-1.0 nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov gpu-id=0 ! mux.sink_0 \
               nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_ride_bike.mov gpu-id=0 ! mux.sink_1 \
               nvstreammux name=mux gpu-id=0 batch-size=2 width=1920 height=1080 live-source=1 batched-push-timeout=40000 nvbuf-memory-type=3 ! \
               nvinfer gpu-id=0  batch-size=2 config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
               nvstreamdemux name=demux demux.src_0 ! queue ! nvvideoconvert nvbuf-memory-type=3 ! "video/x-raw(memory:NVMM),format=RGBA" ! fakesink name=sink0 \
                                        demux.src_1 ! queue ! nvvideoconvert nvbuf-memory-type=3 ! "video/x-raw(memory:NVMM),format=RGBA" ! fakesink name=sink1
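
For an application that builds the pipeline element by element in Python (as in the deepstream_python_apps demux example) rather than with gst-launch, the same queue-per-branch idea might look like this rough sketch (element names are illustrative):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def link_demux_branch(pipeline, demux, branch_head, stream_index):
    """Request demux.src_<index>, insert a queue (thread boundary) and link it to the branch."""
    queue = Gst.ElementFactory.make('queue', f'demux_queue_{stream_index}')
    if not queue:
        raise RuntimeError('Unable to create queue')
    pipeline.add(queue)

    demux_src = demux.get_request_pad(f'src_{stream_index}')
    if not demux_src:
        raise RuntimeError(f'Unable to request src_{stream_index} from nvstreamdemux')
    demux_src.link(queue.get_static_pad('sink'))

    # queue -> first element of the per-stream branch (e.g. the nvvideoconvert)
    queue.link(branch_head)

Calling this once per stream index before setting the pipeline to PLAYING gives every demux branch its own streaming thread, which is what the queue recommendation above achieves.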

OK, now it works, thank you. Can I ask how the batched-push-timeout=40000 value is calculated? Thank you.

This issue should have been resolved by adding the queue, rather than by setting batched-push-timeout.
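
(For reference: batched-push-timeout is specified in microseconds, so 40000 corresponds to 40 ms, roughly one frame interval for a 25 fps source; a common rule of thumb is to set it to about one frame period of the slowest stream.)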

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
