NumPy error on NGC DeepStream 6.0-triton: Segmentation fault (core dumped)

• Hardware Platform (Jetson / GPU) GPU RTX 2080ti
• DeepStream Version 6.0 triton
• NVIDIA GPU Driver Version (valid for GPU only) 470.103.01
• Python Version: 3.6.15
• Issue Type: bug

Log details

I0311 07:45:17.484594 10848] Created instance detector_0_0_gpu0 on GPU 0 with stream priority 0 and optimization profile default[0];
I0311 07:45:17.484653 10848] successfully loaded 'detector' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:1 initialized model: detector
Decodebin child added: source 

Now playing...
Starting pipeline 

Decodebin child added: decodebin0 

Decodebin child added: rtph264depay0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

In cb_newpad

gstname= video/x-raw
Decodebin linked to pipeline
Segmentation fault (core dumped)

• Reproduce

Function: tiler_sink_pad_buffer_probe

n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')
rgb_frame = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2RGB)
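To rule out NumPy itself, the copy and RGBA-to-RGB conversion from the probe can be exercised on a synthetic frame, with the DeepStream-specific buffer retrieval left out. This is a minimal sketch; the frame shape, dtype, and contents are assumptions standing in for what pyds.get_nvds_buf_surface() returns, and the alpha channel is dropped with a NumPy slice so OpenCV is not required:

```python
import numpy as np

# Synthetic RGBA frame standing in for the array returned by
# pyds.get_nvds_buf_surface() (1080p, uint8 -- assumed values).
n_frame = np.random.randint(0, 256, size=(1080, 1920, 4), dtype=np.uint8)

# Same copy as in the probe: detach the data from the mapped buffer.
frame_copy = np.array(n_frame, copy=True, order='C')

# Drop the alpha channel; equivalent in effect to
# cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2RGB).
rgb_frame = frame_copy[:, :, :3]

assert rgb_frame.shape == (1080, 1920, 3)
```

If this runs cleanly under the same interpreter, the crash is likely in the buffer mapping rather than in NumPy.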

I successfully installed NumPy 1.19.5, 1.19.4, 1.19.3, and 1.19.2, but every one of these versions still produces the core dump.
I also tried the python3-numpy package, but it doesn’t work either.
Hope to get some help! Thanks

My bad. Please close this issue! Thanks


Glad to know it’s no longer an issue for you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.