Trying to access the frame using "pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)" but encountering a segmentation fault

Hi, I am working on the Python application test 1. I want to save the bounding box region in memory. I am trying to access the frame using the code below, inside
def osd_sink_pad_buffer_probe(pad, info, u_data):

    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    # convert the returned array into numpy array format
    frame_image = np.array(n_frame, copy=True, order='C')
    # convert the array into cv2's default color format
    frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)
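
For context, gst_buffer and frame_meta come from the usual probe pattern in the deepstream-imagedata-multistream example; roughly like this sketch (not my exact code, imports and the frame-meta loop are taken from that demo):

import numpy as np
import cv2
import pyds
from gi.repository import Gst

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # batch meta is attached to the Gst buffer by nvstreammux
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # RGBA frame backed by the GStreamer buffer; copy it before modifying
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_image = np.array(n_frame, copy=True, order='C')
        frame_image = cv2.cvtColor(frame_image, cv2.COLOR_RGBA2BGRA)

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK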

But I am facing a segmentation fault after processing a few frames. The same configuration and app file work fine if I am not accessing frames.

• Hardware Platform (GPU): Tesla K40m
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.100
• Issue Type: bug

Hey, we have a demo: deepstream_python_apps/deepstream_imagedata-multistream.py at 5cb4cb8be92e079acd07d911d265946580ea81cd · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. Have you checked it?

Hi bcao, I have worked with this demo, but it runs for a few frames and then stops, showing a "bus error". With some conditions changed, the pipeline stops right at startup. The code I mentioned earlier was taken from this example.

So the demo can work for you, right?
If yes, can you share all your code for debugging?

Yes, it is working, but in an unreliable way: sometimes it runs to thousands of frames, sometimes just a few hundred, and then it stops abruptly.
something like this…

I am using an RTSP stream.

My script is:
test_image_deepstream.py (16.0 KB)

I think there is a problem with the integration of OpenCV and the GStreamer plugin in my pipeline. I have added the following to the DeepStream reference app after pgie; this also generates a bus error for me (a rough Python sketch of the same placement follows the config below):
[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
unique-id=15
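
A rough Python equivalent of dropping dsexample in after pgie, in case it helps illustrate where it sits; the element handles pipeline, pgie and nvvidconv are placeholders for whatever the reference app wires up:

# hypothetical sketch: create gst-dsexample with the same settings and place it after pgie
dsexample = Gst.ElementFactory.make("dsexample", "dsexample")
dsexample.set_property("processing-width", 640)
dsexample.set_property("processing-height", 480)
dsexample.set_property("full-frame", False)   # full-frame=0 in the config
dsexample.set_property("unique-id", 15)
pipeline.add(dsexample)
pgie.link(dsexample)         # pgie: the primary inference element
dsexample.link(nvvidconv)    # nvvidconv: whatever followed pgie before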

Hi, I have found this statement in https://docs.nvidia.com/metropolis/deepstream/DeepStream_5.0_Release_Notes.pdf:
“On NVIDIA Tesla T4 with driver 450.51, using NVENC sometimes results into a CUDA
error. An alternative is to use software encoder for file and RTSP output.”

And I am using a Tesla K40m. Is this the reason I am getting the bus error?

Sorry for the delay.

Just to confirm: the original demo, without your own modifications, also stops abruptly and prints a bus error, right?

Yes, it ends when it tries to create the buffer to access the frame, i.e. when the condition is satisfied to run:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)

Sometimes it saves a few frames with the bounding boxes, but it eventually stops with a bus error.
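
The saving step in question is basically this pattern (a sketch; the demo draws the boxes and saves the whole frame, while I crop the bbox region, so the file path and names below are just illustrative):

# obj_meta comes from iterating frame_meta.obj_meta_list inside the probe
rect = obj_meta.rect_params
top, left = int(rect.top), int(rect.left)
width, height = int(rect.width), int(rect.height)
crop = frame_image[top:top + height, left:left + width]
cv2.imwrite("frames/stream_{}_frame_{}_obj_{}.jpg".format(
    frame_meta.pad_index, frame_meta.frame_num, obj_meta.object_id), crop)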

Is there any reason you need to use the old NVIDIA driver? We suggest using:

You must install the following components:
• Ubuntu 18.04
• GStreamer 1.14.1
• NVIDIA driver 450.51
• CUDA 10.2
• TensorRT 7.0.X

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html

Dear sir, the above segmentation fault still occurs on my system. I have updated DeepStream 5.0 to 5.1, and I am now using driver 460.32, CUDA 11.1, Tesla K80 GPUs, GStreamer 1.14.5, TensorRT 7.2.2, and Ubuntu 18.04. In my application, extracting the frames is an important task, and I am stuck on it. Please provide me a solution, or if that is not possible, at least explain the possible reasons so that I can work on them. All other functionality works except when I try to extract frames using:
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
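
For reference, the imagedata demo converts the stream to RGBA upstream of the probe point, since get_nvds_buf_surface only supports RGBA buffers; roughly this, with element names following the demo:

# RGBA conversion placed before the element whose pad carries the probe
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
filter1.set_property("caps",
    Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))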

@bhatiyaarpit95, have you solved the problem? Does the original deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub work?

Hi bcao, it is still unresolved. My original imagedata code works until it hits the condition where it saves images; then there is a bus error. Maybe our application is not getting the pointer or memory access, I don't know…

Also, @bcao, I have a question regarding custom classification models and object detection models other than the ones provided in the samples. I am a Python developer, so it is hard for me to figure out; if possible, please help me with custom implementations…

But I cannot reproduce the segmentation fault issue or bus error. I use the original deepstream-imagedata-multistream with the built-in stream sample_720p.h264. Is there any difference between our setups?

Frame Number= 1433 Number of Objects= 8 Vehicle_count= 6 Person_count= 2
Frame Number= 1434 Number of Objects= 9 Vehicle_count= 7 Person_count= 2
Frame Number= 1435 Number of Objects= 9 Vehicle_count= 7 Person_count= 2
Frame Number= 1436 Number of Objects= 9 Vehicle_count= 7 Person_count= 2
Frame Number= 1437 Number of Objects= 9 Vehicle_count= 7 Person_count= 2
Frame Number= 1438 Number of Objects= 10 Vehicle_count= 8 Person_count= 2
Frame Number= 1439 Number of Objects= 8 Vehicle_count= 6 Person_count= 2
Frame Number= 1440 Number of Objects= 8 Vehicle_count= 6 Person_count= 2
Frame Number= 1441 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
End-of-stream
Exiting app