How to save complete frames and get RTSP out in an x86 Docker container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU RTX3090
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 510.47.03
• Issue Type( questions, new requirements, bugs) Bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I want to save images on an x86 server and also get RTSP out, so I modified the deepstream-test1-rtsp-out sample.

I pulled a dGPU Docker image, installed deepstream_python_apps on top of it, and edited the sample to save the frames:

  n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
  frame_number = frame_meta.frame_num
  img_folder = "/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1-rtsp-out/frames"
  img_path = "{}/stream_{}/frame_{}.jpg".format(img_folder, frame_meta.pad_index, frame_number)
  print("frame_shape = ",n_frame.shape)
  cv2.imwrite(img_path, n_frame)
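
For what it's worth, the deepstream-imagedata-multistream sample copies the mapped array before touching it (`frame_copy = np.array(n_frame, copy=True, order='C')`) and reorders the channels with `cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)` before `cv2.imwrite`, since the mapped surface is RGBA. The channel reordering itself is just an R/B swap; a dependency-free sketch of it (hypothetical helper, not DeepStream code):

```python
# Hypothetical stand-in for cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA):
# swap the R and B channels of every pixel, keeping alpha.
def rgba_to_bgra(frame):
    """frame: list of rows, each row a list of (r, g, b, a) tuples."""
    return [[(b, g, r, a) for (r, g, b, a) in row] for row in frame]

# One red, fully opaque pixel becomes blue-channel-first in BGRA order.
frame = [[(255, 0, 0, 255)]]
print(rgba_to_bgra(frame))  # [[(0, 0, 255, 255)]]
```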

Segmentation fault error:


Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating H264 Encoder
Creating H264 rtppay
Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Starting pipeline

0:00:08.723336403   213      0x3e2ba10 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:11.190440787   213      0x3e2ba10 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:11.229110522   213      0x3e2ba10 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:11.230478778   213      0x3e2ba10 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
frame_shape =  (1080, 1920, 4)

Segmentation fault (core dumped)

Any frame operation, like print(n_frame) or n_frame.copy(), causes the segmentation fault…
How can I solve this problem? Thanks!

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

We already have an image-save sample, deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps. Please make sure the "pyds.get_nvds_buf_surface" interface is placed at the correct point in the pipeline to get the RGBA frame.
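
On x86/dGPU that sample also sets `nvbuf-memory-type` to `pyds.NVBUF_MEM_CUDA_UNIFIED` on the streammux and nvvideoconvert elements, so the surface handed to `pyds.get_nvds_buf_surface` is CPU-accessible; without unified memory, touching the mapped array from Python can segfault. A gst-launch-style sketch of the elements that should sit in front of the capture probe (the helper function is hypothetical; the property and caps names are the real ones):

```python
# Enum value of pyds.NVBUF_MEM_CUDA_UNIFIED used by the Python samples
# on dGPU (on Jetson the default memory type works).
NVBUF_MEM_CUDA_UNIFIED = 3

def capture_branch_description():
    """gst-launch-style sketch of the conversion that must precede the
    probe where pyds.get_nvds_buf_surface() is called on dGPU."""
    return (
        "nvvideoconvert nvbuf-memory-type={} ! "
        "capsfilter caps=video/x-raw(memory:NVMM),format=RGBA"
    ).format(NVBUF_MEM_CUDA_UNIFIED)

print(capture_branch_description())
```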

I tried the deepstream_python_apps/apps/deepstream-imagedata-multistream sample with this command:

python3 file://../../../../samples/streams/sample_720p.mp4 frames

Nothing was saved in the frames folder…
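
One generic thing worth ruling out here (an assumption on my part, not from the sample): `cv2.imwrite()` returns False rather than raising when the target directory does not exist, so a missing frames/stream_0 sub-folder produces exactly this silent "nothing saved" symptom. A self-contained check (hypothetical helper and paths):

```python
import os
import tempfile

def prepare_stream_folder(base, stream_idx):
    """Create a frames/stream_<idx>-style sub-folder if missing and return it."""
    path = os.path.join(base, "stream_{}".format(stream_idx))
    os.makedirs(path, exist_ok=True)
    return path

base = tempfile.mkdtemp()
path = prepare_stream_folder(base, 0)
print(os.path.isdir(path))  # True
```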

Log is:

Frames will be saved in  frames
Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing...
1 :  file://../../../../samples/streams/sample_720p.mp4
Starting pipeline

**PERF:  {'stream0': 0.0}

**PERF:  {'stream0': 0.0}

However, on the Jetson platform this sample executed successfully…
On the dGPU platform, the sample failed to save images.

What kind of error did you meet on dGPU?

On dGPU, the deepstream-imagedata-multistream sample doesn't even get into tiler_sink_pad_buffer_probe. As proof, I added a test print at the beginning of tiler_sink_pad_buffer_probe:

# tiler_sink_pad_buffer_probe  will extract metadata received on tiler src pad
# and update params for drawing rectangle, object information etc.
def tiler_sink_pad_buffer_probe(pad, info, u_data):
    print('tiler_sink_pad_buffer_probe start...')
    frame_number = 0
    num_rects = 0
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")

If nothing were wrong, "tiler_sink_pad_buffer_probe start…" should be printed.
As you can see in the log above, only the PERF line is printed, even after a long time:

**PERF:  {'stream0': 0.0}

There are no more useful error logs to locate the problem.

We will check our sample on dGPU.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

I've tried the Python sample on dGPU with DeepStream 6.1; it works well.
debug.txt (106.4 KB)

Can you check your dGPU installation? Can other deepstream samples such as deepstream-test1, deepstream-app, … run in the same environment?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.