Segmentation fault occurs when running DeepStream 7.0 docker with WSL2

• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only): 552.22
• Issue Type (Question, New Requirement, Bug): Bug

A segmentation fault occurs when running the DeepStream 7.0 docker container on WSL2. The fault is triggered by the code below, which saves each frame as an image after object detection. If this code is removed, no images are saved, but the FPS stays normal and the segmentation fault does not occur.
    n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
    frame_copy = np.array(n_frame, copy=True, order='C')
    frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
    json_data = self.__make_json(cctv_id, object_list)
    cv2.imwrite(img_path, frame_copy)
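For context, the copy step itself is plain NumPy; the crash happens while reading the mapped surface. Below is a minimal, self-contained sketch of that copy pattern, with a zero-filled array standing in for the RGBA view returned by pyds.get_nvds_buf_surface (the 1080x1920 shape is illustrative only):

```python
import numpy as np

# Stand-in for the RGBA surface view that pyds.get_nvds_buf_surface
# returns in the real pipeline; shape and contents are illustrative.
n_frame = np.zeros((1080, 1920, 4), dtype=np.uint8)

# Deep-copy into host memory before the underlying GstBuffer is unmapped;
# order='C' guarantees a contiguous layout that cv2 can consume.
frame_copy = np.array(n_frame, copy=True, order='C')

assert frame_copy.flags['C_CONTIGUOUS']
assert frame_copy.base is None  # owns its own data, independent of the surface
print(frame_copy.shape)  # (1080, 1920, 4)
```

If the copy is skipped and the mapped view is used after the probe returns, the memory may no longer be valid, which is why the sample apps always copy first.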

• How do I reproduce the problem? (This is for bugs. Include which sample app you are using, the contents of the configuration file, the command line used, and other details needed for reproduction.)
• Requirement details (This is for new requirements. Include the module name, which plugin or sample application it is for, and a description of its functionality.)

Please refer to this example. I have tested it on WSL and it works. However, since DeepStream on WSL2 is an alpha release, there are some problems with the OSD output.

/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream

Please modify the code according to the following example:

    if not platform_info.is_integrated_gpu():
        # Use CUDA unified memory in the pipeline so frames
        # can be easily accessed on CPU in Python.
        mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
        streammux.set_property("nvbuf-memory-type", mem_type)
        nvvidconv.set_property("nvbuf-memory-type", mem_type)
        if platform_info.is_wsl():
            # OpenCV functions like cv2.line and cv2.putText are not able to
            # access NVBUF_MEM_CUDA_UNIFIED memory on WSL systems for some
            # reason and give a SEGFAULT. Use NVBUF_MEM_CUDA_PINNED memory for
            # such use cases on WSL. Here, nvvidconv1's buffer is used in the
            # tiler sink pad probe and the cv2 operations are done on that.
            print("using nvbuf_mem_cuda_pinned memory for nvvidconv1\n")
            vc_mem_type = int(pyds.NVBUF_MEM_CUDA_PINNED)
            nvvidconv1.set_property("nvbuf-memory-type", vc_mem_type)
        else:
            nvvidconv1.set_property("nvbuf-memory-type", mem_type)
        tiler.set_property("nvbuf-memory-type", mem_type)

If I run the sample in /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-imagedata-multistream, the images are saved correctly.
However, there is no difference between that sample's approach and my image-saving method, yet my pipeline still hits a segmentation fault, and I don't know why.
Since I am on WSL, I applied vc_mem_type = int(pyds.NVBUF_MEM_CUDA_PINNED) and nvvidconv1.set_property("nvbuf-memory-type", vc_mem_type), but the same problem occurs.
The problematic line is n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id).
I confirmed that hash(gst_buffer) prints a value such as 140705477663088 and frame_meta.batch_id is 0, but the call still fails.

Please share a sample that can reproduce the problem. I think this problem is caused by the difference between your pipeline and deepstream_imagedata-multistream.py.

In addition, as mentioned above, DS-7.0 on WSL is an alpha release, so there may be some bugs. You can also test on native Linux.

I found the cause.
The problem was caused by applying mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED) to nvstreammux. On WSL, mem_type = int(pyds.NVBUF_MEM_CUDA_PINNED) should be applied there as well.
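The fix above can be sketched as a small helper that selects one memory type and applies it to every element whose buffers are later mapped into NumPy. The helper names and the FakeElement stand-in are illustrative, not part of the original app; in the real pipeline the values would be int(pyds.NVBUF_MEM_CUDA_UNIFIED) and int(pyds.NVBUF_MEM_CUDA_PINNED) set on actual GStreamer elements.

```python
# Illustrative sketch, assuming the fix described above: on WSL, pinned
# memory must be used for the whole chain, including nvstreammux.

def select_nvbuf_mem_type(is_wsl, unified, pinned):
    """Choose one memory type for streammux, nvvidconv, nvvidconv1, tiler."""
    return pinned if is_wsl else unified

def apply_nvbuf_mem_type(elements, mem_type):
    """Set "nvbuf-memory-type" uniformly on all given elements."""
    for elem in elements:
        elem.set_property("nvbuf-memory-type", mem_type)

# Minimal stand-in for a GStreamer element, used only to show the calls:
class FakeElement:
    def __init__(self):
        self.props = {}
    def set_property(self, name, value):
        self.props[name] = value

streammux, nvvidconv, nvvidconv1, tiler = (FakeElement() for _ in range(4))
mem_type = select_nvbuf_mem_type(is_wsl=True, unified="unified", pinned="pinned")
apply_nvbuf_mem_type([streammux, nvvidconv, nvvidconv1, tiler], mem_type)
print(streammux.props["nvbuf-memory-type"])  # pinned
```

The point of routing every element through one helper is that it makes the mistake described above (mixing unified and pinned memory across nvstreammux and the converters) impossible by construction.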
Thank you for your interest in the current issue.
