Segmentation Fault (SIGABRT) and "Double Free or Corruption" Errors on Jetson NX with Connectech Carrier Board

Hello,

I am encountering segmentation fault and “double free or corruption” errors on a Jetson NX board with the Connectech carrier board (Photon). The issue occurs when running a DeepStream pipeline that includes both detection and segmentation models, with a tee element placed before them. The same pipeline runs successfully on the Jetson NX Developer Kit with the NVIDIA carrier board.

When starting the application as a service using a systemd unit file and a bash script that launches a ROS2 launch command, I receive a segmentation fault error with exit code -11 (SIGSEGV). In previous runs, I also encountered the “double free or corruption” error with exit code -6 (SIGABRT).

I have enabled the DeepStream debug and logger options, but no additional debug information was provided regarding the cause of the errors. Furthermore, I noticed the “soctherm: OC ALARM” message in the syslog, but I’m unsure whether it is relevant to the segmentation fault and “double free or corruption” errors.

To summarize:

  1. Jetson NX with Connectech carrier board
  2. Deepstream pipeline with detection and segmentation models
  3. Segmentation fault error with exit code -11
  4. “Double free or corruption” error with exit code -6 in previous runs
  5. Enabled DeepStream debug and logger options, but no additional information obtained
  6. “soctherm: OC ALARM” message observed in syslog

I kindly request your guidance to understand the cause of these errors specifically on the Jetson NX with the Connectech carrier board. If there are any known compatibility issues or considerations regarding this combination, I would appreciate being informed. Additionally, any suggestions for further debugging or diagnosis would be greatly appreciated.

Thank you for your assistance.

Best regards,
Daphna

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration files content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Jetson NX with Connectech (Photon) carrier
DeepStream 6.0
JetPack 4.6
TensorRT 8.0.1.6
Issue type: bug

How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration files content, the command line used, and other details for reproducing.)
Can you reproduce the failure with any of the DeepStream sample apps? See C/C++ Sample Apps Source Details — DeepStream 6.2 Release documentation.

I don’t know how to reproduce it with the DeepStream sample apps… we work with our own pipeline, which includes two inference plug-ins, one for detection and one for segmentation, on a Jetson NX (not the dev kit) with the Connectech Photon carrier.

Please at least provide the GStreamer pipeline graph for your customized app (see DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums).

There is no useful clue in your description for investigating or debugging the segmentation fault with your app.

If you think it is related to the segmentation fault, please tell us whether the DeepStream SDK sample apps work well on your board.

I don’t think it’s related… but I will make sure with the samples.

Why do you use tee to run the two models?

I’ve tried the following pipeline on a Jetson NX with DeepStream 6.2. No problem was found, so it may not be a DeepStream issue. Please debug your app.
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=d \
  d.src_0 ! mux.sink_0 nvstreammux batch-size=1 width=720 height=576 name=mux ! tee name=t \
  t.src_0 ! queue ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! nvvideoconvert ! fakesink \
  t.src_1 ! queue ! nvinfer config-file-path=/home/nvidia/deepstream_tao_apps/configs/unet_tao/pgie_unet_tao_config.txt ! nvsegvisual ! fakesink

If I’m not mistaken, since both the detection and segmentation models rely on the original stream to function accurately, the use of the tee element in the pipeline is necessary to provide the unaltered stream as input to both models.

The detection output and the segmentation output can co-exist in NvDsMetadata (see MetaData in the DeepStream SDK — DeepStream 6.2 Release documentation), so the following pipeline also works:

src -> nvstreammux -> detection -> segmentation -> tee
tee.src_0 -> nvsegvisual -> sink
tee.src_1 -> nvvideoconvert -> sink
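
For illustration, here is a minimal Python sketch of that arrangement, built with Gst.parse_launch and modeled on the gst-launch command above. The two nvinfer config paths are placeholders, and it assumes the segmentation config is set up to run on full frames with its own unique ID:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder config paths; the segmentation config is assumed to run in
# full-frame mode (process-mode=1) with a gie-unique-id different from the detector's.
PIPELINE_DESC = (
    "uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 name=d "
    "d.src_0 ! mux.sink_0 nvstreammux batch-size=1 width=720 height=576 name=mux "
    "! nvinfer config-file-path=detection_config.txt "
    "! nvinfer config-file-path=segmentation_config.txt "
    "! tee name=t "
    "t.src_0 ! queue ! nvsegvisual ! fakesink "
    "t.src_1 ! queue ! nvvideoconvert ! fakesink"
)

pipeline = Gst.parse_launch(PIPELINE_DESC)
pipeline.set_state(Gst.State.PLAYING)
# Block until EOS or an error is posted on the bus, then shut down.
pipeline.get_bus().timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                      Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)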

Your pipeline can work too. Your segmentation fault seems to have nothing to do with the pipeline.

Hello @Fiona.Chen,

I’m currently working on a DeepStream pipeline and encountering a persistent bug that I’m trying to diagnose and fix. I would greatly appreciate your assistance and expertise in addressing the following questions related to the bug:

  1. I’ve noticed a ‘double free or corruption’ error with exit code -6 in my DeepStream pipeline. I’m trying to understand the possible causes of this error and how to resolve it. Could the placement of the tee element before the inference elements be a contributing factor? If I rearrange the pipeline so that the tee comes after the inference, would it help eliminate this error? Additionally, would this change have any impact on latency or frames per second (fps) of the pipeline?

  2. Is there any difference between creating an engine file on Jetson NX compared to Jetson NX Dev Kit that could potentially lead to the ‘double free or corruption’ error? I’m using different hardware setups and trying to identify any hardware-related factors that might be causing this issue.

  3. Regarding the engine file creation, is there any distinction between using tao-converter to build the engine file compared to running the pipeline with the .etlt model and allowing DeepStream to build it automatically? Could this choice affect the performance or functionality of the pipeline?

I’m actively investigating this bug and any insights, suggestions, or best practices you can provide would be immensely helpful in resolving the issue. Thank you in advance for your valuable input!

Best Regards,
Daphna

No. Have you tried the pipeline I posted in Segmentation Fault (SIGABRT) and "Double Free or Corruption" Errors on Jetson NX with Connectech Carrier Board - #11 by Fiona.Chen? It works, so there is no problem with using tee before nvinfer.

No.

No.

No.

I would like to handle and log errors in my application, specifically the errors that occur during the execution of the GStreamer pipeline. As I am working without a screen, I am unable to use the print function to display the errors. I would appreciate any suggestions on how and where to implement error handling and logging. My goal is to write the errors into a log file for later analysis. Thank you in advance for your assistance.
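
One common approach, sketched below with placeholder paths and a placeholder pipeline, is to attach a watch to the pipeline bus and write ERROR and WARNING messages to a log file with Python's logging module. Note that a bus watch only captures errors that elements report on the bus; a hard crash such as a segmentation fault inside a plugin terminates the process before anything can be logged this way.

import logging

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

# Hypothetical log file path; adapt to the application's layout.
logging.basicConfig(filename="pipeline_errors.log",
                    level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def on_bus_message(bus, message, loop):
    # Persist ERROR / WARNING messages posted by any element in the pipeline.
    if message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        logging.error("ERROR from %s: %s | debug: %s", message.src.get_name(), err.message, debug)
        loop.quit()
    elif message.type == Gst.MessageType.WARNING:
        warn, debug = message.parse_warning()
        logging.warning("WARNING from %s: %s | debug: %s", message.src.get_name(), warn.message, debug)
    elif message.type == Gst.MessageType.EOS:
        loop.quit()
    return True

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc num-buffers=100 ! fakesink")  # placeholder pipeline
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_bus_message, loop)
pipeline.set_state(Gst.State.PLAYING)
loop.run()
pipeline.set_state(Gst.State.NULL)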

Do you run the app from an SSH terminal?

We have attempted to catch and handle this error using our current approach, but it appears that the error is not being properly caught. Although the program seems to be running without any visible issues, it is actually not functioning as expected. We are seeking guidance on alternative methods to effectively capture and handle such errors.

Have you tried the pipeline I posted?

Yes, but our intention is to be able to capture and log all DeepStream messages before the program crashes. We are seeking a robust solution that allows us to effectively collect and log these messages in a systematic manner. This will enable us to analyze the errors and troubleshoot the issues more efficiently. Any guidance or recommendations on implementing such a mechanism would be greatly appreciated. Thank you in advance for your assistance.

You can enable the DeepStream components’ logs by setting the “GST_DEBUG” environment variable.
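
For reference, a minimal sketch of that suggestion (the level and file path below are placeholders): GStreamer also honours GST_DEBUG_FILE and GST_DEBUG_NO_COLOR, so the log can be written straight to a file; when the app is started as a systemd service, the same variables can be set with Environment= lines in the unit file.

import os

# Set before GStreamer is initialized so the settings take effect.
os.environ["GST_DEBUG"] = "3"                        # errors, warnings and fixmes; raise per category with "category:level"
os.environ["GST_DEBUG_FILE"] = "/tmp/gst_debug.log"  # placeholder path; the log goes here instead of stderr
os.environ["GST_DEBUG_NO_COLOR"] = "1"               # keep the file free of ANSI color codes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)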

These are the relevant lines of code we have implemented for debugging purposes:

Gst.debug_set_active(True)

Gst.debug_add_log_function(self.pipeline_logger)

Below is the pipeline_logger function where we handle the logging of messages:

--------------------------------------------------------------------------------------------

def pipeline_logger(self, category, level, file, func, line, message, domain):
    try:
        category_name = category.get_name() if category else ""
        object_name = message.get_name() if hasattr(message, 'get_name') else ""
        message_str = str(message) if message else ""
        txt = f"[[{category_name}]: [{object_name}]]\n"

        # Access specific fields or call methods on the GstNvInfer object

        object_dir = dir(message)
        now = datetime.now()
        dt_string = now.strftime("%d/%m/%Y  %H:%M:%S")
        loggerLine = dt_string + ' : ' + txt + '\n'
        f = open(self.deepstream_logger_path, "a")  # opens file with name of "../fileManagerDB/errors_log.txt"
        f.write(object_name)
        f.close()

        ScreenLogger.publish_msg_to_screen(text=str(loggerLine),
                                               msg_type=MessageType.WARNING,
                                               log_level=MsgToScreenDefs.log_level.STATUS)
    except Exception as exp:
        error_text = f"PipelineWithTee -> pipeline_logger [{exp}]"
        getErrorHandler().run_error_routine(error_as_text=error_text)

Our current implementation successfully logs the relevant message information, including the category name and object name, to the log file. However, we are struggling to obtain the genuine error message. We would appreciate your guidance on which specific field of the message object we should extract and include in the log to capture the actual error.
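
For what it is worth, in PyGObject the callback registered with Gst.debug_add_log_function receives the source object and a Gst.DebugMessage as separate arguments, and the actual text is returned by the message's get() method. A minimal sketch, with the log path as a placeholder:

from datetime import datetime

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

LOG_PATH = "errors_log.txt"  # placeholder path

def on_gst_log(category, level, src_file, src_func, src_line, source, message, user_data):
    text = message.get()  # the genuine log/error text
    cat_name = category.get_name() if category else ""
    obj_name = source.get_name() if source else ""
    stamp = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
    with open(LOG_PATH, "a") as f:
        f.write(f"{stamp} [{cat_name}] [{obj_name}] {level.value_nick}: {text}\n")

Gst.init(None)
Gst.debug_set_active(True)
Gst.debug_set_default_threshold(Gst.DebugLevel.WARNING)
Gst.debug_add_log_function(on_gst_log, None)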

Thank you for your assistance