[DEEPSTREAM] nvds_obj_enc_process Segmentation Fault on Yolo and Jetson nano

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0
• JetPack Version: 4.6
• Issue Type: Segmentation fault
• How to reproduce: run my code (below) on a Jetson Nano with DS 6.0

Although the sample app

sources/apps/sample_apps/deepstream-image-meta-test

works well on my Jetson Nano, when I use my own pipeline (which was working before) I get a segmentation fault at nvds_obj_enc_process(ctx, &userData, ip_surf, obj_meta, frame_meta);

Frame saving works on Jetson Orin and on dGPU with the same pipeline. There is no warning or error other than the segmentation fault.

Here is my code:

/*
 * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sys/time.h>
#include <math.h>
#include <cuda_runtime_api.h>

#include "gstnvdsmeta.h"
#include "nvbufsurface.h"

#include "nvds_obj_encode.h"
#include "gst-nvmessage.h"

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000

#define TILED_OUTPUT_WIDTH 1920
#define TILED_OUTPUT_HEIGHT 1080

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "RoadSign"
};

#define FPS_PRINT_INTERVAL 300

#define save_img TRUE
#define attach_user_meta TRUE



/* pgie_src_pad_buffer_probe will extract metadata received on pgie src pad
 * and update params for drawing rectangle, object information etc. We also
 * iterate through the object list and encode the cropped objects as jpeg
 * images and attach it as user meta to the respective objects.*/




GstPadProbeReturn pgie_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
    NvDsObjEncCtxHandle ctx = (NvDsObjEncCtxHandle) user_data;

    GstBuffer *buf = (GstBuffer *) info->data;
    GstMapInfo inmap = GST_MAP_INFO_INIT;
    if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
        GST_ERROR ("input buffer mapinfo failed");
        return GST_PAD_PROBE_OK;
    }
    NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta) {
        gst_buffer_unmap (buf, &inmap);
        return GST_PAD_PROBE_OK;
    }

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
            NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
            if (!obj_meta)
                continue;

            NvDsObjEncUsrArgs userData = { 0 };
            userData.saveImg = save_img;
            userData.attachUsrMeta = attach_user_meta;
            userData.scaleImg = FALSE;
            userData.quality = 85;
            static int frame_count = 0;

            snprintf (userData.fileNameImg, sizeof (userData.fileNameImg),
                "frame_%d.jpg", frame_count++);
            g_print ("obj_ctx_handle: %p\n", ctx);

            nvds_obj_enc_process (ctx, &userData, ip_surf, obj_meta, frame_meta);
        }
    }

    nvds_obj_enc_finish (ctx);
    /* Unmap only after the encode requests have completed; the surface
     * pointer must stay valid while nvds_obj_enc_process uses it. */
    gst_buffer_unmap (buf, &inmap);
    return GST_PAD_PROBE_OK;
}


static gboolean bus_call(GstBus *bus, GstMessage *msg, gpointer data) {
    GMainLoop *loop = (GMainLoop *)data;

    switch (GST_MESSAGE_TYPE(msg)) {
        case GST_MESSAGE_EOS:
            g_print("End of stream\n");
            g_main_loop_quit(loop);
            break;
        case GST_MESSAGE_ERROR: {
            gchar *debug;
            GError *error;

            gst_message_parse_error(msg, &error, &debug);
            g_printerr("Error received from element %s: %s\n", GST_OBJECT_NAME(msg->src), error->message);
            g_printerr("Debugging information: %s\n", debug ? debug : "none");
            g_clear_error(&error);
            g_free(debug);
            g_main_loop_quit(loop);
            break;
        }
        default:
            break;
    }
    return TRUE;
}

int main(int argc, char *argv[]) {
    GMainLoop *loop = NULL;
    GstElement *pipeline = NULL;
    GstBus *bus = NULL;
    guint bus_watch_id;

    /* Initialize GStreamer */
    gst_init(&argc, &argv);
    loop = g_main_loop_new(NULL, FALSE);

    /* Define the pipeline string */
    const gchar *pipeline_desc =
    "v4l2src device=\"/dev/video0\" ! "
    "capsfilter caps=\"image/jpeg, width=1920, height=1080, framerate=30/1\" ! "
    "jpegdec ! "
    "videoconvert ! "
    "nvvideoconvert ! "
    "capsfilter caps=\"video/x-raw(memory:NVMM), format=RGBA, width=1920, height=1080, framerate=30/1\" ! "
    "mux.sink_0 nvstreammux name=\"mux\" batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 "
    "live-source=1 num-surfaces-per-frame=1 sync-inputs=0 max-latency=0 ! "
    "nvinfer name=\"primary-inference\" config-file-path=\"/home/vision/cfg/infer_cfg/YOLOV8S.txt\" ! "
    "nvtracker tracker-width=640 tracker-height=384 gpu-id=0 "
    "ll-lib-file=\"/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so\" "
    "ll-config-file=\"/home/vision/cfg/infer_cfg/config_tracker_NvDCF_perf.yml\" ! "
    "nvdsanalytics name=\"analytics\" config-file=\"/home/vision/cfg/infer_cfg/analytics.txt\" ! "
    "nvvideoconvert ! "
    "nvdsosd name=\"onscreendisplay\" ! "
    "nvegltransform ! "
    "nveglglessink sync=\"false\"";

    /* Create the pipeline from the pipeline description */
    pipeline = gst_parse_launch(pipeline_desc, NULL);
    if (!pipeline) {
        g_printerr("Failed to create pipeline\n");
        return -1;
    }

    /* Start playing the pipeline */
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Setup bus watch for messages */
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    NvDsObjEncCtxHandle ctx = nvds_obj_enc_create_context(); // Initialize this based on your context creation needs
    
    // Set up the probe
    GstElement *pgie = gst_bin_get_by_name(GST_BIN(pipeline), "primary-inference");
    GstPad *pgie_src_pad = gst_element_get_static_pad(pgie, "src"); // Get the source pad of the nvinfer element
    gst_pad_add_probe(pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_buffer_probe, ctx, NULL);
    gst_object_unref(pgie_src_pad);

    /* Run the main loop */
    g_main_loop_run(loop);

    /* Cleanup */
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    g_source_remove(bus_watch_id);
    g_main_loop_unref(loop);

    return 0;
}

Thanks for any help!

Since that was working before, what changes have caused the crash now?

Also, could you use the gdb tool to do a preliminary analysis?

$ gdb --args <your_command>
(gdb) r
(gdb) bt

By "working before", I mean that the pipeline works normally without the nvds_obj_enc_process line (i.e. without frame saving).

It was also working with frame saving in DeepStream 7.0 on Jetson Orin and x86_64.

Here is the output of gdb --args <your_command>:


$ gdb --args ./deepstream-image-meta-test-yolo-noe 
GNU gdb (Ubuntu 10.2-0ubuntu1~18.04~2) 10.2
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "aarch64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
--Type <RET> for more, q to quit, c to continue without paging--
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./deepstream-image-meta-test...
(No debugging symbols found in ./deepstream-image-meta-test)
(gdb) r
Starting program: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test-yolo-noe/build/deepstream-image-meta-test-yolo-noe 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fb28ec1d0 (LWP 6708)]

Using winsys: x11 
[New Thread 0x7f8bc521d0 (LWP 6712)]
[New Thread 0x7f8b2e21d0 (LWP 6713)]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
[New Thread 0x7f8a53b1d0 (LWP 6714)]
[New Thread 0x7f89d3a1d0 (LWP 6715)]
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:10.195044235  6693   0x5555c86d90 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/floware/flwr-vision-bedrock/vision/models/yolov8s/YOLOV8S.engine
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT classes         8400x1          

0:00:10.196330757  6693   0x5555c86d90 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/floware/flwr-vision-bedrock/vision/models/yolov8s/YOLOV8S.engine
[New Thread 0x7f67fff1d0 (LWP 6740)]
[New Thread 0x7f677fe1d0 (LWP 6741)]
[New Thread 0x7f66ffd1d0 (LWP 6742)]
0:00:10.234153777  6693   0x5555c86d90 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/floware/flwr-vision-bedrock/vision/cfg/infer_cfg/YOLOV8S.txt sucessfully
[New Thread 0x7f667fc1d0 (LWP 6743)]
[New Thread 0x7f553341d0 (LWP 6744)]
[New Thread 0x7f54b331d0 (LWP 6746)]
[New Thread 0x7f3ffff1d0 (LWP 6747)]
obj_ctx_handle: 0x556405a8b0
[New Thread 0x7f3eefe1d0 (LWP 6748)]
--Type <RET> for more, q to quit, c to continue without paging--

Thread 14 "pool" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f3eefe1d0 (LWP 6748)]
0x0000007f88ef73fc in jpeg_write_raw_data () from /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so
(gdb) 
(gdb) bt
#0  0x0000007f88ef73fc in jpeg_write_raw_data () at /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so
#1  0x0000007fb7c83100 in  () at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_batch_jpegenc.so
#2  0x0000007fb7d82558 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#3  0x0000007fb7e16e80 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0

Could you try to move the code below in front of the gst_element_set_state(pipeline, GST_STATE_PLAYING);?

    /* Setup bus watch for messages */
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    NvDsObjEncCtxHandle ctx = nvds_obj_enc_create_context(); // Initialize this based on your context creation needs
    
    // Set up the probe
    GstElement *pgie = gst_bin_get_by_name(GST_BIN(pipeline), "primary-inference");
    GstPad *pgie_src_pad = gst_element_get_static_pad(pgie, "src"); // Get the source pad of the nvinfer element
    gst_pad_add_probe(pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_buffer_probe, ctx, NULL);
    gst_object_unref(pgie_src_pad);

Not working; same effect with

  /* Setup bus watch for messages */
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    bus_watch_id = gst_bus_add_watch(bus, bus_call, loop);
    gst_object_unref(bus);

    NvDsObjEncCtxHandle ctx = nvds_obj_enc_create_context(); // Initialize this based on your context creation needs
    
    // Set up the probe
    GstElement *pgie = gst_bin_get_by_name(GST_BIN(pipeline), "primary-inference");
    GstPad *pgie_src_pad = gst_element_get_static_pad(pgie, "src"); // Get the source pad of the nvinfer element
    gst_pad_add_probe(pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_buffer_probe, ctx, NULL);
    gst_object_unref(pgie_src_pad);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Run the main loop */
    g_main_loop_run(loop);

gdb output:

gdb ./deepstream-image-meta-test 
GNU gdb (Ubuntu 10.2-0ubuntu1~18.04~2) 10.2
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "aarch64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
--Type <RET> for more, q to quit, c to continue without paging--
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./deepstream-image-meta-test...
(No debugging symbols found in ./deepstream-image-meta-test)
(gdb) r
Starting program: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test/build/deepstream-image-meta-test 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fb40bc1d0 (LWP 10521)]
[New Thread 0x7fb38bb1d0 (LWP 10522)]
[New Thread 0x7fb30ba1d0 (LWP 10523)]
[New Thread 0x7fb28b91d0 (LWP 10524)]
[New Thread 0x7fb0afd1d0 (LWP 10525)]
[New Thread 0x7f89cfa1d0 (LWP 10526)]
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
[New Thread 0x7f88e011d0 (LWP 10527)]
[New Thread 0x7f74a491d0 (LWP 10528)]
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.821461664 10507   0x5555c5acc0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test/build/model_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT input           3x640x640       
1   OUTPUT kFLOAT boxes           8400x4          
2   OUTPUT kFLOAT scores          8400x1          
3   OUTPUT kFLOAT classes         8400x1          

0:00:06.822750414 10507   0x5555c5acc0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-image-meta-test/build/model_b1_gpu0_fp16.engine
[New Thread 0x7f67fff1d0 (LWP 10555)]
[New Thread 0x7f677fe1d0 (LWP 10556)]
[New Thread 0x7f66ffd1d0 (LWP 10557)]
0:00:06.840027289 10507   0x5555c5acc0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test/build/ds_image_meta_pgie_config.txt sucessfully
[New Thread 0x7f667fc1d0 (LWP 10558)]
[New Thread 0x7f53fff1d0 (LWP 10559)]
[New Thread 0x7f537fe1d0 (LWP 10560)]
[New Thread 0x7f527fd1d0 (LWP 10561)]
obj_ctx_handle: 0x5555c53e30
[New Thread 0x7f515fc1d0 (LWP 10636)]
--Type <RET> for more, q to quit, c to continue without paging--

Thread 17 "pool" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f515fc1d0 (LWP 10636)]
0x0000007f89d1d3fc in jpeg_write_raw_data () from /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so
(gdb) bt
#0  0x0000007f89d1d3fc in jpeg_write_raw_data () at /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so
#1  0x0000007fb7c83100 in  () at /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_batch_jpegenc.so
#2  0x0000007fb7d82558 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#3  0x0000007fb7e16e80 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
(gdb) 

I have tried your code on my side. It works properly.

    const gchar *pipeline_desc =
    "uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! "
    "nvvideoconvert ! "
    "capsfilter caps=\"video/x-raw(memory:NVMM), format=RGBA, width=1920, height=1080, framerate=30/1\" ! "
    "mux.sink_0 nvstreammux name=\"mux\" batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 "
    "live-source=1 num-surfaces-per-frame=1 sync-inputs=0 max-latency=0 ! "
    "nvinfer name=\"primary-inference\" config-file-path=\"ds_image_meta_pgie_config.txt\" ! "
    "nvtracker tracker-width=640 tracker-height=384 gpu-id=0 "
    "ll-lib-file=\"/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so\" "
    "ll-config-file=\"/home/vision/cfg/infer_cfg/config_tracker_NvDCF_perf.yml\" ! "
    "nvvideoconvert ! "
    "nvdsosd name=\"onscreendisplay\" ! "
    "nvegltransform ! "
    "fakesink sync=\"false\"";

That's expected: your pipeline uses a video file, and with a video file it works for me too. My pipeline uses a USB camera; you should try with a camera as the source.

Here is my pipeline:

pipeline:
  - v4l2src:
      device: /dev/video0
  - capsfilter:
      caps: "image/jpeg, width=1920, height=1080, framerate=30/1"
  - jpegdec: {}
  - videoconvert: {}
  - nvvideoconvert: {}
  - capsfilter:
      caps: "video/x-raw(memory:NVMM), format=RGBA, width=1920, height=1080, framerate=30/1"
  - mux.sink_0:
     nvstreammux:
        name: mux
        batch-size: 1
        width: 1920
        height: 1080
        batched-push-timeout: 4000000
        live-source: 1
        num-surfaces-per-frame: 1
        sync-inputs: 0
        max-latency: 0
  - nvinfer:
      name: primary-inference
      config-file-path: ../infer_cfg/YOLOV8S.yml
  - nvtracker:
      tracker-width: 640
      tracker-height: 384
      gpu-id: 0
      ll-lib-file: /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
      ll-config-file: ../infer_cfg/config_tracker_NvDCF_perf.yml
  - nvdsanalytics:
      name: "analytics"
      config-file: ../infer_cfg/analytics.txt
  - nvvideoconvert: {}
  - nvdsosd:
      name: onscreendisplay
  - fpsdisplaysink:
      name: fps-display
      video-sink: fakesink
      text-overlay: false
      sync: false

I do not have a v4l2 camera that meets your criteria currently. Could you help narrow that down by modifying your pipeline as follows?

  1. uridecodebin → nvvideoconvert → nvstreammux
  2. change the jpegdec to nvjpegdec
  3. change the source format of your camera

You can refer to our FAQ to learn how to set up the v4l2 camera.
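For item 2, the change would be a one-element edit in the camera branch of the pipeline description. A sketch (untested here; note that nvjpegdec can output NVMM buffers directly, so the downstream conversion elements may need adjusting):

```c
/* Camera branch with the hardware JPEG decoder instead of jpegdec.
 * Caps are unchanged from the original pipeline; the rest of the
 * pipeline string follows as before. */
"v4l2src device=\"/dev/video0\" ! "
"capsfilter caps=\"image/jpeg, width=1920, height=1080, framerate=30/1\" ! "
"nvjpegdec ! "
"nvvideoconvert ! "
```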

As I said earlier, uridecodebin → nvvideoconvert → nvstreammux works, but I need to use a camera as the source.

  • change the jpegdec to nvjpegdec: not working, and it drops the fps (from 30 to 2)
  • change the source format of your camera: no change using YUY, still a segmentation fault when saving an image (even in this format it works on Orin and x86_64; it only fails on the Nano)

The uridecodebin can also use a v4l2 camera source. You can refer to the FAQ I attached before to learn how to use that.

What version works on Orin?

I used v4l2src with the format NV12 instead of RGBA in the capsfilter and it worked on the Nano.
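For anyone hitting the same crash, the capsfilter before nvstreammux is the only element that changed. A sketch of the working version (the RGBA → NV12 switch is presumably what matters, since the backtrace dies inside jpeg_write_raw_data, which consumes planar YUV rows):

```c
/* Working on Jetson Nano: request NV12 instead of RGBA from
 * nvvideoconvert before nvstreammux, so the JPEG encode path in
 * nvds_obj_enc_process receives a YUV surface. Rest of the
 * pipeline string is unchanged. */
"nvvideoconvert ! "
"capsfilter caps=\"video/x-raw(memory:NVMM), format=NV12, width=1920, height=1080, framerate=30/1\" ! "
```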