GStreamer-WARNING **: 04:27:24.058: Invalid caps feature name: Segmentation fault (core dumped) c++

Description

I get the warning below, followed by a segmentation fault (core dumped).

Starting pipeline 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_ProcessPast not implemented
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:02.462125195 24972 0x564a37bd5cf0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/ubuntu/PoC/model/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt_b1_gpu0_fp32.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x300x300       
1   OUTPUT kFLOAT NMS             1x200x7         
2   OUTPUT kFLOAT NMS_1           1x1x1           

0:00:02.462222944 24972 0x564a37bd5cf0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/ubuntu/PoC/model/Primary_Bottle_SSD/ssd_resnet18_retrained_epoch_040_bo_99_bl_94_rej_84.etlt_b1_gpu0_fp32.engine
0:00:02.465999631 24972 0x564a37bd5cf0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:PoC_pgie_config.txt sucessfully

Decodebin child added: source

Decodebin child added: decodebin0
Running...

Decodebin child added: qtdemux0

Decodebin child added: multiqueue0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: aacparse0

Decodebin child added: avdec_aac0

Decodebin child added: nvv4l2decoder0

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: 

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: @\xbaE5JV

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: \u0002

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: `\xd9D5JV

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: \u0014

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: \xa0=2PO\u007f

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: @\xbaE5JV

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: 

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: `4n5JV

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.057: Invalid caps feature name: `

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.058: Invalid caps feature name: `\xd9D5JV

(deepstream-Bottle-app:24972): GStreamer-WARNING **: 04:27:24.058: Invalid caps feature name: 
Segmentation fault (core dumped)

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
We recommend raising this query on the TLT forum for better assistance.

Thanks!

If you are using DeepStream, please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application it concerns, and the function description.)

• Hardware Platform (Jetson / GPU) ==> GPU, Tesla T4
• DeepStream Version ==> 1.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)

What DeepStream code are you running? Are you running the samples? Can you provide the code?

It's a larger custom application. Could the issue be with the capsfilter? Below is the relevant snippet:

data.caps = gst_element_factory_make("capsfilter", caps_c);
caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=I420");
g_object_set(G_OBJECT(data.caps), "caps", caps, NULL);
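
For comparison, a minimal sketch of the same capsfilter setup with error checking and reference cleanup; only the "capsfilter" factory name and the caps string come from the post, the function and element names are illustrative:

#include <gst/gst.h>

/* Illustrative helper: create a capsfilter forcing NVMM I420 output.
 * Returns NULL on failure. The element name "nvmm_caps" is hypothetical. */
static GstElement* make_nvmm_capsfilter(void)
{
    GstElement* filter = gst_element_factory_make("capsfilter", "nvmm_caps");
    GstCaps* caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=I420");

    if (!filter || !caps) {
        g_printerr("Failed to create capsfilter or parse caps string\n");
        if (caps)
            gst_caps_unref(caps);
        if (filter)
            gst_object_unref(filter);
        return NULL;
    }

    g_object_set(G_OBJECT(filter), "caps", caps, NULL);
    gst_caps_unref(caps); /* the capsfilter holds its own reference */
    return filter;
}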

and the linking is:

gst_element_link(data.queue, data.nvvidconv_pre);
gst_element_link(data.nvvidconv_pre, data.nvosd);
gst_element_link(data.nvosd, data.nvvidconv);
gst_element_link(data.nvvidconv, data.caps);
gst_element_link(data.caps, data.encoder);
gst_element_link(data.encoder, data.rtppay);
gst_element_link(data.rtppay, data.sink);
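
One thing to note is that none of these gst_element_link() calls check their return value, so a failed link would go unnoticed until the pipeline misbehaves at runtime. A hedged sketch of the same chain with a check, reusing the element fields from the snippet above:

/* Same chain as above, aborting early if any link fails.
 * gst_element_link_many() returns FALSE as soon as one link cannot be made. */
if (!gst_element_link_many(data.queue, data.nvvidconv_pre, data.nvosd,
                           data.nvvidconv, data.caps, data.encoder,
                           data.rtppay, data.sink, NULL)) {
    g_printerr("Failed to link one or more pipeline elements\n");
    return -1;
}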

I can't identify anything wrong with this piece of code.

It's not sample code, it's custom code that tries to save the inference output as an H.264 video file.
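
Note that the linking shown above ends in an RTP payloader, which is normally used for network streaming rather than writing a file. A hedged sketch of a tail that writes an H.264 elementary stream to disk instead (the parser/filesink variables and the output path are illustrative, not from this thread):

/* Illustrative file-writing tail: encoder -> h264parse -> filesink.
 * All elements must already be added to the same pipeline/bin before linking.
 * "output.h264" is a placeholder path. */
GstElement* parser = gst_element_factory_make("h264parse", "parser");
GstElement* filesink = gst_element_factory_make("filesink", "sink");
g_object_set(G_OBJECT(filesink), "location", "output.h264", NULL);
if (!gst_element_link_many(data.encoder, parser, filesink, NULL)) {
    g_printerr("Failed to link encoder -> parser -> filesink\n");
    return -1;
}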

Please provide a simple code sample that can reproduce the problem, along with the detailed steps to reproduce it.

void cb_newpad(GstElement* decodebin, GstPad* decoder_src_pad, gpointer data)
{
    GstCaps* caps;
    GstStructure* gststruct;
    GstPad* bin_ghost_pad;
    const gchar* gstname;
    gchar* feat_str;
    GstCapsFeatures* features;
    GstCapsFeatures* nvmmMemoryType;
    GstElement* source_bin = GST_ELEMENT(data);

    /* gst_caps_features_new() is variadic and the argument list must be
       NULL-terminated. The original call passed only "memory:NVMM" without the
       terminating NULL, so the function keeps reading stack garbage as further
       feature names; that is the most likely source of the repeated
       "Invalid caps feature name" warnings and the subsequent segfault. */
    nvmmMemoryType = gst_caps_features_new("memory:NVMM", NULL);

    caps = gst_pad_get_current_caps(decoder_src_pad);
    gststruct = gst_caps_get_structure(caps, 0);
    features = gst_caps_get_features(caps, 0);
    gstname = gst_structure_get_name(gststruct);
    g_print("gstname=%s\n", gstname);

    /* Need to check if the pad created by the decodebin is for video and not audio. */
    if (g_strrstr(gstname, "video"))
    {
        /* Link the decodebin pad only if decodebin has picked the nvidia decoder
           plugin nvdec_*. We do this by checking if the pad caps contain the NVMM
           memory feature. */
        feat_str = gst_caps_features_to_string(features);
        g_print("features=%s\n", feat_str);
        g_free(feat_str);

        if (gst_caps_features_is_equal(features, nvmmMemoryType))
        {
            /* Get the source bin ghost pad and retarget it at the decoder src pad.
               gst_ghost_pad_set_target() returns TRUE on success, so the error
               branch must run on FALSE (the original check was inverted). */
            bin_ghost_pad = gst_element_get_static_pad(source_bin, "src");
            if (!gst_ghost_pad_set_target(GST_GHOST_PAD(bin_ghost_pad), decoder_src_pad))
            {
                g_printerr("Failed to link decoder src pad to source bin ghost pad\n");
            }
            gst_object_unref(bin_ghost_pad);
        }
        else
        {
            g_printerr("Error: Decodebin did not pick nvidia decoder plugin\n");
        }
    }

    gst_caps_unref(caps);
    gst_caps_features_free(nvmmMemoryType);
}
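
For completeness, this is how such a callback is typically hooked up; the uri_decode_bin and source_bin variables in this sketch are placeholders, not taken from the thread:

/* Fire cb_newpad each time the decodebin exposes a new source pad.
 * "uri_decode_bin" and "source_bin" are illustrative variable names. */
g_signal_connect(G_OBJECT(uri_decode_bin), "pad-added",
                 G_CALLBACK(cb_newpad), source_bin);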

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This piece of code cannot run on its own. Please provide complete code and make sure it can run.