DeepStream nvmultiurisrcbin Aborted (core dumped) when adding stream via REST API

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): NVIDIA GeForce RTX 3090 Ti
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 10.3.0.26
• NVIDIA GPU Driver Version (valid for GPU only): 550.163.01
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line, and other details needed to reproduce.)

I’m using a Docker container and want to test nvmultiurisrcbin to understand how to add or remove streams in a DeepStream pipeline. I create the nvmultiurisrcbin element using the following code:

GstElement *SourceBin::create_nv_multi_urisrc_bin(guint index, std::string filenames){
    static GstElement *nvmultiurisrcbin;
    gchar nvmultiurisrcbin_name[32] = {};

    g_print("Creating nvmultiurisrcbin for stream_id %d or stream %s \n", index,
            filenames.c_str());
    // Use the full buffer size; a smaller hard-coded limit silently truncates the name.
    g_snprintf(nvmultiurisrcbin_name, sizeof(nvmultiurisrcbin_name),
               "nvmultiurisrc-bin-%02d", index);
    nvmultiurisrcbin = gst_element_factory_make("nvmultiurisrcbin", nvmultiurisrcbin_name);
    if (!nvmultiurisrcbin) {
        std::cerr << "Failed to create nvmultiurisrcbin" << std::endl;
        return NULL;
    }
    g_object_set(G_OBJECT(nvmultiurisrcbin), "uri-list", filenames.c_str(), NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "max-batch-size", 20/*(gint)filenames.size()*/, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "live-source", 1, NULL); //1 for RTSP/camera, 0 for file
    g_object_set(G_OBJECT(nvmultiurisrcbin), "batched-push-timeout", 33000, NULL); // microseconds; ~33 ms per batch
    // g_object_set(G_OBJECT(nvmultiurisrcbin), "rtsp-reconnect-interval", 5, NULL);
    // g_object_set(G_OBJECT(nvmultiurisrcbin), "rtsp-reconnect-attempts", 10, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "drop-pipeline-eos", TRUE, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "drop-frame-interval", 5, NULL); // Decoder outputs every 5th frame; the rest are dropped.
    g_object_set(G_OBJECT(nvmultiurisrcbin), "file-loop", FALSE, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "gpu-id", GPU_ID, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "width", 1920, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "height", 1080, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "cudadec-memtype", 0, NULL); // CUDA decoder memory type (0=device, 1=pinned, 2=unified).
    g_object_set(G_OBJECT(nvmultiurisrcbin), "latency", 200, NULL); //Network jitter buffer latency (milliseconds). Used for RTSP.
    g_object_set(G_OBJECT(nvmultiurisrcbin), "sensor-id-list", ""/*"UniqueSensorId1"*/, NULL);
    g_object_set(G_OBJECT(nvmultiurisrcbin), "sensor-name-list", "UniqueSensorName1", NULL);  
    g_object_set(G_OBJECT(nvmultiurisrcbin), "buffer-pool-size", 16, NULL);  
    g_object_set(G_OBJECT(nvmultiurisrcbin), "ip-address", "localhost", NULL);  
    g_object_set(G_OBJECT(nvmultiurisrcbin), "port", "3190", NULL);  // Default: "9000"
    g_object_set(G_OBJECT(nvmultiurisrcbin), "disable-audio", TRUE, NULL);

    return nvmultiurisrcbin;
}

Simpler scenario:

GstElement *SourceBin::create_nv_multi_urisrc_bin(guint index, std::string filenames){
    static GstElement *nvmultiurisrcbin;

    // The bin must be created before any properties can be set on it;
    // calling g_object_set on a null pointer crashes.
    nvmultiurisrcbin = gst_element_factory_make("nvmultiurisrcbin", "nvmultiurisrc-bin");
    if (!nvmultiurisrcbin) {
        std::cerr << "Failed to create nvmultiurisrcbin" << std::endl;
        return NULL;
    }

    gchar *file_uri = g_strdup("file:///root/Put.mp4");
    g_object_set(G_OBJECT(nvmultiurisrcbin),
                 "uri-list", file_uri,
                 "max-batch-size", 20,
                 "sensor-id-list", "UniqueSensorId1",
                 "width", 1920,
                 "height", 1080,
                 "sensor-name-list", "",
                 "port", "3190",
                 "batched-push-timeout", 33000,
                 NULL);
    g_free(file_uri);  // g_object_set copies string properties

    return nvmultiurisrcbin;
}

I then attempt to add a new stream using the REST API:

curl -v -XPOST 'http://localhost:3190/api/v1/stream/add' -d '{
  "key": "sensor",
  "value": {
     "camera_id": "uniqueSensorID1",
     "camera_name": "front_door",
     "camera_url": "file:///root/P.mp4",
     "change": "camera_add",
     "metadata": {
         "resolution": "1920 x1080",
         "codec": "h264",
         "framerate": 30
     }
 },
 "headers": {
     "source": "vst",
     "created_at": "2021-06-01T14:34:13.417Z"
 }
}'

However, when running this command in the terminal, the program crashes with:

terminate called after throwing an instance of 'std::logic_error'
  what():  basic_string::_M_construct null not valid
Aborted (core dumped)

So the pipeline fails with a core dump.

These questions may be related to two others I asked earlier: “Service-maker deepstream_test5_app Unable to set the pipeline to the playing state error” and “Unable to add streams to DeepStream Server, API endpoints returning 404”.

As you suggested in the thread “The deepstream-server use rest api after adding streaming I find the deepstream-server not to infer”, I tried to verify this using gst-launch-1.0:

gst-launch-1.0 nvmultiurisrcbin port=9000 ip-address=localhost \
  batched-push-timeout=33333 max-batch-size=10 drop-pipeline-eos=1 \
  rtsp-reconnect-interval=1 live-source=1 width=1920 height=1080 ! \
  nvmultistreamtiler ! fakesink async=false

Then I attempted to add a stream via the REST API:

curl -v -XPOST 'http://localhost:9000/api/v1/stream/add' -d '{
    "key": "sensor",
    "value": {
        "camera_id": "uniqueSensorID1",
        "camera_name": "front_door",
        "camera_url": "file:///root/P.mp4",
        "change": "camera_add",
        "metadata": {
            "resolution": "1920 x1080",
            "codec": "h264",
            "framerate": 30
        }
    },
    "headers": {
        "source": "vst",
        "created_at": "2021-06-01T14:34:13.417Z"
    }
}'

However, I received the following error:

Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:9000...
* Connected to localhost (127.0.0.1) port 9000 (#0)
> POST /api/v1/stream/add HTTP/1.1
> Host: localhost:9000
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Length: 419
> Content-Type: application/x-www-form-urlencoded
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Length: 21
< Content-Type: text/plain
< 
* Connection #0 to host localhost left intact
{"error":"Not Found"}

Please refer to the sample /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-server for how to use nvmultiurisrcbin. Does the file “/root/P.mp4” exist?

yes:

ll /root/P.mp4 
-rw-r--r-- 1 root root 3936955 Sep  3 06:28 /root/P.mp4

I ran the same test with port 9456 and it succeeded.

curl -v -XPOST 'http://localhost:9456/api/v1/stream/add' -d '{
    "key": "sensor",
    "value": {
        "camera_id": "uniqueSensorID1",
        "camera_name": "front_door",
        "camera_url": "file:///root/P.mp4",
        "change": "camera_add",
        "metadata": {
            "resolution": "1920 x1080",
            "codec": "h264",
            "framerate": 30
        }
    },
    "headers": {
        "source": "vst",
        "created_at": "2021-06-01T14:34:13.417Z"
    }
}'

result:

Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:9456...
* Connected to localhost (127.0.0.1) port 9456 (#0)
> POST /api/v1/stream/add HTTP/1.1
> Host: localhost:9456
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Length: 425
> Content-Type: application/x-www-form-urlencoded
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Content-Type: text/plain
< Content-Length: 67
< Connection: close
< 
{
	"reason" : "STREAM_ADD_SUCCESS",
	"status" : "HTTP/1.1 200 OK"
* Closing connection 0

and

gst-launch-1.0 nvmultiurisrcbin port=9456 ip-address=localhost   batched-push-timeout=33333 max-batch-size=10 drop-pipeline-eos=1   rtsp-reconnect-interval=1 live-source=1 width=1920 height=1080 !   nvmultistreamtiler ! fakesink async=false
Setting pipeline to PAUSED ...
Civetweb version: v1.16
Server running at port: 9456
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
Redistribute latency...
uri:/api/v1/stream/add
method:POST
Redistribute latency...
Failed to query video capabilities: Invalid argument
Redistribute latency...
Redistribute latency...0.0 %)
nvstreammux: Successfully handled EOS for source_id=0
WARNING: from element /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer: No Sources found at the input of muxer. Waiting for sources.
Additional debug info:
gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer
WARNING: from element /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer: No Sources found at the input of muxer. Waiting for sources.
Additional debug info:
gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer
WARNING: from element /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer: No Sources found at the input of muxer. Waiting for sources.
Additional debug info:
gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer



WARNING: from element /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer: No Sources found at the input of muxer. Waiting for sources.
Additional debug info:
gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer
WARNING: from element /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer: No Sources found at the input of muxer. Waiting for sources.
Additional debug info:
gstnvstreammux.cpp(2893): gst_nvstreammux_src_push_loop (): /GstPipeline:pipeline0/GstDsNvMultiUriBin:dsnvmultiuribin0/GstBin:dsnvmultiuribin0_creator/GstNvStreamMux:src_bin_muxer
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:05:42.977270046
Setting pipeline to NULL ...

I ran another test to verify whether nvmultiurisrcbin works properly. First, I created a simple pipeline using the following gst-launch-1.0 command that I had executed and tested before:

gst-launch-1.0 nvmultiurisrcbin port=9456 ip-address=localhost   batched-push-timeout=33333 max-batch-size=10 drop-pipeline-eos=1   rtsp-reconnect-interval=1 live-source=1 width=1920 height=1080 !   nvmultistreamtiler ! fakesink async=false

Then, I reproduced the same logic in C++:

#include <gst/gst.h>

#include <iostream>
#include <string>

class PipelineManager {
   private:
    GstElement *pipeline;
    GstElement *nvmultiurisrcbin;
    GstElement *nvmultistreamtiler;
    GstElement *fakesink;
    GMainLoop *main_loop;

   public:
    PipelineManager()
        : pipeline(nullptr),
          nvmultiurisrcbin(nullptr),
          nvmultistreamtiler(nullptr),
          fakesink(nullptr),
          main_loop(nullptr) {}

    ~PipelineManager() { cleanup(); }

    bool create_pipeline() {
        // Initialize GStreamer
        gst_init(nullptr, nullptr);

        // Create elements
        pipeline = gst_pipeline_new("multi-uri-pipeline");
        if (!pipeline) {
            std::cerr << "Failed to create pipeline" << std::endl;
            return false;
        }

        nvmultiurisrcbin =
            gst_element_factory_make("nvmultiurisrcbin", "nvmultiurisrcbin");
        if (!nvmultiurisrcbin) {
            std::cerr << "Failed to create nvmultiurisrcbin" << std::endl;
            return false;
        }

        nvmultistreamtiler = gst_element_factory_make("nvmultistreamtiler",
                                                      "nvmultistreamtiler");
        if (!nvmultistreamtiler) {
            std::cerr << "Failed to create nvmultistreamtiler" << std::endl;
            return false;
        }

        fakesink = gst_element_factory_make("fakesink", "fakesink");
        if (!fakesink) {
            std::cerr << "Failed to create fakesink" << std::endl;
            return false;
        }

        // Add elements to pipeline
        gst_bin_add_many(GST_BIN(pipeline), nvmultiurisrcbin,
                         nvmultistreamtiler, fakesink, nullptr);

        // Set properties for nvmultiurisrcbin
        g_object_set(G_OBJECT(nvmultiurisrcbin), "port", "9456", nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "ip-address", "localhost",
                     nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "batched-push-timeout", 33333,
                     nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "max-batch-size", 10, nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "drop-pipeline-eos", 1,
                     nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "rtsp-reconnect-interval", 1,
                     nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "live-source", 1, nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "width", 1920, nullptr);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "height", 1080, nullptr);

        // Try setting a config file that enables REST API
        // g_object_set(G_OBJECT(nvmultiurisrcbin), "config-file-path",
        // "/etc/deepstream/rest_api.conf", NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "uri-list",
                     "file:///root/Put.mp4", NULL);
        // g_object_set(G_OBJECT(nvmultiurisrcbin), "rtsp-reconnect-attempts",
        // 10, NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "drop-frame-interval", 5,
                     NULL);  // Decoder outputs every 5th frame; the rest are dropped.
        g_object_set(G_OBJECT(nvmultiurisrcbin), "file-loop", FALSE, NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "gpu-id", 0, NULL);
        g_object_set(
            G_OBJECT(nvmultiurisrcbin), "cudadec-memtype", 0,
            NULL);  // CUDA decoder memory type (0=device,
                    // 1=pinned, 2=unified).
        g_object_set(G_OBJECT(nvmultiurisrcbin), "latency", 200,
                     NULL);  // Network jitter buffer latency (milliseconds).
                             // Used for RTSP.
        g_object_set(G_OBJECT(nvmultiurisrcbin), "sensor-id-list",
                     "UniqueSensorId1", NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "sensor-name-list",
                     "UniqueSensorName1", NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "buffer-pool-size", 16, NULL);
        g_object_set(G_OBJECT(nvmultiurisrcbin), "disable-audio", TRUE, NULL);

        // Set properties for fakesink
        g_object_set(G_OBJECT(fakesink), "async", FALSE, nullptr);

        // Link elements
        if (!gst_element_link_many(nvmultiurisrcbin, nvmultistreamtiler,
                                   fakesink, nullptr)) {
            std::cerr << "Failed to link elements" << std::endl;
            return false;
        }

        return true;
    }

    bool start_pipeline() {
        // Set pipeline to PLAYING state
        GstStateChangeReturn ret =
            gst_element_set_state(pipeline, GST_STATE_PLAYING);
        if (ret == GST_STATE_CHANGE_FAILURE) {
            std::cerr << "Failed to start pipeline" << std::endl;
            return false;
        }

        // Create and run main loop
        main_loop = g_main_loop_new(nullptr, FALSE);

        // Add bus watch for messages
        GstBus *bus = gst_element_get_bus(pipeline);
        gst_bus_add_watch(bus, bus_callback, this);
        gst_object_unref(bus);

        std::cout << "Pipeline started successfully" << std::endl;
        std::cout << "REST server available at: http://localhost:9456"
                  << std::endl;

        g_main_loop_run(main_loop);

        return true;
    }

    void stop_pipeline() {
        if (pipeline) {
            gst_element_set_state(pipeline, GST_STATE_NULL);
        }
        if (main_loop) {
            g_main_loop_quit(main_loop);
        }
    }

    void cleanup() {
        if (pipeline) {
            gst_object_unref(pipeline);
            pipeline = nullptr;
        }
        if (main_loop) {
            g_main_loop_unref(main_loop);
            main_loop = nullptr;
        }
    }

   private:
    static gboolean bus_callback(GstBus *bus, GstMessage *msg, gpointer data) {
        (void)bus;
        PipelineManager *self = static_cast<PipelineManager *>(data);

        switch (GST_MESSAGE_TYPE(msg)) {
            case GST_MESSAGE_ERROR: {
                gchar *debug;
                GError *error;
                gst_message_parse_error(msg, &error, &debug);
                std::cerr << "Error: " << error->message << std::endl;
                if (debug) {
                    std::cerr << "Debug info: " << debug << std::endl;
                }
                g_error_free(error);
                g_free(debug);
                self->stop_pipeline();
                break;
            }
            case GST_MESSAGE_EOS:
                std::cout << "End of stream" << std::endl;
                self->stop_pipeline();
                break;
            case GST_MESSAGE_STATE_CHANGED: {
                if (GST_MESSAGE_SRC(msg) == GST_OBJECT(self->pipeline)) {
                    GstState old_state, new_state, pending;
                    gst_message_parse_state_changed(msg, &old_state, &new_state,
                                                    &pending);
                    std::cout << "Pipeline state changed from "
                              << gst_element_state_get_name(old_state) << " to "
                              << gst_element_state_get_name(new_state)
                              << std::endl;
                }
                break;
            }
            default:
                break;
        }
        return TRUE;
    }
};

int main(int argc, char *argv[]) {
    (void)argc;
    (void)argv;
    PipelineManager pipeline_manager;

    // Create pipeline
    if (!pipeline_manager.create_pipeline()) {
        std::cerr << "Failed to create pipeline" << std::endl;
        return -1;
    }

    // Start pipeline
    if (!pipeline_manager.start_pipeline()) {
        std::cerr << "Failed to start pipeline" << std::endl;
        return -1;
    }

    // Cleanup
    pipeline_manager.cleanup();

    return 0;
}

I tested adding another stream via curl:

curl -v -XPOST 'http://localhost:9456/api/v1/stream/add' -d '{
    "key": "sensor",
    "value": {
        "camera_id": "uniqueSensorID2",
        "camera_name": "front_door_1",
        "camera_url": "file:///root/P.mp4",
        "change": "camera_add",
        "metadata": {
            "resolution": "1920 x1080",
            "codec": "h264",
            "framerate": 30
        }
    },
    "headers": {
        "source": "vst",
        "created_at": "2021-06-01T14:34:13.417Z"
    }
}'

Program output:

Civetweb version: v1.16
Server running at port: 9456
Pipeline started successfully
REST server available at: http://localhost:9456
Pipeline state changed from NULL to READY
Pipeline state changed from READY to PAUSED
Failed to query video capabilities: Invalid argument
Pipeline state changed from PAUSED to PLAYING
uri:/api/v1/stream/add
method:POST
Failed to query video capabilities: Invalid argument
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=0

Curl response:

Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:9456...
* Connected to localhost (127.0.0.1) port 9456 (#0)
> POST /api/v1/stream/add HTTP/1.1
> Host: localhost:9456
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Length: 427
> Content-Type: application/x-www-form-urlencoded
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< Content-Type: text/plain
< Content-Length: 67
< Connection: close
< 
{
	"reason" : "STREAM_ADD_SUCCESS",
	"status" : "HTTP/1.1 200 OK"
* Closing connection 0

This confirms that I was able to successfully add another stream.

As an additional note: in my main project I also use Prometheus for performance monitoring. Could that have any side effects on the behavior of nvmultiurisrcbin or REST server?

Finally, here’s the relevant part of my main pipeline structure for context:


if (sink_manager->display_output < 3) {
    gst_bin_add_many(GST_BIN(pipeline),
                     nv_infer_server_manager->primary_detector,
                     nv_tracker_manager->tracker,
                     face_nv_infer_server_manager->face_detector,
                     // gstds_example_manager->custom_plugin,
                     tiler_manager->tiler, queue_array[2].queue,
                     nv_video_convert_manager->nvvidconv, nv_osd_manager->nvosd,
                     sink_manager->sink, NULL);

    /* we link the elements together
     * nvstreammux -> nvinfer -> nvtiler -> nvvidconv -> nvosd ->
     * video-renderer */
    if (!gst_element_link_many(  // streammux_manager->streammux,
            SourceBin::nvmultiurisrcbin, nv_video_convert_manager->nvvidconv,
            nv_infer_server_manager->primary_detector,
            nv_tracker_manager->tracker,
            face_nv_infer_server_manager->face_detector,
            //    gstds_example_manager->custom_plugin,
            tiler_manager->tiler, nv_osd_manager->nvosd, sink_manager->sink,
            NULL)) {
        g_printerr("Elements could not be linked.\n");
        return false;
    }
} else {
    gst_bin_add_many(
        GST_BIN(pipeline), nv_infer_server_manager->primary_detector,
        nv_tracker_manager->tracker,
        face_nv_infer_server_manager->face_detector,
        // gstds_example_manager->custom_plugin,
        tiler_manager->tiler, queue_array[2].queue,
        nv_video_convert_manager->nvvidconv, nv_osd_manager->nvosd,
        sink_manager->nvvidconv_postosd, sink_manager->caps,
        sink_manager->encoder, sink_manager->rtppay, sink_manager->sink, NULL);

    //            Link the elements together:
    //            file-source -> h264-parser -> nvh264-decoder ->
    //            nvinfer -> nvvidconv -> nvosd -> nvvidconv_postosd ->
    //            caps -> encoder -> rtppay -> udpsink
    if (!gst_element_link_many(  // streammux_manager->streammux,
            SourceBin::nvmultiurisrcbin, nv_video_convert_manager->nvvidconv,
            nv_infer_server_manager->primary_detector,
            nv_tracker_manager->tracker,
            face_nv_infer_server_manager->face_detector,
            // gstds_example_manager->custom_plugin,
            tiler_manager->tiler, nv_osd_manager->nvosd,
            sink_manager->nvvidconv_postosd, sink_manager->caps,
            sink_manager->encoder, sink_manager->rtppay, sink_manager->sink,
            NULL)) {
        g_printerr("Elements could not be linked.\n");
        return false;
    }
}

Why is using 9456 fine? Could you check whether this is a network issue? For example, is the port used by nvmultiurisrcbin taken by another process? A 404 means the URL can’t be found.

I understand that 404 means the URL can’t be found. However, I only use Prometheus, and I’m certain that none of the other pipeline elements are bound to a specific port or address. I’ve already tested with port 1111 and several other ports; they all work fine for a simple pipeline. Could you please check my detailed post for more context?

I temporarily disabled Prometheus, but that didn’t change anything; I still get the same error:

terminate called after throwing an instance of 'std::logic_error'
what():  basic_string::_M_construct null not valid
Aborted (core dumped)

This suggests that the DeepStream REST server is receiving a null or invalid property value from the element configuration. When it tries to construct a std::string from that value, it throws std::logic_error with the message “basic_string::_M_construct null not valid”. In other words, the REST server does receive the POST request, but while parsing the JSON it tries to build a std::string from a nullptr.

Could you share the detailed reproduction steps? Please use gst-launch-1.0 and curl instead of the custom code.

I already tested a simple pipeline using gst-launch-1.0 together with curl, and it worked fine. I’ve shared that pipeline above.

Could you clarify what additional “detailed reproduce steps” you expect? From my perspective, I’ve already included all the relevant details. Are they not sufficient?

Could you use gdb to get the crash stack? Also, do you mean that when you start your code as a server, one curl command works fine but the other crashes? If so, please check whether the JSON data is valid.

sure!

Let me clarify the situation. My goal is to add a new stream into the pipeline using nvmultiurisrcbin.

I was able to successfully add a stream using either the sample code I shared earlier or a simple gst-launch-1.0 pipeline combined with curl. Both approaches worked fine.

However, when I try the same with my more complex pipeline (also shared above), I run into the issue. I tested both curl commands you suggested, and in both cases I get the same error.

I hope this explanation helps clarify my problem. Below, I’ve included the log from running with gdb.

terminate called after throwing an instance of 'std::logic_error'
what():  basic_string::_M_construct null not valid

Thread 24 "civetweb-worker" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff677fe000 (LWP 45804)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140734929821696) at ./nptl/pthread_kill.c:44
44	./nptl/pthread_kill.c: No such file or directory.
(gdb) bt full
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140734929821696) at ./nptl/pthread_kill.c:44
tid = <optimized out>
ret = 0
pd = 0x7fff677fe000

old_mask = {__val = {0, 0, 0, 0, 2314898798224089137, 3762229861064646688, 7289076169237933879, 2968470289586871081, 0, 0, 0, 0, 2333274052185060197, 1835057137129123700, 9187138739565169161, 9114541635029792639}}
ret = <optimized out>
#1  __pthread_kill_internal (signo=6, threadid=140734929821696) at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=140734929821696, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3  0x00007ffff73ed476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
ret = <optimized out>
#4  0x00007ffff73d37f3 in __GI_abort () at ./stdlib/abort.c:79
save_stage = 1

act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction = 0x0}, sa_mask = {__val = {0 <repeats 16 times>}}, sa_flags = 0, sa_restorer = 0x7ffff75c6860 <stderr>}
sigs = {__val = {32, 0 <repeats 15 times>}}
#5  0x00007ffff7676b9e in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007ffff768220c in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7  0x00007ffff7682277 in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#8  0x00007ffff76824d8 in  () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#9  0x00007ffff7679344 in std::__throw_logic_error(char const*) () at /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x00007ffff409762e in RequestHandler::handle(CivetServer*, mg_connection*) () at ///opt/nvidia/deepstream/deepstream-7.1/lib/libnvds_rest_server.so
#11 0x00007ffff7bca0e3 in CivetServer::requestHandler(mg_connection*, void*) () at /usr/local/lib/libprometheus-cpp-pull.so.1.3
#12 0x00007ffff7bda7d9 in handle_request () at /usr/local/lib/libprometheus-cpp-pull.so.1.3
#13 0x00007ffff7bdbe52 in worker_thread () at /usr/local/lib/libprometheus-cpp-pull.so.1.3
#14 0x00007ffff743fac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
ret = <optimized out>
pd = <optimized out>

unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737488340464, 5161855359760523453, 140734929821696, 0, 140737341814736, 140737488340816, -5162155524108001091, -5161839176148393795}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#15 0x00007ffff74d1850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

I finally found the source of the problem. As I suspected earlier, disabling Prometheus in the project resolved the issue. It seems that Prometheus has a conflict with the DeepStream REST server.

I’m using this version of Prometheus:
And these Prometheus headers:

#include <prometheus/counter.h>
#include <prometheus/exposer.h>
#include <prometheus/gauge.h>
#include <prometheus/histogram.h>
#include <prometheus/registry.h>

Thanks for sharing! Is this still a DeepStream issue that needs support? Thanks!

I’ve opened new issues in both the NVIDIA forums (“Conflict between Prometheus-cpp and DeepStream REST Server”) and the Prometheus GitHub repository (“Conflict between prometheus-cpp and NVIDIA DeepStream REST server”) to highlight this problem for future development of both tools. Please take a look at them.

Noticing that the app crashed in CivetServer: please check whether the conflicting library is CivetServer. If so, since /opt/nvidia/deepstream/deepstream-7.1/sources/libs/nvds_rest_server is open source, you may rebuild nvds_rest_server against the CivetWeb version that Prometheus uses.

I checked the CivetServer libraries as you suggested.

  • In Prometheus:

    • /usr/local/include/prometheus/CivetServer.h

    • /usr/local/include/prometheus/civetweb.h

    • CIVETWEB_VERSION = 1.16

  • In DeepStream:

    • /opt/nvidia/deepstream/deepstream-7.1/sources/includes/CivetServer.h

    • /opt/nvidia/deepstream/deepstream-7.1/sources/includes/civetweb.h

    • CIVETWEB_VERSION = 1.16

So both DeepStream and Prometheus use the same CivetWeb version (1.16).

Given that, it doesn’t seem to be a version-mismatch issue.

Thanks for sharing! Could you find the conflicting library to narrow down this issue? Or could you work around it, for example by wrapping Prometheus in a standalone process?

At the moment I’m unable to work on it, but I plan to run some tests in the next few days.

  1. To get more information, could you rebuild all the code as a debug build and capture a new gdb crash stack? Thanks! The nvds_rest_server source is at /opt/nvidia/deepstream/deepstream-7.1/sources/libs/nvds_rest_server.
  2. Could you provide a simplified project that reproduces this issue? You may add the Prometheus code to the native sample deepstream-server, which includes nvmultiurisrcbin.