How do I feed multiple Intel RealSense RGB-D frames into a DeepStream pipeline?

I am attempting to detect objects in RGB frames captured simultaneously by four Intel RealSense D405 cameras and to compute the distance between each camera and the detected objects. I plan to use the DeepStream framework because it allows a single model to perform inference on multiple streams in parallel. However, I have not been able to determine how to initialize all four cameras and feed their streams into a single DeepStream pipeline. I explored the DeepStream 3D Depth Camera App example for loading and rendering RGB-D data, but I could not identify how to extend it to initialize multiple cameras or how to pass all RGB and depth frames into the pipeline for inference.
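For context, the arrangement I am trying to build looks roughly like this (the element names come from standard DeepStream samples; wiring four appsrc branches into nvstreammux is my assumption about the right approach, not something I have working):

```
appsrc (cam 0) -- nvvideoconvert --+
appsrc (cam 1) -- nvvideoconvert --+
appsrc (cam 2) -- nvvideoconvert --+-- nvstreammux -- nvinfer -- nvmultistreamtiler -- nvdsosd -- sink
appsrc (cam 3) -- nvvideoconvert --+
```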

Currently I am doing it the following way:

void capture_thread1(CamData *data) {
    rs2::pipeline pipe;
    rs2::config cfg;
    uint64_t timestamp = 0;

    try {
        cfg.enable_device(data->serial);
        // RGBA is preferred by DeepStream's OSD and Tiler
        cfg.enable_stream(RS2_STREAM_COLOR, data->width, data->height, RS2_FORMAT_RGBA8, data->fps);
        pipe.start(cfg);
        std::cout << "SUCCESS: Thread started for PCIe Camera: " << data->serial << std::endl;
    } catch (const rs2::error & e) {
        std::cerr << "RealSense Error [" << data->serial << "]: " << e.what() << std::endl;
        return;
    }

    while (true) {
        // Check if pipeline is in PLAYING state
        GstState state;
        gst_element_get_state(data->appsrc, &state, NULL, 0);
        if (state != GST_STATE_PLAYING) {
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            continue;
        }

        // Non-blocking frame polling
        rs2::frameset frames;
        if (pipe.poll_for_frames(&frames)) {
            rs2::video_frame color = frames.get_color_frame();
            if (!color) continue;  // frameset may arrive without a color frame

            const int width = color.get_width();
            const int height = color.get_height();
            const int stride = color.get_stride_in_bytes();  // rows may be padded
            const size_t size = (size_t)width * height * 4;

            // Allocate and map a tightly packed RGBA buffer
            GstBuffer *buffer = gst_buffer_new_allocate(NULL, size, NULL);
            GstMapInfo map;

            if (gst_buffer_map(buffer, &map, GST_MAP_WRITE)) {
                // Copy row by row in case stride != width * 4
                const uint8_t *src = (const uint8_t *)color.get_data();
                for (int y = 0; y < height; ++y)
                    memcpy(map.data + y * width * 4, src + y * stride, (size_t)width * 4);
                gst_buffer_unmap(buffer, &map);

                // Set timing metadata
                GST_BUFFER_PTS(buffer) = timestamp;
                GST_BUFFER_DTS(buffer) = timestamp;
                GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(1, GST_SECOND, data->fps);
                timestamp += GST_BUFFER_DURATION(buffer);

                // Push to AppSrc
                GstFlowReturn ret;
                g_signal_emit_by_name(data->appsrc, "push-buffer", buffer, &ret);
                gst_buffer_unref(buffer);

                if (ret != GST_FLOW_OK) {
                    std::cerr << "Stream stopped for serial: " << data->serial << std::endl;
                    break;
                }
            } else {
                gst_buffer_unref(buffer);  // avoid leaking the buffer if mapping fails
            }
        } else {
            // Prevent 100% CPU usage
            std::this_thread::sleep_for(std::chrono::microseconds(500));
        }
    }
}
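One thing I suspected was the plain `memcpy` of `width * height * 4` bytes: librealsense frames can have padded rows, so the real row pitch comes from `rs2::video_frame::get_stride_in_bytes()`. This is a minimal, hardware-free sketch of the row-by-row copy I mean (the helper name `copy_frame_rows` is my own, not a library function):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy an interleaved RGBA frame into a tightly packed destination,
// honoring the source stride. A single memcpy of width*height*4 bytes
// misaligns every row as soon as the stride carries padding.
void copy_frame_rows(const uint8_t *src, int src_stride,
                     uint8_t *dst, int width, int height) {
    const int row_bytes = width * 4;  // tightly packed RGBA destination row
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst + y * row_bytes, src + y * src_stride, row_bytes);
    }
}
```

In the capture thread, `src` would be `color.get_data()` and `src_stride` would be `color.get_stride_in_bytes()`.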

However, I am encountering several errors, such as a segmentation fault, or:

nvstreammux: Successfully handled EOS for source_id=1
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy.

Note: I am a complete beginner with GStreamer and DeepStream.

The Intel RealSense cameras are not standard 2D video cameras; the DS3D interfaces were introduced to handle such non-standard multimedia data formats. See DeepStream-3D Custom Apps and Libs Tutorials — DeepStream documentation.

The DeepStream 3D Depth Camera App is just one of the DS3D samples. That sample does not support batching multiple DS3D data sources; you would have to implement that yourself.