How to integrate multiple Intel RealSense RGB-D frames into a DeepStream pipeline?

I am attempting to detect objects in RGB frames captured simultaneously by four Intel RealSense D405 cameras and to compute the distance between each camera and the detected objects. I plan to use the DeepStream framework because it allows a single model to perform inference on multiple streams in parallel. However, I have not been able to determine how to initialize all four cameras and feed their streams into a single DeepStream pipeline. I explored the DeepStream 3D Depth Camera App example for loading and rendering RGB-D data, but I could not identify how to extend it to initialize multiple cameras or how to pass all RGB and depth frames into the pipeline for inference.

Currently I am doing it the following way:

void capture_thread1(CamData *data) {
    rs2::pipeline pipe;
    rs2::config cfg;
    uint64_t timestamp = 0;

    try {
        cfg.enable_device(data->serial);
        // RGBA is preferred by DeepStream's OSD and Tiler
        cfg.enable_stream(RS2_STREAM_COLOR, data->width, data->height, RS2_FORMAT_RGBA8, data->fps);
        pipe.start(cfg);
        std::cout << "SUCCESS: Thread started for PCIe Camera: " << data->serial << std::endl;
    } catch (const rs2::error & e) {
        std::cerr << "RealSense Error [" << data->serial << "]: " << e.what() << std::endl;
        return;
    }

    while (true) {
        // Check if pipeline is in PLAYING state
        GstState state;
        gst_element_get_state(data->appsrc, &state, NULL, 0);
        if (state != GST_STATE_PLAYING) {
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            continue;
        }

        // Non-blocking frame polling
        rs2::frameset frames;
        if (pipe.poll_for_frames(&frames)) {
            rs2::video_frame color = frames.get_color_frame();
            if (!color) continue;  // frameset may arrive without a color frame
            size_t size = color.get_width() * color.get_height() * 4;

            // Allocate and map buffer
            GstBuffer *buffer = gst_buffer_new_allocate(NULL, size, NULL);
            GstMapInfo map;
            
            if (gst_buffer_map(buffer, &map, GST_MAP_WRITE)) {
                memcpy(map.data, color.get_data(), size);
                gst_buffer_unmap(buffer, &map);

                // Set timing metadata
                GST_BUFFER_PTS(buffer) = timestamp;
                GST_BUFFER_DTS(buffer) = timestamp;
                GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(1, GST_SECOND, data->fps);
                timestamp += GST_BUFFER_DURATION(buffer);

                // Push to AppSrc
                GstFlowReturn ret;
                g_signal_emit_by_name(data->appsrc, "push-buffer", buffer, &ret);
                gst_buffer_unref(buffer);

                if (ret != GST_FLOW_OK) {
                    std::cerr << "Stream stopped for serial: " << data->serial << std::endl;
                    break;
                }
            }
        } else {
            // Prevent 100% CPU usage
            std::this_thread::sleep_for(std::chrono::microseconds(500));
        }
    }
}
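For context, the appsrc elements referenced in the thread above are created up front and linked into a single pipeline feeding nvstreammux. A sketch of how the pipeline description for gst_parse_launch could be generated for N cameras (the "src0"/"mux" element names and the pgie_config.txt path are placeholders, not from my actual setup):

```cpp
#include <sstream>
#include <string>

// Build a gst_parse_launch-style description with one appsrc branch per
// camera, all feeding a single nvstreammux. Element names ("src0".."srcN-1",
// "mux") and the nvinfer config path are illustrative placeholders.
std::string build_pipeline_desc(int num_cams, int width, int height, int fps) {
    std::ostringstream ss;
    ss << "nvstreammux name=mux batch-size=" << num_cams
       << " width=" << width << " height=" << height << " live-source=1 ";
    // Downstream branch: batched inference, tiling, on-screen display.
    ss << "mux. ! nvinfer config-file-path=pgie_config.txt"
       << " ! nvmultistreamtiler ! nvvideoconvert ! nvdsosd ! nv3dsink ";
    // One appsrc branch per camera, converted to NVMM memory for the muxer.
    for (int i = 0; i < num_cams; ++i) {
        ss << "appsrc name=src" << i << " is-live=true format=time"
           << " caps=video/x-raw,format=RGBA,width=" << width
           << ",height=" << height << ",framerate=" << fps << "/1"
           << " ! nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA"
           << " ! mux.sink_" << i << " ";
    }
    return ss.str();
}
```

After gst_parse_launch builds the pipeline, each appsrc can be fetched with gst_bin_get_by_name(GST_BIN(pipeline), "src0") and handed to its capture thread as data->appsrc.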

However, I am encountering several errors, such as a segmentation fault or:

nvstreammux: Successfully handled EOS for source_id=1
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy.
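One possible cause I am considering for the "Failed in mem copy" errors is a size or layout mismatch between the pushed buffer and the caps negotiated on the appsrc: RealSense frames can carry row padding, so a flat memcpy of width*height*4 bytes would be wrong whenever rs2::video_frame::get_stride_in_bytes() exceeds width*4. A stride-aware copy could look like this (a sketch, assuming the stride is taken from get_stride_in_bytes()):

```cpp
#include <cstdint>
#include <cstring>

// Copy a possibly stride-padded RGBA frame into a tightly packed buffer.
// src_stride is the number of bytes per source row (e.g. from
// rs2::video_frame::get_stride_in_bytes()); dst must hold width*height*4 bytes.
void copy_packed_rgba(const uint8_t* src, int width, int height,
                      int src_stride, uint8_t* dst) {
    const int row_bytes = width * 4;  // tightly packed RGBA row
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst + y * row_bytes, src + y * src_stride, row_bytes);
    }
}
```

In the capture thread this would replace the single memcpy into map.data, with src_stride taken from the color frame.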

Note: I am a complete beginner with GStreamer and DeepStream.

The Intel RealSense cameras are not standard 2D video cameras; the DS3D interfaces were introduced to handle non-standard multimedia data formats. See DeepStream-3D Custom Apps and Libs Tutorials — DeepStream documentation

The DeepStream 3D Depth Camera App is just one of the DS3D samples. The sample does not support batching multiple DS3D data streams; you can implement that yourself.

I followed the custom apps tutorial and reviewed the source code for the sample depth camera app. To extract color frames from the dataloader’s data map, I created a filter as shown below. However, I am encountering the following compilation error:

error: no matching function for call to ‘ds3d::GuardDataT<ds3d::abiDataMap>::GuardDataT(ds3d::abiRefT<ds3d::abi2DFrame>*)’
   29 |     GuardDataMap(Args&&... args) : _Base(std::forward<Args>(args)...)

I have not been able to resolve this issue. Below is the source code for the filter I implemented:

#include <ds3d/common/impl/impl_datafilter.h>
#include <ds3d/common/hpp/datafilter.hpp>
#include <ds3d/common/hpp/datamap.hpp>
#include <ds3d/common/hpp/frame.hpp>

using namespace ds3d;

class DatamapToColorFrameFilter : public impl::BaseImplDataFilter {
public:
    DatamapToColorFrameFilter() = default;

protected:
    ErrCode startImpl(const std::string&, const std::string&) override
    {
        // Input is a DS3D DataMap coming from a dataloader
        setInputCaps("DataMap");

        // Output is a Frame (ColorFrame)
        setOutputCaps("DS3D::ColorFrame");
        return ErrCode::kGood;
    }

    ErrCode processImpl(
        GuardDataMap datamap,
        OnGuardDataCBImpl outputDataCb,
        OnGuardDataCBImpl inputConsumedCb) override
    {
        static const std::string kColorKey = "DS3D::ColorFrame";

        Frame2DGuard colorFrame;

        // Safely extract the color frame from the datamap
        if (datamap.hasData(kColorKey)) {
            ErrCode c = datamap.getGuardData(kColorKey, colorFrame);
            if (isGood(c) && colorFrame && outputDataCb) {
                // Output the FRAME, not the datamap.
                // This is the call that triggers the "no matching function"
                // error above: the callback wraps its argument in a
                // GuardDataMap, but colorFrame.abiRef() yields an
                // abiRefT<abi2DFrame>*, not an abiDataMap ref.
                outputDataCb(ErrCode::kGood, colorFrame.abiRef());
            }
        }

        // Signal that the input datamap has been consumed
        if (inputConsumedCb) {
            inputConsumedCb(ErrCode::kGood, datamap.abiRef());
        }

        return ErrCode::kGood;
    }

    ErrCode flushImpl() override { return ErrCode::kGood; }
    ErrCode stopImpl() override { return ErrCode::kGood; }
};

// Factory function
DS3D_EXTERN_C_BEGIN
DS3D_EXPORT_API ds3d::abiRefDataFilter* createDatamapToColorFrameFilter()
{
    return NewAbiRef<abiDataFilter>(new DatamapToColorFrameFilter());
}
DS3D_EXTERN_C_END
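For completeness, a custom filter like this is wired into a DS3D pipeline through a YAML component config. A rough sketch following the pattern of the shipped ds3d sample configs (the key names and the library/function names below are from my reading of the samples and may differ across DeepStream versions):

```yaml
---
name: color_frame_filter                 # arbitrary component name
type: ds3d::datafilter
in_caps: ds3d/datamap                    # DS3D components exchange datamaps
out_caps: ds3d/datamap
custom_lib_path: libdatamap_to_color_filter.so     # the compiled filter above
custom_create_function: createDatamapToColorFrameFilter
config_body: {}
```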

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hi, @sijan.karki:
What kind of project are you working on? Is this request business-related?

I am using a Jetson AGX Orin 64 GB with DeepStream 7.1.0, JetPack 6.2.1+b3b, and TensorRT 10.3.0.30. We are developing a research platform with 16 RealSense D405 RGB-D cameras that we have to process synchronously.

We are developing an agricultural research platform at a university.