Adding a 5th camera causes error

Hi,

We are developing a system which utilises multiple cameras with a Jetson Orin NX 16GB on a custom carrier board. We are using JetPack 6.0 (L4T 36.3).

We have a working system with 4 cameras aggregated into a single MIPI port over GMSL, and one other camera going into a separate MIPI port, also over GMSL.

When we run the 5 cameras live using GStreamer, we have no issues and can see all the live views. However, when using GStreamer to record the 5 data sets, we get the error below for 1 camera. The other 4 cameras collect data as we would expect.


nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:732 NvBufSurfaceFromFd Failed.
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:243 (propagating)

The error always happens and only affects one camera, but it randomly hits whichever camera is the last to be opened by the system (which attempts to open all 5 in parallel, so the affected camera changes from test to test).

I’ve attached the journalctl log.

I’ve increased the clock speeds to maximum, and have tried looking through the nvargus debug messages but can’t find anything that is out of the ordinary. Thoughts on how to debug or thoughts on potential errors appreciated.

journalctl_5cams.txt (20.7 KB)

hello loek.janssen,

may I also confirm which CSI bricks you've used to run those cameras.
according to the journalctl log, it looks like you've got Sensor could not be opened failures.
re-cap the error below…
SCF: Error BadParameter: Sensor could not be opened. (in src/services/capture/CaptureServiceDeviceSensor.cpp, function getSourceFromGuid(), line 725)

you may double check your DT settings; please refer to the developer guide, Module Properties.
please refer to the position property settings for a six-camera system, and use the top five properties accordingly.

Hi JerryChang,

We are using CSI bricks 1 and 3 (via serial_b and serial_d): 4 cameras on brick 1, and 1 camera on brick 3.

I had a look into that error, but interestingly it only seems to appear when we run 5 cameras (which fails), not 4 (which works).

I've checked the position property and made sure each one was different, but unfortunately it made no difference.

This is the pipeline string we use in our code to open GStreamer:

std::string ArgusPublisher::gstreamerPipeline() {
    return "nvarguscamerasrc sensor-id=" + std::to_string(sensor_id) + " ! "
           "video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" + std::to_string(capture_height) + ", framerate=(fraction)" + std::to_string(frame_rate) + "/1 ! "
           "nvvidconv flip-method=" + std::to_string(flip_method) + " ! "
           "video/x-raw, format=GRAY8 ! "
           "appsink";
}
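
One variant I may also try is making the appsink behaviour explicit. drop, max-buffers and sync are standard appsink/basesink properties that stop frames from queueing up; whether they have any bearing on this error is an assumption on my part:

std::string ArgusPublisher::gstreamerPipeline() {
    // identical pipeline, but with explicit appsink properties so frames
    // are dropped rather than queued if the consumer falls behind
    return "nvarguscamerasrc sensor-id=" + std::to_string(sensor_id) + " ! "
           "video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" + std::to_string(capture_height) + ", framerate=(fraction)" + std::to_string(frame_rate) + "/1 ! "
           "nvvidconv flip-method=" + std::to_string(flip_method) + " ! "
           "video/x-raw, format=GRAY8 ! "
           "appsink drop=true max-buffers=1 sync=false";
}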

What is most odd to us is that it works for 4 cameras (in any ordering), but not for 5.

hello loek.janssen,

just an FYI, both serial_b and serial_d support up to a 2-lane configuration.


let's narrow down the issue by using fakesink instead of appsink.
here's a sample command line that disables the preview and shows the frame rate only.
you may revise the <ID> property accordingly to launch all of your cameras for testing,
for instance,
$ gst-launch-1.0 nvarguscamerasrc sensor-id=<ID> sensor-mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_<ID> video-sink=fakesink sync=0 -v

Hi JerryChang,

Yep, both are connected up in a 2-lane configuration.

So if I use fakesink from the terminal (5 terminals open at once), the cameras work. Live view from the terminal also works.

When I run appsink inside a C++ program, however, I get the error.

I've attached a sample bit of code I whipped up that works for 4 cameras but fails for 5. The error seems to revolve around the line cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);

Once that line runs, the "mapped entry NOT found" error (from my first post) appears.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>
#include <vector>
#include <filesystem>
#include <chrono>
#include <iomanip>
#include <sstream>

namespace fs = std::filesystem;

void captureAndSaveFrames(int camera_id, const std::string& base_dir) {
    // Create a directory for this camera
    std::string camera_dir = base_dir + "/camera_" + std::to_string(camera_id);
    fs::create_directories(camera_dir);

    // GStreamer pipeline string
    std::string pipeline = "nvarguscamerasrc sensor-id=" + std::to_string(camera_id) +
                           " ! video/x-raw(memory:NVMM),width=(int)1236, height=(int)1032, framerate=(fraction)20/1 ! nvvidconv flip-method=0 ! video/x-raw, format=GRAY8 ! appsink";

    std::cout << "Using GStreamer pipeline for camera " << camera_id << ": " << pipeline << std::endl;

    // Open the camera using the GStreamer pipeline
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "Error: Unable to open the camera with GStreamer pipeline for camera " << camera_id << std::endl;
        return;
    }

    cv::Mat frame;
    int frameCount = 0;

    while (true) {
        // Capture frame-by-frame
        cap >> frame;
        if (frame.empty()) {
            std::cerr << "Error: Received an empty frame from camera " << camera_id << std::endl;
            break;
        }

        // Save the frame as an image file
        std::string filename = camera_dir + "/frame_" + std::to_string(frameCount) + ".jpg";
        cv::imwrite(filename, frame);

        // Increment the frame count
        frameCount++;

        // For demonstration, let's capture a limited number of frames, say 100
        if (frameCount >= 100) {
            break;
        }
    }

    // Release the camera
    cap.release();
}

int main() {
    // Number of cameras
    const int num_cameras = 5;

    // Create a base directory with a timestamp
    auto now = std::chrono::system_clock::now();
    auto in_time_t = std::chrono::system_clock::to_time_t(now);
    std::stringstream ss;
    ss << std::put_time(std::localtime(&in_time_t), "%Y-%m-%d_%H-%M-%S");
    std::string base_dir = "measurement_" + ss.str();
    fs::create_directories(base_dir);

    // Create a vector of threads
    std::vector<std::thread> threads;

    // Start a thread for each camera
    for (int i = 0; i < num_cameras; ++i) {
        threads.emplace_back(captureAndSaveFrames, i, std::ref(base_dir));
    }

    // Wait for all threads to finish
    for (auto& t : threads) {
        t.join();
    }

    return 0;
}
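
Since the error always hits whichever camera is opened last, one experiment I'm considering is serializing just the open calls across the threads. A minimal sketch of that change (the race is only a hypothesis at this point, untested):

#include <mutex>

// in captureAndSaveFrames(), replace the cv::VideoCapture line with:
static std::mutex open_mutex;                 // shared by all capture threads
cv::VideoCapture cap;
{
    // hold the lock only while the pipeline is being opened, so the
    // capture loops themselves still run fully in parallel
    std::lock_guard<std::mutex> lock(open_mutex);
    cap.open(pipeline, cv::CAP_GSTREAMER);
}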

hello loek.janssen,

thanks for narrowing down the issue; it looks like a bug in the app implementation.


could you please also test with the Jetson Multimedia API (MMAPI)?
you may download the package with… $ sudo apt install nvidia-l4t-jetson-multimedia-api
for instance, here's a sample application, 13_argus_multi_camera, to enable all the cameras together.
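
you may also run a quick check that libargus enumerates all five devices. here's a minimal sketch against the MMAPI headers (assumes the package above is installed; it only lists the devices, it's not the full sample):

#include <Argus/Argus.h>
#include <cstdio>
#include <vector>

int main()
{
    // connect to the nvargus-daemon and create the camera provider
    Argus::UniqueObj<Argus::CameraProvider> provider(Argus::CameraProvider::create());
    Argus::ICameraProvider *iProvider =
        Argus::interface_cast<Argus::ICameraProvider>(provider);
    if (!iProvider)
    {
        fprintf(stderr, "failed to create CameraProvider\n");
        return 1;
    }

    // list every camera device visible to libargus
    std::vector<Argus::CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);
    printf("libargus reports %zu camera device(s)\n", devices.size());
    return 0;
}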

Thanks JerryChang,

That sample application seems to work. I can see all 5 cameras displayed.

Does this mean GStreamer is doing something strange? I could try to debug it in the kernel.

It seems to be this section:

  if (src->frameInfo->fd)
  {
    int ret = 0;
    NvBufSurface *nvbuf_surf = 0;
    ret = NvBufSurfaceFromFd(src->frameInfo->fd, (void**)(&nvbuf_surf));
    if (ret != 0 || nvbuf_surf == NULL)
      ORIGINATE_ERROR("NvBufSurfaceFromFd Failed.");
    else {
      ret = NvBufSurfaceDestroy(nvbuf_surf);
      if (ret != 0)
        ORIGINATE_ERROR("NvBufSurfaceDestroy Failed.");
    }
  }

Which is subtly different from the one in argus_multi_camera:

                if (-1 == NvBufSurfaceFromFd(m_dmabufs[i], (void**)(&batch_surf[i])))
                {
                    delete [] batch_surf;
                    ORIGINATE_ERROR("Cannot get NvBufSurface from fd");
                }

hello loek.janssen,

since you've verified that streaming works, it should be an issue in your app implementation. please dig into the code for debugging.
