Double frame capturing

Hi, I am using a custom camera attached to a Jetson board. I read the camera using the Argus library.
Since the frame rate of the camera is high, the frames are split into two input nodes on the Jetson device, so the device sees inputs from two different camera nodes even though there is only one physical camera.

To capture the frames, I use code like this on the consumer side:

for (int32_t frm = -1; frm < DEFAULT_FRAME_COUNT;  )
{
    for (uint8_t c_i = 0; c_i < 2; c_i++)
    {
        frm++;
        // Acquire the new Bayer frm from the Argus EGLStream and get the CUDA resource.
        CUgraphicsResource bayerResource = 0;
        CUresult cuResult = cuEGLStreamConsumerAcquireFrame(&m_cudaBayerStreamConnection[c_i], &bayerResource, NULL, -1);
        if (cuResult != CUDA_SUCCESS)
        {
            ORIGINATE_ERROR("Unable to acquire an image frm from the EGLStream (CUresult %s)", ArgusSamples::getCudaErrorString(cuResult));
        }
      
        CUeglFrame bayerEglFrame;
        memset(&bayerEglFrame, 0, sizeof(bayerEglFrame));
        cuResult = cuGraphicsResourceGetMappedEglFrame(&bayerEglFrame, bayerResource, 0, 0);
        if (cuResult != CUDA_SUCCESS)
        {
            ORIGINATE_ERROR("Unable to get the CUDA EGL frame (CUresult %s)",
                ArgusSamples::getCudaErrorString(cuResult));
        }

        ///// Use the image data for processing:
        // ---> reinterpret_cast<const uint16_t*>((const void*)bayerEglFrame.frame.pPitch[0]);
        // ....

        
        // Releasing the frame:
        cuResult = cuEGLStreamConsumerReleaseFrame(&m_cudaBayerStreamConnection[c_i], bayerResource, NULL);
        if (cuResult != CUDA_SUCCESS)
        {
            ORIGINATE_ERROR("Unable to release frame to EGLStream (CUresult %s).", ArgusSamples::getCudaErrorString(cuResult));
        }
    }
}

It reads the frames fine, but there is a problem: each time we have to send two consecutive triggers to receive the frames. It seems that the first frame always waits for the second frame before being released into the bayerResource. We have an FPGA in the middle that sets the frame rate and stamps each frame with a frame number, and the frame numbers we receive look like this:

… 101, 100, 103, 102, 105, 104, …

which shows the even frames getting stuck behind the odd frames. On the application side I make no distinction between even and odd frames or triggers, yet I always receive two frames together, out of order.

For example, I send triggers every 20 ms, but on the consumer side I receive the frames with these arrival times:

40 ms, 42 ms, 80 ms, 83 ms, 120 ms, 122 ms, …
with frame numbers … 101, 100, 103, 102, 105, 104, …
In other words, the frames arrive as pairs roughly every 40 ms (every other trigger period) instead of one frame per 20 ms trigger.

It seems there is nothing I can do on the application side, so I am wondering how to deal with this situation.
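
(For reference: the acquire call in the loop above uses an infinite timeout (-1) and strictly alternates between the two stream connections, so if one node's frame has not arrived yet, the loop blocks there even when the other node already has a frame waiting. Below is a minimal sketch of a polling variant, assuming the same m_cudaBayerStreamConnection members; the timeout value is illustrative, and its units should be checked against the cuEGLStreamConsumerAcquireFrame documentation.)

const unsigned int ACQUIRE_TIMEOUT = 16000; // illustrative; see CUDA docs for units
int32_t frm = 0;
while (frm < DEFAULT_FRAME_COUNT)
{
    for (uint8_t c_i = 0; c_i < 2; c_i++)
    {
        CUgraphicsResource bayerResource = 0;
        // Finite timeout instead of -1 (infinite): if this node has no frame
        // pending, skip it and try the other node instead of blocking here.
        CUresult cuResult = cuEGLStreamConsumerAcquireFrame(
            &m_cudaBayerStreamConnection[c_i], &bayerResource, NULL, ACQUIRE_TIMEOUT);
        if (cuResult != CUDA_SUCCESS)
            continue; // no frame on this node yet (or a real error worth logging)

        CUeglFrame bayerEglFrame;
        memset(&bayerEglFrame, 0, sizeof(bayerEglFrame));
        if (cuGraphicsResourceGetMappedEglFrame(&bayerEglFrame, bayerResource, 0, 0) == CUDA_SUCCESS)
        {
            // ... process the frame, re-ordering by the FPGA frame number ...
            frm++;
        }

        cuEGLStreamConsumerReleaseFrame(&m_cudaBayerStreamConnection[c_i], bayerResource, NULL);
    }
}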

Thanks.

hello Hasa,

just for confirmation,
is there a sync mechanism to ensure those two frames come together?
if not, receiving frames in a messy order is expected.

besides,
may I know the original camera resolution and its frame-rate, for reference?

@JerryChang

  • The resolution is 4504*4504 at 50 FPS.
  • No, there is no sync mechanism; it’s a common loop to get all frames (from both nodes). However, the messy order is not a problem for me, since I can re-order them with the FPGA frame number. The problem is that I don’t get individual frames; I only get the two of them together.

hello Hasa,

it looks like there’s a bug in the capture request; you should treat them as two individual camera nodes.

may I see the flow diagram? you may debug into this mechanism.

@JerryChang
Well, the basics are the same as the Jetson MMAPI samples. For the dual nodes, I made two sets of streams, stream settings, captureSessions, and so on.

Here is the request part:

g_cameraProvider = UniqueObj<CameraProvider>(CameraProvider::create());
iCameraProvider = interface_cast<ICameraProvider>(g_cameraProvider);

cameraDevices.clear();
iCameraProvider->getCameraDevices(&cameraDevices);

//// Initialize each node independently. With idx as 0 and 1, this defines captureSession[idx], iEGLOutputStream[idx], outputStream[idx], iEGLStreamSettings[idx]:
MakeRequestObj(0, cameraDevices[0]); 
MakeRequestObj(1, cameraDevices[1]);

assert(iEGLOutputStream[0] != nullptr && "iEGLOutputStream[0] is null");
assert(iEGLOutputStream[1] != nullptr && "iEGLOutputStream[1] is null");

CamBayerConsumer bayerConsumer(this, Argus::Size2D<uint32_t>(cam_current_W, cam_current_H));
PROPAGATE_ERROR(bayerConsumer.initialize());
PROPAGATE_ERROR(bayerConsumer.waitRunning());

for (uint8_t c_i = 0; c_i < 2; c_i++)
{
    cout<< "Requesting loop starting..."<< c_i <<"\n";
    if (iCaptureSession[c_i]->repeat(request[c_i].get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request");
}

while(!finish_capturing)
{   
    std::this_thread::sleep_for(std::chrono::seconds(1));
}

The camera settings have also been set separately for both nodes, e.g.

iEGLStreamSettings[idx]->setPixelFormat(PIXEL_FMT_RAW16);
iEGLStreamSettings[idx]->setResolution(STREAM_SIZE);
//// idx = 0, 1
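
For completeness, a slightly fuller per-node settings sketch; this assumes the g_display EGL display holder used by the Argus samples, and the setMode/setFifoLength calls on IEGLOutputStreamSettings (whether the stream mode matters for the pairing issue is an open question):

for (uint8_t idx = 0; idx < 2; idx++)
{
    iEGLStreamSettings[idx]->setPixelFormat(PIXEL_FMT_RAW16);
    iEGLStreamSettings[idx]->setResolution(STREAM_SIZE);
    iEGLStreamSettings[idx]->setEGLDisplay(g_display.get());
    // Mailbox mode keeps only the newest frame; FIFO mode queues frames in order.
    iEGLStreamSettings[idx]->setMode(Argus::EGL_STREAM_MODE_FIFO);
    iEGLStreamSettings[idx]->setFifoLength(4);
}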

hello Hasa,

please refer to 13_argus_multi_camera;
you may give it a try: CaptureHolder::initialize() creates and initializes the libargus resources.

@JerryChang

Initially I was using sample 13, but I needed the raw Bayer image, so I switched to the cudaBayerDemosaic sample. This problem started with the cudaBayerDemosaic style. Also, the cudaBayerDemosaic sample code only works with one camera, so I kind of mixed the two samples.

hello Hasa,

just for confirmation..
you were able to capture frames from two different camera nodes without issue with sample 13, and this failure, where the first frame always waits for the second, was introduced with the cudaBayerDemosaic style, right?

@JerryChang
Yes, exactly.

hello Hasa,

you should treat them as two camera nodes.
please try to revise the code to have two CameraProvider objects, which will create two Argus capture pipelines, and create a capture session with each camera device individually.


@JerryChang

Hi, finally I got some time to test it. I made two separate providers as below:

/* Initialize the Argus camera provider */
g_cameraProvider_1 = UniqueObj<CameraProvider>(CameraProvider::create());
iCameraProvider_1 = interface_cast<ICameraProvider>(g_cameraProvider_1);
if (!iCameraProvider_1)   std::cout<< "Failed to get ICameraProvider interface\n";

g_cameraProvider_2 = UniqueObj<CameraProvider>(CameraProvider::create());
iCameraProvider_2 = interface_cast<ICameraProvider>(g_cameraProvider_2);
if (!iCameraProvider_2)   std::cout<< "Failed to get ICameraProvider interface\n";

if (logs_on)    std::cout<< "Argus Version: "<< iCameraProvider_1->getVersion().c_str() <<"\n";

cameraDevices.clear();
iCameraProvider_1->getCameraDevices(&cameraDevices);
//// iCameraProvider_2->getCameraDevices(&cameraDevices);
if (cameraDevices.size() == 0 && logs_on)  std::cout << "No cameras available.\n";

MakeRequestObj(0, cameraDevices[0]);
MakeRequestObj(1, cameraDevices[1]);

but on creating the second provider, I get this error:

(NvCameraUtils) Error InvalidState: Mutex already initialized (in Mutex.cpp, function initialize(), line 41)
(Argus) Error InvalidState:  (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function open(), line 54)
(Argus) Error InvalidState:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 262)
(Argus) Error InvalidState: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 106)
wwArgus Version: w0.98.3 (multi-process)

I checked it with ChatGPT and it says:
“Argus does not support multiple CameraProvider instances in the same process. The NVIDIA Argus API is designed as a singleton-style service interface, meaning only one CameraProvider can be created per process.”

Other than the CameraProvider, everything else is separated for the two nodes.

hello Hasa,

here’s a gst pipeline to enable dual camera streams (i.e. sink_A/sink_B);
it’s demonstrated with preview disabled and shows the frame-rate only,
for instance,
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=60/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_A video-sink=fakesink sync=0 nvarguscamerasrc sensor-id=1 sensor-mode=1 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=60/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_B video-sink=fakesink sync=0 -v

as you can see.. there are two nvarguscamerasrc elements in the single gst pipeline (in a single instance).

Thank you,

How can I translate that into the C++ code I am writing?
A double CameraProvider throws the error, and every other class already has two objects in my code, so what do I need to fix?

hello Hasa,

I assume you could execute the above sample gst pipeline to fetch the frames, right?
since your frames are split into two input nodes, those are two different camera nodes from one physical camera.

Ah sorry, nothing happened with that command:

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=1 ! 'video/x-raw(memory:NVMM),width=4480, height=4504, framerate=60/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_A video-sink=fakesink sync=0 nvarguscamerasrc sensor-id=1 sensor-mode=1 ! 'video/x-raw(memory:NVMM),width=4480, height=4504, framerate=60/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_B video-sink=fakesink sync=0 -v
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B/GstFakeSink:fakesink1: sync = false
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A/GstFakeSink:fakesink0: sync = false
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B.GstGhostPad:sink.GstProxyPad:proxypad1: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B/GstFakeSink:fakesink1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A.GstGhostPad:sink.GstProxyPad:proxypad0: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A.GstGhostPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/Gstnvvconv:nvvconv0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)4480, height=(int)4504, format=(string)NV12, framerate=(fraction)60/1
GST_ARGUS: Creating output stream
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 4480 x 4504 FR = 50.000000 fps Duration = 20000000 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

GST_ARGUS: 4480 x 4504 FR = 43.000000 fps Duration = 23255814 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

GST_ARGUS: 2240 x 2252 FR = 180.000018 fps Duration = 5555555 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

GST_ARGUS: 4480 x 4504 FR = 35.000001 fps Duration = 28571428 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

ARGUS_ERROR: Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute: 903 Frame Rate specified is greater than supported
GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 1 
   Output Stream W = 4480 H = 4504 
   seconds to Run    = 0 
   Frame Rate = 43.000000 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 4480 x 4504 FR = 50.000000 fps Duration = 20000000 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

GST_ARGUS: 4480 x 4504 FR = 43.000000 fps Duration = 23255814 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

CONSUMER: Producer has connected; continuing.
GST_ARGUS: 2240 x 2252 FR = 180.000018 fps Duration = 5555555 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

ARGUS_ERROR: Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute: 1139 InvalidState.
GST_ARGUS: Cleaning up
GST_ARGUS: 4480 x 4504 FR = 35.000001 fps Duration = 28571428 ; Analog Gain range min 1.000000, max 271.000000; Exposure Range min 100000, max 500000000;

ARGUS_ERROR: Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute: 903 Frame Rate specified is greater than supported
GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 1 
   Output Stream W = 4480 H = 4504 
   seconds to Run    = 0 
   Frame Rate = 43.000000 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
ARGUS_ERROR: Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute: 1139 InvalidState.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: CANCELLED
Additional debug info:
Argus Error Status
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:694 NvBufSurfaceFromFd Failed.Execution ended after 0:00:00.370207778
Setting pipeline to NULL ...

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:247 (propagating)
nvbuf_utils: dmabuf_fd -1 mapped entry NOT found
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadExecute:694 NvBufSurfaceFromFd Failed.
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:247 (propagating)
Freeing pipeline ...

(I changed the image size to the actual size, 4480*4504, too, and the same: nothing happened.)

hello Hasa,

you may also revise the framerate property according to the above error logs.

Thank you, now the command seems to be working:

GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B/GstFakeSink:fakesink1: sync = false
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A/GstFakeSink:fakesink0: sync = false
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A: last-message = rendered: 11, dropped: 0, current: 21.49, average: 21.49
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B: last-message = rendered: 13, dropped: 0, current: 24.40, average: 24.40
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A: last-message = rendered: 22, dropped: 0, current: 21.55, average: 21.52
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_B: last-message = rendered: 24, dropped: 0, current: 21.62, average: 23.05
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_A: last-message = rendered: 33, dropped: 0, current: 21.54, average: 21.53

Now, how should I use this in my code? Is there a specific C++ call to run the nodes asynchronously?
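
(As far as I know there is no single Argus call for this; the usual approach is one consumer thread per node, so that neither node blocks the other. A hypothetical sketch, where consumeNode() is assumed to wrap the per-node acquire/process/release loop from earlier:)

#include <thread>

// Assumed wrapper around the earlier acquire/process/release loop,
// restricted to a single stream connection c_i.
void consumeNode(uint8_t c_i);

void runConsumers()
{
    // One thread per camera node, so a pending frame on one node
    // never blocks acquisition on the other.
    std::thread t0(consumeNode, 0);
    std::thread t1(consumeNode, 1);
    t0.join();
    t1.join();
}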


hello Hasa,

could you please debug into this.
for instance, is cameraDevices.size() returning 2 in your code snippets?


besides,
I think I made a wrong statement earlier; it should be two CaptureHolder objects instead of two CameraProvider objects.
you should use captureHolders[i].get()->initialize(cameraDevices[i], iCameraProvider, EGLRenderer);
and then create a capture session individually:
ICaptureSession *iCaptureSession = interface_cast<ICaptureSession>(captureHolders[j].get()->getSession());
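
Fleshed out, the pattern described above looks roughly like this; the struct and member names are illustrative, modeled loosely on the CaptureHolder in the multi-camera sample rather than copied from it (one CameraProvider per process, one session/stream/request bundle per node):

struct NodeCapture
{
    UniqueObj<CaptureSession> session;
    UniqueObj<OutputStream>   stream;
    UniqueObj<Request>        request;

    bool initialize(CameraDevice *device, ICameraProvider *iCameraProvider)
    {
        // One capture session per camera device, all from the same provider.
        session.reset(iCameraProvider->createCaptureSession(device));
        ICaptureSession *iSession = interface_cast<ICaptureSession>(session);
        if (!iSession)
            return false;

        // Per-node EGL output stream (configure RAW16/resolution on settings).
        UniqueObj<OutputStreamSettings> settings(
            iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
        stream.reset(iSession->createOutputStream(settings.get()));

        // Per-node capture request bound to this node's stream.
        request.reset(iSession->createRequest());
        IRequest *iRequest = interface_cast<IRequest>(request);
        if (!iRequest)
            return false;
        iRequest->enableOutputStream(stream.get());
        return true;
    }
};

// Usage: one provider, two NodeCapture objects.
// NodeCapture nodes[2];
// nodes[0].initialize(cameraDevices[0], iCameraProvider);
// nodes[1].initialize(cameraDevices[1], iCameraProvider);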

Yes, it reports that 2 cameras exist.

I don’t have any CaptureHolder class; my sample is based on argus/samples/cudaBayerDemosaic, which has
captureSession and EGLStream instead. I already have two objects for each of those classes:

for (uint8_t idx = 0; idx < 2; idx++)   /// Dual nodes!
{
    captureSession[idx].reset(iCameraProvider->createCaptureSession(cameraDevices[idx]));
    iCaptureSession[idx] = interface_cast<ICaptureSession>(captureSession[idx]);

    if (!iCaptureSession[idx])  printf("Failed to create CaptureSession\n");

    // Create the RAW16 output EGLStream using the sensor mode resolution.
    UniqueObj<OutputStreamSettings> streamSettings(iCaptureSession[idx]->createOutputStreamSettings(STREAM_TYPE_EGL));
    iEGLStreamSettings[idx] = interface_cast<IEGLOutputStreamSettings>(streamSettings);
    if (!iEGLStreamSettings[idx])  printf("Failed to create OutputStreamSettings\n");
    //// (Stream settings such as pixel format are configured here, then the stream itself is created:)
    outputStream[idx].reset(iCaptureSession[idx]->createOutputStream(streamSettings.get()));

    // Create capture request and enable output stream.
    request[idx].reset(iCaptureSession[idx]->createRequest());

    IRequest *iRequest = interface_cast<IRequest>(request[idx]);
    if (!iRequest)  printf("Failed to create Request\n");

    iRequest->enableOutputStream(outputStream[idx].get());

    // Set the sensor mode in the request.
    ISourceSettings *iSourceSettings = interface_cast<ISourceSettings>(request[idx]);

    /// .... Define the settings here, including resolution, exposure,... ///
    
    if (iCaptureSession[idx]->repeat(request[idx].get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request");

hello Hasa,

it looks like you’ll have the two capture requests and output streams created separately?
do you still see the same failure with that implementation?