GStreamer jpeg timestamp or metadata (continued)

Hi,
JetPack 4.6 is r32.6.1. Please download the source code package from this page:
https://developer.nvidia.com/embedded/linux-tegra-r3261

L4T Driver Package (BSP) Sources

You wrote:

Frame capture is multithreaded in the low-level code. The nvarguscamerasrc plugin allocates a single NvBuffer. As a further enhancement, you can create multiple NvBuffers for buffering frame data, so that acquireFrame() can be called immediately.

So where can I set more than one buffer in the Argus multiSession or syncSensor example?

Not in the gst src plugin.

And I asked you how many buffers the gst pipeline uses for 2 sensors.
Is frame acquisition sync or async?
Please answer me.

Hi,
In GStreamer, the two cameras are launched individually and are not synchronized. Each camera is one nvarguscamerasrc instance, so one NvBuffer is allocated/used per camera.

The sample demonstrates software sync:

/usr/src/jetson_multimedia_api/argus/samples/syncSensor

If your use case is dual-camera synchronization, you may base your development on that sample. Hardware synchronization is not supported.

My camera setup is HW synchronized.

We are talking about frame-acquisition synchronization in the Argus code, not frame timestamp/shot sync.

But why does multiSession use one NvBuffer? It is not async code.

It is sync code. I am waiting for acquireFrame() on every sensor.

Here is the problem description:

This code acquires sensor by sensor, not in parallel.
Why, if the capture sessions are separate?

#pragma omp parallel for schedule(static) num_threads(6)
for( size_t cameraId = 0; cameraId < m_camerasNum; ++cameraId )
{
    // Submit the capture request for this camera; capture() returns 0 on failure.
    const auto result{ m_iCaptureSessions.at( cameraId )->capture( m_requests.at( cameraId ).get() ) };
    if( result == 0 )
        REPORT_ERROR( "Capture failed!" );

    // Block until a frame is available on this camera's EGLStream.
    Argus::UniqueObj< EGLStream::Frame > frame{ m_iFrameConsumers.at( cameraId )->acquireFrame() };
    EGLStream::IFrame * iFrame{ Argus::interface_cast< EGLStream::IFrame >( frame ) };
    if( not iFrame )
    {
        std::cout << "Failed to acquire frame! " << cameraId << std::endl;
    }
    else
    {
        // std::cout << cameraId << " " << iFrame->getNumber() << " " << iFrame->getTime() << std::endl;

        // record
        EGLStream::IImageJPEG * iJPEG{ Argus::interface_cast< EGLStream::IImageJPEG >( iFrame->getImage() ) };
        if( iJPEG )
        {
            const auto file{ m_recordPath + "/" + std::to_string( cameraId ) + "/" + std::to_string( m_framesCounter ) + ".jpg" };
            if( iJPEG->writeJPEG( file.c_str() ) != Argus::STATUS_OK )
            {
                std::cout << "Failed to write JPEG: " << file << std::endl;
            }
        }

        m_doCapture = false;
    }
}

This is sync code:

f();
f();
f();

This is async code:

#pragma omp parallel for
for( int i = 0; i < 3; ++i )
    f();

Where is this mysterious NvBuffer hidden?
In the capture session? In the producer? In the consumer? In the EGLStream? In the provider?

Or maybe I should run 6 instances of nvargus-daemon?

Is there one NvBuffer for all sensors, or does each sensor have its own buffer?

Hi,
In the code you shared, no NvBuffer is created. To create an NvBuffer, please call createNvBuffer(). You can create multiple NvBuffers and then call copyToNvBuffer() to re-use the buffers. Your use case is advanced and the default sample may not be enough; additional customization will be required.
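For example, a rough sketch of that buffering pattern, assuming the EGLStream::NV::IImageNativeBuffer extension from jetson_multimedia_api and reusing iFrame and m_framesCounter from the code above (the pool size, resolution and the m_dmabufPool member are illustrative, not from this thread):

// Illustrative only: build a small per-camera pool of NvBuffers from the first
// frames, then copy each acquired frame out of the EGLStream so the stream slot
// is released quickly and acquireFrame() can be called again.
#include <EGLStream/NV/ImageNativeBuffer.h>   // EGLStream::NV::IImageNativeBuffer
#include <nvbuf_utils.h>                      // NvBufferColorFormat, NvBufferLayout
#include <vector>

std::vector< int > m_dmabufPool;              // dmabuf fds; one pool per camera

// Inside the per-frame loop, after iFrame has been acquired:
auto * iNativeBuffer =
    Argus::interface_cast< EGLStream::NV::IImageNativeBuffer >( iFrame->getImage() );

if( m_dmabufPool.size() < 4 )                 // lazily allocate a pool of 4 buffers
{
    const int fd = iNativeBuffer->createNvBuffer( Argus::Size2D< uint32_t >( 3840, 2160 ),
                                                  NvBufferColorFormat_YUV420,
                                                  NvBufferLayout_Pitch );
    m_dmabufPool.push_back( fd );
}

// Copy the frame into the next pool buffer; encoding/writing of that buffer can
// then happen on another thread while this loop acquires the next frame.
iNativeBuffer->copyToNvBuffer( m_dmabufPool[ m_framesCounter % m_dmabufPool.size() ] );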

If I copy the buffer from the image, does it become multithreaded?

Will all sessions process frames in parallel?

auto *iNativeBuffer = Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(iFrame->getImage());

iNativeBuffer->copyToNvBuffer(m_dmabuf);

Hi,
You may create an individual capture thread for each camera, so that the cameras run in parallel.
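A rough sketch of that structure, assuming the capture/acquire/writeJPEG body from your snippet is moved into a per-camera loop inside the same class (member names are reused from your code for illustration):

#include <thread>
#include <vector>

// One worker thread per camera: each thread blocks in acquireFrame() on its own
// capture session, so a slow sensor or a slow JPEG write no longer stalls the others.
std::vector< std::thread > workers;
for( size_t cameraId = 0; cameraId < m_camerasNum; ++cameraId )
{
    workers.emplace_back( [this, cameraId]()
    {
        while( m_doCapture )
        {
            // per-camera capture() / acquireFrame() / writeJPEG() body
            // from the earlier snippet goes here
        }
    } );
}

for( auto & worker : workers )
    worker.join();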

What customization do I need? Argus has closed sources and acquireFrame() blocks the thread. Where should I fix the acquireFrame() code?

OK, I will try.

But a gst pipeline with 2 sensors runs in 2 parallel threadExecute() threads, doesn't it?

And it does not process frames in parallel.

Hi,
If you run this command:
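For illustration, a dual-sensor launch of this kind (the resolution, caps and sinks below are placeholders, not the exact command):

gst-launch-1.0 \
  nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1' ! nvjpegenc ! multifilesink location=cam0_%05d.jpg \
  nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1' ! nvjpegenc ! multifilesink location=cam1_%05d.jpg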

Sensor-id 0 is one thread and sensor-id 1 is the other thread. The two threads capture frames in parallel and independently.

Yes.

And we are back at the beginning of this story.

I got only 8 fps for 6 sensors.

Both in the gst case and in my C++ case.

Why?

Here is the beginning: Write jpeg faster from 6 cameras

Hi,
You may remove nvjpegenc to check whether you can achieve 10 fps; the constraint may be in JPEG encoding. Please also execute sudo nvpmodel -m 0 and sudo jetson_clocks to run in maximum performance mode, and run the VIC engine at maximum clock by following the steps in
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL

The resolution is above 4K, so we suggest running the system at maximum performance.
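For convenience, the performance-mode commands mentioned above (the VIC clock steps are described in the linked topic):

sudo nvpmodel -m 0    # select the maximum-performance power mode
sudo jetson_clocks    # lock CPU/GPU/EMC clocks at their maximum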

Will try.
