How to acquire NvBuffer for 4-batched NvBufSurface?

This is a duplicate of this thread, but I hoped the Jetson folks here might be able to help me out.

So I have this plugin that extracts images from 4 sensors and sends them down the pipeline as a batch. It’s a libargus-based plugin:

// Configure the buffer pool: each GstBuffer wraps one NvBufSurface
// that holds a batch of 4 surfaces (one per sensor).
GstStructure *config = gst_buffer_pool_get_config (src->pool);
gst_buffer_pool_config_set_params (config, src->outcaps,
    sizeof (NvBufSurface), MIN_BUFFERS, MAX_BUFFERS);
gst_structure_set (config,
    "memtype", G_TYPE_INT, NVBUF_MEM_DEFAULT,
    "gpu-id", G_TYPE_UINT, 0,
    "batch-size", G_TYPE_UINT, 4, NULL);
gst_buffer_pool_set_config (src->pool, config);
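
For completeness, the pool itself is created before this config is applied. A minimal sketch, assuming DeepStream’s gst_nvds_buffer_pool_new() helper from gstnvdsbufferpool.h (the pool that understands the memtype/gpu-id/batch-size options above):

#include <gst/gst.h>
#include "gstnvdsbufferpool.h"  // DeepStream nvds buffer pool helper

  // Sketch: create a pool whose GstBuffers each wrap one NvBufSurface;
  // the config block above is then applied to this pool.
  src->pool = gst_nvds_buffer_pool_new ();
  if (!src->pool)
    GST_ERROR_OBJECT (src, "failed to create nvds buffer pool");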

// And here we set up the array of consumers (I have them in separate
// threads, but that shouldn't matter for my case).
IFrameConsumer *consumers[4] = { nullptr };
// Init code here......
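
The elided init is the standard libargus producer/consumer chain; a condensed sketch of my reconstruction (not the actual plugin code): one capture session, EGL output stream, and FrameConsumer per sensor, error handling trimmed:

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <vector>

using namespace Argus;
using namespace EGLStream;

UniqueObj<CameraProvider> provider(CameraProvider::create());
ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

std::vector<CameraDevice *> devices;
iProvider->getCameraDevices(&devices);  // expects >= 4 sensors here

// Kept in arrays so the sessions/streams outlive the setup loop
UniqueObj<CaptureSession> sessions[4];
UniqueObj<OutputStream>   streams[4];
UniqueObj<FrameConsumer>  frameConsumers[4];

for (int i = 0; i < 4; i++) {
    sessions[i].reset(iProvider->createCaptureSession(devices[i]));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(sessions[i]);

    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iEglSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    iEglSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iEglSettings->setResolution(Size2D<uint32_t>(1920, 1080));  // placeholder

    streams[i].reset(iSession->createOutputStream(settings.get()));
    frameConsumers[i].reset(FrameConsumer::create(streams[i].get()));
    consumers[i] = interface_cast<IFrameConsumer>(frameConsumers[i]);

    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(streams[i].get());
    iSession->repeat(request.get());  // start continuous capture
}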


//Now, in the main loop
.....
    GstBuffer *buffer = NULL;
    int fds[4] = {0};
    GstFlowReturn ret = gst_buffer_pool_acquire_buffer (src->pool, &buffer, NULL);
    if (ret != GST_FLOW_OK) {
      GST_ERROR_OBJECT (src, "failed to acquire buffer from pool");
    }

    GstMapInfo outmap = GST_MAP_INFO_INIT;

    if (!mapBuffer (outmap, buffer)) {
      GST_ERROR_OBJECT (src, "no memory block");
    }

    NvBufSurface *surf = (NvBufSurface *) outmap.data;

    // bufferDesc of each batch entry is the dmabuf fd we can copy into
    for (int i = 0; i < (int) src->sensors.size(); i++) {
        fds[i] = (int) surf->surfaceList[i].bufferDesc;
    }
// Effectively what we did here was:
// - allocate gst buffer and retrieve NvBufSurface from acquired buffer
// - Extract dma buffers from every batch into fds array for further usage
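
(For reference, mapBuffer() above is not a GStreamer call; a minimal sketch of it, assuming it is just a thin wrapper over gst_buffer_map():)

// Assumed shape of the mapBuffer() helper used above. GST_MAP_WRITE because
// the surfaces get filled afterwards; the matching gst_buffer_unmap() has to
// happen before the buffer is pushed downstream.
static bool
mapBuffer (GstMapInfo &outmap, GstBuffer *buffer)
{
    return gst_buffer_map (buffer, &outmap, GST_MAP_WRITE) == TRUE;
}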
.....

// Acquire a frame from every sensor and copy it into the pre-allocated batch
        for (int i = 0; i < 4; i++) {
            UniqueObj<Frame> frame(
                consumers[i]->acquireFrame(consumer_wait_time_us * 1000));
            IFrame *iFrame = interface_cast<IFrame>(frame);
            NV::IImageNativeBuffer *iNativeBuffer =
                interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
            // Copy into the dmabuf that was pre-allocated by the pool
            if (iNativeBuffer->copyToNvBuffer(fds[i]) != STATUS_OK)
                GST_ERROR_OBJECT(src, "copyToNvBuffer failed for sensor %d", i);
        }
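
One detail that may matter downstream (my assumption, not something confirmed in this thread): batched elements such as nvmultistreamtiler consult NvBufSurface::numFilled, and the mapping taken by mapBuffer() has to be released before the buffer is pushed:

        // After the four copyToNvBuffer() calls: record how many surfaces
        // of the batch actually contain data, then drop the CPU mapping.
        surf->numFilled = 4;                  // one filled surface per sensor
        gst_buffer_unmap (buffer, &outmap);   // balances the earlier mapBuffer()
        // ... then push `buffer` downstream as usual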

Please note this is pseudo code and I don’t actually run it all in one thread, but for the purpose of illustrating my setup it should work.
So, I’m using NV::IImageNativeBuffer to extract the EGLImage from the consumer, and I want to store it in my custom dmabuf (that is, I’m not calling iNativeBuffer->createNvBuffer, because I want to use the dmabuf that is already pre-allocated in the gst_buffer_pool).
So, given the task of creating a 4-batched GstBuffer from 4 EGLStreams, does this program make sense?
This code works perfectly fine for me on JP4 devices (DeepStream 5.0, L4T 32.4.4), but it doesn’t work on a JP5 device (DeepStream 6.0, L4T 35.3.1).
[attached gif: output]

You can find the result of the execution in the attached gif. To generate it I used the following pipeline on the JP5 device:

gst-launch-1.0 mycustomarguscamerasrc num-buffers=900 num-sensors=4 ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=1920,height=1080,format=I420" ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location="test.mkv"

Can you help me understand what exactly I am doing wrong here, and how I can prepare a 4-batched frame for a GstBuffer?


Hi,
Please use the nvstreammux plugin to fill in the NvBufSurface information. You may check this config file for deepstream-app:

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source2_csi_usb_dec_infer_resnet_int8.txt

By default it has one v4l2src source and one nvarguscamerasrc source. You can change it to 2 nvarguscamerasrc first; if that works, then change it to 4 nvarguscamerasrc.
If you don’t need the inferencing, please disable [primary-gie]. A sketch of the relevant change is below.
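
The relevant part of that config file is the [sourceN] groups. A sketch of the change for two CSI cameras (the sensor ids, resolution and fps here are placeholders to adapt):

[source0]
enable=1
# type=5 selects a CSI camera (nvarguscamerasrc)
type=5
camera-width=1920
camera-height=1080
camera-fps-n=30
camera-fps-d=1
camera-csi-sensor-id=0

[source1]
enable=1
type=5
camera-width=1920
camera-height=1080
camera-fps-n=30
camera-fps-d=1
camera-csi-sensor-id=1

[streammux]
batch-size=2

[primary-gie]
enable=0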

Thanks for the reply! I’m a bit surprised that there is no recommended “programmatic” approach for such a transformation and that I’m being pushed toward the nvstreammux element. Can’t I perform this (seemingly basic) operation of copying an EGLImage into a pre-allocated dmabuf from a batched NvBuffer in my own element?
And why does it work on the JP4 device (DeepStream 5.0, L4T 32.4.4) but not with the newer libraries?
