This is a duplicate of this thread, but I was hoping the Jetson folks here could help me out.
So I have this plugin that extracts images from 4 sensors and sends them down the pipeline as a batch. It's a libargus-based plugin:
GstStructure *config = gst_buffer_pool_get_config (src->pool);
gst_buffer_pool_config_set_params (config, src->outcaps,
    sizeof (NvBufSurface), MIN_BUFFERS, MAX_BUFFERS);
gst_structure_set (config,
    "memtype", G_TYPE_INT, NVBUF_MEM_DEFAULT,
    "gpu-id", G_TYPE_UINT, 0,
    "batch-size", G_TYPE_UINT, 4, NULL);
gst_buffer_pool_set_config (src->pool, config);
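(For completeness, a minimal sketch of the pool lifecycle around that config. I'm assuming the pool is DeepStream's gst_nvds_buffer_pool_new(), which is where the memtype/gpu-id/batch-size config keys come from; the pool also has to be activated before buffers can be acquired.)

// Sketch: the pool lifecycle around the config snippet above.
// gst_nvds_buffer_pool_new() comes from DeepStream's gstnvdsbufferpool.h.
src->pool = gst_nvds_buffer_pool_new ();
// ... get/set the config exactly as shown above, then activate:
if (!gst_buffer_pool_set_active (src->pool, TRUE))
  GST_ERROR_OBJECT (src, "failed to activate buffer pool");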
// And here we set up the array of consumers (each one runs in a separate
// thread in the real code, but that shouldn't matter for this case).
IFrameConsumer *consumers[4] = { nullptr };
// Init code here......
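(The elided init follows the standard Argus sample pattern; a rough sketch below, where streams[] holding the per-sensor OutputStream pointers and consumerObjs[] are placeholder names of mine:)

// Sketch of the elided init: one FrameConsumer per sensor stream,
// following the Argus sample pattern. streams[] / consumerObjs[] are
// placeholder names, not identifiers from the real plugin.
UniqueObj<FrameConsumer> consumerObjs[4];
for (int i = 0; i < 4; i++) {
  consumerObjs[i].reset (FrameConsumer::create (streams[i]));
  consumers[i] = interface_cast<IFrameConsumer> (consumerObjs[i]);
  if (!consumers[i])
    GST_ERROR_OBJECT (src, "failed to create FrameConsumer for sensor %d", i);
}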
// Now, in the main loop:
.....
int fds[4] = { 0 };
ret = gst_buffer_pool_acquire_buffer (src->pool, &buffer, NULL);
if (ret != GST_FLOW_OK)
  return ret;
GstMapInfo outmap = GST_MAP_INFO_INIT;
if (!mapBuffer (outmap, buffer)) {
  GST_ERROR_OBJECT (src, "no memory block");
  return GST_FLOW_ERROR;
}
NvBufSurface *surf = (NvBufSurface *) outmap.data;
for (int i = 0; i < src->sensors.size (); i++) {
  // bufferDesc is the dmabuf fd of surface i in the batch
  fds[i] = surf->surfaceList[i].bufferDesc;
}
// Effectively what we did here was:
// - allocate a gst buffer and retrieve the NvBufSurface from the acquired buffer
// - extract the dmabuf fd of each batch slot into the fds array for further use
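(mapBuffer is just a thin helper of mine; roughly the following sketch, assuming a plain read/write mapping:)

// Sketch of the mapBuffer() helper: a plain gst_buffer_map() in
// read/write mode, so outmap.data points at the NvBufSurface.
static gboolean
mapBuffer (GstMapInfo &outmap, GstBuffer *buffer)
{
  return gst_buffer_map (buffer, &outmap,
      (GstMapFlags) (GST_MAP_READ | GST_MAP_WRITE));
}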
.....
// Acquire a frame from every sensor and copy it into the batched buffer
for (int i = 0; i < 4; i++) {
  UniqueObj<Frame> frame (
      consumers[i]->acquireFrame (consumer_wait_time_us * 1000));
  IFrame *iFrame = interface_cast<IFrame> (frame);
  NV::IImageNativeBuffer *iNativeBuffer =
      interface_cast<NV::IImageNativeBuffer> (iFrame->getImage ());
  // Copy the captured image into slot i of the pre-allocated batch
  if (iNativeBuffer->copyToNvBuffer (fds[i]) != Argus::STATUS_OK)
    GST_ERROR_OBJECT (src, "copyToNvBuffer failed for sensor %d", i);
}
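(Once the four copies are done, the buffer gets unmapped and handed downstream; a minimal sketch, assuming my element is a GstBaseSrc subclass and outbuf is the create() vfunc's out-parameter:)

// Sketch: after all four slots are filled, unmap and hand the batched
// buffer downstream (here via the GstBaseSrc create() out-parameter).
gst_buffer_unmap (buffer, &outmap);
*outbuf = buffer;
return GST_FLOW_OK;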
Please note this is pseudocode: in reality it doesn't all run in one thread, but for the purpose of illustrating my setup it should do.
So, I'm using NV::IImageNativeBuffer to extract the EGLImage from the consumer, and I want to store it in my custom dmabuf (that is, I'm not calling iNativeBuffer->createNvBuffer, because I want to use the dmabuf that is already pre-allocated in the gst_buffer_pool).
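(For reference, here's the pattern from the JP4-era jetson_multimedia_api samples that I'm deliberately not following; iEglOutputStream and the NvBuffer enum names below come from the old samples, not from my code:)

// What the samples do: let the image allocate its own dmabuf on demand
// (JP4-era NvBuffer names, shown only for contrast with my approach).
int fd = iNativeBuffer->createNvBuffer (iEglOutputStream->getResolution (),
                                        NvBufferColorFormat_YUV420,
                                        NvBufferLayout_Pitch);
// What I do instead: copy into the fd that my pool already owns.
iNativeBuffer->copyToNvBuffer (fds[i]);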
So, given the task of creating a 4-batched GstBuffer from 4 EGLStreams, does this approach make sense?
This code works perfectly fine for me on JP4 devices (DeepStream 5.0 and L4T 32.4.4), but it doesn't work on a JP5 device (DeepStream 6.0 and L4T 35.3.1).
You can find the result of the execution in the attached GIF. To generate this image, I used the following pipeline on the JP5 device:
gst-launch-1.0 mycustomarguscamerasrc num-buffers=900 num-sensors=4 ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=1920,height=1080,format=I420" ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location="test.mkv"
Can you help me understand what exactly I'm doing wrong here and how I can prepare a 4-batched frame for a GstBuffer?