• Hardware Platform (Jetson / GPU): Both
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
Hi,
I created a pipeline that has two nvstreammux elements, one for some sources and one for the other sources, followed by two detectors, one per streammux, so the two branches run in parallel. I then want a custom GStreamer plugin that concatenates the two generated buffers and their metadata, builds one batch from them, and feeds it to the next nvinfer plugin.
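A rough sketch of the intended topology (element names and branch labels are illustrative, not from an actual config):

```
sources A ── nvstreammux_1 ── nvinfer (detector 1) ──┐
                                                     ├── custom concat/batch plugin ── nvinfer (shared model) ── ...
sources B ── nvstreammux_2 ── nvinfer (detector 2) ──┘
```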
Hi,
For reference, could you share which Jetson platform and GPU card you use?
The use-case is advanced; we will check with our teams and update.
@DaneLLL
Jetson NX, and a GTX 1080 Ti.
If possible, please share the nvstreammux plugin source code; it may help me.
Hi,
We have checked this and confirmed it is not supported in the current release. One possible solution is to run two separate pipelines: each pipeline has its own detector, and the identical model is executed in the next nvinfer plugin of each pipeline.
@DaneLLL ,
Thanks for your answer.
Yes, I know about that solution, but it is not an efficient approach for a product, because I would have to load each model twice.
Hi,
If the engine file does not exist, nvinfer loads the model to generate the engine file, and this may take some time. If you have already generated the engine file and set it in the nvinfer plugin, it should run fine. Loading an engine file is faster than loading a model.
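For example, a pre-built engine can be set in the nvinfer configuration file so the rebuild step is skipped on every start (the file name and batch size below are placeholders, not from the original post):

```ini
[property]
# Hypothetical engine path; use the engine generated on the first run
model-engine-file=model_b4_gpu0_fp16.engine
batch-size=4
```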
@DaneLLL
I do load the engine file, but with two pipelines I have to load the engine file twice, which is not an efficient solution. The best solution would be to load the engine file only once.
Hi,
We understand the best solution is to load each engine file once, and will evaluate supporting this use-case in a future release. We have discussed how to achieve this on the current release, and the conclusion is to run two separate pipelines. Please adapt your application to this solution.
@DaneLLL,
Solution one:
But I don't know whether the last streammux, when it batches the buffers, initializes the metadata with defaults, or whether it can pass all of the upstream buffers and metadata through to downstream.
Solution two:
Write a custom GStreamer plugin like nvstreammux.
Suppose we link two buffers into the custom plugin: one buffer has 2 streams plus the metadata generated by the upstream nvinfer, and the other buffer has 2 streams plus its own metadata.
My goal is that, in this custom plugin, I batch these two buffers into one batched buffer and pass it to the next nvinfer.
The plugin works sequentially: at timestamp 1, buffer 1 enters the plugin, and at timestamp 2, buffer 2 enters.
In this plugin I want to keep the frame_meta of buffer 1 in a list; when buffer 2 enters the plugin, I append the saved frame_meta list of buffer 1 into buffer 2 and then push the buffer.
def chainfunc(
    self, pad: Gst.Pad, parent, buffer: Gst.Buffer
) -> Gst.FlowReturn:
    # Collect the frame metadata of the incoming buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        self.frames_meta.append(frame_meta)
        l_frame = l_frame.next
    # Once frames from both buffers (4 streams) are collected,
    # append the first buffer's frame metas to the current batch and push
    if len(self.frames_meta) == 4:
        for frame_meta in self.frames_meta[:2]:
            pyds.nvds_add_frame_meta_to_batch(batch_meta, frame_meta)
        self.srcpad.push(buffer)
        self.frames_meta = []
    return Gst.FlowReturn.OK
In general, I have 4 streams: two streams (index = 0, 1) are in buffer 1 and two streams (index = 2, 3) are in buffer 2.
In the code above, when I append only the two previous frame metas (the first two frame metas) into the new batch, I get this error:
Segmentation fault (core dumped)
When I append only the last frame meta into the batch meta, it works correctly, and I get batch_size = 3 (three frame metas) in the next element.
In my opinion, the frame meta is a pointer in C++, and all of the frame metas point to the same memory address; when I try to append the previous frame meta into the current batch meta, I get the segmentation fault. Is that right? How can I handle this problem?
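A rough analogy in pure Python (the class below is hypothetical, not a pyds API) of why holding on to frame metas across buffers is unsafe: NvDsFrameMeta objects come from a pool, and once the first buffer is consumed downstream, its metas can be released back to the pool and their storage reused by the next buffer:

```python
# Hypothetical pool that recycles objects, loosely mimicking NvDsFrameMeta pools.
class MetaPool:
    def __init__(self):
        self._free = [{"frame_num": None}]  # one reusable storage slot

    def acquire(self, frame_num):
        meta = self._free.pop()       # reuses the same underlying object
        meta["frame_num"] = frame_num
        return meta

    def release(self, meta):
        meta["frame_num"] = None      # contents invalidated on release
        self._free.append(meta)

pool = MetaPool()
m1 = pool.acquire(1)
saved = m1             # keep a reference across buffers, like self.frames_meta
pool.release(m1)       # buffer 1 is consumed; its meta returns to the pool
m2 = pool.acquire(2)   # buffer 2 reuses the same storage
# The saved reference now points at recycled storage, not the old frame:
assert saved["frame_num"] == 2
```

In C, dereferencing such a recycled or freed pointer is exactly the kind of access that produces a segmentation fault.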
Hi,
We have checked the two solutions. Solution 1 does not work in the current release: the nvstreammux plugin expects no metadata from input sources, so the metadata cannot be kept.
For solution 2, there is a possible approach. Instead of appending saved frame metas to the second buffer, you may copy the frame metadata of the second buffer into the first buffer and try:
- Call nvds_acquire_frame_meta_from_pool() on the 1st buffer
- Call nvds_copy_frame_meta() to copy from the 2nd buffer into the 1st buffer, for every frame meta in the 2nd buffer
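The two steps above might look roughly like this sketch (pseudocode, not runnable outside a DeepStream environment; it assumes your pyds version exposes bindings for both functions named above, and that `batch_meta_1` / `batch_meta_2` are the NvDsBatchMeta of the two buffers):

```
# Sketch only: copy every frame meta of buffer 2 into buffer 1's batch.
l_frame = batch_meta_2.frame_meta_list
while l_frame is not None:
    src_frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
    # 1) acquire a fresh frame meta from the 1st buffer's own pool
    dst_frame_meta = pyds.nvds_acquire_frame_meta_from_pool(batch_meta_1)
    # 2) copy the 2nd buffer's frame meta into the acquired one
    pyds.nvds_copy_frame_meta(src_frame_meta, dst_frame_meta)
    pyds.nvds_add_frame_meta_to_batch(batch_meta_1, dst_frame_meta)
    l_frame = l_frame.next
# buffer 1 now owns copies of all four frame metas and can be pushed downstream
```

Because the copies are acquired from the first buffer's pool, their lifetime is tied to that buffer, avoiding the stale-pointer problem from the earlier attempt.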