• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4-b144
• TensorRT Version: 7.1.3
• Issue Type (questions, new requirements, bugs): question
Good afternoon,
I have set up a pipeline starting from deepstream_test3_app.
The pipeline is structured as follows:
/* Main branch: mux the sources, run inference, convert, draw OSD, then tee. */
if (!gst_element_link_many (streammux, pgie, nvvidconv, nvosd, tee, NULL)) {
  g_printerr ("Elements could not be linked. Exiting.\n");
  return -1;
}
/* Tee branch 1: save frames and publish messages over AMQP. */
if (!gst_element_link_many (queue1, saveframe, msgconv, msgbroker, NULL)) {
  g_printerr ("Elements could not be linked. Exiting.\n");
  return -1;
}
/* Tee branch 2: discard the video (no display needed). */
// if (!gst_element_link_many (queue2, transform, sink, NULL)) {
if (!gst_element_link_many (queue2, sink, NULL)) {
  g_printerr ("Elements could not be linked. Exiting.\n");
  return -1;
}
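For completeness: the snippet above links the elements inside each branch, but not the tee to the two queues. Tee source pads are request pads, so (as in deepstream_test3_app) they have to be requested and linked explicitly. A minimal sketch of that step, assuming the element variables from the snippet above and GStreamer 1.14 as shipped with JetPack 4.4:

```c
/* Sketch: link each tee branch through a request pad.
 * Assumes tee, queue1 and queue2 are the elements created above.
 * gst_element_get_request_pad() is the pre-1.20 GStreamer name
 * (renamed gst_element_request_pad_simple() in 1.20). */
GstPad *tee_src, *queue_sink;

/* Branch 1: tee -> queue1 (saveframe / msgconv / msgbroker). */
tee_src = gst_element_get_request_pad (tee, "src_%u");
queue_sink = gst_element_get_static_pad (queue1, "sink");
if (gst_pad_link (tee_src, queue_sink) != GST_PAD_LINK_OK)
  g_printerr ("tee -> queue1 could not be linked.\n");
gst_object_unref (queue_sink);

/* Branch 2: tee -> queue2 (fakesink). */
tee_src = gst_element_get_request_pad (tee, "src_%u");
queue_sink = gst_element_get_static_pad (queue2, "sink");
if (gst_pad_link (tee_src, queue_sink) != GST_PAD_LINK_OK)
  g_printerr ("tee -> queue2 could not be linked.\n");
gst_object_unref (queue_sink);
```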
I have 3 RTSP video streams, and I run the PGIE to detect objects.
If an object is detected in any of the sources, the frame is saved and a message is sent through AMQP.
“sink” is just a “fakesink”, since I do not need to visualize any stream.
Originally deepstream_test3_app also included a tiler; I removed it because I do not intend to visualize any stream.
However, once I removed it, “nvosd” started to behave oddly.
With the tiler, nvosd prints the OSD info on the frames correctly.
Without the tiler, nvosd prints the OSD info only on the first frame of the batch
(in my case the batch size is 3, since I have 3 sources).
Do you have any idea why nvosd is not printing the OSD info on every frame of the batch?
I believe that the transformation of the metadata is still working properly.
The OSD info that I see on the saved frames is not always on frames from the first source; it also appears on frames from source 2 and source 3. However, it is mostly on frames from source 1, which is why I believe it is drawn only on the first frame of the batch.
As you can see below, the probe that creates mes_meta before the tiler is still working:
As confirmation, I can correctly recognize the source of each frame from the attached metadata later, when I compose the messages to be sent over AMQP.
It is only the OSD information that appears on just a few frames.
Hey, you are referring to the display_meta, right?
I’m confused, since you said you don’t need to visualize them; I guess what you need is the metadata attached by nvinfer, such as the object class or its coordinates, right?
Thanks again for your reply.
Please let me clarify.
I do not need to visualize all the frames; I am using a fakesink.
However, when a frame is detected to contain an object, it is saved to disk.
Of those saved frames, only a few contain the bounding box and the label.
And the frames that do contain the bounding box and the label mostly come from source #1. In the remaining cases, I also see the bounding box and the label on frames from the other sources.
This behaviour makes me think that nvosd is drawing the bounding box and the label only on the first frame of the batch.
I mentioned the metadata in my earlier post because you might suspect that the metadata are not generated correctly.
I verified that they are generated correctly, since I use those data afterwards to send the message via AMQP.
Thus, it is nvosd whose behaviour, as described above, I cannot understand.
Did you find a solution for this? We’re noticing the same thing when trying to draw bounding boxes on batched inputs without the tiler: nvosd seems to pick one stream and draw all the bounding boxes for all the feeds on that one stream. How do we need to transform the metadata so that nvosd draws on the correct feeds?
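One thing worth checking in this situation: nvosd draws from the display/object meta attached to each NvDsFrameMeta in the batch, so if boxes end up on a single stream, it may be that the meta is being attached to only one frame meta (typically the first) instead of to every frame. A sketch of an osd sink pad probe that walks the whole batch and attaches display meta per frame, following the pattern in the DeepStream sample apps (the probe name and the text drawn are illustrative; this is not a confirmed fix for the behaviour above):

```c
/* Sketch: attach display meta to EVERY frame in the batch, not just the
 * first. Based on the osd probe pattern in the DeepStream sample apps. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l_frame;

  /* One NvDsFrameMeta per source in the batch. */
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Acquire display meta and attach it to THIS frame, so nvosd
     * draws on the correct surface of the batched buffer. */
    NvDsDisplayMeta *display_meta =
        nvds_acquire_display_meta_from_pool (batch_meta);
    NvOSD_TextParams *txt = &display_meta->text_params[0];
    display_meta->num_labels = 1;
    txt->display_text = g_strdup_printf ("source %d", frame_meta->source_id);
    txt->x_offset = 10;
    txt->y_offset = 12;
    txt->font_params.font_name = "Serif";
    txt->font_params.font_size = 10;
    txt->font_params.font_color = (NvOSD_ColorParams) {1.0, 1.0, 1.0, 1.0};
    txt->set_bg_clr = 1;
    txt->text_bg_clr = (NvOSD_ColorParams) {0.0, 0.0, 0.0, 1.0};
    nvds_add_display_meta_to_frame (frame_meta, display_meta);
  }
  return GST_PAD_PROBE_OK;
}
```

If your own probe before the tiler only ever touches the first entry of frame_meta_list (or attaches everything to one frame_meta), that would match the symptom of all boxes landing on one feed.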