Pipeline Design Problem: save current frame to disk and send MQTT message upon event

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.1 → (PYTHON)
• JetPack Version (valid for Jetson only) 4.5
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ?

So I’m trying to develop an application with the following pipeline:
(3 src) → muxer → pgie → tracker → nvanalytics → tee
tee → demuxer(3) → nvvidconvert → nvdsosd → transformer → glsink
tee → msgconv → msgbroker

I placed a probe on the tee’s sink pad where I generate a UUID inside the event conditional (e.g. if condition: id = uuid4()) and place it into the payload, which is later consumed by the msgconv and msgbroker successfully. I need to use that exact same ID as the file name of the exact frame where the event occurred, and save that frame to disk.
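For context, the UUID is attached roughly like this (a sketch based on the test4 pattern; event_condition and the choice of sensorStr as the field carrying the ID are my own, hypothetical choices):

import uuid
import pyds
from gi.repository import Gst

def event_condition(frame_meta):
    # hypothetical placeholder for the nvdsanalytics event check
    return False

def tee_sink_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        if event_condition(frame_meta):
            event_id = str(uuid.uuid4())
            msg_meta = pyds.alloc_nvds_event_msg_meta()
            msg_meta.sensorStr = event_id  # hypothetical: stash the UUID in a free string field
            user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
            user_meta.user_meta_data = msg_meta
            user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
            # test4 also registers pyds.user_copyfunc()/pyds.user_releasefunc() here
            pyds.nvds_add_user_meta_to_frame(frame_meta, user_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK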

With the pipeline shown, a conversion error occurs because the function that grabs the frame currently only supports RGBA buffers. So, as shown in the sample apps, I placed an nvvidconvert + capsfilter with the caps string from the sample app. When running the app, a memory-allocation error is raised; but when I remove the filter, the app runs (without converting the image, so the RGBA error comes back).
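For reference, the frame-grab call in question is pyds.get_nvds_buf_surface(), which is what imposes the RGBA requirement (same variable names as in the probe sketch above):

# inside the per-frame loop of a buffer probe; this fails with a
# color-format error unless an upstream nvvideoconvert + capsfilter
# has forced the buffer to RGBA
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)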

So I tried another strategy: placing a probe after the nvvidconvert and before the osd to save the image there, but I haven’t been able to fetch the metadata… neither via frame_meta.frame_user_meta_list, nor via the pyds.nvds_acquire_user_meta_from_pool(batch_meta) method. Both come back as None every time I try to reach the variable containing the UUID.
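For reference, this is roughly how I try to read the UUID back in that probe (a sketch; the meta type and casts are the ones test4 uses, sensorStr being the hypothetical field from the sketch above):

import pyds

def read_uuid(frame_meta):
    # walk the frame's user meta looking for the event payload
    l_user = frame_meta.frame_user_meta_list  # always None in my case
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_EVENT_MSG_META:
            msg_meta = pyds.NvDsEventMsgMeta.cast(user_meta.user_meta_data)
            return msg_meta.sensorStr
        try:
            l_user = l_user.next
        except StopIteration:
            break
    return None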

Lastly, I tried to place the nvvidconvert → nvdsosd before the tee so the conversion is done before the probe callback, but then the image annotations (bounding boxes, ROIs, etc.) are drawn on only one of the 3 sink displays, with the information of all three combined… e.g. a detection’s bounding box from source 0 is drawn on source 1.

I’m not sure what the design issue with this pipeline is, and after reading the documentation I’m out of ideas…

Thanks in advance for any help.

error.txt (3.8 KB)

[UPDATE]: The error happens when trying to draw with the nvdsosd plugin; if I unlink the osd, the pipeline works fine. This points the suspicion at the nvvidconvert.

I am using this before the probe callback on the tee’s sink pad, right after nvanalytics:

print("Creating nvvidconv1 \n ")
nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "Convertor0")
if not nvvidconv1:
    sys.stderr.write(" Unable to create nvvidconv1 \n")

print("Creating filter1 \n ")
caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM),format=RGBA")
filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
if not filter1:
        sys.stderr.write(" Unable to get the caps filter1 \n")
 filter1.set_property("caps", caps1)
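For context, the add/link order around that spot is roughly this (a hypothetical reconstruction; the queues between the filter and the tee are elided):

pipeline.add(nvvidconv1)
pipeline.add(filter1)

nvanalytics.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(tee)  # in the real app this goes through a couple of queues first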

Then the data passes through a couple of queues and the tee…

When arriving at the nvdsosd plugin, I have a regular nvvidconvert without a caps filter (it used to work like that, and it is designed like that in the sample app that saves the image using OpenCV). Then the app breaks with the error attached in the .txt file.

The difference here is that in the sample app the video buffer passes through a tiler plugin (this might change the video format, I don’t know), making it compatible with the nvvidconvert before the nvdsosd.
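For comparison, the element order in deepstream-imagedata-multistream is roughly:

muxer → pgie → nvvidconvert → capsfilter(RGBA) → tiler → nvvidconvert → nvdsosd → transform → sink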

What should the chain of converters be in this case?

Which sample do you refer to? Can you provide your complete code?

The code is over 1,500 lines, so I believe posting it all would be misleading.

I am using fragments of the sample apps as follows:

The first part, for RTSP video acquisition, muxing, inference, and tracking, is a mix of test apps 2 and 3.

Then the tee and the MQTT messaging are done following test app 4.

Demuxing into an independent nvdsosd and glsink per output was worked out by trial and error…

The callback probe is the one given in test app 4, with the addition of both the AWS broker config and the UUID in the payload. This callback probe is where I want to use the image-save script from the deepstream-imagedata-multistream sample app but, as I mentioned before, at that point of the pipeline the video is in a raw (non-RGBA) format. So I followed the pipeline design of the deepstream-imagedata-multistream sample app, placing an nvvidconvert and a capsfilter with the caps string shown in the script above, and when the buffer arrives at the nvdsosd it crashes with the error attached in the .txt file.
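Concretely, the save I’m after inside that probe is roughly this (adapted from the deepstream-imagedata-multistream snippet; frames_dir and event_id are my own names):

import numpy as np
import cv2
import pyds

# inside the per-frame loop of the probe, once event_id (the UUID) is known;
# the buffer must already be RGBA at this point
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')        # detach from the mapped buffer
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)  # OpenCV expects BGR(A)
cv2.imwrite("{}/{}.jpg".format(frames_dir, event_id), frame_copy)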

Hope this clarifies the pipeline I’m working on a bit…

Can you try to simplify the code to reproduce the issue? Or can you modify deepstream-imagedata-multistream sample app to reproduce your issue? It is hard to understand just by words.

shareableFirmware.py (20.0 KB)

Attached is the Python sample code. I left out the config files; it should work with the regular config files.

The error is the one attached above in error.txt (also attached to this reply): error.txt (3.8 KB)

If this is unclear, I could give you private access to the project’s repo via e-mail, if that’s possible.

Hello?

Hi Fiona.Chen. Have you had a chance to look at the example?

Sorry for the late reply.
Is this still an issue?
I used your script, but I could not run it as-is. With the modifications in the attached version, I cannot reproduce the error you attached in comment 7. Please let us know what is different on your side that prevents us from reproducing your issue. shareableFirmware.py (22.1 KB)

Yes, I solved the issue in an unorthodox manner… I’m not very happy about it, but “it works”: basically, I applied the RGBA filter before the callback probe, drew all the data I wanted to see using OpenCV (I don’t know how costly that is for latency), and removed the nvdsosd.
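For reference, the drawing part of the workaround looks roughly like this (a sketch; the box coordinates come from NvDsObjectMeta.rect_params, and the color is RGBA because the surface is RGBA):

import cv2
import pyds

# n_frame is the RGBA numpy view from pyds.get_nvds_buf_surface(); on Jetson,
# drawing on it modifies the underlying video buffer, so the boxes show up downstream
l_obj = frame_meta.obj_meta_list
while l_obj is not None:
    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
    r = obj_meta.rect_params
    left, top = int(r.left), int(r.top)
    cv2.rectangle(n_frame, (left, top),
                  (left + int(r.width), top + int(r.height)),
                  (0, 255, 0, 255), 2)
    try:
        l_obj = l_obj.next
    except StopIteration:
        break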

I tried to use the nvdsosd before the callback probe, but since the streams are not tiled, the annotations are displayed on the first stream of the batch, so all the annotations end up on stream 1, for example.

Thanks for the response though!
