Where is the problem in the test4 pipeline?

hi,
I deleted sgie2 and test4 now runs without errors, but it only runs for a few seconds and then exits automatically; I will check the code again.
Please help me with the dsexample problem, it is urgent for my project.
Another problem is enc_jpeg from the nvinfer pad in test5:
I got the code from the patch you released on the forums, but when I build that code, it shows a cuda prop error. What should I do?

The relevant part:

#include <cuda_runtime_api.h>  /* likely missing include; declares cudaGetDeviceProperties */

int current_device = -1;
cudaGetDevice(&current_device);
struct cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, current_device);

thank you so much!!!

Hi,

Do you mind sharing more about the problem you are facing currently?

The patch you shared above retrieves the GPU properties.
Is that the information you want?

For the resize issue, do you use our default bounding box parser?
Or a customized implementation?

Thanks.

hi,
For the GPU property compile problem, I have moved past it, so please disregard that.
Now the problems are dsexample and enc_image with your patch.
1. First, dsexample: when I set enable=1, it only shows two fake boxes on the display, and no pictures are saved. I used the default bounding box parser, but my model is a custom YOLOv4; what should I do? I also checked the forums for more information. Do I have to modify the output cv::Mat in the convert-mat functions to get dsexample to work?
2. I used the attached patch for deepstream-app's enc objects to save images via user_meta, but I got no images either.

deepstream_app.patch (5.8 KB)

hi,
1. I solved my custom test4 by defining new classes for the sgie, and it now runs without class errors. But it breaks with a segmentation fault after running for a while, maybe caused by capturing images; I will check the code again.
2. For dsexample, I can capture images with test5 now, and I blocked those two fake bboxes on the display by adding out_mat. But it saves too many images. Do you have a good idea for saving fewer frames, only those with my objects of interest, with the bboxes drawn?
3. enc_image looks like a good way. Please help me check your patch, which I attached above; how can I get it to work with test5?

Thank you so much!

Hi,

Do you want to reduce the number of bounding boxes?
There are two common solutions for this.

First, you can increase the threshold value so that only bounding boxes with high confidence remain.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#id3
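As an illustration, in a typical nvinfer configuration file the per-class confidence threshold sits in the [class-attrs-all] group; the exact key name depends on the DeepStream version (pre-cluster-threshold in recent releases), and the value below is only an example:

```ini
[class-attrs-all]
# example value only; raise it until low-confidence boxes disappear
pre-cluster-threshold=0.6
```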

Next, it's recommended to check whether your parser has an NMS implementation.
NMS merges overlapping bboxes into one, which is standard for detection output.

For the patch mentioned above, could you share the original topic with us first?

Thanks.

hi,
This is very helpful for me! Thank you so much!
Then I may be able to finish dsexample and python-imagedata-multistream, as they use the same pipeline logic to get images.
For the jpegenc patch, I got it from this topic:
Unable to add pgie_src_pad from deepstream-image-meta-test to deepstream-app
I got it working with custom test5 code last night. Can you give a suggestion for getting the full frame with the bboxes? Can I get that from u_data too? I think jpegenc should be able to capture any image and upload it via Kafka to the cloud; it seems like the more complete solution for my project after increasing the threshold as you mentioned…

thank you

I would like to ask: how do I add an NMS implementation to the parser?

thank you

Hi,

You can find NMS examples in several public frameworks.
We also have a merging algorithm in the jetson-inference repository:
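As a rough illustration of what an NMS step in a custom bbox parser does (this is a minimal sketch, not the jetson-inference implementation, and the iou_threshold value is an assumption):

```python
# Minimal greedy NMS sketch in plain Python (no framework assumed).
# Boxes are (x1, y1, x2, y2, score) tuples.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it, repeat."""
    remaining = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    while remaining:
        best = remaining.pop(0)
        kept.append(best)
        remaining = [b for b in remaining if iou(best, b) < iou_threshold]
    return kept
```

A parser would run this per class on the raw detections before filling the object metadata, so overlapping duplicates of the same object collapse into one box.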

In dsexample, you can get the full-frame bbox by enabling the corresponding flag:

[ds-example]
...
full-frame=1

Thanks.

hi,
I have an urgent question and need help:
I used the same config file settings; the video runs very smoothly on test5, but it drops frames and shows a corrupted screen when running on Python.
What settings should I fix?

And if I want to add RTSP output to the Python test4, can I link it like this:
nvosd → tee, queue1 → nvosd_posted → … , queue2 → msgbroker → …
What pads should I add on src and sink for the RTSP output? Like this one:
tee_msg_pad = tee.get_request_pad('src_%u')

thank you very much!

Hi,

Do you use the same implementation in C++ and Python?

If both sources are Deepstream-based, the underlying implementation is the same, since Deepstream Python is a wrapper around the Deepstream C++ library.

A common cause is a Python-based library (ex. OpenCV) that runs slowly and leads to the lag.

Thanks.

hi,
if I use:
pipeline.add(filter1)
pipeline.add(nvvidconv1)
must I also add the tiler? What is the difference with and without the tiler?

thank you very much!

With OpenCV the screen corruption is really bad :(((
If OpenCV runs on the GPU, will it be faster?
If I get the frame from the nvosd pool and copy it to user data, then save images from there, is that a feasible way?
Please suggest a better way.

I have already got test4 working with RTSP and images; the biggest trouble right now is OpenCV's slowness…

Thank you so much for your kind support over this long time!

Hi,

You can find the document for tiler below:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmultistreamtiler.html

The Gst-nvmultistreamtiler plugin composites a 2D tile from batched buffers. The plugin accepts batched NV12/RGBA data from upstream components. The plugin composites the tile based on stream IDs, obtained from NvDsBatchMeta and NvDsFrameMeta in row-major order (starting from source 0, left to right across the top row, then across the next row). Each source frame is scaled to the corresponding location in the tiled output buffer. The plugin can reconfigure if a new source is added and it exceeds the space allocated for tiles. It also maintains a cache of old frames to avoid display flicker if one source has a lower frame rate than other sources.
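The row-major placement described above can be sketched as follows (the function name and the 2x2/1080p layout are assumptions for illustration, not the plugin's API):

```python
def tile_rect(source_id, rows, columns, out_width, out_height):
    """Return (x, y, w, h) of the tile for a stream ID in row-major order,
    mirroring how nvmultistreamtiler scales sources left-to-right,
    top-to-bottom, starting from source 0 at the top-left."""
    tile_w = out_width // columns
    tile_h = out_height // rows
    row = source_id // columns   # which row this source lands in
    col = source_id % columns    # position within that row
    return (col * tile_w, row * tile_h, tile_w, tile_h)
```

For a 2x2 grid on a 1920x1080 output, source 0 occupies the top-left 960x540 tile, source 1 the top-right, and so on down the rows.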

It should be faster if you enable GPU support in OpenCV.
However, please note that cv::Mat is a CPU-buffer-based type, so there is still a performance concern.

We do have an example that wraps Deepstream data into OpenCV.
Please check the following example for more information:

/opt/nvidia/deepstream/deepstream-6.0/sources/gst-plugins/gst-dsexample

Thanks.

hi,
How can I remove the dsexample plugin after NVOSD in deepstream-app or test5?

thank you very much!

Hi,

Do you want to use dsexample in deepstream-app, or remove it?
To use it with deepstream-app, you just need to add it to the configuration file.

Ex.

[application]
...

[ds-example]
enable=1
processing-width=640
processing-height=480
...

Thanks.

hi,
By default in deepstream-app, the dsexample plugin runs after nvvideoconvert and before nvosd, so the frames it sees have no bboxes drawn on the objects.
Is there any way to remove this plugin after nvosd, so that I can output the bboxes on the frames together?

Thank you very much!

Sorry, I meant move it after nvosd, not remove it.

thank you

hi,
What does ip-surface mean? And what is the surface name for NVOSD?
I can get full images by updating the test5 probe, but there are no bboxes on the images. How can I add bboxes to the images? I tried add_display_meta_to_frame but failed to show the bboxes. Please give me some advice on this urgent work!

Thank you very much!

Hi

Maybe you can refer to this sample.
It links an h264 encoder to the nvosd, so it can save the data with the DNN marks as well:

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.