Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version 5.1
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (valid for GPU only) 460
• Issue Type: Question
I’m running deepstream-app with dsexample config as follows:
[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
blur-objects=1
unique-id=15
nvbuf-memory-type=3
Everything was working properly and I got the detected objects blurred. My problem is that the blur runs after both pgie and sgie (pgie -> sgie -> dsexample), but I expect the blur to be done first (pgie -> dsexample -> sgie). After some research, I see two possible solutions:
- Change the order of dsexample in the deepstream-app pipeline
- Do the blur directly in the gie_primary_processing_done_buf_prob() function in deepstream_app.c
For method 1, is there any way, or an existing example, to do this in the create_pipeline() function?
For method 2, things work differently than inside dsexample. With the config above I set nvbuf-memory-type=3, so when I print surface->memType inside dsexample I get 3, and I can easily blur the objects and modify the surface in place. But when I print surface->memType in gie_primary_processing_done_buf_prob(), the result is 0. So I followed this tutorial, How to save frame as jpg and send the filename to kafka in deepstream 5.0 deepstream_test5 - #11 by DaneLLL, to convert the surface (NV12) to a dst_surface (RGBA). That gives me a dst_surface I can blur, but the problem is that dst_surface is a separate new buffer, so when rendering I don't see the face blurred the way I do when dsexample handles it. Is there a way to solve this problem?
Thanks for any support!