Need advice: Image pre-processing between PGIE and SGIE. Custom SGIE output - feature-vector

Hello

I need to implement a pipeline like this:

video-input
-> PGIE (detector) 
-> bboxes 
-> image PRE-PROCESSING based on bboxes (alignment, affine transformation) 
-> SGIE (custom) 
-> FEATURE VECTOR (float[128] array)

I’ve done the PGIE part and am able to obtain bboxes. My questions are:

  1. How do I perform image pre-processing (alignment, affine transformation) on the regions obtained from the primary detector, before passing them to the SGIE? And how do I pass the transformed regions to the secondary engine (not just the bboxes and the original image)?

  2. How do I obtain the result of the secondary custom inference engine if it is not a classifier, not a detector, not a segmenter, and its output is a vector of 128 floats? (I will later compute the distance between the resulting vector and a pre-calculated set of vectors from disk.)

Please point me in the right direction for implementing such a pipeline.
Thanks.

“alignment, affine transformation” — by GPU or CPU?

output is a vector of 128 floats
You can get any tensor of the TensorRT output. Please refer to the sample “sources/apps/sample_apps/deepstream-infer-tensor-meta-test”.
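Downstream of the tensor output, comparing the 128-float vector against the pre-calculated set from disk is plain C. A minimal sketch of that matching step (`EMBEDDING_DIM`, `l2_distance_sq`, and `best_match` are illustrative names, not DeepStream API):

```c
#include <stddef.h>

#define EMBEDDING_DIM 128

/* Squared L2 distance between two embeddings. The square root is not
 * needed when you only rank candidates by distance. */
float l2_distance_sq(const float *a, const float *b)
{
    float sum = 0.0f;
    for (size_t i = 0; i < EMBEDDING_DIM; i++) {
        float d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

/* Index of the closest pre-computed embedding, or -1 for an empty gallery. */
int best_match(const float *query,
               const float gallery[][EMBEDDING_DIM], size_t n)
{
    int best = -1;
    float best_d = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float d = l2_distance_sq(query, gallery[i]);
        if (best < 0 || d < best_d) {
            best = (int)i;
            best_d = d;
        }
    }
    return best;
}
```

With `output-tensor-meta` enabled on the nvinfer SGIE (as in the sample above), the raw 128 floats are attached to the metadata, and a probe or downstream element can feed them straight into a routine like this.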

GPU would be perfect for performance and to avoid unnecessary memory copying. Is there some image manipulation API in DeepStream?

But I’m ready to begin with the CPU and the OpenCV API to get something viable working.

I wonder how such pre-processing should be performed conceptually with the DeepStream API and a GStreamer pipeline.

How do I make the transformation? How do I point the SGIE at the transformed images instead of the PGIE output bboxes?

Advice on both CPU- and GPU-based transformation techniques would be great.

NVIDIA has the NPP library. You can search for it.
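NPP’s warp-affine primitives (the `nppiWarpAffine_*` family) take a 2×3 coefficient matrix, so the remaining work is computing those coefficients from your landmarks. A minimal sketch for the two-landmark case (e.g. eye centres for face alignment; the function name and the two-point restriction are my assumptions, not part of NPP):

```c
/* 2x3 affine coefficients of the similarity transform (rotation +
 * uniform scale + translation) that maps two source landmarks onto two
 * destination landmarks: q = M*p + t.
 * To drive an inverse-mapping warp, swap the src and dst arguments. */
void similarity_from_two_points(const double src[2][2],
                                const double dst[2][2],
                                double coeffs[2][3])
{
    double dx = src[1][0] - src[0][0], dy = src[1][1] - src[0][1];
    double ex = dst[1][0] - dst[0][0], ey = dst[1][1] - dst[0][1];
    double denom = dx * dx + dy * dy;
    /* Complex division (ex + i*ey) / (dx + i*dy) yields rotation and
     * uniform scale as a + i*b. */
    double a = (ex * dx + ey * dy) / denom;
    double b = (ey * dx - ex * dy) / denom;
    coeffs[0][0] = a;  coeffs[0][1] = -b;
    coeffs[1][0] = b;  coeffs[1][1] = a;
    /* Translation chosen so src[0] lands exactly on dst[0]. */
    coeffs[0][2] = dst[0][0] - (a * src[0][0] - b * src[0][1]);
    coeffs[1][2] = dst[0][1] - (b * src[0][0] + a * src[0][1]);
}
```

The same coefficients work whether the warp itself runs on the GPU (NPP) or on the CPU (e.g. OpenCV’s `cv::warpAffine`).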

The original GPU buffer lives in streammux; nvinfer (PGIE, SGIE, …) and dsexample all get their input buffer from streammux.

You can refer to dsexample, “sources/gst-plugins/gst-dsexample”, to see how to manipulate the GPU buffer (copy it to the CPU, do the manipulation, then copy it back to the GPU). The SGIE will then get the bboxes from metadata together with this transformed buffer.
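For the CPU path inside a dsexample-style element, the manipulation applied to the mapped buffer could be the warp itself. In practice OpenCV’s `cv::warpAffine` (or NPP on the GPU path) would do this; the dependency-free nearest-neighbour sketch below just shows the idea for a single-channel 8-bit image (`warp_affine_u8` is an illustrative name):

```c
/* Nearest-neighbour affine warp of a single-channel 8-bit image.
 * coeffs maps DESTINATION coordinates back to SOURCE coordinates
 * (inverse mapping), so every destination pixel gets exactly one value;
 * out-of-bounds source positions are filled with 0. */
void warp_affine_u8(const unsigned char *src, int sw, int sh,
                    unsigned char *dst, int dw, int dh,
                    const double coeffs[2][3])
{
    for (int y = 0; y < dh; y++) {
        for (int x = 0; x < dw; x++) {
            int sx = (int)(coeffs[0][0] * x + coeffs[0][1] * y + coeffs[0][2] + 0.5);
            int sy = (int)(coeffs[1][0] * x + coeffs[1][1] * y + coeffs[1][2] + 0.5);
            dst[y * dw + x] = (sx >= 0 && sx < sw && sy >= 0 && sy < sh)
                                  ? src[sy * sw + sx]
                                  : 0;
        }
    }
}
```

On the GPU path the same 2×3 coefficients go to NPP’s warp-affine calls instead, avoiding the CPU round trip.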

Ok, I get the idea.

Thank you very much for your help.

Hi @pavel.shvetsov,

Did you achieve pre-processing of the bounding boxes and feeding them back into the pipeline? If yes, how did you solve it?

Regards

@pablo.vicente I’m sorry, but no. I’ve left the project and the company. I’m not working with DeepStream anymore.


Is there any way to get the SGIE output tensors in gst-dsexample?

Hi there, did you solve the issue? I am facing the same problem.

Not with the config-file pipeline. I had to build the pipeline manually and also use probes, so there was no need for gst-dsexample anymore.

Sorry, I’m a newbie.
Thanks a lot if you can share how to use probes in a pipeline. I know that to build a manual pipeline I can reference /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps, but I don’t know how to use probes. Please help me!