I’ve got the PGIE working and am able to obtain bboxes. My questions are:
How do I perform image pre-processing (alignment, affine transformation) on the regions obtained from the primary detector before passing them to the SGIE? And how do I pass the transformed regions to the secondary engine (not just the bboxes and the original image)?
How do I obtain the result of a secondary custom inference engine if it is not a classifier, not a detector, and not a segmenter, and its output is a vector of 128 floats? (I will later compute the distance between the resulting vector and a pre-calculated set of vectors from disk.)
Please point me in the right direction for implementing such a pipeline.
Thanks.
output is a vector of 128 floats
You can get any TensorRT output tensor. Please refer to the sample “sources/apps/sample_apps/deepstream-infer-tensor-meta-test”.
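For illustration, a minimal sketch of reading a raw SGIE output tensor (your 128-float embedding) from object-level user meta in a pad probe downstream of the SGIE. It assumes output-tensor-meta=1 is set in the SGIE nvinfer config, that the embedding is the first output layer, and that header names match your DeepStream version; check the referenced sample for the exact details.

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
sgie_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  /* Walk frames -> objects -> object-level user meta. */
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user; l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
          continue;
        NvDsInferTensorMeta *tensor_meta =
            (NvDsInferTensorMeta *) user_meta->user_meta_data;
        /* Assumption: output layer 0 is the 128-float embedding;
         * out_buf_ptrs_host is only filled when output-tensor-meta=1. */
        float *embedding = (float *) tensor_meta->out_buf_ptrs_host[0];
        /* ...compute distance between 'embedding' and your
         * pre-calculated gallery vectors here... */
        (void) embedding;
      }
    }
  }
  return GST_PAD_PROBE_OK;
}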
The original GPU buffer is in streammux; nvinfer (PGIE, SGIE, …) and dsexample all get their input buffer from streammux.
You can refer to dsexample “sources/gst-plugins/gst-dsexample” to see how to manipulate the GPU buffer (copy it to the CPU, do the manipulation, then copy it back to the GPU). The SGIE will then get the bbox from the metadata together with this transformed buffer.
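A rough sketch of that pattern, modeled on gst-dsexample: map the batched NvBufSurface out of the GstBuffer, sync it to the CPU, wrap one frame in a cv::Mat to apply the affine alignment, then sync back to the GPU. It assumes the buffer has already been converted to RGBA (as dsexample does via nvvideoconvert); field names and plane handling can vary by DeepStream version, and error handling is trimmed.

#include <gst/gst.h>
#include <opencv2/imgproc.hpp>
#include "nvbufsurface.h"

static gboolean
transform_frame (GstBuffer *inbuf, guint batch_idx)
{
  GstMapInfo in_map_info;
  if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READWRITE))
    return FALSE;
  NvBufSurface *surface = (NvBufSurface *) in_map_info.data;

  /* Map this batch entry for CPU access (plane 0 of an RGBA surface). */
  if (NvBufSurfaceMap (surface, batch_idx, 0, NVBUF_MAP_READ_WRITE) != 0) {
    gst_buffer_unmap (inbuf, &in_map_info);
    return FALSE;
  }
  NvBufSurfaceSyncForCpu (surface, batch_idx, 0);

  NvBufSurfaceParams *params = &surface->surfaceList[batch_idx];
  cv::Mat frame (params->height, params->width, CV_8UC4,
                 params->mappedAddr.addr[0], params->pitch);

  /* Hypothetical alignment: in practice you would build the affine
   * matrix from your landmarks and warp only the detected region. */
  cv::Mat M = cv::getRotationMatrix2D (
      cv::Point2f (params->width / 2.f, params->height / 2.f), 0.0, 1.0);
  cv::Mat warped;
  cv::warpAffine (frame, warped, M, frame.size ());
  warped.copyTo (frame);

  /* Push the CPU-side changes back to the GPU buffer. */
  NvBufSurfaceSyncForDevice (surface, batch_idx, 0);
  NvBufSurfaceUnMap (surface, batch_idx, 0);
  gst_buffer_unmap (inbuf, &in_map_info);
  return TRUE;
}

If you place this in a dsexample-style plugin between the PGIE and the SGIE, the downstream SGIE sees the transformed pixels while the bbox metadata from the PGIE travels along untouched.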
Sorry, I’m a newbie.
Thanks a lot if you can show me how to use probes in a pipeline. I know that to build a pipeline manually I can reference /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps, but I don’t know how to use probes. Please help me!
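In case it helps later readers, a minimal sketch of attaching a buffer probe, in the style of the sample apps (deepstream-test1 attaches one to the OSD sink pad). The element name "nvosd" is whatever you passed to gst_element_factory_make() in your own app.

#include <gst/gst.h>

/* Called once for every buffer that flows through the probed pad. */
static GstPadProbeReturn
buffer_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  /* Inspect or modify DeepStream metadata on 'buf' here, e.g. via
   * gst_buffer_get_nvds_batch_meta (buf). */
  (void) buf;
  return GST_PAD_PROBE_OK;  /* let the buffer continue downstream */
}

/* Attach the probe to the sink pad of one element in the pipeline. */
static void
attach_probe (GstElement *pipeline)
{
  GstElement *osd = gst_bin_get_by_name (GST_BIN (pipeline), "nvosd");
  GstPad *sink_pad = gst_element_get_static_pad (osd, "sink");
  gst_pad_add_probe (sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                     buffer_probe_cb, NULL, NULL);
  gst_object_unref (sink_pad);
  gst_object_unref (osd);
}

Call attach_probe() after building the pipeline but before setting it to PLAYING; the callback then fires for every buffer reaching that pad.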