Maintain aspect ratio property

DeepStream 6.4 on dGPU

I have a pipeline that looks like this:
streammux -> pgie -> tracker -> sgie1 -> sgie2 -> sink
sgie2 is a model that takes images as input and generates embeddings as output.
I ran the following tests.
[1] I changed the streammux resolution to R1, then passed Image_One.jpg through the pipeline. I got an embedding E1.
[2] I changed the streammux resolution to R2, then passed the same Image_One.jpg through the pipeline. I got an embedding E2.

What I observe is that L2_distance(E1, E2) is beyond my acceptable limit.
I understand that this can be a model problem as well, but I want to set up all the configurable properties in a way that the above-mentioned effect is reduced as much as possible.
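For reference, the comparison I am making is just the L2 norm of the difference between the two embedding vectors; a minimal sketch (the 512-dimensional dummy vectors are only placeholders for E1 and E2):

import numpy as np

# E1 and E2 stand in for the embeddings from tests [1] and [2] (dummy values here).
E1 = np.random.rand(512).astype(np.float32)
E2 = np.random.rand(512).astype(np.float32)

l2_distance = np.linalg.norm(E1 - E2)
print("L2 distance:", l2_distance)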

So, is there any way to do that?
Can properties like maintain-aspect-ratio and symmetric-padding be used?
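For context, I mean the keys that would go into sgie2's nvinfer config file; a minimal sketch of just that part (the rest of the file stays as it is for my model, and the values below are only what I would try):

[property]
# Scale the crop without distortion and pad equally on both sides (assumed settings to try).
maintain-aspect-ratio=1
symmetric-padding=1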

Or you can suggest something else for my tests.

thanks

The nvstreammux "width" and "height" properties are used to combine the multiple input streams into one batch. In your case, there is no point in setting "width" and "height" to values different from your input image's original resolution. Please set the nvstreammux "width" and "height" to match the input image's original resolution.
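For example, a minimal Python sketch of setting those properties (1920x1080 is only an assumed source resolution):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Match the muxer output resolution to the source's native resolution so that
# nvstreammux does not add an extra scaling step.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)   # assumed camera width
streammux.set_property("height", 1080)  # assumed camera height
streammux.set_property("batch-size", 1)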

Yes, I'm aware of that. My question was to understand how a change in the resolution of the input image/streammux is related to the output of the embedding-generation model.

And I have two different pipelines running at two different resolutions.

Imagine a scenario where there are two cameras from which I'm getting RTSP streams, and I'm running two different pipelines [I know that I can feed both RTSP streams into the same pipeline instance, but for this particular use case we can't do that]. So here we are with two pipeline instances, running with two different cameras at different resolutions.

The pipelines are doing face recognition. The same people pass through both cameras.

What I want to do is minimize the effect of the resolution difference on the embeddings generated by the model [again, I'm aware that this is mostly a model-related thing]. I'm asking: what should I do to make sure such effects are minimized?

The model is trained on image data that is scaled only once. DeepStream has many elements that do scaling, such as nvvideoconvert, nvinfer, nvstreammux, … Mathematically, the more an image is rescaled on top of earlier rescaling (most scaling algorithms involve floating-point calculations; see Image scaling - Wikipedia), the more the image differs from the kind of image used in training. If the input is different, the output will be different.
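You can see the effect of the extra scaling step even outside DeepStream with a quick experiment; a sketch with OpenCV (the 112x112 network input size and the 1280x720 intermediate resolution are only assumptions):

import cv2
import numpy as np

img = cv2.imread("Image_One.jpg")

# Scale once, directly to the (assumed) network input size.
once = cv2.resize(img, (112, 112), interpolation=cv2.INTER_LINEAR)

# Scale to an (assumed) intermediate resolution first, then to the network input size.
intermediate = cv2.resize(img, (1280, 720), interpolation=cv2.INTER_LINEAR)
twice = cv2.resize(intermediate, (112, 112), interpolation=cv2.INTER_LINEAR)

# The extra scaling step changes the pixel values that reach the model,
# so the embedding it produces will also change.
diff = np.abs(once.astype(np.float32) - twice.astype(np.float32))
print("mean absolute pixel difference:", diff.mean())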

Okay, got it. Thanks.
