Limitation behind "VIC Configuration failed image scale factor" error

I read this link. It seemed that the smallest object the secondary GIE can infer on has to be at least 1/16 of its input size. However, I currently have objects smaller than that which I would like to detect. How can I remove this limitation, or is there a workaround?

Many thanks.

My setup is the following:
Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA 10.2

It seems like it's a Jetson hardware limitation. You can run on a dGPU to overcome this.

As for how the hardware limits the input size, would you care to explain a little further? Both our PGIE and SGIE are YOLOv3.

Currently, the input size of the SGIE is 320x320, so the smallest object it can accept is 20x20 (320/16 = 20). We need to detect objects around 30x12; upscaling a 12 px side to 320 px is a ~26.7x factor, well past the 16x limit. These objects are detectable on our 1080 Ti, or when we simply run inference with Darknet on a CPU.

Our entire project runs on a Jetson Xavier, so changing the hardware platform is quite difficult now. Could anyone give me some advice on how to pass images this small (~30x12) to the SGIE (e.g. by modifying the DeepStream source code?), or is it simply impossible with a Jetson Xavier/DeepStream combination?

Thanks a lot.

You can change scaling-compute-hw to GPU so it can process smaller objects.
Refer to Gst-nvinfer — DeepStream 6.1.1 Release documentation.
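
For anyone finding this later, here is a minimal sketch of what that change looks like in the nvinfer config file. This assumes a DeepStream release whose Gst-nvinfer supports scaling-compute-hw (it is documented in the 6.1.1 docs linked above); everything besides that one key is an illustrative placeholder:

```ini
[property]
# Illustrative SGIE settings; model/engine paths omitted.
process-mode=2          # 2 = secondary mode: infer on objects found by the PGIE
# Select the hardware used for nvinfer's input scaling/conversion:
# 0 = platform default (VIC on Jetson), 1 = GPU, 2 = VIC.
# Using the GPU avoids the VIC's 16x scale-factor limit on small objects.
scaling-compute-hw=1
```

The trade-off, as noted further down in this thread, is that the scaling work now competes with inference for GPU time.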


Thanks, it worked! However, there seems to be a drop in inference speed. Would changing from VIC to GPU slow down performance on a Xavier?

I think that depends on your use case. If your GPU is already heavily occupied by other components, then making this change will slow down performance, since the scaling work moves off the dedicated VIC engine and onto the GPU.
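
A rough way to check this on a Xavier, assuming you can run tools on the device, is to watch the GPU utilization (the GR3D figures) that tegrastats prints while the pipeline is running: if the GPU is already close to saturated, moving the VIC's scaling work onto it will cost throughput, whereas a mostly idle GPU should absorb it with little impact.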