Is it possible to use dynamic batching at inference time?
My primary GIE (PGIE) produces a varying number of detections per frame. Is it possible to set a dynamic batch size on the secondary GIE (SGIE)?
If so, how do I set this in the config file, and are any changes needed in the ONNX model?
My ONNX model's input shape is -1x24x94x3 (the -1 marks a dynamic batch dimension).
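For the config side, a sketch of the relevant `[property]` keys in the SGIE's Gst-nvinfer config file (file paths, `gie-unique-id` values, and the batch size of 10 are placeholders; with a dynamic ONNX input, `batch-size` acts as the maximum batch the engine is built for):

```ini
[property]
# Placeholder paths -- point these at your actual model/engine files
onnx-file=sgie_model.onnx
model-engine-file=sgie_model.onnx_b10_gpu0_fp16.engine
# Max batch for the dynamic engine; actual batches of 1..10 are fed at runtime
batch-size=10
# 2 = secondary mode: operate on objects detected by another GIE
process-mode=2
# Hypothetical IDs: this SGIE consumes output of the GIE with id 1
gie-unique-id=2
operate-on-gie-id=1
```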
I get this message while the engine is being created:
```
INFO: [FullDims Engine Info]: layers num: 2
0 INPUT  kFLOAT input:0         24x94x3 min: 1x24x94x3 opt: 10x24x94x3 Max: 10x24x94x3
1 OUTPUT kFLOAT d_predictions:0 20      min: 0         opt: 0          Max: 0
```
Is this saying that the secondary engine's input batch can vary from 1 to 10?
Yes. The engine was built with an optimization profile of min 1x24x94x3, opt 10x24x94x3, max 10x24x94x3, so the SGIE input can take any batch size from 1 to 10 at runtime; 10 (the opt shape) is the batch size TensorRT tuned the kernels for.
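To make the meaning of that profile line concrete, here is a small stdlib-only sketch that parses a TensorRT "FullDims Engine Info" line and pulls out the min/opt/max shapes; the first axis of each shape is the batch dimension, which is the part that may vary at runtime:

```python
import re

def parse_profile(line: str) -> dict:
    """Parse a TensorRT FullDims engine-info line into min/opt/max shapes.

    Works on lines of the form:
    '0 INPUT kFLOAT input:0 24x94x3 min: 1x24x94x3 opt: 10x24x94x3 Max: 10x24x94x3'
    """
    shapes = {}
    # Grab each 'min: ...', 'opt: ...', 'Max: ...' pair; the static '24x94x3'
    # before them has no key prefix, so it is not matched.
    for key, dims in re.findall(r"(min|opt|Max):\s*([\dx]+)", line):
        shapes[key.lower()] = tuple(int(d) for d in dims.split("x"))
    return shapes

line = "0 INPUT kFLOAT input:0 24x94x3 min: 1x24x94x3 opt: 10x24x94x3 Max: 10x24x94x3"
profile = parse_profile(line)
# First axis is the batch dimension: any batch in [min, Max] is valid at runtime.
print(profile["min"][0], profile["max"][0])  # prints: 1 10
```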