I am using Transfer Learning Toolkit v2.0_py3 and training a classifier with input_image_size: "3,145,350". After generating the engine with the command below:

tlt-converter final_model_282.etlt -k tlt_encode -c final_model_int8_cache_282.bin -o predictions/Softmax -d 3,145,350 -i nchw -e Age_ep177_tlt7.engine -m 64 -t int8 -b 64

and then using it in my DeepStream application, I get an error message.
However, with the parameter -d 3,224,224 during engine file generation, the engine file was generated successfully and it runs fine in my DeepStream application as well.
What is the reason for that? Can you please suggest where the gap is?
Also, it is necessary to set input-dims explicitly in your DeepStream config file.
input-dims=c;h;w;0 # where c = number of channels, h = height of the model input, w = width of the model input, and 0 implies CHW format
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax #output node name for classification
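Putting these keys together, a minimal sketch of the relevant [property] section of the Gst-nvinfer config might look like the following. This assumes the file names from the tlt-converter command above are reused as-is; adjust paths and values to your actual setup:

```
[property]
# Assumed file names, taken from the tlt-converter command above
tlt-encoded-model=final_model_282.etlt
tlt-model-key=tlt_encode
int8-calib-file=final_model_int8_cache_282.bin
model-engine-file=Age_ep177_tlt7.engine
# c;h;w;0 -> 3 channels, height 145, width 350, CHW ordering
# (must match the -d value used when generating the engine)
input-dims=3;145;350;0
uff-input-blob-name=input_1
output-blob-names=predictions/Softmax
# network-mode=1 selects INT8, matching the -t int8 flag above
network-mode=1
```

The key point is that input-dims must agree with the -d value passed to tlt-converter, and both must match the input_image_size the model was actually trained with.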