I have trained an image classification model in Keras/TF, converted it to ONNX, and used my recogination.py file for inference, but the results do not match. I think there is some kind of preprocessing involved in jetson.inference.imagenet. I looked into imageNet.cpp inside the c folder but didn't understand much. Can you tell me what kind of preprocessing I need to include for my model before training so that the results match between Keras and the Jetson Nano?
Hi,
When calling the Classify() function, a C++ function called PreProcess() is invoked, as shown below:
https://github.com/dusty-nv/jetson-inference/blob/master/c/imageNet.cpp#L394
...
if( CUDA_FAILED(cudaTensorNormMeanRGB(image, format, width, height,
                mInputs[0].CUDA, GetInputWidth(), GetInputHeight(),
                make_float2(0.0f, 1.0f),              // rescale pixels to [0, 1]
                make_float3(0.485f, 0.456f, 0.406f),  // subtract ImageNet mean
                make_float3(0.229f, 0.224f, 0.225f),  // divide by ImageNet std-dev
                GetStream())) )
...
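So the default pipeline resizes the image to the network input size, rescales the pixel values from [0, 255] to [0, 1], subtracts the ImageNet mean, divides by the ImageNet standard deviation, and reorders the data to planar NCHW. A minimal Python sketch of the equivalent preprocessing is below (the function name preprocess_like_jetson and the use of tf.image.resize are illustrative choices, not part of jetson-inference); applying the same rescaling and normalization to your Keras inputs during training should make the two results line up.

import tensorflow as tf

def preprocess_like_jetson(img_rgb, input_w, input_h):
    # img_rgb: HWC uint8 RGB image
    x = tf.cast(tf.image.resize(img_rgb, (input_h, input_w)), tf.float32)
    x = x / 255.0                                              # rescale to [0, 1]
    x = (x - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]    # ImageNet mean / std
    x = tf.transpose(x, (2, 0, 1))                             # HWC -> CHW (planar)
    return tf.expand_dims(x, 0).numpy()                        # add batch dim -> NCHW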
Thanks.
What exactly does this preprocessing do, and can you help me do it in Keras? That would be helpful. I have already gone through the code you provided, but I don't know what changes I need, or whether there is a way to remove the preprocessing, since my model doesn't have any preprocessing, just an image size of 512*512. Thanks.
Hi,
Since TensorRT usually expects float NCHW input, the preprocessing is still required for the format conversion.
If no normalization and mean-subtraction are needed, you can turn them off by passing values that make them a no-op.
For example:
https://github.com/dusty-nv/jetson-inference/blob/master/c/tensorConvert.cu#L199
if( CUDA_FAILED(cudaTensorNormMeanRGB(image, format, width, height,
                mInputs[0].CUDA, GetInputWidth(), GetInputHeight(),
                make_float2(0.0f, 255.0f),        // keep the raw [0, 255] pixel range
                make_float3(0.0f, 0.0f, 0.0f),    // no mean subtraction
                make_float3(1.0f, 1.0f, 1.0f),    // no std-dev division
                GetStream())) )
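With those values the conversion only resizes the image, casts it to float (keeping the raw 0-255 range), and reorders it to planar NCHW, which matches a Keras model trained directly on raw pixels. A rough sketch for sanity-checking the exported ONNX model on the desktop is below (model.onnx, test.jpg, and preprocess_raw are just example names; onnxruntime is assumed to be installed):

import numpy as np
import onnxruntime as ort
import tensorflow as tf

def preprocess_raw(img_rgb, input_w, input_h):
    # resize only; keep raw 0-255 values (range 0-255, mean 0, std 1)
    x = tf.cast(tf.image.resize(img_rgb, (input_h, input_w)), tf.float32)
    x = tf.transpose(x, (2, 0, 1))        # HWC -> CHW (planar)
    return tf.expand_dims(x, 0).numpy()   # add batch dim -> NCHW

sess = ort.InferenceSession("model.onnx")               # example path
img = np.asarray(tf.keras.utils.load_img("test.jpg"))   # example image, loaded as RGB
x = preprocess_raw(img, 512, 512)                       # 512x512 as mentioned above
print(sess.run(None, {sess.get_inputs()[0].name: x})[0])

If the outputs from this script and from the Jetson match but both differ from Keras, the remaining gap is in how the training data was preprocessed rather than in TensorRT.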
Thanks.