Deploy caffe-lenet on TX2

Hi, I deployed a trained Caffe LeNet (image classification) on the TX2, and the inference result was wrong for every test image.
I then found that the result is wrong whenever the input image size is less than 128. Are there any parameters I should revise on the TX2? Thank you!

Hi, which version of Caffe are you using? Have you tried TensorRT? It's the optimal API on Jetson, with FP16 acceleration, graph optimizations, and more.

Here is an example of deploying TensorRT/DIGITS on Jetson TX1/TX2 for image classification and vision: [url]https://github.com/dusty-nv/jetson-inference[/url]

I trained the LeNet using DIGITS and deployed the trained model on the Jetson TX2 following this instruction: https://github.com/dusty-nv/jetson-inference#loading-custom-models-on-jetson.
Here is my trained network. Please take a look, thank you!

lenet.tar.gz (15 MB)
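For reference, loading a custom DIGITS-trained model in jetson-inference is done by passing the network files on the command line. The sketch below follows the flag names from the jetson-inference README linked above; the directory name, snapshot iteration, and blob names are assumptions (check your deploy.prototxt for the actual input/output layer names).

```shell
# Hypothetical invocation of jetson-inference's imagenet-console with a
# custom DIGITS-trained LeNet; paths and the snapshot iteration are
# placeholders for the files inside the extracted lenet.tar.gz.
MODEL_DIR=./lenet

./imagenet-console test.jpg output.jpg \
    --prototxt=$MODEL_DIR/deploy.prototxt \
    --model=$MODEL_DIR/snapshot_iter_XXXX.caffemodel \
    --labels=$MODEL_DIR/labels.txt \
    --input_blob=data \
    --output_blob=softmax
```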

I think this is due to mismatched pixel mean values between training and inference: DIGITS subtracts the dataset mean before training, so the deployed code must subtract the same mean. Please refer to the following post.

[url]https://devtalk.nvidia.com/default/topic/1023944/loading-custom-models-on-jetson-tx2/#5209641[/url]