Inference with custom models on Jetson Nano

I’m having some problems with my Jetson Nano.

  1. Classification
    I trained MobileNetV1 and MobileNetV2 classification models with Keras.
    I converted both models from .h5 to .onnx with OnnxTools and ran inference by modifying “sampleOnnxMNIST” to use my models (a rough sketch of the conversion is after this list).
    The input dimensions are 224x224 for both models.
    I get inference times of 48ms for MobileNetV1 and 56ms for MobileNetV2, which are quite different from the benchmarks declared here: Jetson Benchmarks | NVIDIA Developer

  2. Detection
    I trained a MobileNetV2 SSD detector with TensorFlow.
    I obtained a frozen .pb model and converted it to .uff.
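
For reference, the .h5 → .onnx conversion in point 1 was along these lines (a simplified sketch only: I’m assuming OnnxTools refers to onnxmltools, and the path and opset below are placeholders, not my exact values):

# Simplified sketch of the Keras .h5 -> .onnx conversion (point 1).
# Assumption: OnnxTools = onnxmltools; path and target_opset are placeholders.
import onnxmltools
from keras.models import load_model

model = load_model('mobilenet_v2_224.h5')                       # placeholder path
onnx_model = onnxmltools.convert_keras(model, target_opset=10)  # opset is a guess
onnxmltools.utils.save_model(onnx_model, 'mobilenet_v2_224.onnx')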

Now with sampleUffSSD I’m having these errors:
[TRT] UffParser: Parser error: Conv1_pad_1/Pad/paddings: Invalid weights types when converted. Trying to convert from INT32 To INT8
with this:
if (!parser->parse(uffFile, *network, nvinfer1::DataType::kINT8))
I tried changing the data type to kINT32, kFLOAT or kHALF, but in those cases an exception occurs:
uff/UffParser.cpp:2134: std::shared_ptr UffParser::parsePad(const uff::Node&, const Fields&, NodesMap&): Assertion `nbDims == 4’ failed.

Can anybody please help me?

Thanks.

Hi,

1.
Have you maximized the Nano performance first?

sudo jetson_clocks.sh

As you can see, the MobileNet-v2 benchmark uses the TensorFlow framework.
You can reproduce the result with the steps shared here:
https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/

2.
Have you updated the output class size?

static constexpr int OUTPUT_CLS_SIZE = 91;

Thanks.

  1. Yes, I maximized the Nano performance → with your MobileNetV2 (the sample from the benchmark) the performance is 25/26 ms per inference. With my models the performance is 48ms (MobileNetV1) and 56ms (MobileNetV2). In my case the input image is 224x224, so smaller than your 300x300.
    Am I doing something wrong?

  2. Yes. Maybe I’m doing something wrong in the .pb to .uff conversion? For reference, the conversion flow is roughly the sketch below.
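
(Simplified sketch only: I’m assuming the uff converter package that ships with TensorRT, and the paths and output node name below are placeholders, not my exact ones.)

# Simplified .pb -> .uff conversion sketch for the SSD graph.
# Assumption: the TensorRT "uff" converter package; paths and the
# output node name are placeholders, not my exact values.
# As far as I understand, sampleUffSSD also relies on a graphsurgeon
# preprocessor (the config.py shipped with the sample) to replace ops
# the UFF parser cannot handle; it is passed via preprocessor below.
import uff

uff.from_tensorflow_frozen_model(
    'frozen_inference_graph.pb',      # placeholder path to the frozen graph
    output_nodes=['NMS'],             # placeholder output node name
    preprocessor='config.py',         # graphsurgeon config, as in sampleUffSSD
    output_filename='sample_ssd.uff')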

Thanks

For the classification issue, I solved it by converting my Keras model to .pb and then to .uff.
Now the inference time is around 20ms.
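
In case it helps anyone else, the flow is roughly the one below (a simplified sketch assuming TensorFlow 1.x, standalone Keras and the TensorRT uff converter; paths and node names are placeholders, not my exact ones):

# Simplified Keras .h5 -> frozen .pb -> .uff sketch.
# Assumptions: TensorFlow 1.x, standalone Keras, the TensorRT "uff"
# converter package; paths and node names are placeholders.
import uff
from keras import backend as K
from keras.models import load_model
from tensorflow.python.framework import graph_io, graph_util

K.set_learning_phase(0)                      # inference mode, drop training nodes
model = load_model('mobilenet_v2_224.h5')    # placeholder path
output_node = model.output.op.name           # e.g. the final Softmax node

# Freeze the variables into constants and write the frozen .pb
sess = K.get_session()
frozen_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [output_node])
graph_io.write_graph(frozen_graph, '.', 'mobilenet_v2_224.pb', as_text=False)

# Convert the frozen .pb to .uff for the TensorRT UFF parser
uff.from_tensorflow_frozen_model(
    'mobilenet_v2_224.pb',
    output_nodes=[output_node],
    output_filename='mobilenet_v2_224.uff')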

Thanks for your update.

It is good to know it works now.