I’m having some problems with my Jetson Nano.
-
Classification
I trained MobileNetV1 and MobileNetV2 classification models with Keras.
I converted both models from .h5 to .onnx with OnnxTools and ran inference by modifying “sampleOnnxMNIST” to use my models.
The input dimensions are 224x224 for both models.
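Roughly, the build step in my modified sample looks like this (a simplified sketch; the file path, workspace size and cleanup are placeholders, not my exact code):

#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Minimal logger required by the TensorRT builder and parser.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

// Build an engine from one of the exported .onnx files (path is a placeholder).
nvinfer1::ICudaEngine* buildEngineFromOnnx(const char* onnxPath)
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    // Parse the 224x224 MobileNet model exported from Keras.
    if (!parser->parseFromFile(onnxPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return nullptr;

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28); // 256 MB, placeholder value
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}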
I obtain an inference time of 48ms for MobileNetV1 and 56ms for MobileNetV2, which are quite different from the benchmarks declared here: Jetson Benchmarks | NVIDIA Developer
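The times are measured around the enqueue() call, more or less like this (a simplified sketch; the helper function, warm-up and iteration count are illustrative, not my exact code):

#include <chrono>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

// Times `iterations` inference calls on an already-built execution context and
// returns the average latency per call in milliseconds. `buffers` holds the
// device pointers in engine binding order.
double timeInference(nvinfer1::IExecutionContext& context, void** buffers, int iterations = 100)
{
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // One warm-up run so CUDA/cuDNN initialisation is not counted.
    context.enqueue(1, buffers, stream, nullptr);
    cudaStreamSynchronize(stream);

    auto start = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < iterations; ++i)
        context.enqueue(1, buffers, stream, nullptr);
    cudaStreamSynchronize(stream);
    auto end = std::chrono::high_resolution_clock::now();

    cudaStreamDestroy(stream);
    return std::chrono::duration<double, std::milli>(end - start).count() / iterations;
}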
-
Detection
I trained a MobileNetV2-SSD detector with TensorFlow.
I obtained a frozen .pb model and converted it to UFF.
Now with sampleUffSSD I’m getting this error:
[TRT] UffParser: Parser error: Conv1_pad_1/Pad/paddings: Invalid weights types when converted. Trying to convert from INT32 To INT8
with this:
if (!parser->parse(uffFile, *network, nvinfer1::DataType::kINT8))
I tried changing the data type to kINT32, kFLOAT, or kHALF, but in those cases an exception occurs:
uff/UffParser.cpp:2134: std::shared_ptr UffParser::parsePad(const uff::Node&, const Fields&, NodesMap&): Assertion `nbDims == 4' failed.
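For completeness, the rest of the parser setup follows sampleUffSSD, roughly like this (a sketch; the registered input/output names and the 300x300 dimensions are the sample’s defaults and may not match my converted graph):

#include "NvInfer.h"
#include "NvUffParser.h"

// Sketch of the sampleUffSSD-style parse step.
bool parseUffModel(const char* uffFile, nvinfer1::INetworkDefinition& network,
                   nvuffparser::IUffParser& parser)
{
    // The registered names must match the nodes of the converted UFF graph.
    parser.registerInput("Input", nvinfer1::DimsCHW(3, 300, 300), nvuffparser::UffInputOrder::kNCHW);
    parser.registerOutput("MarkOutput_0");

    // sampleUffSSD passes kFLOAT here; with kINT8 I get the weights-type error
    // above, and with kFLOAT/kHALF/kINT32 the parsePad assertion instead.
    return parser.parse(uffFile, network, nvinfer1::DataType::kFLOAT);
}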
Can anybody please help me?
Thanks.