I got my SSD MobileNet v2 running on a Jetson Nano, but unfortunately it is very slow (~2 FPS). I converted my frozen inference graph (.pb) to an ONNX model and call the session.run() function (see the attached script).
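For context, this is roughly what the attached script does (a minimal sketch; the input name is read from the model, and the 300x300 uint8 NHWC shape is my assumption for SSD MobileNet v2):

```python
import numpy as np
import onnxruntime as ort

# Create the session, preferring the CUDA execution provider.
session = ort.InferenceSession(
    "inference.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name

# Dummy frame; SSD MobileNet v2 from the TF object detection API
# typically takes a uint8 NHWC tensor (300x300 assumed here).
frame = np.zeros((1, 300, 300, 3), dtype=np.uint8)

outputs = session.run(None, {input_name: frame})
```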
I get an ONNX Runtime warning: "CUDA kernel not supported. Fallback to CPU execution provider for Op type: Conv node name: Conv1/BiasAdd", so it seems ONNX Runtime's CUDA execution provider does not support this operation and runs it on the CPU instead.
But would this fallback alone make inference that slow?

Attachments: solectrix_inference.py (1.5 KB), inference.onnx (4.1 MB)
Is there any way to optimize the inference to get to around 20 FPS?
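For example, would switching to the TensorRT execution provider help? This is what I had in mind (untested sketch; it assumes an onnxruntime build that includes TensorrtExecutionProvider):

```python
import onnxruntime as ort

# Prefer TensorRT, fall back to CUDA, then CPU.
session = ort.InferenceSession(
    "inference.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
```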
Thank you and have a great day