I have been stuck for weeks figuring out how to run inference with my custom CNN model on a Jetson Nano with a CSI camera (live image classification).
From what I have read on the internet, the Keras model must be converted into an ONNX model and then into a TensorRT model for the best performance during inference.
What I have done:
- Trained my custom CNN model.
- Converted my Keras model into an ONNX model.
- Converted my ONNX model into a TensorRT model (.pb, but I don't know if that is correct).
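For the Keras-to-ONNX step I used the tf2onnx converter, roughly like this (sketched as a small Python helper; the paths are illustrative and I am assuming the standard tf2onnx command line):

```python
# Build the tf2onnx CLI invocation used for the Keras -> ONNX step.
# Assumes tf2onnx is installed (pip install tf2onnx); paths are illustrative.
import subprocess

def keras_to_onnx_cmd(keras_path, onnx_path):
    """Return the tf2onnx command line for converting a Keras .hdf5 model."""
    return ["python", "-m", "tf2onnx.convert",
            "--keras", keras_path,
            "--output", onnx_path]

cmd = keras_to_onnx_cmd("HAZIQCNN.hdf5", "HAZIQCNN.onnx")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the conversion
```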
My questions are:
What is the exact format for a TensorRT model? I have seen various formats such as .trt, .pb, and .uff. Which format do I need for inference?
Is it possible to just use the ONNX model without converting it to a TensorRT model? What is the difference in performance (FPS, accuracy)?
Lastly, it would be great if there were any sample code for running inference with my custom CNN model on the Jetson with a CSI camera. The samples I have found only cover transfer-learning models.
I have tried to run inference with my ONNX model based on the tutorial Jetson AI Fundamentals - S3E3 - Training Image Classification Models on YouTube.
However, I encountered the following error:
[TRT] 4: [network.cpp::validate::2919] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load models/HAZIQCNN.onnx
[TRT] failed to load models/HAZIQCNN.onnx
[TRT] imageNet -- failed to initialize.
imagenet: failed to initialize imageNet
Any suggestions on what needs to be fixed here? The ONNX model was converted from my Keras .hdf5 model.
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:
1) Validate your model with the snippet below:
import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
HAZIQCNN.onnx (4.4 MB)
Hi, here is my ONNX model file.
For check_model.py, it seems everything is fine with my model.
For trtexec: is it possible to run my ONNX model using trtexec on imagenet?
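In case it helps, this is the trtexec invocation I was planning to try, sketched as a small Python helper (the --onnx, --saveEngine, and --verbose flags come from trtexec --help; the engine path is just illustrative):

```python
# Build the trtexec command for converting and timing the ONNX model.
# trtexec itself ships with TensorRT on the Jetson, so the actual run
# is left commented out here.
import subprocess

def trtexec_cmd(onnx_path, engine_path, verbose=True):
    """Return a trtexec invocation that builds and saves a TensorRT engine."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if verbose:
        cmd.append("--verbose")
    return cmd

cmd = trtexec_cmd("models/HAZIQCNN.onnx", "models/HAZIQCNN.trt")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run on the Jetson
```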