How to run image classification inference with a custom CNN model on a Jetson Nano and CSI camera

Greetings,

I have been stuck for weeks figuring out how to run inference with my custom CNN model on a Jetson Nano with a CSI camera (live image classification).

From what I have read on the internet, the Keras model must be converted into an ONNX model and then into a TensorRT model for the best performance during inference.
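For reference, the Keras-to-ONNX step can be done with tf2onnx along these lines (a minimal sketch; the 224x224x3 input shape, file names, and blob name here are placeholders, not taken from my actual model):

import tensorflow as tf
import tf2onnx

# Load the trained Keras model (placeholder path)
model = tf.keras.models.load_model("HAZIQCNN.hdf5")

# Pin the full input shape, including the batch dimension, so the
# exported ONNX graph has no dynamic dimensions
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input_0"),)

# Convert the model and write the ONNX file
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="HAZIQCNN.onnx")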

What I have done:

  1. Trained my custom CNN model.
  2. Converted my Keras model into an ONNX model.
  3. Converted my ONNX model into a TensorRT model (.pb, but I don't know if that's correct).

My questions are:

  1. What is the exact format for a TensorRT model? I have seen various formats such as .trt, .pb, and .uff. May I know which format I need for inference?

  2. Is it possible to just use the ONNX model without converting it to a TensorRT model? What is the difference in performance (FPS, accuracy)?

  3. Lastly, it would be great if there were any sample code for running inference with my custom CNN model on the Jetson with a CSI camera; I have only found samples for transfer-learning models. Something along the lines of the sketch below is what I am after.
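For illustration, I imagine something like this jetson-inference sketch (the model path, labels file, and input/output blob names below are my guesses, not verified):

import jetson.inference
import jetson.utils

# Load the custom ONNX classifier (paths and blob names are placeholders)
net = jetson.inference.imageNet(argv=[
    "--model=models/HAZIQCNN.onnx",
    "--labels=models/labels.txt",
    "--input-blob=input_0",
    "--output-blob=output_0"])

# Open the CSI camera and a display window
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()                    # grab a camera frame
    class_id, confidence = net.Classify(img)  # run the classifier
    display.Render(img)
    display.SetStatus("{:s} ({:.2f})".format(net.GetClassDesc(class_id), confidence))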

Thank you.

UPDATE

I have tried to run inference with my ONNX model following the tutorial Jetson AI Fundamentals - S3E3 - Training Image Classification Models on YouTube.

However, I encountered the following error:

[TRT] 4: [network.cpp::validate::2919] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load models/HAZIQCNN.onnx
[TRT] failed to load models/HAZIQCNN.onnx
[TRT] imageNet -- failed to initialize.
imagenet: failed to initialize imageNet

Any suggestions on what really needs to be fixed here? The ONNX model was converted from my Keras .hdf5 model.
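A quick way to check whether the exported graph really has a dynamic input dimension (which is what the error message suggests) would be something like this:

import onnx

model = onnx.load("models/HAZIQCNN.onnx")
for inp in model.graph.input:
    # dim_param is set (e.g. "N" or "unk__123") when a dimension is dynamic
    dims = [d.dim_param if d.dim_param else d.dim_value
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)

If the first (batch) dimension prints as a name instead of a number, re-exporting from Keras with a fully fixed input shape should remove the error.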

ENVIRONMENT VERSION
JETPACK 4.6.1
[L4T 32.7.1]
TensorRT 8.2.1
cuDNN 8.2.1
CUDA 10.2
VPI 1.2

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Load the ONNX model given on the command line
model = onnx.load(sys.argv[1])

# Raises an exception if the model is malformed
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing issues, please share the trtexec --verbose log for further debugging.
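For example, a command along these lines (the engine path is a placeholder; on JetPack, trtexec lives under /usr/src/tensorrt/bin):

/usr/src/tensorrt/bin/trtexec --onnx=HAZIQCNN.onnx --saveEngine=HAZIQCNN.trt --verbose

If the model has a dynamic input, also pass --shapes (for example --shapes=input_0:1x224x224x3, where the tensor name and shape are placeholders) so trtexec can build an optimization profile.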
Thanks!

HAZIQCNN.onnx (4.4 MB)

Hi, here is my ONNX model file.

For check_model.py, it seems everything is fine with my model.

For trtexec, is it possible to run my ONNX model using trtexec with imagenet?

Hi,

  1. The extension of the TensorRT model output is up to you. If you're using TF-TRT, the output will be a TensorFlow .pb; if you build a TensorRT engine by other methods, it will usually be saved with a .engine or .trt extension (see the engine-building sketch at the end of this reply).

  2. You can use the ONNX model directly as well, but using TensorRT will accelerate inference.

  3. We are moving this post to the Jetson Nano forum to get better help on samples.
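Regarding question 1, building a TensorRT engine from an ONNX model in Python while defining an optimization profile (which the earlier error message asks for) looks roughly like the sketch below; the input tensor name and shape are placeholders:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file (placeholder path)
with open("HAZIQCNN.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse the ONNX model")

# Define an optimization profile so dynamic input shapes are resolved
config = builder.create_builder_config()
profile = builder.create_optimization_profile()
profile.set_shape("input_0",          # placeholder tensor name
                  (1, 224, 224, 3),   # min shape
                  (1, 224, 224, 3),   # opt shape
                  (1, 224, 224, 3))   # max shape
config.add_optimization_profile(profile)

# Build and serialize the engine (TensorRT 8.x API)
engine = builder.build_serialized_network(network, config)
with open("HAZIQCNN.engine", "wb") as f:
    f.write(engine)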

Thank you.
