Inference with Jetson Nano

I have trained a model locally with TensorFlow and converted it to .onnx format, then used the code below (based on this example) for inference:
jetson-inference/imagenet-example-python-2.md at master · dusty-nv/jetson-inference · GitHub

import jetson.inference
import jetson.utils

import argparse

# parse the image filename from the command line
parser = argparse.ArgumentParser()
parser.add_argument("filename", type=str, help="filename of the image to process")

args = parser.parse_args()

# load the image from disk
img = jetson.utils.loadImage(args.filename)

# load the custom ONNX model and labels through the imageNet wrapper
net = jetson.inference.imageNet('alexnet', ['--model=/home/sudhir/Downloads/AJAX/ResNet18.onnx', '--input_blob=input_0', '--output_blob=output_0', '--labels=/home/sudhir/Downloads/AJAX/label.txt'])

# classify the image and print the top result
class_idx, confidence = net.Classify(img)

class_desc = net.GetClassDesc(class_idx)

print("image is recognized as '{:s}' (class #{:d}) with {:f}% confidence".format(class_desc, class_idx, confidence * 100))
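
For reference, this is how such a script is typically invoked (the script name and image file here are placeholders, not from the original post):

$ python3 my-recognition.py test_image.jpg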

and was getting errors like this:

failed to parse ONNX model '/home/sudhir/Downloads/AJAX/ResNet18.onnx'
[TRT] device GPU, failed to load /home/sudhir/Downloads/AJAX/ResNet18.onnx
[TRT] failed to load /home/sudhir/Downloads/AJAX/ResNet18.onnx
[TRT] imageNet -- failed to initialize.

How can I run inference with my own .onnx model on the Jetson Nano?

Hi,

Since jetson-inference uses TensorRT as its inference engine, could you try your model with TensorRT first?

$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model]
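
If the parse succeeds, trtexec can also serialize the built engine to disk with the --saveEngine flag so it does not have to be rebuilt each time (the output file name below is just a placeholder):

$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --saveEngine=model.trt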

Thanks.

I have executed the $ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] command and it passed, and it also saves the .trt model. Now how can I run inference with the .trt model?
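
Below is a minimal sketch of loading and running a serialized .trt engine with the TensorRT Python API and PyCUDA (both ship with JetPack). The engine path, fixed input shape, binding order, and dummy input are assumptions for a single-input, single-output classifier built by trtexec, and it uses the binding-index API of the TensorRT versions found in JetPack 4.x; treat it as a sketch, not a definitive implementation:

import numpy as np
import tensorrt as trt
import pycuda.autoinit          # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# deserialize the engine saved by trtexec (path is an assumption)
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# assume binding 0 is the input and binding 1 is the output
input_shape = engine.get_binding_shape(0)     # e.g. (1, 3, 224, 224)
output_shape = engine.get_binding_shape(1)

# dummy input; replace with a preprocessed image (NCHW, float32)
host_input = np.random.random(trt.volume(input_shape)).astype(np.float32)
host_output = np.empty(trt.volume(output_shape), dtype=np.float32)

# allocate device buffers and copy the input to the GPU
d_input = cuda.mem_alloc(host_input.nbytes)
d_output = cuda.mem_alloc(host_output.nbytes)
cuda.memcpy_htod(d_input, host_input)

# run inference and copy the result back to the host
context.execute_v2(bindings=[int(d_input), int(d_output)])
cuda.memcpy_dtoh(host_output, d_output)

print("predicted class:", int(np.argmax(host_output)))

Alternatively, once the ONNX model parses correctly, the jetson.inference.imageNet wrapper used above builds and caches its own TensorRT engine from the .onnx file, so a separate .trt file is not needed for that path.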
