Need information about what can be done using JetPack

Description

I wanted to know the following information. Is it possible to do the following using JetPack?

  1. Create an ONNX file from a pretrained model
  2. Convert the ONNX file to TensorRT
  3. Perform inference on the NVIDIA board

Please let me know.

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

We are moving this post to the Jetson related forum to get better help.

Dear @trivedi.nagaraj,

  1. Create an ONNX file from the pretrained model

JetPack does not provide a script as such to convert your model to ONNX. We expect the model to already be in ONNX format before using the TensorRT framework.

  2. Convert the ONNX file to TensorRT

We have the trtexec tool and the TensorRT APIs for ONNX → TRT conversion. Please check the TensorRT samples (https://github.com/NVIDIA/TensorRT/tree/release/8.6/samples/sampleOnnxMNIST).
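The simplest route is a one-line trtexec command on the board, e.g. `trtexec --onnx=model.onnx --saveEngine=model.engine`. Programmatically, the TensorRT Python API path might look like the sketch below (file names are placeholders, and it assumes a TensorRT 8.x installation with an explicit-batch ONNX model). It cannot run without a TensorRT install, so treat it as an outline, not a verified script.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# ONNX models require an explicit-batch network definition.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Parse the ONNX file produced in step 1.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

# Build and serialize the engine (1 GiB workspace as an example limit).
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
serialized_engine = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```

Note that TensorRT engines are specific to the GPU and TensorRT version they were built on, so build the engine on the Jetson itself rather than on your training machine.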

  3. Perform inference on the NVIDIA board

Please check the jetson-inference repository (https://github.com/dusty-nv/jetson-inference), the Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson, to see if it helps.
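With that repository's Python bindings installed on the Jetson, a classification run can be as short as the following sketch. It assumes the module names used in the current jetson-inference Python examples (`jetson_inference` / `jetson_utils`; older releases used `jetson.inference`), and "my_image.jpg" is a placeholder for your own image.

```python
from jetson_inference import imageNet
from jetson_utils import loadImage

# Load a pretrained classification network; jetson-inference downloads
# and TensorRT-optimizes it on first use.
net = imageNet("googlenet")

img = loadImage("my_image.jpg")          # placeholder image path
class_idx, confidence = net.Classify(img)

print(net.GetClassDesc(class_idx), confidence)
```

This only runs on a Jetson with the jetson-inference project built and installed, so it is meant as an outline of the workflow rather than a portable script.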

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.