Hi all, just as there is sample code for MNIST handwritten digit classification (sample.py and model.py) in the directory
/usr/src/tensorrt/samples/python/network_api_pytorch_mnist/
I would like similar sample code for object detection and semantic segmentation on the Jetson Nano board.
All of this is needed to experiment with how inference of a pretrained model can be accelerated on the NVIDIA GPU.
The sample code should come in the following flavors:
Flavor 1 - It should convert any pretrained model (not necessarily built by NVIDIA) to the ONNX format, then to a TensorRT engine, and perform inference.
Flavor 2 - Like the sample in /usr/src/tensorrt/samples/python/network_api_pytorch_mnist/, it should create and train a model, take the trained model’s weights, rebuild the network with the TensorRT network API, build the engine, and perform inference.
Sample code for these two requirements would help me conduct further experiments.
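For Flavor 1, the ONNX-to-engine step can be sketched roughly as below. This is a minimal sketch, assuming the TensorRT 7.x Python API that ships with JetPack on the Nano; the path "model.onnx" is a placeholder for your exported model.

```python
def build_engine(onnx_path):
    """Parse an ONNX model and return a built TensorRT engine (or None on error)."""
    import tensorrt as trt  # provided by JetPack on the Nano; imported lazily here

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # ONNX models require an explicit-batch network definition.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB of build workspace
    return builder.build_engine(network, config)
```

On the Nano the same conversion can also be done from the command line with /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt.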
2. TensorRT uses ONNX as an intermediate model format.
Most public training frameworks, like PyTorch or TensorFlow, support converting the trained model into ONNX.
You can also check jetson-inference for transfer learning on Jetson: