How to use a .trt file for inference on Jetson Nano

I have converted my model to ONNX format and then converted that to .trt.
Now I want to use it for inference. How can I run inference with it on the Jetson Nano? Please guide.

Thank You

Hi,

Please note that a TensorRT engine is not portable across devices or TensorRT versions.
So you will need to generate the engine file directly on the Jetson Nano.
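
If it helps, below is a minimal sketch of building and serializing an engine from an ONNX model on the device itself, assuming a TensorRT 7-era Python API (as shipped with JetPack); the model.onnx / model.trt paths are placeholders, and the trtexec tool under /usr/src/tensorrt/bin is an alternative way to do the same thing.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, trt_path):
    # ONNX models must be parsed into an explicit-batch network
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; keep the workspace small on the Nano

    engine = builder.build_engine(network, config)
    if engine is not None:
        with open(trt_path, 'wb') as f:
            f.write(engine.serialize())  # write the serialized engine to disk
    return engine

build_engine('model.onnx', 'model.trt')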

Here is a code sample for running a serialized engine file:

// Deserialize the engine from an in-memory buffer using an nvinfer1::IRuntime* ("infer")
nvinfer1::ICudaEngine* engine = infer->deserializeCudaEngine(engine_stream, engine_size);
nvinfer1::IExecutionContext* context = engine->createExecutionContext();
...
// Enqueue inference (batch size 1) on a CUDA stream with pre-allocated input/output bindings
context->enqueue(1, mBindings, mStream, NULL);

You can find more details in our /usr/src/tensorrt/samples/sampleUffFasterRCNN sample.

Thanks.

Thank you,

I have used onnx2trt on the Jetson Nano itself to generate the .trt file.
I was able to load the .trt engine, but now I want to run inference with it and I am not sure how.

import os
import tensorrt as trt

def load_engine(trt_runtime, engine_path):
    # Read the serialized engine from disk and deserialize it with the TensorRT runtime
    with open(engine_path, 'rb') as f:
        engine_data = f.read()
    engine = trt_runtime.deserialize_cuda_engine(engine_data)
    return engine

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt_runtime = trt.Runtime(TRT_LOGGER)
trt_engine_path = "engine.trt"
trt_engine = load_engine(trt_runtime, trt_engine_path)

if trt_engine is not None:
    print("Success")
else:
    print("Failed")

Result : Success

But how do I go further and run inference?
The trained model is an object detection model.
Please guide.

Hi,

You can find a detection example in the following path:

/usr/src/tensorrt/samples/python/uff_ssd/detect_objects.py
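
For orientation, the general flow is the same as in that sample: create an execution context, allocate host/device buffers for every binding, copy the preprocessed image to the GPU, run the engine, and copy the results back. Below is a minimal sketch of those steps with pycuda, reusing the trt_engine you loaded above and assuming a TensorRT 7-era Python API and an explicit-batch, fixed-shape engine built from ONNX; preprocessing and decoding of the detection outputs depend on your model and are not shown, and preprocessed_image is a placeholder.

import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

def allocate_buffers(engine):
    # Page-locked host buffers plus device buffers for every input/output binding
    inputs, outputs, bindings = [], [], []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding))
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        bindings.append(int(device_mem))
        if engine.binding_is_input(binding):
            inputs.append((host_mem, device_mem))
        else:
            outputs.append((host_mem, device_mem))
    return inputs, outputs, bindings, stream

def do_inference(context, bindings, inputs, outputs, stream, image):
    # image: preprocessed numpy array matching the input binding's shape and dtype
    np.copyto(inputs[0][0], image.ravel())
    cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for host_mem, device_mem in outputs:
        cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
    stream.synchronize()
    # Raw output tensors; decode boxes/scores/classes according to your model
    return [host_mem for host_mem, _ in outputs]

context = trt_engine.create_execution_context()
inputs, outputs, bindings, stream = allocate_buffers(trt_engine)
detections = do_inference(context, bindings, inputs, outputs, stream, preprocessed_image)

The shape and meaning of the output tensors depend on how the detection model was exported, so check the output binding names and shapes against your ONNX graph before decoding them.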

Thanks.