Unable to run inference with a TRT model on Jetson Nano / Xavier NX


I am new to DeepStream. I created a custom detectnet_v2 model for dog and cat detection and converted the .etlt file to a TensorRT INT8 engine file. I tried to find some resources on Google but was unable to find anything helpful.


I want to run inference on Jetson devices. Can anyone point me to resources I can use for inference?


Jetson Nano
Jetson Xavier NX

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform via the link below.



Please note that the Nano doesn’t support INT8 operations due to hardware limitations.
You will need a Xavier/Xavier NX for this feature, or use FP32/FP16 mode instead.

Also, since TensorRT engines are not portable across devices, you will need to build the TensorRT engine on the Jetson directly.
You can find a step-by-step document below:
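As a rough sketch, the conversion on the Jetson is typically done with tao-converter (tlt-converter in older TLT releases). The encryption key, input dimensions, and paths below are placeholders you must replace with your own model's values; the output node names shown are the ones commonly used for detectnet_v2 models, but verify them against your training spec:

```shell
# Run on the Jetson itself, since TensorRT engines are device-specific.
# $KEY is the encryption key used when training/exporting the .etlt model.
# -d  : input dims (C,H,W) of your model — 3,384,1248 is only an example.
# -o  : detectnet_v2 output nodes (check your own model's export log).
# -t  : precision mode — use fp16 on Nano, since it does not support INT8.
tao-converter dogcat_detectnet_v2.etlt \
  -k "$KEY" \
  -d 3,384,1248 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -e dogcat_detectnet_v2.fp16.engine
```

The resulting .engine file can then be referenced from the DeepStream nvinfer config (model-engine-file) for inference on that same device.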


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.