Unable to run inference with a TRT model on Jetson Nano / Xavier NX

Description

I am new to DeepStream. I created a custom detectnet_v2 model for dog and cat detection and converted the .etlt file to a .trt.int8 engine file. I tried to find resources on Google but could not find anything helpful.

resnet18_cat_dog.etlt
resnet18_car_dog.trt.int8

I want to run inference on Jetson devices. Can anyone point me to resources I can use to do inference?

Environment

Jetson Nano
Jetson Xavier NX

Hi,
This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform via the link below.

Thanks!

Hi,

Please note that the Nano doesn’t support INT8 operation due to a hardware limitation.
You will need a Xavier/Xavier NX for this feature, or use FP32/FP16 mode instead.
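If it helps, here is a rough sketch of regenerating the engine in FP16 on the device with tao-converter, wrapped in a small Python script. The key, input dimensions, and file names below are placeholders, and the output node names are the usual detectnet_v2 ones; adjust everything to your own model and JetPack build of tao-converter:

```python
# Sketch only: rebuild the .etlt as an FP16 TensorRT engine on the Jetson itself.
# All values below are placeholders based on typical detectnet_v2 exports.
import subprocess

cmd = [
    "./tao-converter", "resnet18_cat_dog.etlt",
    "-k", "<your-model-key>",                         # key used when exporting the .etlt
    "-d", "3,384,1248",                               # C,H,W input dims of your model
    "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",   # standard detectnet_v2 output nodes
    "-t", "fp16",                                     # Nano: use fp16 or fp32, not int8
    "-e", "resnet18_cat_dog.fp16.engine",             # engine file written on the device
]
subprocess.run(cmd, check=True)
```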

Also, since TensorRT engines are not portable across platforms, you will need to generate the TensorRT engine on the Jetson directly.
You can find a step-by-step document below:
https://docs.nvidia.com/tao/tao-toolkit/text/deepstream_tao_integration.html
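If you want to sanity-check the generated engine outside of the DeepStream pipeline, a minimal standalone TensorRT Python sketch looks roughly like this. It assumes the tensorrt and pycuda packages from JetPack are installed, the engine file name matches the sketch above, and that tao-converter produced an implicit-batch engine; preprocessing and post-processing are up to your model:

```python
# Minimal sketch: load a serialized TensorRT engine and run one inference.
# Assumes an implicit-batch engine as produced by tao-converter; for an
# explicit-batch engine use context.execute_v2(bindings) instead of execute.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("resnet18_cat_dog.fp16.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a host/device buffer pair for every binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for name in engine:
    size = trt.volume(engine.get_binding_shape(name)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(name))
    h = cuda.pagelocked_empty(size, dtype)
    d = cuda.mem_alloc(h.nbytes)
    host_bufs.append(h)
    dev_bufs.append(d)
    bindings.append(int(d))

# Fill the input buffer with your preprocessed image (CHW, normalized, float32),
# e.g. np.copyto(host_bufs[0], image_chw.ravel())

# Copy input to device, run inference, copy outputs back.
for h, d, name in zip(host_bufs, dev_bufs, engine):
    if engine.binding_is_input(name):
        cuda.memcpy_htod(d, h)
context.execute(batch_size=1, bindings=bindings)
for h, d, name in zip(host_bufs, dev_bufs, engine):
    if not engine.binding_is_input(name):
        cuda.memcpy_dtoh(h, d)

# host_bufs now holds the raw coverage/bbox tensors; detectnet_v2 post-processing
# (grid decoding plus clustering) still needs to be applied to get final boxes.
```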

Thanks.
