API usage error of torch2trt on Jetson Orin nano

Hi.
I’m having trouble using torch2trt. When I run the code below, it fails with the following error.

code
model = torchvision.models.efficientnet_v2_s(weights=weights)
model.eval()
model = model.to(device)
x = torch.zeros(1, 3, 384, 384).to(device)  # example input for conversion
model_trt = torch2trt(model, [x], fp16_mode=True, int8_mode=True, max_batch_size=1)

result
[TRT] [E] 3: [builderConfig.cpp::canRunOnDLA::493] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builderConfig.cpp::canRunOnDLA::493, condition: dlaEngineCount > 0)

I guess torch2trt or TensorRT tries to use the DLA, which the Orin Nano doesn’t have.
How can I tell it not to use the DLA?

Thank you for your time.

Environment

TensorRT Version: 8.5.2.2
GPU Type: Jetson Orin Nano (Developer kit)
Nvidia Driver Version: JetPack 5.1.1
CUDA Version: 11.4.315
CUDNN Version: 8.6.0

Operating System + Version: JetPack 5.1.1
Python Version (if applicable): 3.8.10
PyTorch Version (if applicable): 2.0.0+nv23.5
torch2trt Version: 0.4.0

Hi,

Please use dla=False when calling torch2trt.
Below is a related topic for your reference:

Thanks.

Thank you for your support.

I added dla=False, but the same error appears.
model_trt = torch2trt(model, [x], fp16_mode=True, int8_mode=True, max_batch_size=1, dla=False)

Did I add it in the wrong place?

Hi,

What is the device_types value in your use case?

Thanks.

Hi,

I’m sorry, I didn’t understand what you mean; my knowledge here is limited.
How can I check the device_types value?

Thank you.

Hi,

You can find some info in the below link:

Could you check whether the following call works?

model_trt = torch2trt(model, [data], default_device_type=trt.DeviceType.GPU, fp16_mode=True, ...)
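
If it helps, here is a hedged way to confirm which device types your installed TensorRT Python bindings actually expose (the helper function name is mine, not part of torch2trt or TensorRT):

```python
import importlib.util

def tensorrt_device_types():
    # Hypothetical helper: list the names of trt.DeviceType enum members
    # if the TensorRT Python bindings are installed, otherwise return None.
    if importlib.util.find_spec("tensorrt") is None:
        return None
    import tensorrt as trt
    return sorted(trt.DeviceType.__members__)

print(tensorrt_device_types())
```

On a Jetson install the enum should include both GPU and DLA members even on boards without DLA hardware, which is why forcing default_device_type=trt.DeviceType.GPU is worth trying.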

Thanks.

Hi,

import torch
from torch2trt import torch2trt, trt
import torchvision

device = 'cuda'
weights = torchvision.models.segmentation.FCN_ResNet50_Weights.DEFAULT
model = torchvision.models.segmentation.fcn_resnet50(weights=weights).to(device).eval()

input_size = [1, 3, 256, 256]
x = torch.zeros(input_size).to(device)
model_trt = torch2trt(model, [x], default_device_type=trt.DeviceType.GPU, fp16_mode=True, int8_mode=True, max_batch_size=1, dla=False)

It doesn’t work; the same error occurs.

Thank you.

Hi,

Thanks for the feedback.

We are checking this issue internally and will share more information with you later.

Hi,

We are trying to reproduce this issue in our environment.

It seems that you have installed torch2trt on JetPack 5.x.
Could you share how you installed it?

We got another topic reporting that the tool is not working on JetPack 5.

Thanks.

Hi,
Sorry for the late reply. I installed TensorRT and torch2trt in the following way.

sudo dpkg -i nv-tensorrt-local-repo-l4t-8.5.2-cuda-11.4_1.0-1_arm64.deb
sudo cp /var/nv-tensorrt-local-repo-l4t-8.5.2-cuda-11.4/*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install tensorrt
sudo apt-get install python3-libnvinfer-dev
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
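
After those steps, a hedged sanity check (assuming the packages above install the standard tensorrt and torch2trt Python modules) is to confirm that both import cleanly:

```shell
# Report the installed version of each module, or a clear message if an import fails.
python3 - <<'EOF'
for name in ("tensorrt", "torch2trt"):
    try:
        mod = __import__(name)
        print(name, "OK:", getattr(mod, "__version__", "unknown version"))
    except ImportError as err:
        print(name, "NOT importable:", err)
EOF
```

If either module fails to import, the conversion error is likely an install problem rather than a DLA issue.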

Thank you for your support.
