[TRT] [E] 3: [builderConfig.cpp::canRunOnDLA::493] Error Code 3: API Usage Error on Jetson orin Nano

Hi, when I run this code, it does not work:

Code:

data = torch.zeros((1, 3, HEIGHT, WIDTH)).cuda()
model_trt = torch2trt(model, [data], dla=False, fp16_mode=True, max_workspace_size=1<<25)
torch.save(model_trt.state_dict(), OPTIMIZED_MODEL)

Error:

[TRT] [E] 3: [builderConfig.cpp::canRunOnDLA::493] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builderConfig.cpp::canRunOnDLA::493, condition: dlaEngineCount > 0

I have already tried adding dla=False and default_device_type=trt.DeviceType.GPU to the model_trt = … line, but it still does not work.
How should I solve it?
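
For reference, the combined variant I tried looks like this (a sketch; default_device_type is the device-selection keyword exposed by newer torch2trt releases, so its availability in this exact version is an assumption):

import tensorrt as trt

model_trt = torch2trt(
    model, [data],
    fp16_mode=True,
    max_workspace_size=1 << 25,
    default_device_type=trt.DeviceType.GPU,  # force everything onto the GPU
)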

Thank you for your help!

Environment

TensorRT Version: 8.5.2.2
GPU Type: Jetson Orin Nano (Developer kit)
CUDA Version: 11.4.315
CUDNN Version: 8.6.0

Operating System + Version: JetPack 5.1.1
Python Version : 3.8.10
PyTorch Version : 1.11.0
torch2trt Version : 0.12.0

Hi,

Orin Nano doesn’t have DLA so you will need to turn off the DLA configuration.
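
As a quick sanity check (a minimal sketch using the standard TensorRT Python API), you can ask the builder how many DLA cores the device reports:

import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)
builder = trt.Builder(logger)
# Prints 0 on Orin Nano, which is why the canRunOnDLA check (dlaEngineCount > 0) fails
print(builder.num_DLA_cores)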

Which model are you using?
Could you share the model info or a runnable script with us so we can reproduce the issue?

Thanks.

The model is imported from trt_pose (resnet50_baseline_att).

Thank you!

Hi,

Could you share a simple reproducible script with us?
Does the same error occur with a built-in TorchVision model?
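
For example, something along these lines (a sketch; the resnet18 choice and input size are arbitrary assumptions):

import torch
import torchvision
from torch2trt import torch2trt

model = torchvision.models.resnet18(pretrained=True).cuda().eval()
data = torch.zeros((1, 3, 224, 224)).cuda()
# If this also hits the canRunOnDLA error, the issue is not specific to trt_pose
model_trt = torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25)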

Thanks.

Hello!

This is our ongoing project, and we are currently attempting to run it on different platforms.
#--------------------------------------------------------------------------------------------------------------------------

Code:

import json
import trt_pose.coco
import torch

WIDTH, HEIGHT = 256, 256
MODEL_WEIGHTS = 'xxxx.pth'

# Load the pose topology definition
with open('xxxx.json', 'r') as f:
    pose = json.load(f)
topology = trt_pose.coco.coco_category_to_topology(pose)

import trt_pose.models
num_parts, num_links = len(pose['keypoints']), len(pose['skeleton'])
model = trt_pose.models.resnet50_baseline_att(num_parts, 2 * num_links, num_upsample=3).cuda().eval()
model.load_state_dict(torch.load(MODEL_WEIGHTS))

# Convert to TensorRT (fp16) and save the optimized weights
import torch2trt
OPTIMIZED_MODEL = f'{MODEL_WEIGHTS[:-4]}_trt.pth'
data = torch.zeros((1, 3, HEIGHT, WIDTH)).cuda()
model_trt = torch2trt.torch2trt(model, [data], fp16_mode=True, max_workspace_size=1 << 25)
torch.save(model_trt.state_dict(), OPTIMIZED_MODEL)
#--------------------------------------------------------------------------------------------------------------------------

I have previously executed the same program on other platforms, such as Xavier NX and Jetson Nano, and there were no issues.

Thank you!

Hi,

Do you have the .json and .pth files that you can share with us?

Thanks.

Hello!

Can I email you privately?

Thank you!

Hi,

Sorry for the late update.
You can send it through private message.

Thanks.

Hi,

Confirmed, we have received the materials needed to reproduce this issue.
We will try to reproduce it and keep you updated.

Thanks.

OK!
Thank you!

Hi,

Could you share the steps to install dependencies with us as well?

We tried to build trt_pose with python3 setup.py install,
but something is still missing when running your test code.

# python3 topic_268292.py 
Traceback (most recent call last):
  File "topic_268292.py", line 5, in <module>
    import trt_pose.coco
  File "/home/topic_268292/../trt_pose/trt_pose/coco.py", line 9, in <module>
    import trt_pose.plugins
ModuleNotFoundError: No module named 'trt_pose.plugins'
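
One possible cause (an assumption based on the paths in the traceback): Python is importing trt_pose from the source checkout next to the script rather than from the installed package, and the source tree does not contain the compiled trt_pose.plugins extension. A quick check of which copy gets imported:

import trt_pose
# Should point into site-packages/dist-packages; a path inside the git checkout
# means the compiled plugins extension will not be found
print(trt_pose.__file__)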

Thanks.

Hello!

Are you referring to the installation steps for trt_pose?

We followed the steps below for installation:

#----------------------trt_model-------------------------------#
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd ~/trt_pose
sudo python3 setup.py install
#------------------------------------------------------------------#
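
After installing, a quick import check (not part of the original steps) confirms that the compiled plugins extension was built and installed:

import trt_pose.plugins  # raises ModuleNotFoundError if the C++ extension is missing
print('trt_pose.plugins OK')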

Thank you.

Hi,

We did run that command to install trt_pose, but there is no trt_pose.plugins module.

Have you applied any modifications before installing?
Are you using the latest branch?

Thanks.

Hello!

I didn't make any modifications before installing…
But I did install other packages before installing trt_pose.

I’m not sure if it’s the latest branch.
I used the command mentioned above to install trt_pose on different platforms.

I’ll send our install command file to you privately.

Thank you!

Hi,

Thanks for sharing the detailed steps.
We will try it again and keep you updated.

Thanks.

Hi,

Thanks for your help.
We hit a known issue and were able to fix it after applying the WAR (workaround).

It seems that you are trying to convert the model with torch2trt.
Since PyTorch already supports TensorRT compilation, could you try torch_tensorrt instead?
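
A minimal conversion sketch with torch_tensorrt (the precision setting mirrors fp16_mode=True in the torch2trt call above; the exact API can vary across torch_tensorrt releases):

import torch
import torch_tensorrt

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, HEIGHT, WIDTH))],
    enabled_precisions={torch.half},  # allow fp16 kernels
)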

Thanks.

Hello!

Can you provide me with the installation command for torch-tensorrt?
I’ve tried various installation commands, but they all resulted in errors.

Thank you!

Hi,

You can use our container directly.

Thanks.

Hello!

I want to know whether it's necessary to use torch-tensorrt.
Is it because torch2trt cannot run in newer environments like Orin?

If we need to use torch-tensorrt but the command doesn't install properly, we might have to resort to using a container.
We hope to achieve the same results without making too many changes to our workflow.

Thank you!

Hi,

torch-tensorrt is integrated with PyTorch, so it is expected to be more compatible with state-of-the-art PyTorch models.
torch2trt is a standalone library for Jetson devices, so it has been verified on Jetson.

If torch2trt is a better candidate for you, we can check whether trt_pose works with torch2trt.

Thanks.