[TRT] [E] 3: [builderConfig.cpp::canRunOnDLA::493] Error Code 3: API Usage Error on Jetson Orin Nano

Hello!

We need your help to check this: we have successfully used torch2trt to run trtpose on the Jetson Nano and Xavier NX, but on the Jetson Orin Nano we hit the error above.
We have also tried using a container. However, after running the container image, we have trouble accessing local directories from inside the container.

Here are the commands I’ve used:

Pull the container image:

docker pull dustynv/torch_tensorrt:r35.4.1

Run the container:

./run.sh dustynv/torch_tensorrt:r35.4.1

Mount your local files into the container:

./run.sh -v /home/user:/path/in/container $(./autotag torch_tensorrt)
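
For reference, the conversion we run is along these lines (a minimal sketch: the model and input shape below are placeholders, not the actual trtpose setup):

import torch
import torchvision
from torch2trt import torch2trt

# Placeholder model and input; the real workload is a trtpose model.
model = torchvision.models.resnet18(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()

# On the Jetson Orin Nano this build step fails with:
# [TRT] [E] 3: [builderConfig.cpp::canRunOnDLA::493] Error Code 3: API Usage Error
model_trt = torch2trt(model, [x], fp16_mode=True)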

Thank you!

Hi,

We are checking the error with torch2trt and will get back to you soon.

Thanks.

Hi,

Just want to give you an update.

We have confirmed that we can reproduce the same error on the Orin Nano.
We are discussing this with our internal team to get more information.

Thanks.

Hi,

Please apply the following change to torch2trt, then rebuild and reinstall it.

diff --git a/torch2trt/torch2trt.py b/torch2trt/torch2trt.py
index ae94d6b..4c2c326 100644
--- a/torch2trt/torch2trt.py
+++ b/torch2trt/torch2trt.py
@@ -372,7 +372,7 @@ class NetworkWrapper(object):
             device_type = self._ctx.current_device_type()
             self._ctx.builder_config.set_device_type(layer, device_type)
             orig_device_type = device_type
-            if not self._ctx.builder_config.can_run_on_DLA(layer) and device_type == trt.DeviceType.DLA:
+            if device_type == trt.DeviceType.DLA and not self._ctx.builder_config.can_run_on_DLA(layer):
                 if self._ctx.torch2trt_kwargs['gpu_fallback']:
                     device_type = trt.DeviceType.GPU  # layer will fall back to GPU
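
The reordering matters because Python evaluates the condition left to right and stops as soon as the result is known: with this patch, can_run_on_DLA() is only queried for layers that are actually assigned to DLA, so builds on boards where DLA is not available (such as the Orin Nano) never hit that check. A minimal, self-contained illustration of the same guard pattern (the classes below are simplified stand-ins, not the real TensorRT objects):

# Simplified stand-ins for illustration only; the real objects are
# tensorrt's DeviceType enum and the IBuilderConfig used inside torch2trt.
class DeviceType:
    GPU = "GPU"
    DLA = "DLA"

class StubBuilderConfig:
    def can_run_on_DLA(self, layer):
        # On a device without DLA, TensorRT reports
        # "Error Code 3: API Usage Error" when this is queried.
        raise RuntimeError("API Usage Error: DLA not available")

config = StubBuilderConfig()
layer = object()
device_type = DeviceType.GPU  # typical layer placement when no DLA is used

# Patched ordering: the left-hand check is False for GPU layers, so
# can_run_on_DLA() is never called and no error is raised.
if device_type == DeviceType.DLA and not config.can_run_on_DLA(layer):
    device_type = DeviceType.GPU  # fall back to GPU

print("build proceeds without touching the DLA check")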

Thanks.

After applying this modification, the original code now runs successfully.

Thank you very much for your help. Appreciate it!
