Jetson Orin Nano has difficulties running models on DLA(s)


I have a Jetson Orin Nano 8GB dev kit with all software packages installed:

  1. Python: 3.8.*
  2. TensorFlow: 2.12.*
  3. Torch: 2.0.0+nv23.05
  4. TensorRT, ONNX, …

Then I followed jetson_dla_tutorial (NVIDIA-AI-IOT/jetson_dla_tutorial on GitHub) on my Jetson Orin Nano board. I can get everything working if I use the GPU only, but as soon as I try to use the DLA(s) (my understanding was that Orin Nano has 2 DLAs) in step 3:

python3 build.py data/model_bn.onnx --output=data/model_bn.engine --int8 --dla_core=0 --gpu_fallback --batch_size=32

[06/17/2023-07:52:58] [TRT] [E] 2: [optimizer.cpp::getFormatRequirements::3103] Error Code 2: Internal Error (Assertion !n->candidateRequirements.empty() failed. No supported formats for {ForeignNode[/cnn/cnn.0/Conv…/cnn/cnn.11/Relu]})
[06/17/2023-07:52:58] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed.)
Traceback (most recent call last):
  File "", line 90, in 
TypeError: a bytes-like object is required, not 'NoneType'

So it seems the DLAs don't work on Jetson Orin Nano. Is there anything else needed to get the DLA working here? Also, jetson_benchmarks/benchmark_csv/orin-nano-benchmarks.csv (NVIDIA-AI-IOT/jetson_benchmarks on GitHub) configures every single model to run on the GPU (device = 1) — does this confirm we cannot use the DLA(s) on Jetson Orin Nano?
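One way to avoid this class of build failure is to query how many DLA cores TensorRT actually sees before requesting one — the TensorRT Python API exposes this as `tensorrt.Builder.num_DLA_cores` (0 on devices without a DLA). A minimal sketch; the `pick_dla_core` helper is hypothetical, and the TensorRT query is shown only in a comment so the logic runs anywhere:

```python
# num_dla_cores would normally come from TensorRT, e.g.:
#   import tensorrt as trt
#   builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
#   num_dla_cores = builder.num_DLA_cores
def pick_dla_core(num_dla_cores: int, requested_core: int):
    """Return the requested DLA core index if the device exposes it,
    otherwise None (meaning: build the engine for the GPU instead)."""
    if 0 <= requested_core < num_dla_cores:
        return requested_core
    return None

# A device reporting 0 DLA cores (as in this Orin Nano case) falls back to GPU:
print(pick_dla_core(0, 0))   # None
# A device reporting 2 DLA cores (e.g. AGX Orin) can use core 0 or 1:
print(pick_dla_core(2, 0))   # 0
```

Guarding the build script with a check like this turns the internal-error assertion above into an explicit "no DLA cores available" decision.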


The Orin Nano board doesn't have a DLA.
Please check the Orin spec below:

