Description
I have a Jetson Orin Nano 8GB dev kit with all software packages installed:
- Python: 3.8.*
- Tensorflow: 2.12.*
- Torch: 2.0.0+nv23.05
- tensorrt, onnx, …
I then followed jetson_dla_tutorial/QUICKSTART.md at master · NVIDIA-AI-IOT/jetson_dla_tutorial · GitHub on my Jetson Orin Nano board. Everything works as long as I use the GPU only, but as soon as I try to use the DLA(s) (as far as I know, the Orin Nano has 2 DLAs) in step 3:
python3 build.py data/model_bn.onnx --output=data/model_bn.engine --int8 --dla_core=0 --gpu_fallback --batch_size=32
…
[06/17/2023-07:52:58] [TRT] [E] 2: [optimizer.cpp::getFormatRequirements::3103] Error Code 2: Internal Error (Assertion !n->candidateRequirements.empty() failed. No supported formats for {ForeignNode[/cnn/cnn.0/Conv…/cnn/cnn.11/Relu]})
[06/17/2023-07:52:58] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Traceback (most recent call last):
File "build.py", line 90, in <module>
f.write(engine)
TypeError: a bytes-like object is required, not 'NoneType'
…
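As an aside, the final TypeError is only a secondary symptom: TensorRT's buildSerializedNetwork() returns None when the build fails, and build.py writes the result without checking it. A guard along these lines (a sketch; save_engine is a hypothetical name, not from the tutorial) would surface the real failure instead:

```python
def save_engine(engine_bytes, path):
    """Write a serialized TensorRT engine to disk.

    buildSerializedNetwork() returns None on failure, which is what
    produced the "bytes-like object is required" TypeError above.
    """
    if engine_bytes is None:
        # Fail loudly instead of crashing on f.write(None)
        raise RuntimeError(
            "TensorRT engine build failed; see the [TRT] [E] log lines above"
        )
    with open(path, "wb") as f:
        f.write(engine_bytes)
```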
So it seems the DLAs don't work on the Jetson Orin Nano. Is there anything else needed to get the DLA working here? Also, jetson_benchmarks/benchmark_csv/orin-nano-benchmarks.csv at master · NVIDIA-AI-IOT/jetson_benchmarks · GitHub configures every single model to run on the GPU (device = 1); does this confirm that we cannot use the DLA(s) on the Jetson Orin Nano?
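For what it's worth, one way to check on-device how many DLA cores TensorRT itself reports is a small query like the following (a minimal sketch assuming the JetPack tensorrt Python bindings; query_dla_cores is my own name):

```python
def query_dla_cores():
    """Return the number of DLA cores TensorRT reports for this device,
    or None when the tensorrt bindings are not importable (e.g. on a PC)."""
    try:
        import tensorrt as trt  # ships with JetPack
    except ImportError:
        return None
    builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
    # 0 here would mean TensorRT sees no usable DLA on this SoC
    return builder.num_DLA_cores
```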