Hi guys, I hope I’m posting in the correct category.
Our team just bought a Jetson Orin NX board and flashed JetPack 5.1 on it. We are running a Docker image with CUDA 11.4 and Ubuntu 20.04 LTS.
Fair warning: some fairly generic questions ahead.
Now we want to play with NVDLA. However, when we look at the GitHub repositories, both the HW and SW repos had their last commits 4-5 years ago. Does the code there still work? If so, should we clone both the HW and SW repositories and compile them on our board?
Other documents state that we should use a virtual platform to use the NVDLA compiler and runtime library. However, since we have already bought the board, does it still make sense to go through a virtual platform in order to use the Deep Learning Accelerator?
Finally, we cloned Jetson Inference; that repo is still actively maintained. The binaries are even prebuilt for the Jetson Orin NX's CPU architecture (ARM64), so it is easy to run and test the models that Jetson Inference provides. We only need to pull the dustynv/jetson-inference:r35.2.1 Docker image for our board.
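For completeness, the pull/run step on the board is roughly the following (the exact flags may differ; extra mount/display options are omitted here):

```bash
# Pull the prebuilt Jetson Inference container (r35.2.1 matches our JetPack 5.1 / L4T release)
docker pull dustynv/jetson-inference:r35.2.1

# Start it with the NVIDIA runtime so the container can see the GPU and accelerators
sudo docker run --runtime nvidia -it --rm dustynv/jetson-inference:r35.2.1
```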
That said, although the models from Jetson Inference are certainly using the GPU (I checked with sudo tegrastats), I'm not sure whether they are using the DLAs, since we cannot monitor those with tegrastats. The prebuilt binaries do depend on nvdla_runtime.so when we check with ldd (excuse me, that may not be the exact name; I'm not on my work laptop).
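Roughly what we ran to check (the binary path is just an example, and the library name is from memory):

```bash
# Watch GPU utilization (GR3D_FREQ) while an inference demo runs;
# DLA activity does not show up in this output for us
sudo tegrastats

# List the shared-library dependencies of a prebuilt binary and look for an NVDLA runtime
ldd /usr/local/bin/imagenet | grep -i nvdla
```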
What are your suggestions for learning how to use NVDLA with our own models? And how can we be sure that the DLAs are actually working if we cannot monitor them?
I might have missed some documentation; please let me know if that's the case.
Thank you for the reply. The simplest way worked!
I managed to run the model, and the command indeed shows the DLA as active while it runs.
I will mark the question as solved. However, I still have another question.
So, if I understand correctly, NVDLA is configurable hardware, but currently only TensorRT configures it, right? I assume you are also familiar with Jetson Inference: do you know if there is a way to use NVDLA with that framework, given that its binaries are linked against libnvdla_runtime.so?
If we want to run our own model on the DLAs, we need to convert the model to a TensorRT engine and run it with --useDLACore=X; is this correct?
Jetson Inference also deploys its engines with TensorRT, so you can configure that deployment to target the DLA hardware as well.
Yes, --useDLACore=0 and --useDLACore=1 can be used to deploy on DLA0 and DLA1, respectively.
But note that if an Orin NX 8GB is used, only one DLA is available.
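For reference, a rough trtexec sketch of that flow (the model name and paths are placeholders; DLA requires FP16 or INT8 precision):

```bash
# Build a TensorRT engine from an ONNX model, placing supported layers on DLA core 0
# and letting unsupported layers fall back to the GPU; trtexec also benchmarks the engine it builds.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx \
    --saveEngine=model_dla0.engine \
    --fp16 \
    --useDLACore=0 \
    --allowGPUFallback
```

On the Orin NX 8GB, only --useDLACore=0 is valid, as noted above.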