General question about Jetson Xavier NX

Hello.

1- The Jetson Xavier NX has three accelerators (GPU/DLA1/DLA2) for deep learning models. Can we run three separate models, one on each accelerator, simultaneously?
2- To run AI models on the DLA, do we need to change the code we use for the GPU on the Jetson Nano or a desktop GPU? Does running a deep model on the DLA require different code? Can these two accelerators run any deep learning framework, or only TensorRT?
3- Do the two DLAs support only INT8, or can they also support FP16? What about the GPU: does it support INT8, or only FP16?
4- In the inference code, how can I set DLA1 to run model1, DLA2 to run model2, and the GPU to run model3?
5- Is the general structure of working with this device similar to the Jetson Nano, or is there a large difference?

Until you receive a more qualified answer, maybe the answers in a post I made will shed some light on some of your questions.

I’m quite sure the GPU supports FP16 and INT8.
I’m quite sure the DLAs can only run TensorRT models optimized for INT8.
The ISP, DLAs, 7-way VPU, and x2 PCIe make it quite different from the Jetson Nano, even for camera support; I think you should look at AGX Xavier material to find usable information.

Thanks a lot.
I want to know how I can assign one model to DLA1 and model2 to DLA2.
On a multi-GPU desktop, we can assign a specific model to GPU0 with the command below, or we can also set it with the os package:

export CUDA_VISIBLE_DEVICES=0

For GPU1:

export CUDA_VISIBLE_DEVICES=1

We can also use this:

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # for GPU0
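As a side note, CUDA_VISIBLE_DEVICES is read once when the CUDA runtime initializes, so it must be set before the framework first touches the GPU. A minimal, framework-free sketch:

```python
import os

# Select the device before any CUDA-using library is imported or initialized;
# changing the variable afterwards has no effect on an already-initialized process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # expose only GPU0 to this process

# From here on, frameworks such as TensorFlow or PyTorch will see a single
# device (reported as device 0, regardless of its physical index).
print(os.environ["CUDA_VISIBLE_DEVICES"])
```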

How can I do the same for the DLAs and the GPU of the Jetson Xavier NX?

Hi,

1. Yes, that should work.
But please note that the DLA is a hardware-based inference engine, which limits the range of supported operations.
It’s recommended to check first whether your model can be fully deployed on the DLA.

2. DLA can be enabled directly with the TensorRT API:

// Build time: enable DLA on the builder
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
builder->setFp16Mode(true);                                 // DLA requires FP16 or INT8
builder->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);  // place layers on the DLA
builder->setDLACore(0);                                     // or builder->setDLACore(1) for DLA1
builder->allowGPUFallback(true);                            // run unsupported layers on the GPU
...
// Run time: select the same DLA core on the runtime
nvinfer1::IRuntime* infer = nvinfer1::createInferRuntime(gLogger);
infer->setDLACore(0);                                       // or infer->setDLACore(1) for DLA1
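If you prefer not to write C++, the same placement can be tried from the command line with trtexec, which ships with TensorRT. A sketch, assuming an ONNX model file named model.onnx (the filename is only an example):

```shell
# Build and time an engine on DLA core 0, in FP16,
# with unsupported layers falling back to the GPU.
trtexec --onnx=model.onnx --useDLACore=0 --fp16 --allowGPUFallback
```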

3. Both FP16 and INT8.
4. Please check answer no. 2.
5. Similar.

Thanks.

Hi LoveNvidia,

Please note that the DLA is a separate hardware engine, not the GPU.
So the export command won’t assign the inference job to the DLA; it will go to the GPU instead.

Currently, DLA must be triggered from the TensorRT API.
Details can be found in our documentation here:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic
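For reference, a hedged sketch of the equivalent setup through the TensorRT Python bindings, using the IBuilderConfig-based API; attribute and flag names are taken from the TensorRT Python API docs and should be checked against your installed version:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
config = builder.create_builder_config()

# Place supported layers on a DLA core, fall back to the GPU otherwise.
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0                       # or 1 for the second DLA
config.set_flag(trt.BuilderFlag.FP16)     # DLA requires FP16 or INT8
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

# At deserialization time, select the same core on the runtime.
runtime = trt.Runtime(TRT_LOGGER)
runtime.DLA_core = 0
```

This only sketches the device-selection calls; building the network and serializing the engine follow the usual TensorRT workflow.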

Thanks.

Are there Python bindings for the TensorRT API so that the DLA can be assigned from Python?

Thanks a lot, @AastaLLL

Hi @AastaLLL
Is it possible to set the DLA core to 0/1 with the TensorFlow-TensorRT integration API?
Which version of TensorRT is needed, at minimum, for DLA support?

Hi @AastaLLL
How can I use the Tensor Cores of the Jetson Xavier NX? For the DLA I have to use TensorRT only; do the Tensor Cores also require TensorRT, or can models run on them directly, like on the GPU?