1- On the Jetson Xavier NX, there are three accelerators (one GPU and two DLAs) for running deep learning models. Can I run three separate models, one on each accelerator, simultaneously?
2- To run AI models on the DLAs, do I need to change the code I use for the GPU on the Jetson Nano or on a desktop GPU? Does running a deep model on a DLA require different code? Can the two DLA accelerators run any deep learning framework, or only TensorRT?
3- Do the two DLAs support only INT8, or can they also run FP16? And what about the GPU: does it support INT8, or only FP16?
4- In the inference code, how can I assign model1 to DLA1, model2 to DLA2, and model3 to the GPU?
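For context, here is roughly what I have in mind, sketched as one `trtexec` invocation per accelerator (the `--useDLACore` and `--allowGPUFallback` flags are real `trtexec` options; the model file names are just placeholders):

```python
# Sketch (assumption): three trtexec processes launched in parallel,
# one per accelerator. Model file names are placeholders.
commands = [
    # model1 on DLA core 0; fall back to GPU for layers the DLA can't run
    ["trtexec", "--onnx=model1.onnx", "--fp16", "--useDLACore=0", "--allowGPUFallback"],
    # model2 on DLA core 1
    ["trtexec", "--onnx=model2.onnx", "--fp16", "--useDLACore=1", "--allowGPUFallback"],
    # model3 on the GPU (no --useDLACore flag means the default GPU device)
    ["trtexec", "--onnx=model3.onnx", "--fp16"],
]

def launch_all(cmds):
    """Start every process without waiting, so the three models run concurrently."""
    import subprocess
    return [subprocess.Popen(cmd) for cmd in cmds]
```

Is this the right general approach, or should the DLA core instead be selected inside the TensorRT builder configuration?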
5- Is the general workflow for this device similar to the Jetson Nano, or are there large differences?