How to execute both the Deep Learning Accelerator (DLA) and the GPU at the same time in Python



If anyone here can answer my question about the DLA, that would be greatly appreciated.

How do I set up or schedule TensorRT models for parallel execution on both DLA cores and the GPU? Is this actually possible in Python, and if not, what additional APIs or methods are available?

Best Wishes,


Hope the following doc helps. If not, we would like to move this post to the Jetson-related forum so you can get better help.
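In the meantime, here is a minimal sketch of the usual approach: build one TensorRT engine per device (DLA core 0, DLA core 1, and the GPU) and run them concurrently from separate Python threads, each with its own execution context and CUDA stream. The model path, helper names, and the `run_parallel` wrapper are illustrative assumptions, not an official API; the TensorRT calls follow the TensorRT 8.x Python API and require Jetson hardware to actually execute.

```python
# Hedged sketch (untested on hardware): one engine per device, run in parallel.
import threading

def build_engine(onnx_path, dla_core=None):
    """Build a serialized engine for the GPU (dla_core=None) or a DLA core."""
    import tensorrt as trt  # imported lazily; requires JetPack / TensorRT
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse " + onnx_path)
    config = builder.create_builder_config()
    if dla_core is not None:
        config.default_device_type = trt.DeviceType.DLA
        config.DLA_core = dla_core                     # 0 or 1 on Xavier/Orin
        config.set_flag(trt.BuilderFlag.FP16)          # DLA runs FP16 or INT8
        config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # GPU handles unsupported layers
    return builder.build_serialized_network(network, config)

def run_parallel(workers):
    """Run the given zero-argument callables in parallel threads, in order."""
    results = [None] * len(workers)

    def call(i, fn):
        results[i] = fn()

    threads = [threading.Thread(target=call, args=(i, fn))
               for i, fn in enumerate(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# On-device usage (assumption: "model.onnx" exists and fits on the DLA):
#   engines = [build_engine("model.onnx", c) for c in (0, 1, None)]
# Deserialize each engine, create one execution context plus one CUDA stream
# per engine, and call context.execute_async_v2(bindings, stream.handle)
# inside each worker passed to run_parallel(...).
```

Note that scheduling across devices is per-engine: each engine is pinned to its device at build time, so true concurrency comes from issuing the three inferences on separate streams/threads rather than from one engine spanning all devices.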

Thank you.


Also check out the DLA GitHub page for samples and resources: Recipes and tools for running deep learning workloads on NVIDIA DLA cores for inference applications.

We have a FAQ page that addresses some common questions that we see developers run into: Deep-Learning-Accelerator-SW/FAQ