I want to run a TensorFlow program on Jetson without going through TensorRT, and I want to run it on both the iGPU and the DLA. Is it possible to run a TensorFlow program natively on the DLA, or do I need to go through the TensorRT workflow for that? If so, how do I switch between the iGPU and the DLA? I'm looking for an equivalent of export CUDA_VISIBLE_DEVICES.
Hi,
DLA is a deep learning accelerator rather than a GPU, so it is not exposed as a CUDA device and cannot be selected via CUDA_VISIBLE_DEVICES.
The only software API for DLA is TensorRT.
Currently, there are two approaches to using DLA:
1. High level: use TensorRT API
[url]https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#dla_topic[/url]
2. Low level: call DLA driver directly.
[url]https://github.com/nvdla/sw[/url]
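For the high-level route, device selection happens at engine-build time through the TensorRT builder configuration rather than through an environment variable. A minimal sketch, assuming a recent TensorRT version where these settings live on IBuilderConfig (older releases exposed equivalent methods such as allowGPUFallback directly on the builder):

```cpp
// Sketch: targeting the DLA instead of the iGPU when building a TensorRT engine.
// Requires the TensorRT headers and libraries (Jetson/JetPack).
#include <NvInfer.h>

void configureForDLA(nvinfer1::IBuilderConfig* config, int dlaCore)
{
    // Run supported layers on the chosen DLA core.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(dlaCore);  // e.g. 0 or 1 on Xavier

    // Layers the DLA cannot handle fall back to the iGPU;
    // without this flag, building fails on unsupported layers.
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);
}

void configureForGPU(nvinfer1::IBuilderConfig* config)
{
    // Default device type: everything runs on the iGPU.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kGPU);
}
```

The same switch is available from the command line with trtexec via the --useDLACore=N and --allowGPUFallback options, so you can compare iGPU and DLA runs without writing any code.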
Thanks.