I am currently converting PyTorch models to ONNX and using trtexec to generate a TensorRT engine for inference:

./trtexec --onnx=resnet18.onnx --saveEngine=resnet18.trt --fp16 --useDLACore=0 --allowGPUFallback

While generating the engine, I can confirm that DLA0 is active with the command below:

cat /sys/devices/platform/host1x/158c0000.nvdla0/power/runtime_status → active
Platform : Xavier AGX
TensorRT : 8.0.2
CUDA : 10.2
Is there any way to check DLA utilization, similar to how tegrastats reports GPU utilization?
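In the meantime, a crude workaround is to poll the sysfs power state shown above while inference runs. This is a minimal sketch, assuming the same sysfs path from the post; the helper name and sampling scheme are mine, and it only reports whether the DLA is powered up ("active") versus runtime-suspended, not a true utilization percentage:

```python
import time
from pathlib import Path

# sysfs node for DLA0 on Xavier AGX (path taken from the check above)
DLA0_STATUS = Path("/sys/devices/platform/host1x/158c0000.nvdla0/power/runtime_status")

def sample_dla_status(path=DLA0_STATUS, interval_s=0.5, samples=20):
    """Repeatedly read the DLA runtime power state and tally the results.

    Returns a dict counting how often the state was 'active', 'suspended',
    or something else during the sampling window.
    """
    counts = {"active": 0, "suspended": 0, "other": 0}
    for _ in range(samples):
        state = path.read_text().strip()
        counts[state if state in counts else "other"] += 1
        time.sleep(interval_s)
    return counts
```

Running this in a second terminal while the inference script executes shows whether the DLA stays powered up during the workload; if it remains "suspended" the whole time, the engine is likely running entirely on the GPU.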
How can I verify that inference in the "inference.ipynb" script is actually being done on the DLA + GPU for the images?
I have uploaded the script, ONNX model, images, label.txt, and TRT engine: files.tar.xz (74.5 MB)