When converting ONNX to an engine with trtexec, I set --useDLACore=1, but during inference DLA0 is active while DLA1 is suspended?

When running trtexec:
I set --useDLACore=1, and watch -n 1 "cat /sys/devices/platform/host1x/158c0000.nvdla1/power/runtime_status" shows active.
We save the engine and then run inference with it:
watch -n 1 "cat /sys/devices/platform/host1x/158c0000.nvdla1/power/runtime_status" shows suspended, but watch -n 1 "cat /sys/devices/platform/host1x/158c0000.nvdla0/power/runtime_status" shows active. During inference, how do I set the DLA core ID with the Python TensorRT API?

Hi,

You can specify the DLA core on the TensorRT runtime before deserializing the engine:
https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/Runtime.html#tensorrt.Runtime
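A minimal sketch, assuming the engine was built for DLA (e.g. with trtexec --useDLACore=1) and saved to a file; "model.engine" is a placeholder path:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Select DLA core 1 before deserializing the engine,
# so inference runs on nvdla1 instead of the default core 0.
runtime.DLA_core = 1

# "model.engine" is a placeholder for your serialized engine file
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
```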

Thanks.


OK, thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.