Hi,
I have seen mixed responses about whether enabling DLA is possible through the Python API on the Jetson Xavier. Is there any documentation or are there examples available?
If not is there a future release that will add this?
Thanks.
Hi,
Yes, it is possible.
Please set the DLA_core attribute in IBuilderConfig:
https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/infer/Core/NetworkConfig.html#tensorrt.IBuilderConfig
Thanks,
Great, thanks for that. Are there any provided examples with this integrated?
Thanks.
Hi @AastaLLL,
How would one go about maximising the number of streams an AGX can handle by utilising the DLAs?
For instance, we are benchmarking YOLOv3; we skip frames for inference to maintain the streams' fps, but this uses a batch of 6 streams on the GPU.
We would now like to increase it to 9 streams, but the GPU is struggling with the throughput.
Can we add another pgie and specify DLA to handle the last 3 streams, for instance?
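For reference, nvinfer does expose DLA selection in its config file; a hypothetical fragment of what we have in mind for the second pgie (the engine path is a placeholder):

```ini
# Hypothetical [property] section for a second pgie running on DLA core 0.
# model-engine-file is a placeholder path.
[property]
enable-dla=1
use-dla-core=0
gpu-id=0
model-engine-file=yolov3_dla.engine
# DLA requires FP16 or INT8; 2 = FP16
network-mode=2
```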
Regards Andrew
Hi Andrew,
Please open a new topic for your issue. Thanks.