Is there a plan to support DLA on the next TensorRT version?

Starting with TensorRT 10, "implicit quantization," including the IInt8Calibrator APIs, has been deprecated.

On NVIDIA Jetson AGX Orin, DLA supports both FP16 and INT8 precision. To use INT8, implicit quantization is required. Also, FP16 is very slow for most CNN-based networks on the Orin DLA because of the known FP19 issue (the DLA handles FP16 convolutions internally at FP19 precision, which reduces throughput).
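
For context, here is roughly how the two INT8 paths differ when building with trtexec; the model files and calibration cache below are hypothetical placeholders:

$ # Implicit quantization (deprecated since TensorRT 10): plain ONNX + calibration cache
$ trtexec --onnx=model.onnx --int8 --calib=calib.cache --useDLACore=0 --allowGPUFallback
$ # Explicit quantization: the ONNX already contains Q/DQ nodes, so no calibrator is needed
$ trtexec --onnx=model_qdq.onnx --int8 --useDLACore=0 --allowGPUFallback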

Since TensorRT 10 deprecates implicit quantization, TensorRT 11 is expected to stop supporting it entirely, as deprecated APIs are typically removed in the following major release.

For this reason, DLA would no longer be beneficial at either supported precision (INT8: implicit quantization would no longer be supported; FP16: slow for convolution operations).

Because of this, it looks like DLA will effectively be unusable in the next TensorRT version. Is there a plan to support DLA in the next TensorRT version or on upcoming Jetson embedded platforms (e.g., AGX Thor)?

Hi,
Here are some suggestions for common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
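
Once the clocks are locked, a minimal sketch of a DLA benchmark run with trtexec (the ONNX path is a placeholder):

$ trtexec --onnx=model.onnx --fp16 --useDLACore=0 --allowGPUFallback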

2. Installation

Installation guides for deep learning frameworks on Jetson:

3. Tutorial

Getting-started deep learning tutorial:

4. Report issue

If these suggestions don't help and you want to report an issue to us, please share the model, the commands/steps to reproduce, and any customized app so we can reproduce the issue locally.

Thanks!

Hi,

We need to check with our internal team to see what information can currently be shared.
We will get back to you later.

Thanks.

Are there any updates or news related to this issue?

Thanks

Hi,

Sorry for the late update.

You may need to stay on TensorRT 10 if DLA support is required.
The TensorRT team is currently working on DLA explicit quantization support.
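
As a side note, if you need to confirm which TensorRT version your JetPack installation provides, you can check it like this:

$ dpkg -l | grep -i tensorrt
$ python3 -c "import tensorrt; print(tensorrt.__version__)"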

Thanks.
