Running TensorRT on DLA on Xavier

Hi, we have run /tensorrt/samples/sampleINT8API/sampleINT8API.cpp on both the GPU and the DLA, and both execute successfully.

We then replaced the sample model with our own ONNX model (similar to MobileNetV2); it executes successfully on the GPU.

But when we enable DLA, it fails.
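For reference, DLA is enabled through the builder config roughly as in the sketch below (TensorRT 6 C++ API; `config` is the `nvinfer1::IBuilderConfig*` from the surrounding sample code, and the exact values are illustrative):

```cpp
// Sketch: enabling DLA with GPU fallback in TensorRT 6.
// 'config' is an nvinfer1::IBuilderConfig* created by the builder.
config->setFlag(nvinfer1::BuilderFlag::kINT8);            // DLA requires INT8 or FP16 mode
config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA); // run layers on DLA by default
config->setDLACore(0);                                    // Xavier exposes DLA cores 0 and 1
config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);    // let unsupported layers fall back to GPU
```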

The log output is as follows:

&&&& RUNNING TensorRT.sample_int8_api # ./../../bin/sample_int8_api
[01/13/2020-13:59:47] [I] Please follow README.md to generate missing input files.
[01/13/2020-13:59:47] [I] Validating input parameters. Using following input files for inference.
[01/13/2020-13:59:47] [I]     Model File: /home/nvidia/xavierRT/tensorrt/bin/data/int8_api/AIRD_0203_notanh-simplify-pruning.onnx
[01/13/2020-13:59:47] [I]     Image File: /home/nvidia/xavierRT/tensorrt/bin/data/int8_api/0009_23.ppm
[01/13/2020-13:59:47] [I]     Reference File: /home/nvidia/xavierRT/tensorrt/data/int8_api/reference_labels.txt
[01/13/2020-13:59:47] [I]     Dynamic Range File: /home/nvidia/xavierRT/tensorrt/bin/data/int8_api/FRD_dynamic_range.txt
[01/13/2020-13:59:47] [I] Building and running a INT8 GPU inference engine for /home/nvidia/xavierRT/tensorrt/bin/data/int8_api/AIRD_0203_notanh-simplify-pruning.onnx
******set dlaCore
[01/13/2020-13:59:48] [I] Setting Per Layer Computation Precision
[01/13/2020-13:59:48] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[01/13/2020-13:59:48] [I] Setting Per Tensor Dynamic Range
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 62) [Shuffle] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 63) [Constant] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 64) [Constant] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 65) [Matrix Multiply] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 68) [Identity] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 69) [Identity] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 70) [Constant] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 71) [Constant] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 72) [Matrix Multiply] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Default DLA is enabled but layer (Unnamed Layer* 74) [Softmax] is not supported on DLA, falling back to GPU.
[01/13/2020-13:59:48] [W] [TRT] Calibrator is not being used. Users must provide dynamic range for all tensors that are not Int32.
[01/13/2020-13:59:48] [W] [TRT] DLA allows only same dimensions inputs to Elementwise.
[01/13/2020-13:59:48] [W] [TRT] Internal DLA error for layer (Unnamed Layer* 66) [ElementWise]. Switching to GPU fallback.
[01/13/2020-13:59:48] [F] [TRT] Assertion failed: begin >= 0 && end <= dims.nbDims && begin <= end
../builder/symbolicDims.cpp:502
Aborting...

[01/13/2020-13:59:48] [E] [TRT] ../builder/symbolicDims.cpp (502) - Assertion Error in axisRange: 0 (begin >= 0 && end <= dims.nbDims && begin <= end)
[01/13/2020-13:59:48] [E] Unable to build cuda engine.
&&&& FAILED TensorRT.sample_int8_api # ./../../bin/sample_int8_api
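(For context: the "Calibrator is not being used" warning above is expected with sampleINT8API, which sets a per-tensor dynamic range from the range file instead of running INT8 calibration. A rough sketch of that step, where `lookupRange` is a hypothetical helper that returns the max-abs value for a tensor name from the dynamic-range file:)

```cpp
// Sketch: setting per-tensor dynamic ranges instead of running a calibrator.
// lookupRange() is a hypothetical helper reading the dynamic-range file by tensor name.
for (int i = 0; i < network->getNbLayers(); ++i)
{
    nvinfer1::ILayer* layer = network->getLayer(i);
    for (int j = 0; j < layer->getNbOutputs(); ++j)
    {
        nvinfer1::ITensor* tensor = layer->getOutput(j);
        float maxAbs = lookupRange(tensor->getName());
        tensor->setDynamicRange(-maxAbs, maxAbs); // ITensor::setDynamicRange(min, max)
    }
}
```

Network input tensors need dynamic ranges as well; the sample handles those the same way.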

What is the problem?

Hi,

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow version
o TensorRT version
o If Jetson, OS, hw versions

Also, if possible please share the script and model file to reproduce the issue.

Thanks

I tried replacing the Clip(0~6) layer with ReLU; that did not work.

But after removing the GlobalAveragePool layer, the model can run.

So, is GlobalAveragePool not supported on DLA?
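(One way to check is to query the builder config for each layer before building; a sketch assuming TensorRT 6's `IBuilderConfig::canRunOnDLA`:)

```cpp
// Sketch: print which layers the builder reports as DLA-capable.
// 'network' and 'config' are the sample's INetworkDefinition* and IBuilderConfig*.
for (int i = 0; i < network->getNbLayers(); ++i)
{
    nvinfer1::ILayer* layer = network->getLayer(i);
    std::cout << layer->getName() << ": "
              << (config->canRunOnDLA(layer) ? "DLA" : "GPU fallback") << std::endl;
}
```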

Hi, the device is a Xavier:
OS: Ubuntu 18.04
JetPack: 4.3
TensorRT: 6.0.1
CUDA: 10.0
cuDNN: 7.6.3

Hi, it is not a GlobalAveragePool issue.

We narrowed down the layer that might be causing the problem; the result is:

This model builds successfully:
https://drive.google.com/open?id=179IBitNnI4pGYm4iFwdUoExDdN25H4Sv

But this model fails:
https://drive.google.com/open?id=1uM5w-8wvu8iFKKg6tHTCoIA5Z_s5Ne0d

The only difference is the last ReLU layer.

Please help, thanks!

Moving this to the Xavier forum so the Jetson team can take a look

Hi,

We need to reproduce this issue before giving further suggestions.
Would you mind sharing your customized model with us?

Thanks.

Hi qmara781128,

Is this still an issue? If so, please share your customized model with us so we can help provide suggestions.

Sorry I'm late; I've recently been busy with other things.

The two models are linked as follows:

Model that fails:
https://drive.google.com/open?id=1yCTBhE7U_Eh-vssPWfpxks7Sh6axaXeO

Model that builds successfully:
https://drive.google.com/open?id=1htFPCRrxcANOFPRErb5NtOTlAvrfU5rD

Hi,

Sorry for the late update.

We can reproduce this error in our environment.
We have passed this issue to our DLA team and will share more information with you once we get feedback.

Thanks.

Hi,

This issue is fixed in TensorRT version 7.1.
This package will be available in the next JetPack 4.4 release.

Thanks.