Cannot create DLA engine using trtexec on Xavier

Hi,

I am trying to enable a DLA core using trtexec.

But I got the following error messages when building the sample MNIST ONNX model.

nvidia@xavier:~$ /usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx --saveEngine=/usr/src/tensorrt/data/mnist/mnist.trt --fp16 --workspace=2048 --useDLACore=0 --allowGPUFallback
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx --saveEngine=/usr/src/tensorrt/data/mnist/mnist.trt --fp16 --workspace=2048 --useDLACore=0 --allowGPUFallback
[06/28/2022-16:19:41] [I] === Model Options ===
[06/28/2022-16:19:41] [I] Format: ONNX
[06/28/2022-16:19:41] [I] Model: /usr/src/tensorrt/data/mnist/mnist.onnx
[06/28/2022-16:19:41] [I] Output:
[06/28/2022-16:19:41] [I] === Build Options ===
[06/28/2022-16:19:41] [I] Max batch: 1
[06/28/2022-16:19:41] [I] Workspace: 2048 MB
[06/28/2022-16:19:41] [I] minTiming: 1
[06/28/2022-16:19:41] [I] avgTiming: 8
[06/28/2022-16:19:41] [I] Precision: FP32+FP16
[06/28/2022-16:19:41] [I] Calibration:
[06/28/2022-16:19:41] [I] Safe mode: Disabled
[06/28/2022-16:19:41] [I] Save engine: /usr/src/tensorrt/data/mnist/mnist.trt
[06/28/2022-16:19:41] [I] Load engine:
[06/28/2022-16:19:41] [I] Builder Cache: Enabled
[06/28/2022-16:19:41] [I] NVTX verbosity: 0
[06/28/2022-16:19:41] [I] Inputs format: fp32:CHW
[06/28/2022-16:19:41] [I] Outputs format: fp32:CHW
[06/28/2022-16:19:41] [I] Input build shapes: model
[06/28/2022-16:19:41] [I] Input calibration shapes: model
[06/28/2022-16:19:41] [I] === System Options ===
[06/28/2022-16:19:41] [I] Device: 0
[06/28/2022-16:19:41] [I] DLACore: 0(With GPU fallback)
[06/28/2022-16:19:41] [I] Plugins:
[06/28/2022-16:19:41] [I] === Inference Options ===
[06/28/2022-16:19:41] [I] Batch: 1
[06/28/2022-16:19:41] [I] Input inference shapes: model
[06/28/2022-16:19:41] [I] Iterations: 10
[06/28/2022-16:19:41] [I] Duration: 3s (+ 200ms warm up)
[06/28/2022-16:19:41] [I] Sleep time: 0ms
[06/28/2022-16:19:41] [I] Streams: 1
[06/28/2022-16:19:41] [I] ExposeDMA: Disabled
[06/28/2022-16:19:41] [I] Spin-wait: Disabled
[06/28/2022-16:19:41] [I] Multithreading: Disabled
[06/28/2022-16:19:41] [I] CUDA Graph: Disabled
[06/28/2022-16:19:41] [I] Skip inference: Disabled
[06/28/2022-16:19:41] [I] Inputs:
[06/28/2022-16:19:41] [I] === Reporting Options ===
[06/28/2022-16:19:41] [I] Verbose: Disabled
[06/28/2022-16:19:41] [I] Averages: 10 inferences
[06/28/2022-16:19:41] [I] Percentile: 99
[06/28/2022-16:19:41] [I] Dump output: Disabled
[06/28/2022-16:19:41] [I] Profile: Disabled
[06/28/2022-16:19:41] [I] Export timing to JSON file:
[06/28/2022-16:19:41] [I] Export output to JSON file:
[06/28/2022-16:19:41] [I] Export profile to JSON file:
[06/28/2022-16:19:41] [I]
----------------------------------------------------------------
Input filename:   /usr/src/tensorrt/data/mnist/mnist.onnx
ONNX IR version:  0.0.3
Opset version:    8
Producer name:    CNTK
Producer version: 2.5.1
Domain:           ai.cntk
Model version:    1
Doc string:
----------------------------------------------------------------
[06/28/2022-16:19:42] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/28/2022-16:19:42] [E] Cannot create DLA engine, 0 not available
[06/28/2022-16:19:42] [E] Engine creation failed
[06/28/2022-16:19:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/usr/src/tensorrt/data/mnist/mnist.onnx --saveEngine=/usr/src/tensorrt/data/mnist/mnist.trt --fp16 --workspace=2048 --useDLACore=0 --allowGPUFallback

I also tried googlenet, and the same error occurred.

/usr/src/tensorrt/bin/trtexec --deploy=/usr/src/tensorrt/data/googlenet/googlenet.prototxt --output=prob --verbose --useDLACore=0 --allowGPUFallback 

I found that the status of both DLAs always shows “suspended”.
Is there any way to turn it to “active”?

DLA 0
Enabled: enabled
Control: auto
Status: suspended
Usage: 0

DLA 1
Enabled: enabled
Control: auto
Status: suspended
Usage: 0
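The status output above looks like the kernel’s runtime power-management files for the DLA engines. As a sketch, you could dump them directly; the `*nvdla*` glob under `/sys/devices/platform/host1x/` is an assumption about how the Xavier device tree names the nodes, so adjust it if your system differs:

```shell
#!/bin/sh
# Sketch: dump the runtime-PM state of each DLA engine node.
# NOTE: the nvdla sysfs path below is an assumption for Xavier (JetPack 4.x);
# check your own /sys/devices/platform/host1x/ listing if nothing matches.
found=0
for d in /sys/devices/platform/host1x/*nvdla*; do
  [ -d "$d" ] || continue
  found=1
  echo "DLA node: $d"
  echo "  Control: $(cat "$d/power/control" 2>/dev/null)"
  echo "  Status:  $(cat "$d/power/runtime_status" 2>/dev/null)"
done
# Print a fallback so the script is still informative on non-Jetson systems.
[ "$found" -eq 1 ] || echo "no nvdla device nodes found"
```

On a healthy board this should show `Control: auto` with `Status: suspended` while the DLAs are idle.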

Device information

NVIDIA Jetson AGX Xavier [16GB]
 L4T 32.5.0 [ JetPack 4.5 ]
   Ubuntu 18.04.6 LTS
   Kernel Version: 4.9.201-tegra
 CUDA 10.2.89
   CUDA Architecture: 7.2
 OpenCV version: 4.1.1
   OpenCV Cuda: NO
 CUDNN: 8.0.0.180
 TensorRT: 7.1.3.0
 Vision Works: 1.6.0.501
 VPI: ii libnvvpi1 1.0.12 arm64 NVIDIA Vision Programming Interface library
 Vulcan: 1.2.70

Thanks.

It looks like you are using Jetson AGX Xavier.
I’m moving your topic to the Jetson board first.

Hi,

There is newer software available for Xavier already.
Would you mind upgrading your environment to JetPack 4.6.2 or JetPack 5.0.1 DP first?
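Before deciding on an upgrade, you can confirm which L4T release is actually installed; `/etc/nv_tegra_release` is the standard location on Jetson boards:

```shell
#!/bin/sh
# Print the installed L4T release string on a Jetson board.
# The first line looks like: "# R32 (release), REVISION: 5.0, ..."
if [ -r /etc/nv_tegra_release ]; then
  head -n 1 /etc/nv_tegra_release
else
  echo "/etc/nv_tegra_release not found (not a Jetson L4T system?)"
fi
```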

Thanks.

Hi AastaLLL,

The product’s environment is fixed, so is there any way to resolve this without updating the JetPack version?

It seems the error is caused by the old JetPack version.

Thanks.

Thanks for the feedback.

We are going to reproduce this issue with JetPack 4.5.
Will share more information with you later.

Hi,

We have tried your command on a JetPack 4.5 environment.
It worked without issue. Please find the output log attached.
JP-4.5_trtexec.log (56.7 KB)

Thanks.

Hi,

Thanks for the response.
I still can’t reproduce a result like yours.

[06/30/2022-15:34:26] [E] Cannot create DLA engine, 0 not available
[06/30/2022-15:34:26] [E] Engine creation failed
[06/30/2022-15:34:26] [E] Engine set up failed

Could you cat the DLA status?
Is the failure caused by the DLA status being suspended?

Thanks for your help.

Hi,

The status of your DLA looks good to us.
The DLA stays in ‘suspended’ mode and only becomes ‘active’ while a layer is running inference.

In case there are broken libraries in your environment, is it possible to reflash the system with JetPack 4.5 and try again?

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.