cuDLAStandaloneMode load model report error

Hi,
I converted /usr/src/tensorrt/data/mnist/mnist.onnx to an engine:

trtexec --onnx=mnist.onnx --saveEngine=mnist_engine.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --fp16

[10/13/2022-20:09:10] [I] Finish parsing network model
[10/13/2022-20:09:10] [I] [TRT] ---------- Layers Running on DLA ----------
[10/13/2022-20:09:10] [I] [TRT] [DlaLayer] {ForeignNode[Conv_0…MaxPool_5]}
[10/13/2022-20:09:10] [I] [TRT] ---------- Layers Running on GPU ----------
[10/13/2022-20:09:11] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +534, GPU +505, now: CPU 1148, GPU 10840 (MiB)
[10/13/2022-20:09:11] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +85, GPU +89, now: CPU 1233, GPU 10929 (MiB)

All layers can run on DLA.

However, when I run

./cuDLAStandaloneMode mnist_engine.trt 1.pgm
The file size = 48054
Device created successfully
Error in cudlaModuleLoadFromMemory = 7
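
For context, the sample's load path is roughly the following: create DLA core 0 in standalone mode, then hand the file contents to cudlaModuleLoadFromMemory, which expects a standalone DLA loadable rather than a regular serialized TensorRT engine. This is only a minimal sketch assuming the public cuDLA API from cudla.h; it is not the sample's exact source, and the printouts just mirror the output above.

// Minimal sketch of the standalone cuDLA load sequence (assumption: public
// cuDLA API from cudla.h; this is not the sample's exact code).
#include <cudla.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s <dla-loadable>\n", argv[0]); return 1; }

    // Read the whole file into memory.
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<uint8_t> blob(static_cast<size_t>(size));
    std::fread(blob.data(), 1, blob.size(), f);
    std::fclose(f);
    std::printf("The file size = %ld\n", size);

    // Create DLA core 0 in standalone mode (no CUDA context involved).
    cudlaDevHandle dev;
    cudlaStatus st = cudlaCreateDevice(0, &dev, CUDLA_STANDALONE);
    if (st != cudlaSuccess) { std::printf("cudlaCreateDevice failed = %d\n", static_cast<int>(st)); return 1; }
    std::printf("Device created successfully\n");

    // This call expects a standalone DLA loadable; passing a regular
    // serialized TensorRT engine makes it fail with a non-zero status.
    cudlaModule module;
    st = cudlaModuleLoadFromMemory(dev, blob.data(), blob.size(), &module, 0);
    if (st != cudlaSuccess)
        std::printf("Error in cudlaModuleLoadFromMemory = %d\n", static_cast<int>(st));
    else
        cudlaModuleUnload(module, 0);

    cudlaDestroyDevice(dev);
    return st == cudlaSuccess ? 0 : 1;
}

It links against libcudla, e.g. with something like g++ load_test.cpp -lcudla (the file name is made up here, and include/library paths depend on the JetPack install).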

$ jetson_release
‘DISPLAY’ environment variable not set… skipping surface info

  • Jetson AGX Orin
    • Jetpack UNKNOWN [L4T 35.1.0]
    • NV Power Mode: MAXN - Type: 0
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 11.8.89
    • cuDNN: 8.4.1.50
    • TensorRT: 8.4.1.5
    • OpenCV: 4.4.0 compiled CUDA: YES
    • VPI: 2.1.6
    • Vulkan: 1.3.203

What’s the problem?

Can you try the command below to generate a DLA loadable?
trtexec --onnx=mnist.onnx --fp16 --useDLACore=0 --saveEngine=./mnist.bin --inputIOFormats=fp16:chw16 --outputIOFormats=fp16:chw16 --buildOnly --safe --verbose
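
If the --safe path keeps failing, roughly the same standalone loadable can also be produced through the TensorRT builder API by setting EngineCapability::kDLA_STANDALONE. The sketch below assumes the TensorRT 8.4 C++ API; the type/format calls are intended to mirror --fp16 and the fp16:chw16 I/O formats, and error checking is omitted.

// Sketch: build a standalone DLA loadable for mnist.onnx with the TensorRT
// builder API (assumed TensorRT 8.4 C++ API; not the exact trtexec recipe).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>
#include <fstream>
#include <memory>

using namespace nvinfer1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::printf("%s\n", msg);
    }
} gLogger;

int main()
{
    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, gLogger));
    parser->parseFromFile("mnist.onnx", static_cast<int>(ILogger::Severity::kWARNING));

    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    config->setFlag(BuilderFlag::kFP16);                             // --fp16
    config->setDefaultDeviceType(DeviceType::kDLA);                  // --useDLACore=0
    config->setDLACore(0);
    config->setEngineCapability(EngineCapability::kDLA_STANDALONE);  // DLA loadable, not a GPU engine

    // Mirror --inputIOFormats / --outputIOFormats = fp16:chw16.
    for (int i = 0; i < network->getNbInputs(); ++i)
    {
        network->getInput(i)->setType(DataType::kHALF);
        network->getInput(i)->setAllowedFormats(1U << static_cast<uint32_t>(TensorFormat::kCHW16));
    }
    for (int i = 0; i < network->getNbOutputs(); ++i)
    {
        network->getOutput(i)->setType(DataType::kHALF);
        network->getOutput(i)->setAllowedFormats(1U << static_cast<uint32_t>(TensorFormat::kCHW16));
    }

    // The serialized result is what cudlaModuleLoadFromMemory expects.
    auto blob = std::unique_ptr<IHostMemory>(builder->buildSerializedNetwork(*network, *config));
    std::ofstream out("mnist.bin", std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), static_cast<std::streamsize>(blob->size()));
    return 0;
}

Link with -lnvinfer and -lnvonnxparser; header and library locations depend on the JetPack install.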

[10/14/2022-11:11:25] [E] Safety is not supported because safety runtime library is unavailable.
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # trtexec --onnx=mnist.onnx --fp16 --useDLACore=0 --saveEngine=./mnist.bin --inputIOFormats=fp16:chw16 --outputIOFormats=fp16:chw16 --buildOnly --safe --verbose

Did you install tensorrt-safe? Try “sudo apt-get update” followed by “sudo apt-get install tensorrt-safe”.

sudo apt-get install tensorrt-safe
Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package tensorrt-safe

https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/8.4.1/local_repos/nv-tensorrt-repo-ubuntu2004-cuda11.6-trt8.4.1.5-ga-20220604_1-1_arm64.deb

I downloaded the .deb file from the NVIDIA website and installed it with dpkg -i, not with apt install.

I tried the following command:

trtexec --deploy=/usr/src/tensorrt/data/mnist/mnist.prototxt --output=prob --useDLACore=0 --fp16 --inputIOFormats=fp16:chw16 --outputIOFormats=fp16:chw16 --saveEngine=./mnist.bin

./cuDLAStandaloneMode ./mnist.bin 1.pgm
The file size = 1167095
Device created successfully
Error in cudlaModuleLoadFromMemory = 7

You can try to work around “Safety is not supported because safety runtime library is unavailable” via:
cd /lib/aarch64-linux-gnu
sudo ln -s libnvinfer.so.8.4.1 libnvinfer_safe.so
sudo ln -s libnvinfer.so.8.4.1 libnvinfer_safe.so.8

trtexec --deploy=/usr/src/tensorrt/data/mnist/mnist.prototxt --output=prob --useDLACore=0 --fp16 --inputIOFormats=fp16:chw16 --outputIOFormats=fp16:chw16 --saveEngine=./mnist.bin --buildOnly --safe

It works. Thanks.