Trtexec crashing on execution with converted LPRNet engine file on Jetson AGX

• Hardware: Jetson AGX Xavier (NVIDIA Volta GPU), JetPack 4.6, Ubuntu 18.04
• Network type: LPRNet
• TAO Toolkit version: 3.0-21.11
• TensorRT version: 8.0.1
• Issue: trtexec crashes on execution with the converted LPRNet engine file on Jetson AGX

• How to reproduce the issue?
I trained an LPRNet model successfully and exported it to get the .etlt file with TAO 3.0-21.11.
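For completeness, the .etlt was converted to the lprnet.plan engine on the Jetson with tao-converter, roughly like the sketch below. This is only a sketch: the key nvidia_tlt, the input name image_input, the 3x48x96 input shape, and the file name lprnet_epoch-24.etlt are assumptions based on the default NGC LPRNet model and may differ for a custom one.

# sketch only: key, input name, shapes, and file name are assumptions (defaults of the NGC LPRNet model)
./tao-converter lprnet_epoch-24.etlt -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 -t fp16 -e lprnet.plan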

Then, to run the model on the Jetson AGX, I used the trtexec binary from the official TensorRT repo (TensorRT/samples/trtexec at main · NVIDIA/TensorRT · GitHub) and executed this command:

./trtexec --loadEngine=/home/koireader/work/server_1/docs/examples/jetson/concurrency_and_dynamic_batching/tao/models/lprnet/lprnet.plan --batch=1

and got this error:


&&&& RUNNING TensorRT.trtexec # ./trtexec --loadEngine=/home/koireader/work/server_1/docs/examples/jetson/concurrency_and_dynamic_batching/tao/models/lprnet/lprnet.plan --batch=1
[12/23/2021-14:13:36] [I] === Model Options ===
[12/23/2021-14:13:36] [I] Format: *
[12/23/2021-14:13:36] [I] Model:
[12/23/2021-14:13:36] [I] Output:
[12/23/2021-14:13:36] [I] === Build Options ===
[12/23/2021-14:13:36] [I] Max batch: 1
[12/23/2021-14:13:36] [I] Workspace: 16 MiB
[12/23/2021-14:13:36] [I] minTiming: 1
[12/23/2021-14:13:36] [I] avgTiming: 8
[12/23/2021-14:13:36] [I] Precision: FP32
[12/23/2021-14:13:36] [I] Calibration:
[12/23/2021-14:13:36] [I] Refit: Disabled
[12/23/2021-14:13:36] [I] Safe mode: Disabled
[12/23/2021-14:13:36] [I] Save engine:
[12/23/2021-14:13:36] [I] Load engine: /home/koireader/work/server_1/docs/examples/jetson/concurrency_and_dynamic_batching/tao/models/lprnet/lprnet.plan
[12/23/2021-14:13:36] [I] Builder Cache: Enabled
[12/23/2021-14:13:36] [I] NVTX verbosity: 0
[12/23/2021-14:13:36] [I] Tactic sources: Using default tactic sources
[12/23/2021-14:13:36] [I] Input(s)s format: fp32:CHW
[12/23/2021-14:13:36] [I] Output(s)s format: fp32:CHW
[12/23/2021-14:13:36] [I] Input build shapes: model
[12/23/2021-14:13:36] [I] Input calibration shapes: model
[12/23/2021-14:13:36] [I] === System Options ===
[12/23/2021-14:13:36] [I] Device: 0
[12/23/2021-14:13:36] [I] DLACore:
[12/23/2021-14:13:36] [I] Plugins:
[12/23/2021-14:13:36] [I] === Inference Options ===
[12/23/2021-14:13:36] [I] Batch: 1
[12/23/2021-14:13:36] [I] Input inference shapes: model
[12/23/2021-14:13:36] [I] Iterations: 10
[12/23/2021-14:13:36] [I] Duration: 3s (+ 200ms warm up)
[12/23/2021-14:13:36] [I] Sleep time: 0ms
[12/23/2021-14:13:36] [I] Streams: 1
[12/23/2021-14:13:36] [I] ExposeDMA: Disabled
[12/23/2021-14:13:36] [I] Data transfers: Enabled
[12/23/2021-14:13:36] [I] Spin-wait: Disabled
[12/23/2021-14:13:36] [I] Multithreading: Disabled
[12/23/2021-14:13:36] [I] CUDA Graph: Disabled
[12/23/2021-14:13:36] [I] Separate profiling: Disabled
[12/23/2021-14:13:36] [I] Skip inference: Disabled
[12/23/2021-14:13:36] [I] Inputs:
[12/23/2021-14:13:36] [I] === Reporting Options ===
[12/23/2021-14:13:36] [I] Verbose: Disabled
[12/23/2021-14:13:36] [I] Averages: 10 inferences
[12/23/2021-14:13:36] [I] Percentile: 99
[12/23/2021-14:13:36] [I] Dump refittable layers:Disabled
[12/23/2021-14:13:36] [I] Dump output: Disabled
[12/23/2021-14:13:36] [I] Profile: Disabled
[12/23/2021-14:13:36] [I] Export timing to JSON file:
[12/23/2021-14:13:36] [I] Export output to JSON file:
[12/23/2021-14:13:36] [I] Export profile to JSON file:
[12/23/2021-14:13:36] [I]
[12/23/2021-14:13:36] [I] === Device Information ===
[12/23/2021-14:13:36] [I] Selected Device: Xavier
[12/23/2021-14:13:36] [I] Compute Capability: 7.2
[12/23/2021-14:13:36] [I] SMs: 8
[12/23/2021-14:13:36] [I] Compute Clock Rate: 1.377 GHz
[12/23/2021-14:13:36] [I] Device Global Memory: 31928 MiB
[12/23/2021-14:13:36] [I] Shared Memory per SM: 96 KiB
[12/23/2021-14:13:36] [I] Memory Bus Width: 256 bits (ECC disabled)
[12/23/2021-14:13:36] [I] Memory Clock Rate: 1.377 GHz
[12/23/2021-14:13:36] [I]
[12/23/2021-14:13:38] [I] [TRT] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 505, GPU 14943 (MiB)
Segmentation fault (core dumped)

What could be the cause of this?

Thank you!

Could you try the 8.0 version of trtexec? TensorRT/samples/trtexec at release/8.0 · NVIDIA/TensorRT · GitHub

CUDA version: 10.2

Issue: make fails for the cloned TensorRT 8.0 release

Description: cmake succeeds, but make fails for TensorRT cloned from https://github.com/NVIDIA/TensorRT/tree/release/8.0/samples/trtexec
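The build was done roughly as below, following the usual TRT-OSS instructions for Jetson. This is only a sketch: GPU_ARCHS=72 matches Xavier's compute capability 7.2, and the exact cmake options should be taken from the release/8.0 README.

# sketch of the usual TRT-OSS build on Xavier (compute capability 7.2 -> GPU_ARCHS=72);
# take the exact cmake options from the release/8.0 README
git clone -b release/8.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/
make -j$(nproc)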

Here are the logs from the make command:

plugin/CMakeFiles/nvinfer_plugin.dir/build.make:1909: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContext.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContext.cu.o] Error 1
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:1951: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormVarSeqlenKernelHFace.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormVarSeqlenKernelHFace.cu.o] Error 1
In file included from /home/koireader/work/TensorRT/plugin/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelHFace.cu:21:0:
/home/koireader/work/TensorRT/plugin/common/common.cuh:21:10: fatal error: cub/cub.cuh: No such file or directory
#include <cub/cub.cuh>
^~~~~~~~~~~~~
compilation terminated.
plugin/CMakeFiles/nvinfer_plugin_static.dir/build.make:1993: recipe for target 'plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelHFace.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelHFace.cu.o] Error 1
In file included from /home/koireader/work/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu:27:0:
/home/koireader/work/TensorRT/plugin/common/common.cuh:21:10: fatal error: cub/cub.cuh: No such file or directory
#include <cub/cub.cuh>
^~~~~~~~~~~~~
compilation terminated.
In file included from /home/koireader/work/TensorRT/plugin/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelMTron.cu:19:0:
/home/koireader/work/TensorRT/plugin/common/common.cuh:21:10: fatal error: cub/cub.cuh: No such file or directory
#include <cub/cub.cuh>
^~~~~~~~~~~~~
compilation terminated.
plugin/CMakeFiles/nvinfer_plugin_static.dir/build.make:1937: recipe for target 'plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormKernel.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormKernel.cu.o] Error 1
In file included from /home/koireader/work/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu:27:0:
/home/koireader/work/TensorRT/plugin/common/common.cuh:21:10: fatal error: cub/cub.cuh: No such file or directory
#include <cub/cub.cuh>
^~~~~~~~~~~~~
compilation terminated.
CMakeFiles/Makefile2:1387: recipe for target 'plugin/CMakeFiles/nvinfer_plugin_static.dir/all' failed
make[1]: *** [plugin/CMakeFiles/nvinfer_plugin_static.dir/all] Error 2
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:2007: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelMTron.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormInt8InterleavedKernelMTron.cu.o] Error 1
In file included from /home/koireader/work/TensorRT/plugin/skipLayerNormPlugin/skipLayerNormKernel.cu:22:0:
/home/koireader/work/TensorRT/plugin/common/common.cuh:21:10: fatal error: cub/cub.cuh: No such file or directory
#include <cub/cub.cuh>
^~~~~~~~~~~~~
compilation terminated.
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:1937: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormKernel.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormKernel.cu.o] Error 1
plugin/CMakeFiles/nvinfer_plugin.dir/build.make:2021: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o' failed
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o] Error 1
CMakeFiles/Makefile2:1334: recipe for target 'plugin/CMakeFiles/nvinfer_plugin.dir/all' failed
make[1]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
Makefile:155: recipe for target 'all' failed
make: *** [all] Error 2

I followed a similar issue posted here: Cannot successfully pass the “make” step for TensorRT 7 which was downloaded on Github

As mentioned in that thread, I tried editing /usr/include/cudnn.h, but it makes no difference.

I also replaced /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1 with the prebuilt one from https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/TRT-OSS/Jetson
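The replacement itself was done in the usual way, roughly as below (a sketch; <download-dir> is a placeholder for wherever the prebuilt library was downloaded, and the stock library is backed up first):

# sketch: back up the stock plugin library, copy in the downloaded one, refresh the linker cache
sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1 ${HOME}/libnvinfer_plugin.so.8.0.1.bak
sudo cp <download-dir>/libnvinfer_plugin.so.8.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
sudo ldconfig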

I also followed all the steps mentioned in the issue How to export model using tlt-converter for Jetson Nano to set up TRT OSS.

Still, the above-mentioned error keeps coming up when I run make.

Also, could you delete “--batch=1” when you run trtexec?
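--batch only applies to implicit-batch engines. If the engine was built with dynamic shapes (an optimization profile), pass --shapes instead, for example as below. This is a sketch: the input name image_input and the 3x48x96 shape are assumptions based on the default LPRNet model, and <path-to> is a placeholder.

# sketch: input name and shape are assumptions (default LPRNet); <path-to> is a placeholder
./trtexec --loadEngine=<path-to>/lprnet.plan --shapes=image_input:1x3x48x96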

Also, in JetPack 4.6 there is a prebuilt trtexec. You can use it directly.
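On JetPack it normally lives under /usr/src/tensorrt/bin, so there is no need to build TensorRT OSS just to get trtexec:

# prebuilt trtexec shipped with JetPack; <path-to> is a placeholder
/usr/src/tensorrt/bin/trtexec --loadEngine=<path-to>/lprnet.plan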
