How can I debug TensorRT?

Jetson Xavier
Jetpack 4.3. DP
Ubuntu 18.04
gpu_arch: 7.2
cuda: 10.0.326
cudnn: 7.6.3.28-1+cuda10.0
trt: 6.0.1.5-1+cuda10.0

Hello,

I am using TensorRT from C++.

When I execute a TRT engine, I get a segmentation fault inside the 'execute' function.

Using a system profiler, I can see that the delay occurs in a 'shuffle' layer, but I don't know what is happening.

The program prints only the segmentation fault message, with no other information:

Segmentation fault (core dumped)

How can I debug inside TensorRT functions?

Hi,

To debug the issue, you can set the ILogger severity to INFO and use the supported profiling tools to investigate bottlenecks:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#initialize_library
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#profiling-tools

You can also use the "trtexec" command-line tool to benchmark and to generate serialized engines from models:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#command-line-programs

If possible, could you please share a sample script and the model, along with the steps to reproduce the issue?

Thanks
