Segfault when converting a model to INT8 for DLA

I have a particular model that causes a segfault when converting to INT8 for DLA on Jetson Xavier. The same model converts to INT8 without issue when targeting the GPU, and my other models convert fine for both GPU and DLA.

I am using the C++ API. I can share the model if needed.
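
For reference, the conversion path is roughly the following (a minimal sketch of the builder setup, not my exact code; the network and calibrator objects stand in for whatever is already parsed and created):

```cpp
// Sketch: building an INT8 engine targeted at DLA with the TensorRT C++ API
// (TensorRT 7.1 on JetPack 4.5.1). "network" and "calibrator" are assumed to
// already exist from the usual parsing / calibration setup.
#include "NvInfer.h"

nvinfer1::ICudaEngine* buildDlaInt8Engine(nvinfer1::IBuilder& builder,
                                          nvinfer1::INetworkDefinition& network,
                                          nvinfer1::IInt8Calibrator* calibrator)
{
    nvinfer1::IBuilderConfig* config = builder.createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);

    // Request INT8 precision and supply the calibrator.
    config->setFlag(nvinfer1::BuilderFlag::kINT8);
    config->setInt8Calibrator(calibrator);

    // Target DLA core 0 and let unsupported layers fall back to the GPU.
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);

    nvinfer1::ICudaEngine* engine = builder.buildEngineWithConfig(network, *config);
    config->destroy();
    return engine;  // nullptr if the build failed
}
```

The segfault happens during the build step when DLA is the default device type; with DeviceType::kGPU the same build succeeds.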

Environment

TensorRT Version: 7.1.3 (JetPack 4.5.1)
GPU Type: Jetson AGX Xavier

Hi, please refer to the links below on performing inference in INT8.

Thanks!

To restate my question: one particular model segfaults when I convert it to INT8 targeting DLA. The same model does not segfault when I target the GPU.

I am able to convert my other models to INT8 for both GPU and DLA without problems.

Hi @jseng,

We recommend going through the section of the TensorRT developer guide that covers working with DLA and running the related samples:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic
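
As a quick check along the lines of that guide, you can ask the builder which layers it believes can run on the DLA with the configuration you are using; layers that cannot will fall back to the GPU and are a good place to start when one model behaves differently from the others. A minimal sketch, assuming the same network and config objects you already create:

```cpp
// Sketch: after configuring INT8 + DLA on the IBuilderConfig, report which
// layers TensorRT considers runnable on the DLA. The rest are scheduled on
// the GPU when kGPU_FALLBACK is set.
#include <iostream>
#include "NvInfer.h"

void reportDlaSupport(const nvinfer1::INetworkDefinition& network,
                      const nvinfer1::IBuilderConfig& config)
{
    for (int i = 0; i < network.getNbLayers(); ++i)
    {
        const nvinfer1::ILayer* layer = network.getLayer(i);
        std::cout << layer->getName() << ": "
                  << (config.canRunOnDLA(layer) ? "DLA" : "GPU fallback")
                  << std::endl;
    }
}
```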

If you need further assistance, we request you to post your query on the Jetson AGX Xavier forum. You may get better help there.

Thank you.