Hi, I wanted to convert a PyTorch transformer model to a TensorRT engine on the Jetson. First I converted it to an ONNX model with opset 17, tested it, and it ran perfectly on my laptop using TensorRT 8.6.1 (I need LayerNormalization) with CUDA 11.3. However, when I try to do the same on the Jetson I get some CUDA errors; please see the Jetson log. I'm currently stuck and need some guidance on how to get this working. The only difference I could find is that the Jetson uses CUDA 12.0 instead of 11.3. I also ran the CUDA 12.0 samples successfully on the Jetson. For reference, I have attached the laptop build log too.
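In case it helps, here is roughly the export/build flow I'm using. The transformer below is just a stand-in for my actual model, and the file names, shapes, and trtexec flags are placeholders for illustration:

```python
import torch
import torch.nn as nn

# Stand-in transformer (my real model is different; this just shows the export settings I use).
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
model = nn.TransformerEncoder(layer, num_layers=2).eval()
dummy = torch.randn(1, 128, 512)  # placeholder input shape

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=17,  # opset 17 so LayerNormalization is exported as a single op
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Then on the target device I build the engine along the lines of:
#   trtexec --onnx=model.onnx --saveEngine=model.engine
```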
From what I could read, TensorRT 8.6 is the only version that supports LayerNormalization (so TensorRT 8.5 is not an option if that is the case). I downloaded the TensorRT 8.6 GA build for ARM SBSA for Ubuntu 20.04 and CUDA 12.0, but I guess that is not compatible with the Jetson?
Furthermore, I saw an article saying that it is possible to upgrade to CUDA 12 on the Jetson, and it seems to work so far: