Tlt-converter fails when converting yolov4_resnet18 on Xavier AGX

I’ve been following the guide on training and deploying object detection models with the TLT Jupyter notebook examples, specifically the YOLO_v4 ResNet-18 example.
The training and export steps work fine, but when I try to convert the model to TensorRT on the Xavier AGX it fails:

$ ./tlt-converter -k KEY -d 3,640,640 -o BatchedNMS -c cal.bin -t int8 -i nchw yolov4_resnet18_epoch_080.etlt
[ERROR] UffParser: Validator error: FirstDimTile_2: Unsupported operation _BatchTilePlugin_TRT
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

The Xavier board was updated to JetPack 4.5 using the SDK Manager, and I’m using the tlt-converter build that matches JetPack 4.5.
The conversion works when run on the PC that did the training.
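One thing worth checking is whether the board's stock libnvinfer_plugin actually contains the plugin named in the error. A minimal sketch (the library path is the usual JetPack location; adjust if yours differs):

```shell
# Report whether a libnvinfer_plugin build contains the BatchTilePlugin_TRT symbol.
check_plugin() {
  lib="$1"
  if [ -e "$lib" ]; then
    # strings + grep is a crude but dependency-free way to probe a shared library.
    if strings "$lib" | grep -q BatchTilePlugin_TRT; then
      echo "BatchTilePlugin_TRT present in $lib"
    else
      echo "BatchTilePlugin_TRT missing from $lib"
    fi
  else
    echo "no library at $lib"
  fi
}

# Usual JetPack location on Jetson; adjust the path if your install differs.
check_plugin /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
```

If the plugin is reported missing, the UffParser error comes from the library, not from a wrong encoding key.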

Thank you in advance.

I did find a workaround: downloading the prebuilt plugin library from deepstream_tlt_apps/TRT-OSS/Jetson/TRT7.1 at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub
and swapping it in:

$ wget
$ sudo mv /usr/lib/aarch64-linux-gnu/
$ sudo ln -s /usr/lib/aarch64-linux-gnu/  /usr/lib/aarch64-linux-gnu/
$ sudo ln -s /usr/lib/aarch64-linux-gnu/  /usr/lib/aarch64-linux-gnu/

Though I’m not sure whether that is the correct approach?
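Spelled out, the swap looks roughly like the following. This is a dry-run sketch with hypothetical filenames: libnvinfer_plugin.so.7.1.3 is assumed to be what the TRT7.1 Jetson folder ships, so verify the actual name in the repo before running anything.

```shell
# Dry-run sketch of the library swap. Filenames are assumptions -- verify the
# actual plugin name in the deepstream_tlt_apps TRT-OSS/Jetson/TRT7.1 folder.
PLUGIN=libnvinfer_plugin.so.7.1.3
LIBDIR=/usr/lib/aarch64-linux-gnu

# run() only prints each step; drop the echo to execute them on the board.
run() { echo "+ $*"; }

run wget "https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/raw/master/TRT-OSS/Jetson/TRT7.1/$PLUGIN"
run sudo mv "$LIBDIR/$PLUGIN" "$LIBDIR/$PLUGIN.bak"   # back up the stock library first
run sudo cp "$PLUGIN" "$LIBDIR/$PLUGIN"               # drop in the OSS build
run sudo ln -sf "$LIBDIR/$PLUGIN" "$LIBDIR/libnvinfer_plugin.so.7"  # refresh loader symlink
run sudo ldconfig                                     # rebuild the dynamic-linker cache
```

Keeping the .bak copy and re-running ldconfig means the change can be reverted by restoring the original file.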

For YOLO_v4, as for YOLOv3, we need the batchTilePlugin and batchedNMSPlugin plugins from the TensorRT OSS build. See YOLOv3 — Transfer Learning Toolkit 3.0 documentation

For this step, please strictly follow deepstream_tlt_apps/TRT-OSS/Jetson/TRT7.1 at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub
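For reference, building the OSS plugins yourself on Jetson has roughly this shape. Again a dry-run sketch: the branch, the cmake flags, and GPU_ARCHS=72 for the Xavier AGX integrated GPU are assumptions to verify against the README itself.

```shell
# Dry-run sketch of building the TensorRT OSS plugins on Jetson for TRT 7.1.
# Branch and flags are assumptions -- double-check them against the README.
run() { echo "+ $*"; }

run git clone -b release/7.1 https://github.com/NVIDIA/TensorRT.git
run git -C TensorRT submodule update --init --recursive
run mkdir -p TensorRT/build
# GPU_ARCHS=72 targets the Xavier AGX integrated GPU.
run cmake -S TensorRT -B TensorRT/build -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu
run make -C TensorRT/build nvinfer_plugin -j"$(nproc)"
```

The resulting libnvinfer_plugin is what gets swapped into /usr/lib/aarch64-linux-gnu in place of the stock library.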
Here is a reference I posted earlier in another topic:
Failling in building sample from TLT-DEEPSTREAM - #15 by Morganh