Tao-deploy on Orin AGX CLI Error

I have already tried to use the tao-converter, following your post:
How to run tlt-converter - Intelligent Video Analytics / TAO Toolkit - NVIDIA Developer Forums

I downloaded the binary, but it does not run on the Jetson Orin with JetPack 5.1.1, since the binary is built against TensorRT 7.x.

As a workaround, I created symbolic links for libnvinfer.so.7, libnvinfer_plugin.so.7, and libnvparsers.so.7 pointing to the .so.8 versions available on the Orin, roughly as sketched below.
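For reference, these are approximately the commands I used; I am assuming the standard JetPack library location /usr/lib/aarch64-linux-gnu/, so adjust the paths if your TensorRT libraries are installed elsewhere:

# point the .so.7 names expected by tlt-converter at the .so.8 libraries shipped with JetPack 5.1.1
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvparsers.so.8 /usr/lib/aarch64-linux-gnu/libnvparsers.so.7

With those links in place the binary starts, but when I run it with the following command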

./tlt-converter resnet34_peoplenet_pruned.etlt -k tlt_encode -c PeopleNet_trt.txt -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,544,960 -i nchw -e peoplenet_int8.engine -m 64 -t int8 -b 64

I get the following errors:

[INFO] [MemUsageChange] Init CUDA: CPU +221, GPU +0, now: CPU 249, GPU 3728 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +303, GPU +427, now: CPU 574, GPU 4174 (MiB)
[ERROR] 2: [logging.cpp::decRefCount::65] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
Segmentation fault (core dumped)

I get the same error when using fp32 and with different batch sizes. The calibration file is taken from the official repository:

nvidia-tao/tao_deploy/specs/PeopleNet at main · NVIDIA-AI-IOT/nvidia-tao · GitHub

Any suggestions?