FP32 precision support on Jetson AGX Orin

When I profile resnet50_fp32.onnx on a Jetson AGX Orin device, I follow the instructions here: GitHub - NVIDIA-AI-IOT/jetson_benchmarks: Jetson Benchmark

It works with int8 and fp16 precision:
case 1: sudo python3 benchmark.py --model_name resnet50 … --precision int8
case 2: sudo python3 benchmark.py --model_name resnet50 … --precision fp16

but it fails with fp32 precision:
case 3: sudo python3 benchmark.py --model_name resnet50 … --precision fp32

So the question is: does the NVIDIA TensorRT engine support fp32 precision? If it does, how do I pass the precision parameter?

For reference, the relevant precision flags from trtexec --help are:

--noTF32 Disable TF32 precision (default is to enable TF32, in addition to fp32)
--fp16 Enable fp16 precision, in addition to fp32 (default = disabled)
--int8 Enable int8 precision, in addition to fp32 (default = disabled)
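As those flags suggest, fp32 is TensorRT's base precision: it is always enabled, and there is no --fp32 flag to pass. When invoking trtexec directly, simply omitting the fp16/int8 flags builds an fp32 engine. A minimal sketch (the ONNX file path is an assumption, and whether the benchmark script forwards these flags depends on the jetson_benchmarks version):

```shell
# Build and time an fp32 engine: pass no --fp16/--int8 flag at all.
# On Ampere GPUs such as Orin, TF32 is enabled by default for fp32
# layers; add --noTF32 to force strict fp32 math.
trtexec --onnx=resnet50_fp32.onnx --noTF32
```

If benchmark.py rejects --precision fp32, it is likely that the script only maps int8/fp16 to the corresponding trtexec flags; running trtexec as above is a way to confirm that the engine itself supports fp32.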