Problem:
Running the same C++ program with the same UFF file on both platforms: TensorRT 5.0 RC on Xavier produces UFFParser errors, while TensorRT 4.0 GA on TX2 works fine. (The same code and file also work with TensorRT 5.0.0.10 and TensorRT 4.0.1.6 on a 1080 Ti.)
When running with FP32:
UFFParser: Parser error: test_net/concat_2: Concat operation axis is out of bounds for layer test_net/concat_2
When running with FP16:
ERROR: UFFParser: Parser error: test_net/BatchNorm_11/moving_variance: Weight 76611.585938 is outside of [65504.000000, -65504.000000].
Questions:
- I also tried generating the UFF file with uff 0.5.1, but still got the same errors for both FP32 and FP16 on Xavier. Should (or can) I downgrade my Xavier to JetPack 3.3 with TensorRT 4.0 GA, or should I modify the network instead?
- For FP16: does the FP16 option rescale the weights in the network? If not, do I need to train in FP16 to make FP16 inference work on Xavier?
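For context on the FP16 error above: 65504 is the largest finite value representable in IEEE 754 half precision, so a BatchNorm moving variance of ~76611.59 cannot be stored as an FP16 weight at all unless the converter rescales or clamps it. A minimal stdlib-only Python sketch of the overflow (the helper name `fits_in_fp16` is mine, not part of any TensorRT API):

```python
import struct

FP16_MAX = 65504.0  # largest finite IEEE 754 half-precision value

def fits_in_fp16(value):
    """Return True if `value` can be packed as a finite FP16 number."""
    try:
        # 'e' is the half-precision float format (Python >= 3.6);
        # packing a value beyond the FP16 range raises OverflowError.
        struct.pack('<e', value)
        return True
    except OverflowError:
        return False

print(fits_in_fp16(65504.0))       # exactly at the FP16 limit -> True
print(fits_in_fp16(76611.585938))  # the moving_variance from the error -> False
```

This suggests the parser is rejecting the raw weight rather than rescaling it, so folding or normalizing the BatchNorm variances before conversion may be necessary for FP16.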
Thanks!
Jetson Xavier (JetPack 4.0 EA):
- Linux distro and version: 18.04
- CUDA version: 10.0
- cuDNN version: 7.3
- TensorFlow version: 1.11
- TensorRT version: 5.0 RC

Jetson TX2 (JetPack 3.3):
- Linux distro and version: 16.04
- CUDA version: 9.0
- cuDNN version: 7.1.5
- TensorFlow version: 1.9
- TensorRT version: 4.0 GA

1080 Ti:
- Linux distro and version: 16.04
- NVIDIA driver version: 390.48
- CUDA version: 9.0
- cuDNN version: 7.1.3
- TensorFlow version: 1.8
- TensorRT version: 5.0.0.10 and 4.0.1.6