I purchased a TX2 and a Xavier, but the Xavier's inference performance is not as fast as the TX2's, so I would like to know if there is a way to improve it.
We ran inference with three different deep learning models on the TX2 and the Xavier.
The first inference on the Xavier tends to be slow:
TX2 First inference : 9.4 s
TX2 Second inference : 530 ms
Xavier First inference : 13 s
Xavier Second inference : 510 ms
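For context, the timings above are per-call wall-clock measurements; the first call typically includes one-off costs (CUDA context creation, cuDNN autotuning, graph optimization). A minimal sketch of how such per-call timing can be taken — `run_fn` is a hypothetical stand-in for the actual `sess.run(...)` call, not code from the original post:

```python
import time

def time_inference(run_fn, n_runs=3):
    """Time each call separately so one-off warm-up cost on the
    first call is visible next to steady-state latency."""
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_fn()  # hypothetical stand-in for sess.run(output, feed_dict=...)
        timings.append(time.perf_counter() - start)
    return timings
```

Comparing `timings[0]` against the later entries separates warm-up overhead from steady-state inference latency.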
According to the spec sheet, the Xavier should be much faster, so this result is strange.
Please confirm it.
- Package versions
- TX2
Jetpack 4.3
cuda 10.0
cudnn 7.6
tensorflow 1.15.2+nv20.2.tf1
tensorrt 6.0.1.10
- Xavier
Jetpack 4.3
cuda 10.0
cudnn 7.6
tensorflow 1.15.0+nv20.1.tf1
tensorrt 6.0.1.10
- Model used
Tiny UNet-Industrial:
DeepLearningExamples/TensorFlow/Segmentation/UNet_Industrial at master · NVIDIA/DeepLearningExamples · GitHub