I recently got a brand new NVIDIA Xavier 16GB (thanks to NVIDIA's pricing update ^_^).
Now, straight to the point.
Setup: NVIDIA Xavier 16GB, CUDA 10.0, JetPack 4.2.2, DeepStream 4.0.1.
YOLOv3 weights & config file downloaded with the prebuild.sh shell script.
I checked YOLOv3 performance (compared against a Jetson TX2) via the default deepstream-app utility from the DeepStream SDK.
It takes almost 5-6 MINUTES to build the TensorRT engine (at least, the processing log hangs on that message during loading). After that, the MPEG stream set in the config file plays at the correct speed (about 18 FPS) with correct detections. Please note, all performance configs and utilities are set to maximum performance.
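As a side note, my understanding is that the long build only has to happen once: the first run serializes the engine to disk, and subsequent runs can deserialize it by pointing the nvinfer config at that file. Something like this in the inference config (the file name below is just an example; DeepStream generates names depending on batch size and precision):

```ini
[property]
# Reuse the engine serialized by the first run instead of rebuilding it.
# The file name is an example, not the exact name on my system.
model-engine-file=model_b1_fp16.engine
```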
So the question is: why does building the YOLOv3 network engine take that much time? Is this expected?
YOLOv2 builds much faster (about 2 minutes, but it also has fewer layers).
Although the Jetson TX2 has roughly 4-6x lower YOLOv3 inference performance (according to my tests), it takes almost the same time to build the network engine (6-8 minutes).
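For what it's worth, the reason the rebuild is slow on both boards (as I understand it) is that TensorRT profiles and auto-tunes kernels for the device at build time, and the result is then cached to disk so later runs just deserialize it. A minimal generic sketch of that build-once/deserialize pattern (NOT the actual TensorRT API; all names here are made up for illustration):

```python
import os
import pickle
import time

CACHE_PATH = "engine.cache"  # hypothetical cache file name

def expensive_build():
    """Stand-in for the slow per-device tuning pass that
    dominates the first run (minutes on a Jetson)."""
    time.sleep(0.1)  # pretend this takes a long time
    return {"layers": 106, "precision": "fp16"}  # dummy "engine"

def load_or_build(path=CACHE_PATH):
    # Reuse the serialized result if it exists; otherwise build and save it.
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    engine = expensive_build()
    with open(path, "wb") as f:
        pickle.dump(engine, f)
    return engine

first = load_or_build()   # slow path: builds and serializes
second = load_or_build()  # fast path: deserializes from disk
```

The second call returns the same object without paying the build cost, which matches what I see with deepstream-app once the engine file exists.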