I am currently running several object detection APIs on the Jetson TX2 to figure out which of them is capable of real-time detection.
Two examples are Google's Object Detection API with TensorFlow (https://github.com/tensorflow/models/tree/master/research/object_detection), which I modified slightly to run as a Python script with the onboard camera or a webcam as input, and YOLO on Darknet (https://pjreddie.com/darknet/yolo/).
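For context, my modification mostly swaps the input source. A minimal sketch of how I pick between the onboard CSI camera and a USB webcam, assuming OpenCV is used for capture (the function name `capture_source` and the pipeline parameters are my own, not part of the API):

```python
# Sketch: build a source argument for cv2.VideoCapture on the TX2.
# Assumption: OpenCV was built with GStreamer support for the CSI camera.
def capture_source(use_onboard=True, width=1280, height=720, fps=30):
    """Return a GStreamer pipeline string for the onboard CSI camera,
    or a device index for the first USB webcam (/dev/video0)."""
    if use_onboard:
        return ("nvcamerasrc ! video/x-raw(memory:NVMM), "
                "width={w}, height={h}, framerate={f}/1 ! "
                "nvvidconv ! video/x-raw, format=BGRx ! "
                "videoconvert ! video/x-raw, format=BGR ! appsink"
                .format(w=width, h=height, f=fps))
    return 0  # cv2.VideoCapture(0) opens the first USB webcam


# usage: cap = cv2.VideoCapture(capture_source(use_onboard=True))
```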
I clocked the Jetson up to maximum performance with:

sudo nvpmodel -m 0
sudo ./jetson_clocks.sh
My results are:

TensorFlow with SSD_Mobilenet: 4 FPS
Darknet with Tiny-YOLO: 17.5 FPS
Darknet with YOLOv2: 2.7 FPS
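These numbers come from timing the inference loop over a moving window of frames; a minimal sketch of the meter I use (the class `FpsMeter` is my own helper, nothing framework-specific):

```python
import collections
import time


class FpsMeter:
    """Moving-average FPS over the last `window` frames."""

    def __init__(self, window=30):
        self.times = collections.deque(maxlen=window)

    def tick(self, now=None):
        """Record one frame; return the current FPS estimate."""
        self.times.append(time.monotonic() if now is None else now)
        if len(self.times) < 2:
            return 0.0
        # (frames - 1) intervals span times[0]..times[-1]
        return (len(self.times) - 1) / (self.times[-1] - self.times[0])


# usage: call meter.tick() once per processed frame and print the result
```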
tegrastats gives me:
RAM 4393/7851MB (lfb 356x4MB) CPU [43%@2035,25%@2035,15%@2035,38%@2035,40%@2035,40%@2035] BCPU@35C MCPU@35C GPU@41C PLL@35C AO@35.5C Tboard@28C Tdiode@34.5C PMIC@100C VDD_IN 12282/12517 VDD_CPU 2059/2064 VDD_GPU 4727/4803 VDD_SOC 1601/1595 VDD_WIFI 0/69 VDD_DDR 2812/2808
Tensorflow gives me:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005 pciBusID: 0000:00:00.0 totalMemory: 7.67GiB freeMemory: 2.00GiB
2017-12-20 10:16:28.963403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
The "freeMemory" value varies, up to 4 GiB, but it never goes above that. What does that value mean? Why is it so little? How can I free more memory and assign it to the object detection task?
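As far as I understand, the TX2's CPU and GPU share the same physical RAM, so whatever the OS and other processes hold is not reported as free, and TensorFlow by default tries to reserve nearly all of the remaining free memory at session creation. A config fragment (TensorFlow 1.x API) that makes it allocate lazily instead, in case that is related:

```python
import tensorflow as tf

# Let TensorFlow grow its GPU allocation on demand instead of
# grabbing (almost) all reported free memory at session creation.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of total memory TF may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)
```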
Those FPS numbers are not terribly slow, but they are far from fast. So how is it possible that the Jetson is used in autonomous cars? I expected much more speed; my Dell laptop with an NVIDIA GTX 1050 is twice as fast in these test scenarios.
So am I doing something wrong? How can I increase performance in terms of FPS?
Thank you in advance!