The “freeMemory” value varies up to 4 GiB, but it never goes above that. What does that value mean? Why is it so little? How can I free more memory and assign it to the object detection task?
Those FPS are not exactly slow, but they are far from fast. So how is it possible that the Jetson is used in autonomous cars? I was expecting much more speed. My Dell laptop with an NVIDIA GTX 1050 is twice as fast in these test scenarios.
So am I doing something wrong? How can I increase performance in terms of FPS?
Do you run TensorFlow with config.gpu_options.per_process_gpu_memory_fraction = xx?
This configuration limits the amount of GPU memory the process is allowed to allocate. You can get more information here:
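For reference, a minimal sketch of what I mean (assuming TensorFlow 1.x; the 0.5 fraction is just an example value, not a recommendation):

```python
# Cap how much GPU memory this process may allocate (TensorFlow 1.x sketch).
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # illustrative value

with tf.Session(config=config) as sess:
    # ... load the detection graph and run inference as usual ...
    pass
```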
I have the same situation: running TensorFlow inference with the ssd_mobilenet_v1 model provided by Google, I only get 4 FPS on video. Anyone got an idea how to improve the inference speed?
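For context, this is roughly how I measure it (a sketch only; the graph filename and tensor names are the usual Object Detection API exports, so treat them as assumptions if your export differs):

```python
# Rough FPS measurement for a frozen Object Detection API graph (TF 1.x).
import time
import numpy as np
import tensorflow as tf

GRAPH_PB = "frozen_inference_graph.pb"  # exported ssd_mobilenet_v1 graph (assumed path)

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    outputs = [graph.get_tensor_by_name(n + ":0")
               for n in ("detection_boxes", "detection_scores",
                         "detection_classes", "num_detections")]

    frame = np.zeros((1, 480, 640, 3), dtype=np.uint8)  # dummy 640x480 frame

    sess.run(outputs, feed_dict={image_tensor: frame})  # warm-up run

    n_frames = 100
    start = time.time()
    for _ in range(n_frames):
        sess.run(outputs, feed_dict={image_tensor: frame})
    print("approx. %.1f FPS" % (n_frames / (time.time() - start)))
```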
@D_pz I am currently working on a Jetson TX2 with Google's Object Detection API.
I created a GitHub repo to work with it.
It should work for you too. It would be nice if you tried it out or contributed!
The problem with DetectNet is that it's not really intended for multiple objects. Sure, you can do 2 or maybe 3, but I haven't seen anything past that.
If a person needs to pick an object detection network to detect multiple objects on a Jetson TX2, what should they pick?
Assuming they want something reasonably fast (approx. 15 FPS) at a reasonable resolution (640x480)?
I don't see anything within the NVIDIA DIGITS → NVIDIA TX2 workflow that's really meant for it.
In the list of things to try out there is SSD, or Faster R-CNN. But neither of those has been shown to run faster than 5 FPS on the TX2, at least to my knowledge.
There is YOLO, but my understanding is that you give up accuracy with it.
This is interesting, thank you AastaLLL for investigating.
But to my understanding this can't be the only reason, because I updated the config of my tf.Session() to allow GPU memory growth (see the sketch below).
The performance stays the same, the model only uses around 300 MB of RAM, and the GPU and CPU usage is still at the same level as before.
This is what makes me wonder: neither the GPU memory, nor the GPU frequency, nor the CPU is maxed out at any time.
So where is the bottleneck? Why doesn't the Jetson just use more of its power?
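For reference, this is the kind of session config I mean (a minimal TensorFlow 1.x sketch; the growth option replaces the fixed memory fraction discussed above):

```python
# Let the GPU allocator grow on demand instead of reserving a fixed fraction.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # ... run the detection graph as usual ...
    pass
```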
Hi everyone, does anyone know how to increase YOLO FPS on the TX2? When I ran YOLOv2 on my laptop I was able to achieve about 25 FPS, but when I run it on my TX2 I can only achieve 6-7 FPS. Can anyone explain why there is such a big difference?
What are the GPU frequency, number of CUDA cores, and architecture of the GPU in your laptop?
How does that compare to the TX2 specs?
Also note that the TX2 is aimed at 12 watts total across CPU + GPU (give or take), which is probably much less than your laptop is using.