Jetson TX2 running Siamese network slowly

Hello, dear community.
I want to share a problem I am facing.
Jetson TX2:

  • JetPack 4.5
  • Orbitty carrier
  • TensorFlow 2.4.0
  • Keras 2.4.0
  • Python 3.6

Host:

  • Ubuntu 18.04
  • TensorFlow 2.4.0
  • Keras 2.4.0
  • Python 3.6

On the host, I created a Siamese network and trained it. For the Siamese network I am using VGG19 and doing fine-tuning. The problem appears on the Jetson TX2. When I run the facial-recognition Python script (with the model trained on the host), face detection with OpenCV runs a little slowly, but when it has to recognize the face, the Jetson TX2 runs far too slowly.
What do you need to know to help me fix this problem? Why does Siamese network inference run so slowly on the Jetson TX2?
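For reference, a minimal sketch of what such a network might look like in Keras. The input size, embedding width, and sigmoid similarity head are assumptions for illustration, not the poster's actual code, and `weights=None` is used only so nothing is downloaded; fine-tuning would start from `weights="imagenet"`:

```python
# Minimal Siamese-network sketch with a shared VGG19 backbone (tf.keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19

IMG_SHAPE = (64, 64, 3)  # assumed input size for illustration

def build_embedding(shape):
    """Shared VGG19 feature extractor producing a fixed-size embedding."""
    # weights=None avoids the ImageNet download; fine-tuning would use "imagenet".
    base = VGG19(weights=None, include_top=False, input_shape=shape, pooling="avg")
    out = layers.Dense(128)(base.output)  # 128-d embedding (assumed width)
    return Model(base.input, out)

embed = build_embedding(IMG_SHAPE)

in_a = layers.Input(shape=IMG_SHAPE)
in_b = layers.Input(shape=IMG_SHAPE)
emb_a, emb_b = embed(in_a), embed(in_b)

# Similarity head: sigmoid over the absolute difference of the two embeddings.
diff = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
score = layers.Dense(1, activation="sigmoid")(diff)
siamese = Model([in_a, in_b], score)

# One dummy pair just to confirm the graph runs end to end.
a = np.zeros((1,) + IMG_SHAPE, dtype="float32")
b = np.zeros((1,) + IMG_SHAPE, dtype="float32")
print(siamese.predict([a, b]).shape)  # (1, 1)
```

Note that both inputs go through the *same* `embed` model, so every inference on a face pair costs two full VGG19 forward passes, which matters for the speed discussion below.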


Could you share which GPU you use on the host?

Please note that VGG19 is a relatively heavy model.
In our experience, the TX2 runs VGG19 at around 23 fps.

What performance do you observe on the host and on the TX2?

Hi @AastaLLL,
On the host machine I am using a GTX 1060i.
On the host, the performance is of course excellent and fast.
What is your recommendation? How can I improve the performance of the Jetson TX2 with VGG19, or shall I move to VGG16?
Is the performance so slow because I am using a Siamese network?

I have tried the Inception model, but it is still too slow. I am running a Siamese network.


Do you get performance similar to the table shared above, i.e. ~23 fps for VGG19?
Also, please check that maximum performance has been enabled:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

If the performance cannot meet your requirements, you can either use a smaller model or upgrade to another Jetson device, such as the Xavier or Xavier NX.
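To give a feel for what "a smaller model" means here, the Keras backbones can be compared by parameter count without downloading any weights. MobileNetV2 is just one possible lighter choice, not a specific recommendation from this thread:

```python
# Compare convolutional-backbone sizes (parameter counts only, no weights downloaded).
from tensorflow.keras.applications import VGG16, VGG19, MobileNetV2

shape = (224, 224, 3)
for name, cls in [("VGG19", VGG19), ("VGG16", VGG16), ("MobileNetV2", MobileNetV2)]:
    m = cls(weights=None, include_top=False, input_shape=shape)
    print(f"{name}: {m.count_params():,} parameters")
```

VGG19's convolutional base has roughly 20M parameters versus ~14.7M for VGG16 and ~2.3M for MobileNetV2, so swapping the backbone cuts the per-frame compute substantially, and a Siamese setup pays that cost twice per pair.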


How can I see the performance?
I just ran

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

but I did not get any output. May I know if there is an implementation of a Siamese network for Jetson?


The script adjusts the device clocks to their maximum; it does not print a benchmark result.
You will need to run the network again to measure the actual fps.
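One simple way to get an fps number is to time the inference loop yourself. Here `infer` is a stand-in callable, not part of any Jetson API; in practice it would wrap the actual model call:

```python
# Rough fps measurement: average the wall-clock time of n inference calls.
import time

def measure_fps(infer, n=50, warmup=5):
    """Run `infer` n times after a short warm-up and return frames per second."""
    for _ in range(warmup):   # warm-up so lazy initialization isn't timed
        infer()
    start = time.perf_counter()
    for _ in range(n):
        infer()
    elapsed = time.perf_counter() - start
    return n / elapsed

# Stand-in workload; replace with e.g. `lambda: model.predict(batch)`.
fps = measure_fps(lambda: sum(i * i for i in range(10000)))
print(f"{fps:.1f} fps")
```

Run this once before and once after `nvpmodel -m 0` / `jetson_clocks` to see the difference the clock settings make.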

Which framework do you use for inference on the TX2?

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.