Siamese Network in Jetson TX2

Hello, dear community.
This is a general question.
I have been working with the Jetson TX2 for a long time. I am running a Siamese model with an Inception backbone, but it is running too slowly. Is there any example of running a Siamese model on the Jetson TX2?
Thank you so much.

Hi,

May I know which framework you are using?
If an ONNX-based model is available, you can try it with TensorRT to get some acceleration.
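As a rough illustration of that route, here is a minimal sketch of exporting a Keras model to ONNX with the tf2onnx package; the model path and input shape below are placeholders, not taken from your project:

import tensorflow as tf
import tf2onnx

# Load the trained Keras model (hypothetical file name).
model = tf.keras.models.load_model("siamese_inception.h5")

# Input signature for the export; the image size here is only an example
# and must match what the model actually expects. A Siamese model with
# two image inputs would need one TensorSpec per input.
spec = (tf.TensorSpec((1, 224, 224, 3), tf.float32, name="input"),)

# Convert to ONNX and write it to disk.
tf2onnx.convert.from_keras(model, input_signature=spec,
                           output_path="siamese_inception.onnx")

The resulting ONNX file can then be benchmarked with trtexec (assuming the standard JetPack install location):

$ /usr/src/tensorrt/bin/trtexec --onnx=siamese_inception.onnx --fp16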

Thanks.

Hi @AastaLLL
Thank you for your reply.
I am using TensorFlow+Keras as the framework, and I load the Inception model weights.
You can take a look at the code here.
Before the Inception model weights I was using VGG16, and inference on the Jetson TX2 was just as slow as with the Inception model.
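
For reference, here is a minimal sketch of the kind of Siamese setup I mean, assuming an InceptionV3 backbone from tf.keras.applications and a 224x224x3 input; the layer sizes are illustrative, not the exact code linked above:

import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_embedding(input_shape=(224, 224, 3)):
    # Shared Inception backbone that maps an image to an embedding vector.
    base = InceptionV3(include_top=False, weights="imagenet",
                       input_shape=input_shape, pooling="avg")
    embedding = layers.Dense(128)(base.output)
    return Model(base.input, embedding, name="embedding")

embedding_net = build_embedding()
input_a = layers.Input(shape=(224, 224, 3))
input_b = layers.Input(shape=(224, 224, 3))
emb_a = embedding_net(input_a)
emb_b = embedding_net(input_b)

# L1 distance between the two embeddings, followed by a similarity score.
distance = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([emb_a, emb_b])
output = layers.Dense(1, activation="sigmoid")(distance)
siamese = Model([input_a, input_b], output)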

Hi,

Would you mind sharing the detailed performance numbers you got on the TX2 with us?
Please note that you can maximize the device performance with the following commands:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks

Based on our previous testing, we can get 10 fps with VGG on the Nano.
So the fps on the TX2 is expected to be higher than 10 fps if fully optimized.
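
If it helps, here is a rough way to measure the fps on your side: time repeated predict calls on dummy inputs and report the average. The model path, batch size, and input shape below are assumptions:

import time
import numpy as np
import tensorflow as tf

# Hypothetical path to the trained Siamese model.
model = tf.keras.models.load_model("siamese_inception.h5")

dummy_a = np.random.rand(1, 224, 224, 3).astype(np.float32)
dummy_b = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Warm-up runs so GPU initialization does not skew the measurement.
for _ in range(5):
    model.predict([dummy_a, dummy_b])

n = 50
start = time.time()
for _ in range(n):
    model.predict([dummy_a, dummy_b])
elapsed = time.time() - start
print("average fps: %.2f" % (n / elapsed))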

Thanks.
