How to improve facenet performance on TX2 EVB

We compared the performance of facenet on the TX1 EVB and TX2 EVB and found the TX2 had lower accuracy and longer latency.

The result was different from what we expected. Are we missing anything?
Is there a way to improve it?

We used the same camera module on the TX1 and TX2,
and found the camera preview image on the TX2 was blurrier than on the TX1.

Here is some information about our tests:
1. Command: ./detectnet-camera facenet
2. TX2: JetPack 3.0
3. TX1: JetPack 2.3.1

Thank you,

Hi,

To maximize performance on the TX2, please:

  1. Set nvpmodel to MAX-N
  2. Run jetson_clocks.sh
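The two steps above can be run from a terminal on the TX2, assuming a default JetPack flash (where mode 0 corresponds to MAX-N and jetson_clocks.sh lives in the home directory):

```shell
# Select the MAX-N power profile (mode 0 on the TX2)
sudo nvpmodel -m 0

# Confirm the active power mode
sudo nvpmodel -q

# Lock CPU/GPU/EMC clocks to their maximums
sudo ~/jetson_clocks.sh
```

Note that jetson_clocks.sh does not persist across reboots, so it needs to be re-run after each boot.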

For accuracy, please make sure you compile jetson-inference with the correct architecture (sm_62 for the TX2).

Hi,

Thank you for your prompt reply.

After setting the TX2 nvpmodel to MAX-N and running jetson_clocks.sh,
the FPS increased by 2-3, but the accuracy showed no improvement.

Could you explain more about the compilation selection?
How do we make sure sm_62 is selected?

Also, we realized the preview quality might affect the accuracy.
Is there a way to adjust the preview quality?
Did anyone compare the preview quality on TX1 and TX2?

Thanks

Hi,

Make sure the sm_62 architecture is present in your CMakeLists:
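A sketch of the relevant fragment, assuming the NVCC flags block found in typical jetson-inference CMakeLists.txt files of this era (the exact variable layout may differ in your checkout):

```
set(
    CUDA_NVCC_FLAGS
    ${CUDA_NVCC_FLAGS};
    -O3
    -gencode arch=compute_53,code=sm_53   # TX1
    -gencode arch=compute_62,code=sm_62   # TX2
)
```

If the compute_62/sm_62 pair is missing, add it and rebuild from a clean build directory.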

For accuracy, please check this topic:
https://devtalk.nvidia.com/default/topic/993552/jetson-tx1/detection-result-difference-between-jetson-inference2-3-and-digits5-1/post/5097211/#5097211

I have set CUDA_ARCH := -gencode arch=compute_63,code=sm_62 in Makefile.config, set the NV Power Mode to MAX-N, and ran sudo ./jetson_clocks.sh, but it did not work when I used the camera. Is there any other useful advice for improving the TX2 performance to 8.0 fps? (8.0 fps is a value I've seen on another webpage.) Thank you
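For reference, the TX2's integrated GPU has compute capability 6.2, and nvcc has no compute_63 virtual architecture, so the Makefile.config line would normally pair compute_62 with sm_62:

```
CUDA_ARCH := -gencode arch=compute_62,code=sm_62
```

With an invalid virtual architecture, the build may fail or silently fall back, so this is worth double-checking before looking at other causes.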

Hello hnlyxacj, have you tried JetPack 3.1 yet? It includes TensorRT 2.1, which is supposed to improve performance for single-batch inference.

Also, if you have problems with the sample FaceNet model, you may be interested in training your own using FDDB or a similar face detection database.