This is my first time asking for deep learning help on a forum, but I'm stuck, so please help out!
- I decided to build a face recognition system and deploy it on two edge devices. I used the FaceBoxes model for face detection and the FaceNet model to create 128-D embeddings of the detected faces. For classification, I used an MLP classifier, which I trained on Google Colab.
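For reference, the data flow of the pipeline above looks roughly like this. This is only a minimal sketch: the function bodies are stand-ins (the real code loads trained FaceBoxes/FaceNet/MLP weights), and the function names are my own placeholders, not library APIs.

```python
import numpy as np

# Stand-ins for the real models (FaceBoxes / FaceNet / MLP classifier).
# Only the data flow matches the real pipeline; the bodies are dummies.
def detect_faces(frame):
    """FaceBoxes stand-in: return a list of face crops from a frame."""
    return [frame[0:160, 0:160]]           # pretend one face was found

def embed_face(face):
    """FaceNet stand-in: map a face crop to a 128-D embedding."""
    return np.zeros(128, dtype=np.float32)

def classify(embedding):
    """MLP-classifier stand-in: map an embedding to an identity label."""
    return "person_0"

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # one video frame
labels = [classify(embed_face(f)) for f in detect_faces(frame)]
print(labels)
```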
- I took the MLP model trained on Colab and deployed it on a Jetson Nano and a Jetson TX2. All the major packages (Python, OpenCV, TensorFlow, NumPy, etc.) were the same versions on both devices. Even the JetPack version on both devices was the same (4.4.1).
- The recognition results on each device, taken individually, were consistent: if I ran face recognition on a video on the Jetson Nano, it always gave the same accuracy, 98%.
- The same held for the Jetson TX2, which consistently reported 99%.
- BUT for my course I have to justify why the two devices show different accuracy results on the SAME test video, using the SAME model, trained on Colab.
Unfortunately, I am not a hardware expert. I thought it might be a difference in quantization, or FP16 vs. FP32 arithmetic, but I don't really know what these terms mean. Some help in justifying why the accuracies differ on the two platforms would be HIGHLY APPRECIATED.
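To make the FP16/FP32 question concrete, here is a small NumPy experiment I can run myself. It is not my actual pipeline, just an illustration: a dot product over a 128-D vector (the same size as a FaceNet embedding) computed in float32 and again in float16. The two results differ slightly because float16 rounds each value and each partial sum more coarsely, which is the kind of small numeric drift that could flip a borderline classification.

```python
import numpy as np

# Simulate a 128-D embedding scored against one weight vector,
# roughly like a single unit of an MLP layer.
rng = np.random.default_rng(0)
emb = rng.standard_normal(128).astype(np.float32)
w   = rng.standard_normal(128).astype(np.float32)

score32 = np.dot(emb, w)                      # computed in float32
score16 = np.dot(emb.astype(np.float16),      # computed in float16
                 w.astype(np.float16))

print("float32 score:", float(score32))
print("float16 score:", float(score16))
print("difference   :", abs(float(score32) - float(score16)))
```

If the two devices (or their TensorFlow builds) used different precisions or different accumulation orders internally, this is the mechanism by which identical inputs could produce slightly different scores.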
Please guide me.
BTW, I used scikit-learn for my MLP classifier implementation, and TensorFlow 2.3.1 for running the models.
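For context, the classifier side was set up roughly like this. This is a minimal sketch with synthetic data standing in for the FaceNet embeddings; the embedding dimension (128) matches my setup, but the sample counts, hidden-layer size, and labels here are made up.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for FaceNet embeddings and identity labels.
rng = np.random.default_rng(42)
X = rng.standard_normal((200, 128)).astype(np.float32)  # fake 128-D embeddings
y = rng.integers(0, 5, size=200)                        # 5 fake identities

# Hyperparameters here are illustrative, not my actual training config.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=42)
clf.fit(X, y)

preds = clf.predict(X[:3])   # one predicted identity per embedding
print(preds.shape)
```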