Jetson Nano Live Sign Language Understanding

Hello Jetson family. I have been looking for good live recognition projects and found an excellent existing (non-Nano) project on GitHub: loicmarie/sign-language-alphabet-recognizer, a simple sign language alphabet recognizer that uses Python, OpenCV, and TensorFlow to train an Inception model (CNN classifier). While migrating it to the Nano, I learned a few things about training in the cloud and deploying at the edge. You can find my full blog post here: IT in Context: Cloud Machine Learning: Train in the Cloud, Deploy at the Edge. Enjoy.

This is great content! One small issue: you may need to update your links, as they seem to include an extra “.” and do not resolve properly. I was able to figure it out, but hopefully others won't hit the same problem.

Thank you very much toolboc. Let me see if I can fix that.

Hi Dennis,

This is really great. Could you have trained on the Nano itself, or is there a size constraint on generating the model? It would be interesting to see the estimated training time on the Nano vs. the laptop and the VM.



Almost. One can get DIGITS running on the Nano, but the CUDA GPU library support it needs has not been ported to the Nano yet, so models cannot be created there.


Hi Dennis,
Thanks for sharing. I tried this non-Nano project on the Nano, and the result was far from satisfying: the FPS is sluggish. Would you kindly share the downloaded model? Thank you.
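
Sluggish FPS on the Nano usually means the full Inception graph is running on every captured frame. Before swapping models, it helps to measure where the time goes. Below is a minimal, stdlib-only sketch of timing the classify loop; the `classify` stub here is an assumption standing in for the real TensorFlow inference call, and the frame list stands in for `cv2.VideoCapture` reads:

```python
import time

def classify(frame):
    # Placeholder for the real TensorFlow Inception inference call;
    # on the Nano this step dominates the per-frame latency.
    time.sleep(0.01)  # simulate ~10 ms of model work per frame
    return "A"

def measure_fps(frames, classify_fn):
    """Run classify_fn over each frame and return average frames/sec."""
    start = time.perf_counter()
    for frame in frames:
        classify_fn(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

if __name__ == "__main__":
    fake_frames = [None] * 50  # stand-ins for captured camera frames
    print(f"{measure_fps(fake_frames, classify):.1f} FPS")
```

Substituting the real capture and inference calls into this loop shows whether the bottleneck is the model or the camera pipeline.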


I have since reformatted my Nano as a JetBot. The accuracy of that model was disappointing to me, but the process of training in the cloud and deploying at the edge was educational. I think I stored a copy of the model here:

Dennis Faucher