This is my project, developed during the Deep Learning in Intelligent Video Analytics and Computer Vision Workshop at IIUM. Its purpose is to help hearing people communicate with hearing-impaired people.
The project translates the sign language alphabet using a webcam on a Jetson Nano by applying image classification: each hand posture is classified into its own letter class.
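The per-frame flow can be sketched as below. This is a minimal illustration, not the project's actual code: the model interface (a callable returning per-class scores), the preprocessing steps, and the 26-letter label list are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical label set: one class index per alphabet letter (assumption;
# static-posture classifiers often drop motion-based letters like J and Z).
LETTERS = [chr(ord("A") + i) for i in range(26)]

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Center-crop a webcam frame to a square and scale pixels to [0, 1]."""
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = frame[y0:y0 + s, x0:x0 + s]
    return crop.astype(np.float32) / 255.0

def classify_frame(frame: np.ndarray, model) -> str:
    """Run the classifier on one frame and map the top class to its letter."""
    scores = model(preprocess(frame))       # model returns 26 class scores
    return LETTERS[int(np.argmax(scores))]  # highest score wins
```

In the real application the frames would come from `cv2.VideoCapture` on the Jetson Nano and `model` would be the trained image classifier; here a stub model can stand in for testing the mapping logic.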
For future work, I would like to extend the project to translate gestures instead of fixed hand postures :D
This is my GitHub link: GitHub - Khairulamireen/Sign-Language-Translator-using-Jetson-Nano