I bit off more than I can chew with this project, but I'm hoping I can get some help here. I've been trying to use a Jetson Nano to read and understand American Sign Language, but I'm stuck. I'm trying to find out whether a pre-trained segmentation model for hands specifically already exists (I haven't been able to find one), or how I would go about creating one. This is my first project with a Jetson Nano and my first major coding endeavor in Python. Does anyone know how I could find or create a trained segmentation model specifically for human hands?
I tried using DIGITS but can't get it working, since I can't figure out how to use Docker to create a container. PLEASE HELP
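For reference, the kind of Docker invocation I understand I'm supposed to use is something like the following (adapted from the DIGITS documentation as best I can tell; the image tag and the local data path are my guesses, so please correct me if this is wrong):

```shell
# Pull the DIGITS image from NVIDIA NGC
# (tag is a guess -- check the NGC catalog for the current one)
docker pull nvcr.io/nvidia/digits:21.09-tensorflow

# Run DIGITS in the background, exposing its web UI on port 5000
# and mounting a local dataset folder into the container
docker run --gpus all -d --name digits \
    -p 5000:5000 \
    -v /home/me/data:/data \
    nvcr.io/nvidia/digits:21.09-tensorflow
```

Is this roughly the right approach, or am I missing a setup step before the container will start?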