Using DIGITS to create a new segmentation model?

I bit off more than I can chew with this project, but I'm hoping I can get some help here. I've been trying to use the Jetson Nano to read and understand American Sign Language, but I'm stuck. I'm trying to find out whether a pre-trained segmentation model for hands already exists (I haven't been able to find one) or, if not, how I would go about creating one. This is my first project with a Jetson Nano and my first major coding endeavor with Python. Does anyone know how I could find or train a segmentation model for human hands specifically?

EDIT:
I tried using DIGITS but can't get it working, since I can't figure out how to use Docker to create a container for it. Please help!

Hi,

Sorry, we don't have a segmentation model for hands specifically.
However, you can re-train a new one by following the steps in this tutorial:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md
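As a quick sanity check before re-training, you can first run one of the pre-trained segmentation models from that tutorial. A minimal sketch, assuming jetson-inference is already built on the Nano (the network name is one of the models listed in the tutorial; the image filenames are placeholders):

```
# From the jetson-inference build directory, run segmentation on a test
# image with one of the pre-trained networks listed in the tutorial above.
cd jetson-inference/build/aarch64/bin
./segnet.py --network=fcn-resnet18-voc input.jpg output.jpg
```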

In general, two steps are required:

1. Train a segmentation model on a desktop GPU:
DIGITS is still recommended for the training job because of its good visualization features.
Instead of Docker, you can also set up DIGITS in your environment directly:
https://github.com/NVIDIA/DIGITS/blob/master/docs/BuildDigits.md
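That said, since the Docker container was the sticking point: here is a minimal sketch of pulling and starting the DIGITS container on a desktop GPU machine, assuming the nvidia/digits image on Docker Hub and nvidia-docker installed (paths and ports below are examples, not requirements):

```
# Pull the DIGITS image and start it in the background, exposing the
# web UI on port 5000 and mounting a local folder for datasets/jobs.
docker pull nvidia/digits
nvidia-docker run -d --name digits \
    -p 5000:5000 \
    -v /home/username/data:/data \
    nvidia/digits

# The DIGITS web interface should then be reachable at http://localhost:5000
```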

2. Copy the trained model to the Jetson and run it with jetson-inference.
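For step 2, the re-trained model can be loaded with the same segnet tool from jetson-inference. A rough sketch for a Caffe model exported from DIGITS; every filename and blob name below is a placeholder that depends on your training job and network definition:

```
# Load a custom DIGITS-trained Caffe model with segnet and run it on a
# test image. The prototxt/caffemodel/label files come from your DIGITS
# job; the input/output blob names must match your network definition.
./segnet.py --prototxt=deploy.prototxt \
            --model=snapshot_iter_NNNN.caffemodel \
            --labels=class_labels.txt \
            --colors=class_colors.txt \
            --input_blob=data \
            --output_blob=score \
            test_hand.jpg output_hand.jpg
```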

Thanks.