I am working on a project that is meant to spray paint on a canvas. Here is a working prototype using only an ESP32: link.
What I want to do is add two cameras with two different scopes. The first camera has to detect actions performed by a person, using something like pose detection, and extract the skeleton produced during the action. The second camera has to detect the canvas and resize the drawing so that it fits inside. Once both are ready, the drawing can begin. How can I proceed? Is the Nano enough, or should I go with the AGX? Should I use a Jupyter notebook? And how should I drive the movement? Using an Arduino? This is my first project with the Nano and I am not very familiar with how to implement something like this.
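To clarify the movement part, here is a minimal sketch of how I imagine mapping a point on the canvas to pan/tilt angles, assuming the nozzle pivots at a fixed perpendicular distance from a flat canvas. The function name and geometry are my own assumptions, not an existing API:

```python
import math

def point_to_pan_tilt(x, y, distance):
    """Map a canvas point (x, y) in meters, measured from where the
    nozzle's neutral axis hits the canvas, to (pan, tilt) in degrees.
    `distance` is the perpendicular distance from the pivot to the
    canvas. Hypothetical helper; adapt to your actual geometry."""
    pan = math.degrees(math.atan2(x, distance))
    # Tilt is measured against the in-plane distance to the point's column.
    tilt = math.degrees(math.atan2(y, math.hypot(x, distance)))
    return pan, tilt
```

The resulting angle pairs could then be streamed over serial to the ESP32/Arduino that drives the servos, so the Jetson only does vision and planning.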
The robot's movement is based on a sphere, since it is a pan/tilt mechanism, and the usual scenario is to place a canvas in front of it. The outcome should be that, based on the calibration done, the machine is able to measure the canvas, resize the picture, and spray. If the canvas is too big or too small, a preview of the outcome should be shown, asking whether to continue.
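For the resize-and-preview step, this is roughly the logic I have in mind: scale the drawing uniformly to fit the measured canvas, and flag the case where the required scale is outside a comfort range. The thresholds and names are assumptions for illustration:

```python
def fit_drawing(draw_w, draw_h, canvas_w, canvas_h,
                min_scale=0.5, max_scale=2.0):
    """Uniformly scale a drawing so it fits inside the measured canvas,
    preserving aspect ratio. Returns (new_w, new_h, needs_preview),
    where needs_preview is True when the required scale falls outside
    [min_scale, max_scale], i.e. the canvas is much too small or too
    big and the user should confirm before spraying. Thresholds are
    arbitrary defaults to tune."""
    scale = min(canvas_w / draw_w, canvas_h / draw_h)
    needs_preview = not (min_scale <= scale <= max_scale)
    return draw_w * scale, draw_h * scale, needs_preview
```

If `needs_preview` comes back True, the scaled drawing could be rendered on top of the camera image of the canvas as the preview before asking whether to continue.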
I tried with DeepStream, but it is difficult to implement; jetson-inference, on the other hand, has been quicker and easier. Now that I have posenet.py working, how do I recognize actions? Is there a way to extract the skeleton from the image?
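What I mean by extracting the skeleton: flatten each detected pose into a plain joint-name → coordinate dict, and collect those dicts frame by frame to build the sequence an action classifier would consume. The attribute names (`Keypoints`, `.ID`, `.x`, `.y`, `GetKeypointName`) are what jetson-inference's posenet.py example exposes, to the best of my understanding; the helper itself is my own sketch:

```python
def pose_to_skeleton(keypoints, name_of):
    """Flatten one detected pose into {joint_name: (x, y)}.
    `keypoints` is any iterable of objects with .ID, .x, .y (the shape
    jetson-inference's poseNet returns); `name_of` maps a keypoint ID
    to a joint name (e.g. net.GetKeypointName). Collecting these dicts
    over consecutive frames yields a skeleton sequence for action
    recognition."""
    return {name_of(kp.ID): (kp.x, kp.y) for kp in keypoints}

# On the Jetson it would be used roughly like this (untested sketch):
# import jetson_inference, jetson_utils
# net = jetson_inference.poseNet("resnet18-body")
# img = camera.Capture()
# for pose in net.Process(img):
#     skeleton = pose_to_skeleton(pose.Keypoints, net.GetKeypointName)
```

From there, simple actions could be recognized with handcrafted rules on joint positions, or the per-frame skeletons could be fed to a sequence model trained on the gestures of interest.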