Project help: how to implement AI on a simple drawing machine

I am working on a project that is meant to spray paint onto a canvas. Here is a working prototype using only an ESP32: link.

What I want to do is add two cameras with two different roles. The first camera has to detect actions performed by a person, using something like pose detection, and extract the skeleton produced during the action. The second camera has to detect the canvas and resize the drawing so that it fits inside it. Once both are ready, the drawing can begin. How should I proceed? Can I just use the Nano, or should I go with the AGX? Should I use a Jupyter notebook? And how should I drive the movement, with an Arduino? This is my first project with the Nano and I am not very familiar with how to implement something like this.

The robot's movement is spherical, since it is a pan/tilt mechanism, and the usual scenario is to place a canvas in front of it. The outcome should be that, based on the calibration done, the machine is able to measure the canvas, resize the picture, and spray. If the canvas is too big or too small, a preview of the result should be shown, asking whether it should continue.
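To sketch what the canvas camera would have to do: below is a minimal OpenCV example, assuming a light canvas against a darker background (the camera index, thresholds, and drawing size are illustrative assumptions, not part of the prototype), that finds the largest roughly rectangular contour and computes a scale factor so the drawing fits inside it.

```python
import cv2

def find_canvas(frame):
    """Return the bounding box (x, y, w, h) of the largest bright quadrilateral,
    assumed to be the canvas, or None if nothing plausible is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)  # assumes a light canvas on a darker background
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(largest, 0.02 * cv2.arcLength(largest, True), True)
    if len(approx) != 4:  # expect roughly four corners for a canvas
        return None
    return cv2.boundingRect(approx)

def fit_drawing(drawing_w, drawing_h, canvas_w, canvas_h, margin=0.95):
    """Uniform scale factor so the drawing fits inside the canvas with a small margin."""
    return margin * min(canvas_w / drawing_w, canvas_h / drawing_h)

# example usage with a single frame from the canvas camera
cap = cv2.VideoCapture(1)   # assumed index of the canvas camera
ok, frame = cap.read()
if ok:
    box = find_canvas(frame)
    if box is not None:
        x, y, w, h = box
        scale = fit_drawing(800, 600, w, h)  # 800x600 is a placeholder drawing size
        print(f"canvas is {w}x{h} px, scale the drawing by {scale:.2f}")
cap.release()
```

The scale factor (or a warning that it falls outside a usable range) is what would feed the preview step before spraying.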

Hi,
We suggest installing the DeepStream SDK and trying this sample:
GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline.

If you can run it successfully but the performance (frame rate) does not meet your expectations, you may consider using the AGX Xavier or Xavier NX.

I tried DeepStream, but it is difficult to implement; jetson-inference has been quicker and easier instead. Now that I have posenet.py working, how do I recognize actions? Is there a way to extract the skeleton from the image?

Hi @romeofilippo95, you can see a code sample here for extracting the keypoints (you can also extract the links to form the skeleton): https://github.com/dusty-nv/jetson-inference/blob/master/docs/posenet.md#working-with-object-poses
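As a quick reference, here is a minimal sketch along the lines of that documentation page, using the jetson-inference Python API (the camera device and the keypoint name are assumptions for illustration):

```python
from jetson_inference import poseNet
from jetson_utils import videoSource

net = poseNet("resnet18-body")        # pre-trained body pose network
camera = videoSource("/dev/video0")   # assumed device of the "person" camera

img = camera.Capture()
poses = net.Process(img)

for pose in poses:
    # every detected keypoint carries an ID and (x, y) pixel coordinates
    for keypoint in pose.Keypoints:
        print(f"  keypoint {keypoint.ID} at ({keypoint.x:.1f}, {keypoint.y:.1f})")

    # Links lists which keypoint indices are connected, i.e. the skeleton edges
    print("  links:", pose.Links)

    # keypoints can also be looked up by name, e.g. to track a hand
    wrist_idx = pose.FindKeypoint("left_wrist")
    if wrist_idx >= 0:
        wrist = pose.Keypoints[wrist_idx]
        print(f"  left wrist at ({wrist.x:.1f}, {wrist.y:.1f})")
```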

jetson-inference doesn’t support behavior/action recognition, but for gestures there’s an example of this in the trt_pose_hand project.


So, in order to extract this kind of data, should I use DeepStream or DeepLabCut?

Yes, I would recommend one of those packages that already support it if you don’t wish to integrate it yourself with jetson-inference.
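If you do decide to integrate it yourself on top of jetson-inference, a simple starting point is a hand-rolled heuristic over the keypoint stream rather than a full action-recognition model. A toy sketch of one such rule, meant to consume the pose objects from the snippet above (the keypoint names and frame count are assumptions):

```python
from collections import deque

class RaisedHandDetector:
    """Toy heuristic: report 'hand raised' when the left wrist stays above
    the left shoulder for a minimum number of consecutive frames."""

    def __init__(self, min_frames=10):
        self.history = deque(maxlen=min_frames)

    def update(self, pose):
        wrist_idx = pose.FindKeypoint("left_wrist")
        shoulder_idx = pose.FindKeypoint("left_shoulder")
        if wrist_idx < 0 or shoulder_idx < 0:
            self.history.append(False)
        else:
            wrist = pose.Keypoints[wrist_idx]
            shoulder = pose.Keypoints[shoulder_idx]
            # image y grows downward, so "above" means a smaller y value
            self.history.append(wrist.y < shoulder.y)
        return len(self.history) == self.history.maxlen and all(self.history)
```

For anything beyond simple gestures like this, the packages mentioned above are the better route.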

