Jetson Nano or similar device to deploy computer vision system

Hi there, this is the first time I’m posting on the forum.

I’ve been working on a research project where we use a combination of computer vision techniques: object detection, pose recognition, and activity recognition from videos.
So far, we’ve been fetching the video stream remotely and running the computer vision pipeline on our servers, but we’re planning to switch to local processing.
The models we’re using are mostly implemented with GluonCV, an MXNet-based computer vision library, and its website includes a few examples of deploying those models on the Jetson Nano. So I was wondering whether we could use it for the processing.

We need the device where the system is running to connect to a camera, run the computer vision system and upload results to a Firestore database.
The system is written in Python, using libraries such as MXNet, GluonCV, and OpenCV, on Ubuntu 18.04.
Currently, the video frames are collected at 25 FPS, resized to 683×512, and fed to the object detection and pose estimation algorithms.
Once these two algorithms detect certain conditions, short clips are saved and analyzed by an Inflated 3D CNN (I3D) for activity recognition.
We use multiprocessing, so that the various components can run asynchronously. Storage requirements are modest, since the short video clips are deleted after the activity recognition module has analyzed them (so we might need up to 100 MB for that).
On my laptop, the whole system needs at most (when activity recognition is running) around 4 GB of RAM and roughly 85% of the CPU (an i7-8550U).
The RAM requirement should be flexible, though: the algorithms take as much memory as is available, but they should work with less (as little as 1 GB).
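A quick way to sanity-check peak memory figures like these on any Linux device is the standard library's `resource` module (a small sketch; note that `ru_maxrss` is reported in kilobytes on Linux but in bytes on macOS):

```python
import resource

def peak_rss_mb():
    # Peak resident set size of this process so far; on Linux,
    # ru_maxrss is in kilobytes (on macOS it is in bytes).
    kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kb / 1024.0

# Allocate ~50 MB and watch the peak grow accordingly.
before = peak_rss_mb()
buf = bytearray(50 * 1024 * 1024)
after = peak_rss_mb()
print(f"peak RSS: {before:.1f} MB -> {after:.1f} MB")
```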

Could you please give me a suggestion on whether the Jetson Nano (presumably the 4 GB version) would be suitable for running the system? As I understand it, the device runs a version of Ubuntu, so most of the code should run as-is?
Alternatively, could you suggest another device that might work?

Of course, if you need any other information about the system I have to deploy, please ask. Thanks!

Hi,

Do you have a GPU on your desktop environment?

Assuming MXNet runs the recognition models on the GPU, it’s recommended to first check whether the Nano’s GPU can meet your requirements.
Some benchmark results for your reference: https://developer.nvidia.com/embedded/jetson-benchmarks

Below is a similar use case for pose recognition.

You can export the MXNet model to ONNX format and change the default OpenPose model path to your own.

Thanks.

Sorry, I forgot to mention it. No, my desktop doesn’t have a GPU; the numbers I gave in my original post were obtained on my laptop.

Hi,

Based on your memory usage, the model may not fit on the Nano.
But this still needs real-world verification.

Maybe you can check our Jetson Xavier NX:
https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/

It will give you more flexibility in memory and computing power.
Thanks.