Hi, I’m a student entering IBM’s Call for Code challenge. My idea requires an embedded AI device that can run several AI models in parallel. Unfortunately the prices of these devices are steep, so I’m trying to get the best value for the money I have.
I’ve reasoned that the original Nano is not powerful enough to run multiple models, including segmentation and object detection, so I settled on the Xavier NX, which explicitly states it can do this. I’ve also noticed that the NX has some GPU cores disabled compared to the industrial version, so I was wondering: will I be able to enable those cores?
Also, if I go with the 8 GB NX, which JetPack am I supposed to install? Would that be the latest, version 6? Having looked at Hello AI World on GitHub, it says that everything in the Jetson AI Lab runs on Orin “(and in some cases Xavier)”. So I’m worried that some of these computer vision models won’t run on the NX, and by the same logic I’m not sure JetPack 6 will either.
Thank you for any help clearing this all up, it’s very much appreciated.
For Xavier NX, JetPack 5.1.3 is the latest supported version; it is not able to use JetPack 6.
Also, we no longer sell the Xavier NX devkit; you may need to purchase the module from one of our ecosystem partners with their carrier board.
Yeah, I’ll be purchasing it from eBay, so I wanted to double-check that it would be suitable before doing so.
After asking GPT to clarify the meaning of the MLPerf results, I’m led to believe that a result of 1786.91 on object detection (small) means that a 30 fps camera running that model would leave a processing headroom of 1756.91?
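The arithmetic above can be sketched out explicitly. This assumes the MLPerf figure is a throughput in inferences (samples) per second and that each camera frame maps one-to-one onto an inference; real pipelines lose some of that headroom to capture, preprocessing, and memory transfers, so treat this as an idealized upper bound rather than a guaranteed number:

```python
# Back-of-envelope headroom from an MLPerf-style throughput number.
# Assumption: the benchmark result is in inferences per second and
# frames map one-to-one onto inferences (ignores I/O and pre/post-
# processing overhead in a real camera pipeline).
benchmark_throughput = 1786.91  # inferences/s (the MLPerf result quoted above)
camera_fps = 30.0               # frames/s the camera produces

headroom = benchmark_throughput - camera_fps
utilisation = camera_fps / benchmark_throughput

print(f"Spare capacity: {headroom:.2f} inferences/s")
print(f"Share of throughput used by this one stream: {utilisation:.2%}")
```

Under those idealized assumptions the 1756.91 figure follows, and a single 30 fps stream uses under 2% of the benchmarked throughput, which is why running several models in parallel looks plausible on paper.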