I have a couple of questions concerning the expected performance of this device:
I am running a neural network on the original Jetson Nano, which takes roughly 800ms per image.
Now, I know that the Orin NX 16GB is advertised to have 100 TOPS of processing power, several orders of magnitude more than the Nano, but I also know that real-world performance doesn't scale linearly with TOPS.
My question is: what performance should I expect from the Orin NX 16GB on the same task? If I were to run two instances of this task simultaneously, would I be able to achieve sub-250ms speeds per image on each instance? Can this also be achieved on the Orin NX 8GB?
My other question concerns training a model on the Jetson Orin NX.
I know that these systems are optimised for inference and perform poorly on training tasks.
However, if training needs to be done occasionally, what performance should I expect? I have a network that takes about 4-6h to train on a 1050 Ti 4GB. Is there a way to know how the Orin NX would behave in such a scenario, and how long it would take?
I know that my questions are a little naive, but we're working on a project and want to make sure that the chosen hardware is enough.
Performance usually depends on the type of task.
The behavior of IO-bound inference (e.g., small-kernel convolution) will be very different from compute-bound inference.
So it's recommended to benchmark your actual model on the device directly.
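If you do benchmark it yourself, make sure to include warm-up iterations before timing, since the first few runs are skewed by initialization and caching. Below is a minimal, framework-agnostic timing sketch; the `infer` callable is a hypothetical placeholder you would replace with your own model's forward pass (TensorRT, PyTorch, etc.):

```python
import time

def benchmark(infer, n_warmup=5, n_runs=20):
    """Return mean latency in ms of a zero-argument inference callable.

    `infer` is a placeholder for your own inference call; warm-up runs
    are excluded so one-time initialization costs don't skew the mean."""
    for _ in range(n_warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) / n_runs * 1000.0

# Stand-in workload just to show usage; substitute your model here.
mean_ms = benchmark(lambda: sum(i * i for i in range(10000)))
print(f"mean latency per run: {mean_ms:.3f} ms")
```

Running the same harness for one instance and then for two concurrent instances will tell you directly whether your sub-250ms target holds under contention.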
That said, we do have some benchmark results for the Orin NX for your reference: