GeForce RTX 3090 versus Jetson AGX Xavier for AI inference

Hi everyone,

We are currently studying what AI could bring to our device in terms of image processing. The device currently contains a PC, but it lacks sufficient computational power (no GPU).

In this context, could someone explain to me the difference between a GeForce RTX card (say the 3090) and the Jetson AGX Xavier for inference?

  • performance is not expressed in the same units (TFLOPS vs TOPS; rough comparison sketch after this list)
  • power consumption seems drastically different
  • the Jetson AGX Xavier seems cheaper
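
To make the first point concrete: TFLOPS usually quotes FP32 floating-point throughput, while TOPS usually quotes INT8 integer throughput, so the closest apples-to-apples comparison is the INT8 figure on both parts. Here is a rough back-of-the-envelope sketch; the numbers are approximate public spec figures (assumptions, not measurements), and real throughput depends heavily on the model, precision, and batch size:

```python
# Rough, order-of-magnitude comparison of peak INT8 throughput per watt.
# Spec figures are approximate public numbers, not measurements:
#   - GeForce RTX 3090: ~284 dense INT8 Tensor TOPS at a 350 W TDP
#   - Jetson AGX Xavier: ~32 INT8 TOPS in its 30 W MAXN power mode
specs = {
    "GeForce RTX 3090": (284.0, 350.0),  # (peak INT8 TOPS, power budget in W)
    "Jetson AGX Xavier": (32.0, 30.0),
}

for name, (tops, watts) in specs.items():
    print(f"{name}: {tops:.0f} TOPS / {watts:.0f} W = {tops / watts:.2f} TOPS/W")
```

On these rough numbers the 3090 offers roughly 9x the raw INT8 throughput, while the Xavier is in the same ballpark (or slightly ahead) in TOPS per watt, which is essentially why one targets workstations and the other targets embedded devices.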

Is the Jetson AGX Xavier better suited for embedding in a device (due to its lower power consumption)? Is one of the two better for training AI, and the other better suited to inference? I am a little bit lost…

Warm regards

Hi,
This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we recommend raising it on the respective platform via the link below.

Thanks!

Thank you for your answer. I will review the proposed links and post my question on the Jetson & Embedded Systems forum if needed.