We are currently studying what AI could bring to our device in terms of image processing. The device currently contains a PC, but one without enough computational power (no GPU).
In this context, could someone explain to me the difference between a GeForce RTX (say, a 3090) and a Jetson AGX Xavier for inference?
- performance is not expressed in the same units (TFLOPS vs TOPS)
- power consumption seems drastically different
- the Jetson AGX Xavier seems cheaper
Is the Jetson AGX Xavier better suited to embedding in a device (due to its lower power consumption)? Is one of the two better for training AI models, and the other better suited to inference? I am a little bit lost…